Bitwidth in AIMET PyTorch Quantization

I am new to AIMET. One of the arguments of QuantizationSimModel is default_param_bw, and the documented range for it is 4-31 bits. I was wondering whether I can use 2 or even 1 as the bitwidth?

Hi @Aida. AIMET is not designed to simulate binary quantization, i.e. bitwidth = 1. In theory it should still run and simulate quantization noise, but I am not sure the quantization noise would be represented correctly. The same caveat likely applies to bitwidth = 2. In short, we have definitely not tested bitwidths below 4 bits.

If you have a particular use case, you could try it out and report back; a rough setup sketch is below.
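To be concrete, the setup looks roughly like the following. Treat this as a minimal sketch only: depending on your AIMET version the constructor takes either a dummy_input tensor or an input_shapes tuple, and MyModel and the calibration callback here are placeholders.

```python
import torch
from aimet_torch.quantsim import QuantizationSimModel

model = MyModel().eval()                    # placeholder for your network
dummy_input = torch.randn(1, 3, 224, 224)   # must match your model's input shape

# default_param_bw sets the simulated weight bitwidth (documented range 4-31).
# Values below 4 bits are untested, so the simulated noise may not be accurate.
sim = QuantizationSimModel(model,
                           dummy_input=dummy_input,
                           default_param_bw=4,
                           default_output_bw=8)

# Compute quantization encodings by running representative data through the model
def forward_pass(model, _):
    with torch.no_grad():
        model(dummy_input)

sim.compute_encodings(forward_pass_callback=forward_pass,
                      forward_pass_callback_args=None)
```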

Thanks for your response. When I tried QuantizationSimModel with 4 bits on a simple network with fully connected layers, I got the following warnings (I actually get them for other networks as well):

```
No config file provided, defaulting to config file at /esat/quartz/aashrafi/aimet/build/staging/universal/lib/python/aimet_common/quantsim_config/default_config.json

/usr/local/lib/python3.6/dist-packages/torch/jit/__init__.py:702: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 2] (-7.062877655029297 vs. -8.789953231811523) and 9 other locations (100.00%)
  _check_trace([example_inputs], func, executor_options, traced, check_tolerance, _force_outplace)
```

Could you tell me if there is an issue in the way I used the function?

At the end, when I compute the accuracy of sim.model in an evaluation function, it differs from the full-precision accuracy; but when I export sim.model and load it somewhere else, the weight values are identical to the full-precision ones. I tried using the debugger to look at the weights, but they go into the wrapper very early, and I guess the only way to see them is to export and load. Could you help me with this?

Also, as far as I checked, cross-layer equalization does not change the fully connected layers, right? I mean, it only affects the convolutional layers? I invoked it roughly as in the sketch below. Thanks.
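(A minimal sketch of how I call it, assuming the equalize_model entry point from aimet_torch; the model and input shape are placeholders:)

```python
from aimet_torch.cross_layer_equalization import equalize_model

model = MyModel().eval()   # placeholder for my network
# Performs batch-norm folding, cross-layer scaling and high-bias folding,
# modifying the model in place
equalize_model(model, input_shapes=(1, 3, 224, 224))
```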

Hi @Aida - you can ignore those tracer warnings. We will figure out a way to suppress them in the future.

As for the config-file message, it is exactly what it says: you did not pass in a config file (one of the parameters), so AIMET defaults to the built-in one. That is fine.

Just creating a QuantizationSimModel and exporting it will not show you any updated weights. However, if you run fine-tuning with the QuantizationSimModel, then the weights will get updated. The purpose of QuantizationSimModel is to simulate quantization noise for evaluation and fine-tuning (quantization-aware training); it is not intended to "quantize" a model. A rough fine-tuning sketch is below. Hope that helps, and please let us know if you have further questions.
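For concreteness, quantization-aware fine-tuning is just a regular PyTorch training loop run on sim.model, followed by an export. This is a minimal sketch: the optimizer settings, loss function, and train_loader are placeholders, and the export signature differs slightly across AIMET versions (dummy_input in newer releases, an input shape in older ones).

```python
import torch

# sim is the QuantizationSimModel created earlier
optimizer = torch.optim.SGD(sim.model.parameters(), lr=1e-4, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

sim.model.train()
for inputs, targets in train_loader:   # placeholder DataLoader
    optimizer.zero_grad()
    outputs = sim.model(inputs)        # forward pass includes simulated quantization noise
    loss = loss_fn(outputs, targets)
    loss.backward()
    optimizer.step()

# Writes the updated model and its encodings to disk
sim.export(path='./export', filename_prefix='qat_model', dummy_input=dummy_input)
```

Regarding inspecting weights without exporting: the original layer is still reachable inside each quantization wrapper. In the versions I have looked at it sits on an internal attribute (something like `_module_to_wrap`), but since that is an internal detail that may change between releases, exporting remains the supported way to inspect the weights.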

Hi,
Can you share some results for ResNet18/50 at lower bit widths such as 2-5?
I am getting very poor results with DFQ (data-free quantization) and just want to verify my setup; a rough sketch of my pipeline is below.
Thanks
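(A sketch only, assuming the equalize_model and correct_bias entry points shown in the AIMET docs; the model, data loader, and sample counts are placeholders:)

```python
from aimet_common.defs import QuantScheme
from aimet_torch.cross_layer_equalization import equalize_model
from aimet_torch.bias_correction import correct_bias
from aimet_torch.quantsim import QuantParams

model = MyModel().eval()   # placeholder for ResNet18/50
equalize_model(model, input_shapes=(1, 3, 224, 224))

# Empirical bias correction after CLE; note bitwidths below 4 are untested in AIMET
params = QuantParams(weight_bw=4, act_bw=4, round_mode='nearest',
                     quant_scheme=QuantScheme.post_training_tf_enhanced)
correct_bias(model, params, num_quant_samples=1000,
             data_loader=data_loader,   # placeholder calibration DataLoader
             num_bias_correct_samples=512)
```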