AI Model Efficiency Toolkit (AIMET) Forum

Can the SNPE onnx-to-dlc quantizer read and use provided quantization parameters?

Hi, I know this is related to a different Qualcomm tool (SNPE), but I would love to know whether there is a way to load quantization parameters produced by AIMET and use them in SNPE.
Alternatively, is there a way to modify the quantization parameters stored in a DLC file, so that I can manually apply the .encodings file exported from AIMET's quantization simulation? (In a technical meeting with the Qualcomm dev team, I heard there might be a script for this?)

To make things more concrete: I have a quantized model stored in two files, a .onnx model and a .encodings file, both exported from AIMET's quantization simulation. I want to load the model into SNPE directly, without SNPE re-quantizing from scratch based only on the ONNX model and a separate set of input data.

https://developer.qualcomm.com/docs/snpe/tools.html#tools_snpe-dlc-quantize
According to that page, the command supports overriding quantization parameters for TensorFlow models. However, I am using PyTorch and the ONNX models it exports.
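For reference, the AIMET-to-SNPE workflow I am trying to achieve would look roughly like the sketch below. The flag names (`--quantization_overrides` on the converter and `--override_params` on the quantizer) are my understanding of the documented AIMET/SNPE integration and should be verified against the specific SNPE SDK version; file names are placeholders:

```shell
# Sketch of the intended AIMET -> SNPE flow (flag names assumed from the
# AIMET/SNPE integration docs; verify against your SDK version).

# 1. Convert the AIMET-exported ONNX model to DLC, supplying the AIMET
#    .encodings file as quantization overrides.
snpe-onnx-to-dlc \
    --input_network model.onnx \
    --quantization_overrides model.encodings \
    --output_path model.dlc

# 2. Quantize the DLC, asking the quantizer to keep the overridden
#    parameters instead of recomputing them from the input data.
snpe-dlc-quantize \
    --input_dlc model.dlc \
    --input_list input_list.txt \
    --override_params \
    --output_dlc model_quantized.dlc
```

Note that snpe-dlc-quantize still expects an input list even when parameters are overridden, so some representative inputs would still need to be supplied.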

I would greatly appreciate any insight you can provide!

@Qianhao Sorry for the late response.

This question has already been answered here.

Let me know if you have further questions.