AI Model Efficiency Toolkit (AIMET) Forum

DETR (Detection Transformer) AIMET Quantization

Quantizing the DETR model from the Facebook repository throws the following error in compute_encodings():

Here, self.self_attn refers to a torch.nn.MultiheadAttention module.
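As a minimal illustration (not from the thread) of why quantization tooling tends to trip over this layer: nn.MultiheadAttention exposes almost none of its computation as child modules. AIMET's quantsim typically wraps leaf submodules with quantization wrappers, but MultiheadAttention stores its Q/K/V projections as raw parameters and runs the attention math inside a functional call, leaving only the output projection visible as a submodule. The sketch below uses plain PyTorch to show this.

```python
# Sketch (assumption, not from the thread): inspect what a
# torch.nn.MultiheadAttention layer exposes to module-level tooling.
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=256, num_heads=8)

# The in-projection weights are registered as plain parameters,
# not as nn.Linear submodules:
param_names = [name for name, _ in mha.named_parameters()]

# Only the output projection is a child module that per-module
# quantization wrappers could attach to:
child_names = [name for name, _ in mha.named_children()]

print(child_names)                       # ['out_proj']
print('in_proj_weight' in param_names)   # True
```

Because the Q/K/V projections and the softmax live inside a functional forward call rather than in submodules, per-module quantizer insertion cannot observe those intermediate activations, which is consistent with the lack of transformer support noted in the reply below.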

Any suggestions on how to overcome this issue?

Hi @mcw_qc_aimet… Transformer models are not supported at this time. We hope to add support for transformer models later this year.
