AI Model Efficiency Toolkit (AIMET) Forum

Exception when using channel pruning with TF

Hi, I am trying to run channel pruning on my own TF model. The model has an encoder-decoder structure and makes use of Conv2DTranspose layers in the decoder.

I am getting the following error:

  File "", line 121, in channel_pruning_auto_mode
  File "/home/ubuntu/.local/lib/python3.6/site-packages/aimet_tensorflow/", line 109, in compress_model
    compressed_layer_db, stats = algo.compress_model(cost_metric, trainer)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/aimet_common/", line 88, in compress_model
    layer_comp_ratio_list, stats = self._comp_ratio_select_algo.select_per_layer_comp_ratios()
  File "/home/ubuntu/.local/lib/python3.6/site-packages/aimet_common/", line 223, in select_per_layer_comp_ratios
    eval_scores_dict = self._construct_eval_dict()
  File "/home/ubuntu/.local/lib/python3.6/site-packages/aimet_common/", line 214, in _construct_eval_dict
    eval_scores_dict = self._compute_eval_scores_for_all_comp_ratio_candidates()
  File "/home/ubuntu/.local/lib/python3.6/site-packages/aimet_common/", line 400, in _compute_eval_scores_for_all_comp_ratio_candidates
    progress_bar, layer)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/aimet_common/", line 429, in _compute_layerwise_eval_score_per_comp_ratio_candidate
  File "/home/ubuntu/.local/lib/python3.6/site-packages/aimet_tensorflow/channel_pruning/", line 260, in prune_model
    in_place=True, verbose=False)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/aimet_tensorflow/winnow/", line 71, in winnow_tf_model
    in_place, verbose)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/aimet_tensorflow/winnow/", line 88, in __init__
    self._mask_propagator = MaskPropagator(self._conn_graph, model_api=ModelApi.tensorflow)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/aimet_common/winnow/", line 77, in __init__
  File "/home/ubuntu/.local/lib/python3.6/site-packages/aimet_common/winnow/", line 91, in _create_masks
  File "/home/ubuntu/.local/lib/python3.6/site-packages/aimet_common/winnow/", line 116, in _create_masks_for_op_and_all_ancestors
    current_op.num_in_channels = input_shape[api_channel_index_dict[self._model_api]]
  File "/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow_core/python/framework/", line 870, in __getitem__
    return self._dims[key]
IndexError: list index out of range

By adding some printouts to AIMET's code, it became clear that the problem is indeed related to the Conv2DTranspose layers. When the exception is triggered, current_op.inputs[0] points to convTr9/mul_1_to_convTr9/stack in my model (a Conv2DTranspose layer), and the input_shape is ().

I have tried adding all the Conv2DTranspose layers to the modules_to_ignore list but this does not seem to help.

I do realize that without the full model it will be difficult to exactly pinpoint the problem, but is there any recommendation on how to deal with this issue?



As of today, the AIMET Channel Pruning feature doesn’t support the TensorFlow Conv2DTranspose Op. We invite you to add this functionality to AIMET. There are multiple steps involved and we can guide you through these steps.

For the Channel Pruning feature, we analyze the TensorFlow compute graph and build our own representation, the connected graph. One TensorFlow Op is often composed of many constituent ops, and the connected graph helps us group these ops into one cohesive unit. This later helps us prune the correct channels in the various Ops.
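For illustration, here is a minimal sketch of this grouping idea. The op names and the group-by-prefix heuristic below are simplifications for the example, not AIMET's actual implementation:

```python
# Hypothetical constituent op names that TensorFlow might generate
# for a single Conv2D layer with bias.
constituent_ops = [
    'conv1/Conv2D',
    'conv1/BiasAdd',
    'conv1/kernel/read',
    'conv1/bias/read',
]

def group_by_layer(op_names):
    """Group constituent ops into one unit per layer, keyed by name prefix."""
    groups = {}
    for name in op_names:
        layer = name.split('/')[0]
        groups.setdefault(layer, []).append(name)
    return groups

print(group_by_layer(constituent_ops))
# {'conv1': ['conv1/Conv2D', 'conv1/BiasAdd', 'conv1/kernel/read', 'conv1/bias/read']}
```

The connected graph then treats each such group as one logical layer when deciding which channels to prune.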

The Conv2DTranspose Op must first be recognized by our connected graph before the Channel Pruning logic can be added later. To recognize the Conv2DTranspose Op, the following changes must be made.

In <…>/aimet_tensorflow/common/, in the op_type_templates, add an entry for Conv2DTranspose.

The general structure of a pattern entry is as follows (Conv2D shown as an example):

'Conv2D': {
    'input_shape': (1, 10, 10, 3),
    'op_type': 'Conv2D',
    'constructor': "tf.keras.layers.Conv2D(10, (1, 1), use_bias=False)(constants)",
    'module_regex': ['(.+/Conv2D)', '(.+/separable_conv2d)', '(.+/convolution)'],
    'associated_op_regex': ['Conv2D$', 'separable_conv2d$', 'convolution$']
}

  • The dictionary keys of the pattern entries are currently not used anywhere and can be named in any appropriate fashion.
  • Input shape is an example of a valid input to the op. The sizes of the dimensions don't really matter; what matters is the number of dimensions (1D, 2D, etc.).
  • Op type is a string representing the type we will associate with the matched pattern in our Connected Graph. We typically use the same op type as the TensorFlow op that performs the actual computation (the Conv2D op in the case of Conv2D, for example). Sometimes no single constituent op performs the computation, in which case we pick the most appropriate name.
  • The constructor is a string that will be run through Python's exec() to instantiate a standalone TensorFlow graph containing whatever constituent ops are generated. This will be the constructor for Conv2DTranspose in your case. The input to the layer, constants, is a tf.constant random tensor created with the input shape specified above.
  • Module regex entries are regex patterns that strip out the unique part of layer names to distinguish between different layers of the same type. (Ex. if two Convs are created, there will be a conv2d_0/Conv2D op and a conv2d_1/Conv2D op. We isolate and use the names 'conv2d_0' and 'conv2d_1' to distinguish the layers.)
  • Associated op regex is a regex pattern that attempts to match the name of one constituent op that represents the layer. For Conv2D, when it is created, we get several ops: Conv2D itself, a BiasAdd op, some read ops, assign ops, etc. We choose the Conv2D op to be the associated op for the whole layer and write our associated op regex to match it. Note that there are multiple strings here since the same constructor pattern can match slightly different Conv2D layer patterns.
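The two regex fields can be sanity-checked in isolation. In this sketch the op names are made up, and the patterns are written to capture the layer prefix for illustration:

```python
import re

# Hypothetical op names for two Conv2D layers in the same graph.
op_names = ['conv2d_0/Conv2D', 'conv2d_0/BiasAdd', 'conv2d_1/Conv2D']

# Module regex: capture the unique layer prefix to tell the layers apart.
module_regex = re.compile(r'(.+)/Conv2D$')
layers = [m.group(1) for n in op_names if (m := module_regex.match(n))]
print(layers)  # ['conv2d_0', 'conv2d_1']

# Associated op regex: pick the one constituent op representing each layer.
associated_op_regex = re.compile(r'Conv2D$')
associated = [n for n in op_names if associated_op_regex.search(n)]
print(associated)  # ['conv2d_0/Conv2D', 'conv2d_1/Conv2D']
```

Checking your Conv2DTranspose patterns this way against the op names in your actual graph can save a debugging round trip.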

Please note that we have separate pattern entries for a Conv2D with bias and without bias, since the patterns and ops to match differ. If your model uses Conv2DTranspose Ops both with and without bias, two entries must be added to op_type_templates.
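Putting the pieces together, a Conv2DTranspose entry might look roughly like the following. This is a sketch, not tested against AIMET: the module_regex and associated_op_regex strings are guesses and must be verified against the op names TensorFlow actually generates for your layers.

```python
# Hypothetical op_type_templates entry for a bias-free Conv2DTranspose;
# the regex strings are assumptions to be checked against the real graph.
conv2d_transpose_template = {
    'Conv2DTranspose': {
        'input_shape': (1, 10, 10, 3),
        'op_type': 'Conv2DTranspose',
        'constructor': "tf.keras.layers.Conv2DTranspose(10, (1, 1), "
                       "use_bias=False)(constants)",
        'module_regex': ['(.+/conv2d_transpose)'],
        'associated_op_regex': ['conv2d_transpose$'],
    }
}
```

A second entry with use_bias=True (and patterns covering the BiasAdd op) would be needed if your model uses biased Conv2DTranspose layers.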

In <…>/aimet_common/winnow/, in tensorflow_dict, add the following item:

        'Conv2DTranspose': ConnectivityType.null

This dictionary holds connectivity information describing how channel pruning should treat different types of ops. For some ops, like BatchNorm, if the number of output channels is changed, the number of input channels can be changed in the same way, and we mark those ops accordingly. In the case of Conv2D and Conv2DTranspose, the number of output channels is independent of the number of input channels, which we mark with the ConnectivityType.null trait.
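The role of this dictionary can be sketched as follows. The enum and helper below are illustrative stand-ins, not AIMET's actual classes, and the direct member name is an assumption:

```python
from enum import Enum

class ConnectivityType(Enum):
    direct = 1  # changing output channels implies the same change on inputs (e.g. BatchNorm)
    null = 2    # output channels are independent of input channels (e.g. Conv2D)

# Illustrative stand-in for tensorflow_dict, with the new entry included.
tensorflow_dict = {
    'BatchNorm': ConnectivityType.direct,
    'Conv2D': ConnectivityType.null,
    'Conv2DTranspose': ConnectivityType.null,  # the entry to add
}

def mask_propagates_through(op_type):
    # An output-channel mask only flows back to the input channels
    # of ops with direct connectivity.
    return tensorflow_dict[op_type] is ConnectivityType.direct

print(mask_propagates_through('BatchNorm'))        # True
print(mask_propagates_through('Conv2DTranspose'))  # False
```

In other words, marking Conv2DTranspose as null tells the mask propagator to stop output-channel masks at that op rather than pushing them through to its inputs.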

After adding these changes, please run your test and let us know any errors you are observing. There will still need to be additional changes added to provide the ability to create and insert Conv2DTranspose ops with pruned channels, and we can guide you through the next changes at that point.

Thanks for using AIMET.

FYI: @quic_klhsieh, @quic_ssiddego