Quantization
QuantBackboneInternalSkipConnection
Bases: QuantSkipConnection
This is a placeholder module used by the quantization engine only. It is used as the quantized substitute for a skip connection between blocks inside the backbone.
Source code in V3_2/src/super_gradients/modules/quantization/quantized_skip_connections.py, lines 43-48
QuantCrossModelSkipConnection
Bases: QuantSkipConnection
This is a placeholder module used by the quantization engine only. It is used as the quantized substitute for a skip connection between the backbone and the head.
Source code in V3_2/src/super_gradients/modules/quantization/quantized_skip_connections.py, lines 59-64
QuantHeadInternalSkipConnection
Bases: QuantSkipConnection
This is a placeholder module used by the quantization engine only. It is used as the quantized substitute for a skip connection between blocks inside the head.
Source code in V3_2/src/super_gradients/modules/quantization/quantized_skip_connections.py, lines 51-56
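The three placeholders above differ only in which float skip-connection module they stand in for. As a minimal illustrative sketch, assuming the float counterparts defined in super_gradients.modules.skip_connections (the dict below is for illustration; it is not the engine's actual registry), the correspondence is:

```python
# Illustrative mapping only: each quantized placeholder substitutes for one
# float skip-connection type. The engine's real registration mechanism may differ.
from super_gradients.modules.skip_connections import (
    BackboneInternalSkipConnection,
    CrossModelSkipConnection,
    HeadInternalSkipConnection,
)
from super_gradients.modules.quantization.quantized_skip_connections import (
    QuantBackboneInternalSkipConnection,
    QuantCrossModelSkipConnection,
    QuantHeadInternalSkipConnection,
)

FLOAT_TO_QUANT = {
    BackboneInternalSkipConnection: QuantBackboneInternalSkipConnection,  # inside the backbone
    CrossModelSkipConnection: QuantCrossModelSkipConnection,              # backbone -> head
    HeadInternalSkipConnection: QuantHeadInternalSkipConnection,          # inside the head
}
```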
QuantResidual
Bases: SGQuantMixin
This is a placeholder module used by the quantization engine only. It is used as the quantized substitute for a residual skip connection within a single block.
Source code in V3_2/src/super_gradients/modules/quantization/quantized_skip_connections.py, lines 13-25
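As a rough sketch of what such a placeholder looks like, the pattern is an SGQuantMixin module that wraps the residual path in a pytorch-quantization TensorQuantizer. The class below is illustrative only; its name, structure, and descriptor choice are assumptions, not the file's actual contents:

```python
import torch
from pytorch_quantization import nn as quant_nn
from super_gradients.training.utils.quantization.core import SGQuantMixin

class ResidualQuantizerSketch(SGQuantMixin, torch.nn.Module):
    """Illustrative stand-in: quantizes the tensor flowing through a residual branch."""

    def __init__(self):
        super().__init__()
        # Quantize the shortcut activations with the toolkit's default input descriptor.
        self.input_quantizer = quant_nn.TensorQuantizer(quant_nn.QuantConv2d.default_quant_desc_input)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.input_quantizer(x)
```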
QuantSkipConnection
Bases: SGQuantMixin
This is a placeholder module used by the quantization engine only. It is used as the quantized substitute for a skip connection between blocks.
Source code in V3_2/src/super_gradients/modules/quantization/quantized_skip_connections.py, lines 28-40
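Conceptually, the quantization engine builds one of these substitutes from the corresponding float module through the SGQuantMixin.from_float factory. A hedged usage sketch; the exact keyword arguments from_float accepts (for example, quantization descriptors) are an assumption:

```python
from super_gradients.modules.skip_connections import SkipConnection
from super_gradients.modules.quantization.quantized_skip_connections import QuantSkipConnection

float_skip = SkipConnection()
# Normally the quantization engine performs this swap while traversing the model;
# additional descriptor kwargs may be required depending on the class.
quant_skip = QuantSkipConnection.from_float(float_skip)
```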
QuantAttentionRefinementModule
Bases: SGQuantMixin, AttentionRefinementModule
An AttentionRefinementModule to apply to the last two backbone stages.
Source code in V3_2/src/super_gradients/modules/quantization/quantized_stdc_blocks.py, lines 49-63
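The dual inheritance here is the general pattern for quantized variants that keep the float module's behavior: the float base class supplies the forward logic, while SGQuantMixin contributes the from_float factory the engine calls. A schematic sketch with a hypothetical float block (SomeFloatBlock is illustrative, not a SuperGradients class):

```python
import torch
from torch import nn
from super_gradients.training.utils.quantization.core import SGQuantMixin

class SomeFloatBlock(nn.Module):
    """Hypothetical float block, standing in for AttentionRefinementModule."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)

class QuantSomeFloatBlock(SGQuantMixin, SomeFloatBlock):
    """Quantized variant: forward comes from the float block, from_float from the mixin."""
```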
QuantBottleneck
Bases: SGQuantMixin
We just insert a quantized tensor into the shortcut (residual) layer so that it is quantized as well. NOTE: we must quantize the float instance, so the replacement action should be QuantizedMetadata.ReplacementAction.RECURE_AND_REPLACE.
Source code in V3_2/src/super_gradients/modules/quantization/resnet_bottleneck.py, lines 13-27
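The NOTE above concerns how the substitute is registered: RECURE_AND_REPLACE tells the engine to first recurse into the float instance (quantizing its children) and only then replace it, so the shortcut quantizer is added on top of an already-quantized block. A hedged registration sketch, assuming the register_quantized_module decorator from super_gradients.training.utils.quantization.selective_quantization_utils; the decorator's exact parameters and the float block below are illustrative:

```python
from torch import nn
from super_gradients.training.utils.quantization.core import QuantizedMetadata, SGQuantMixin
from super_gradients.training.utils.quantization.selective_quantization_utils import register_quantized_module

class FloatBottleneckLike(nn.Module):
    """Hypothetical float block with a shortcut, standing in for the real Bottleneck."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return self.conv(x) + x  # residual add: both operands should end up quantized

# RECURE_AND_REPLACE: quantize the float instance's children first, then swap it out.
@register_quantized_module(float_source=FloatBottleneckLike, action=QuantizedMetadata.ReplacementAction.RECURE_AND_REPLACE)
class QuantBottleneckLike(SGQuantMixin, FloatBottleneckLike):
    """Illustrative quantized substitute."""
```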