Transforms
AbstractDepthEstimationTransform
Bases: abc.ABC
Base class for all transforms for depth estimation sample augmentation.
Source code in `src/super_gradients/training/transforms/depth_estimation/abstract_depth_estimation_transform.py`
__call__(sample)
abstractmethod
Apply transformation to a given depth estimation sample. Important note: the call may return a new object or modify the sample in place; this is implementation dependent. If you need to keep the original sample intact, make a copy of it BEFORE passing it to the transform.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `DepthEstimationSample` | Input sample to transform. | *required* |

Returns:

| Type | Description |
|---|---|
| `DepthEstimationSample` | Modified sample (it can be the same instance as the input or a new object). |
Source code in `src/super_gradients/training/transforms/depth_estimation/abstract_depth_estimation_transform.py`
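Since a transform may mutate its input, the safe pattern is to deep-copy the sample before calling the transform. A minimal sketch; the `SimpleSample` dataclass and `scale_depth_inplace` transform below are hypothetical stand-ins for illustration, not part of the library:

```python
import copy
from dataclasses import dataclass, field

@dataclass
class SimpleSample:
    # Hypothetical stand-in for DepthEstimationSample (image + depth map as lists)
    image: list = field(default_factory=list)
    depth_map: list = field(default_factory=list)

def scale_depth_inplace(sample, factor):
    # Example transform that modifies the sample in place and returns it
    sample.depth_map = [d * factor for d in sample.depth_map]
    return sample

original = SimpleSample(image=[1, 2, 3], depth_map=[0.5, 1.0])
backup = copy.deepcopy(original)  # copy BEFORE passing to the transform
transformed = scale_depth_inplace(original, 2.0)
# `original` was mutated; `backup` still holds the pre-transform values
```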
AbstractDetectionTransform
Bases: abc.ABC
Base class for all transforms for object detection sample augmentation.
Source code in `src/super_gradients/training/transforms/detection/abstract_detection_transform.py`
__init__(additional_samples_count=0)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `additional_samples_count` | `int` | Number of extra samples that must be fetched from the dataset. Default value is 0. | `0` |
Source code in `src/super_gradients/training/transforms/detection/abstract_detection_transform.py`
apply_to_sample(sample)
abstractmethod
Apply transformation to a given object detection sample. Important note: the call may return a new object or modify the sample in place; this is implementation dependent. If you need to keep the original sample intact, make a copy of it BEFORE passing it to the transform.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `DetectionSample` | Input sample to transform. | *required* |

Returns:

| Type | Description |
|---|---|
| `DetectionSample` | Modified sample (it can be the same instance as the input or a new object). |
Source code in `src/super_gradients/training/transforms/detection/abstract_detection_transform.py`
DetectionLongestMaxSize
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Resize the data sample to guarantee that the input image dimensions do not exceed the maximum width & height.
Source code in `src/super_gradients/training/transforms/detection/detection_longest_max_size.py`
__init__(max_height, max_width, interpolation=cv2.INTER_LINEAR, prob=1.0)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `max_height` | `int` | Maximum image height | *required* |
| `max_width` | `int` | Maximum image width | *required* |
| `interpolation` | `int` | Interpolation method used for the image | `cv2.INTER_LINEAR` |
| `prob` | `float` | Probability of applying this transform. Default: 1.0 | `1.0` |
Source code in `src/super_gradients/training/transforms/detection/detection_longest_max_size.py`
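The "longest max size" resize rule can be sketched as follows. This is an assumption about the implementation (a single scale factor so that neither dimension exceeds its maximum, preserving aspect ratio), not the library's exact code:

```python
def longest_max_size_shape(height, width, max_height, max_width):
    # One scale factor for both axes, so aspect ratio is preserved
    # and neither output dimension exceeds its maximum.
    scale = min(max_height / height, max_width / width)
    return round(height * scale), round(width * scale)

# A 1080x1920 image constrained to 640x640 scales by 640/1920:
print(longest_max_size_shape(1080, 1920, 640, 640))  # -> (360, 640)
```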
DetectionPadIfNeeded
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Pad image and targets to ensure that resulting image size is not less than (min_width, min_height).
Source code in `src/super_gradients/training/transforms/detection/detection_pad_if_needed.py`
__init__(min_height, min_width, pad_value, padding_mode='bottom_right')
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `min_height` | `int` | Minimal height of the image. | *required* |
| `min_width` | `int` | Minimal width of the image. | *required* |
| `pad_value` | `int` | Padding value for the image | *required* |
| `padding_mode` | `str` | Padding mode. Supported modes: `'bottom_right'`, `'center'`. | `'bottom_right'` |
Source code in `src/super_gradients/training/transforms/detection/detection_pad_if_needed.py`
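The padding arithmetic implied by the two supported modes can be sketched with a hypothetical helper (not the library's exact code):

```python
def pad_amounts(height, width, min_height, min_width, padding_mode="bottom_right"):
    # Total padding needed per axis (never negative: large images are untouched)
    pad_h = max(0, min_height - height)
    pad_w = max(0, min_width - width)
    if padding_mode == "bottom_right":
        top, left = 0, 0            # all padding goes to bottom/right
    elif padding_mode == "center":
        top, left = pad_h // 2, pad_w // 2  # split padding evenly around the image
    else:
        raise ValueError(f"Unsupported padding_mode: {padding_mode}")
    return top, pad_h - top, left, pad_w - left  # (top, bottom, left, right)

print(pad_amounts(600, 600, 640, 640, "center"))        # -> (20, 20, 20, 20)
print(pad_amounts(600, 600, 640, 640, "bottom_right"))  # -> (0, 40, 0, 40)
```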
apply_to_sample(sample)
Apply transform to a single sample.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `DetectionSample` | Input detection sample. | *required* |

Returns:

| Type | Description |
|---|---|
| `DetectionSample` | Transformed detection sample. |
Source code in `src/super_gradients/training/transforms/detection/detection_pad_if_needed.py`
LegacyDetectionTransformMixin
A mixin class to make legacy detection transforms compatible with new detection transforms that operate on DetectionSample.
Source code in `src/super_gradients/training/transforms/detection/legacy_detection_transform_mixin.py`
__call__(sample)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `Union[DetectionSample, Dict[str, Any]]` | Dict with the following keys:<br>- `image`: numpy array of [H,W,C] or [C,H,W] format<br>- `target`: numpy array of [N,5] shape with the bounding box of each instance (XYXY + LABEL)<br>- `crowd_targets`: numpy array of [N,5] shape with the bounding box of each instance (XYXY + LABEL) | *required* |
Source code in `src/super_gradients/training/transforms/detection/legacy_detection_transform_mixin.py`
convert_detection_sample_to_dict(detection_sample, include_crowd_target)
classmethod
Convert new DetectionSample dataclass to old-style detection sample dict. This is a reverse operation to convert_input_dict_to_detection_sample and used to make legacy transforms compatible with new detection transforms.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `detection_sample` | `DetectionSample` | Input DetectionSample dataclass. | *required* |
| `include_crowd_target` | `Union[bool, None]` | A flag indicating whether to include crowd_target in the output dictionary. Can be None; in this case crowd_target will be included only if crowd targets are present in the input sample. | *required* |

Returns:

| Type | Description |
|---|---|
| `Dict[str, Union[np.ndarray, Any]]` | Output dictionary with the following keys:<br>- `image`: numpy array of [H,W,C] or [C,H,W] format<br>- `target`: numpy array of [N,5] shape with the bounding box of each instance (XYXY + LABEL)<br>- `crowd_targets`: numpy array of [N,5] shape with the bounding box of each instance (XYXY + LABEL) |
Source code in `src/super_gradients/training/transforms/detection/legacy_detection_transform_mixin.py`
convert_input_dict_to_detection_sample(sample_annotations)
classmethod
Convert old-style detection sample dict to new DetectionSample dataclass.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample_annotations` | `Dict[str, Union[np.ndarray, Any]]` | Input dictionary with the following keys:<br>- `image`: numpy array of [H,W,C] or [C,H,W] format<br>- `target`: numpy array of [N,5] shape with the bounding box of each instance (XYXY + LABEL)<br>- `crowd_targets`: numpy array of [N,5] shape with the bounding box of each instance (XYXY + LABEL) | *required* |

Returns:

| Type | Description |
|---|---|
| `DetectionSample` | An instance of the DetectionSample dataclass filled with data from the input dictionary. |
Source code in `src/super_gradients/training/transforms/detection/legacy_detection_transform_mixin.py`
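The old-style dict layout described above can be illustrated with a small helper that splits the `[N,5]` target array into XYXY boxes and labels. `split_targets` is a hypothetical illustration of the data layout, not a library function:

```python
import numpy as np

def split_targets(sample_dict):
    # 'target' is [N,5]: four XYXY box coordinates followed by the class label
    target = np.asarray(sample_dict["target"], dtype=np.float32).reshape(-1, 5)
    bboxes_xyxy = target[:, 0:4]
    labels = target[:, 4]
    return sample_dict["image"], bboxes_xyxy, labels

image = np.zeros((4, 4, 3), dtype=np.uint8)  # [H,W,C] image
sample = {"image": image, "target": np.array([[0, 0, 2, 2, 1], [1, 1, 3, 3, 0]])}
_, boxes, labels = split_targets(sample)
print(boxes.shape, labels.tolist())  # -> (2, 4) [1.0, 0.0]
```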
AbstractKeypointTransform
Bases: abc.ABC
Base class for all transforms for keypoints augmentation. All transforms subclassing it should implement the `__call__` method, which takes image, mask and keypoints as input and returns the transformed image, mask and keypoints.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `additional_samples_count` | `int` | Number of additional samples to generate for each image. This property is used for mixup & mosaic transforms that need extra samples. | `0` |
Source code in `src/super_gradients/training/transforms/keypoints/abstract_keypoints_transform.py`
__call__(image, mask, joints, areas, bboxes)
Apply transformation to a pose estimation sample passed as a tuple. This method acts as a wrapper for the `apply_to_sample` method to support the old-style API.
Source code in `src/super_gradients/training/transforms/keypoints/abstract_keypoints_transform.py`
__init__(additional_samples_count=0)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `additional_samples_count` | `int` | Number of extra samples that must be fetched from the dataset. Default value is 0. | `0` |
Source code in `src/super_gradients/training/transforms/keypoints/abstract_keypoints_transform.py`
apply_to_sample(sample)
abstractmethod
Apply transformation to a given pose estimation sample. Important note: the call may return a new object or modify the sample in place; this is implementation dependent. If you need to keep the original sample intact, make a copy of it BEFORE passing it to the transform.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `PoseEstimationSample` | Input sample to transform. | *required* |

Returns:

| Type | Description |
|---|---|
| `PoseEstimationSample` | Modified sample (it can be the same instance as the input or a new object). |
Source code in `src/super_gradients/training/transforms/keypoints/abstract_keypoints_transform.py`
KeypointsBrightnessContrast
Bases: AbstractKeypointTransform
Apply brightness and contrast change to the input image using the following formula: `image = (image - mean_brightness) * contrast_gain + mean_brightness + brightness_gain`. The transformation preserves the input image dtype; a saturation cast is performed at the end of the transformation.
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_brightness_contrast.py`
__init__(prob, brightness_range, contrast_range)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prob` | `float` | Probability to apply the transform. | *required* |
| `brightness_range` | `Tuple[float, float]` | Tuple of two elements, min and max brightness gain. Represents a relative range of brightness gain with respect to average image brightness. A brightness gain of 1.0 indicates no change in brightness. Therefore, the optimal value for this parameter lies somewhere inside the (0, 2) range. | *required* |
| `contrast_range` | `Tuple[float, float]` | Tuple of two elements, min and max contrast gain. The effective contrast gain is uniformly sampled from this range. Based on the definition of contrast gain, its optimal value lies somewhere inside the (0, 2) range. | *required* |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_brightness_contrast.py`
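The documented formula, including the final saturation cast back to the input dtype, can be sketched as follows. This is a literal reading of the docstring formula and an assumption-level sketch, not the library's exact code:

```python
import numpy as np

def brightness_contrast(image, brightness_gain, contrast_gain):
    # (image - mean_brightness) * contrast_gain + mean_brightness + brightness_gain,
    # then a saturation cast back to the original integer dtype.
    dtype = image.dtype
    mean_brightness = np.mean(image)
    out = (image.astype(np.float32) - mean_brightness) * contrast_gain + mean_brightness + brightness_gain
    lo, hi = np.iinfo(dtype).min, np.iinfo(dtype).max
    return np.clip(out, lo, hi).astype(dtype)

img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
out = brightness_contrast(img, brightness_gain=10.0, contrast_gain=1.0)
# With contrast_gain=1.0 this is a pure +10 brightness shift, saturated at 255
```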
KeypointsCompose
Bases: AbstractKeypointTransform
Composes several transforms together
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_compose.py`
__call__(image, mask, joints, areas, bboxes)
Apply transformation to a pose estimation sample passed as a tuple. This method acts as a wrapper for the `apply_to_sample` method to support the old-style API.
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_compose.py`
__init__(transforms, load_sample_fn=None)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `transforms` | `List[AbstractKeypointTransform]` | List of keypoint-based transformations | *required* |
| `load_sample_fn` | | A method to load additional samples if needed (for mixup & mosaic augmentations). Default value is None, which would raise an error if additional samples are needed. | `None` |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_compose.py`
apply_to_sample(sample)
Applies the series of transformations to the input sample. The function may modify the input sample in place, so the input sample should not be used after the call.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `PoseEstimationSample` | Input sample | *required* |

Returns:

| Type | Description |
|---|---|
| `PoseEstimationSample` | Transformed sample. |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_compose.py`
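Sequential composition can be sketched as follows. The `Compose` and `AddOne` classes below are toy stand-ins for illustration only; each transform's `apply_to_sample` may mutate its input and/or return a new object, so the composed result is whatever the last transform returns:

```python
class Compose:
    # Minimal sketch: apply transforms in order, feeding each output forward
    def __init__(self, transforms):
        self.transforms = transforms

    def apply_to_sample(self, sample):
        for t in self.transforms:
            sample = t.apply_to_sample(sample)
        return sample

class AddOne:
    # Toy "transform" operating on plain integers instead of samples
    def apply_to_sample(self, sample):
        return sample + 1

pipeline = Compose([AddOne(), AddOne(), AddOne()])
print(pipeline.apply_to_sample(0))  # -> 3
```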
KeypointsHSV
Bases: AbstractKeypointTransform
Apply color change in HSV color space to the input image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prob` | `float` | Probability to apply the transform. | *required* |
| `hgain` | `float` | Hue gain. | *required* |
| `sgain` | `float` | Saturation gain. | *required* |
| `vgain` | `float` | Value gain. | *required* |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_hsv.py`
__init__(prob, hgain, sgain, vgain)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prob` | `float` | Probability to apply the transform. | *required* |
| `hgain` | `float` | Hue gain. | *required* |
| `sgain` | `float` | Saturation gain. | *required* |
| `vgain` | `float` | Value gain. | *required* |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_hsv.py`
KeypointsImageNormalize
Bases: AbstractKeypointTransform
Normalize image with mean and std using the formula `(image - mean) / std`. The output image will always have dtype np.float32.
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_image_normalize.py`
__init__(mean, std)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `mean` | `Union[float, List[float], ListConfig]` | A constant bias to be subtracted from the image. If it is a list, it should have the same length as the number of channels in the image. | *required* |
| `std` | `Union[float, List[float], ListConfig]` | A scaling factor to be applied to the image after subtracting mean. If it is a list, it should have the same length as the number of channels in the image. | *required* |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_image_normalize.py`
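The normalization formula with per-channel broadcasting can be sketched with a hypothetical helper mirroring the documented behavior:

```python
import numpy as np

def normalize(image, mean, std):
    # Per-channel (image - mean) / std; broadcasting matches the trailing
    # channel axis of an [H,W,C] image. Output is always float32.
    mean = np.asarray(mean, dtype=np.float32)
    std = np.asarray(std, dtype=np.float32)
    return (image.astype(np.float32) - mean) / std

img = np.full((2, 2, 3), 128, dtype=np.uint8)
out = normalize(img, mean=[128.0, 128.0, 128.0], std=[64.0, 64.0, 64.0])
print(out.dtype)  # -> float32
```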
apply_to_sample(sample)
Apply transformation to given pose estimation sample
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `PoseEstimationSample` | A pose estimation sample | *required* |

Returns:

| Type | Description |
|---|---|
| `PoseEstimationSample` | Same pose estimation sample with normalized image |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_image_normalize.py`
KeypointsImageStandardize
Bases: AbstractKeypointTransform
Standardize image pixel values with the `img/max_value` formula. The output image will always have dtype np.float32.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `max_value` | `float` | Current maximum value of the image pixels (usually 255). | `255.0` |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_image_standardize.py`
__init__(max_value=255.0)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `max_value` | `float` | A constant value to divide the image by. | `255.0` |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_image_standardize.py`
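The standardization step is a single division; a sketch of the documented behavior:

```python
import numpy as np

def standardize(image, max_value=255.0):
    # Scale pixel values into [0, 1]; output dtype is always float32
    return (image / max_value).astype(np.float32)

out = standardize(np.array([0, 127, 255], dtype=np.uint8))
print(out.dtype, out[-1])  # -> float32 1.0
```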
apply_to_sample(sample)
Apply transformation to given pose estimation sample
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `PoseEstimationSample` | A pose estimation sample | *required* |

Returns:

| Type | Description |
|---|---|
| `PoseEstimationSample` | Same pose estimation sample with standardized image |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_image_standardize.py`
KeypointsLongestMaxSize
Bases: AbstractKeypointTransform
Resize the data sample to guarantee that the input image dimensions do not exceed the maximum width & height.
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_longest_max_size.py`
__init__(max_height, max_width, interpolation=cv2.INTER_LINEAR, prob=1.0)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `max_height` | `int` | Maximum image height | *required* |
| `max_width` | `int` | Maximum image width | *required* |
| `interpolation` | `int` | Interpolation method used for the image | `cv2.INTER_LINEAR` |
| `prob` | `float` | Probability of applying this transform | `1.0` |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_longest_max_size.py`
KeypointsMixup
Bases: AbstractKeypointTransform
Apply mixup augmentation and combine two samples into one. Images are averaged with equal weights. Targets are concatenated without any changes. This transform requires that both samples have the same image size. The easiest way to achieve this is to use resize + padding before this transform:
```yaml
# This will apply KeypointsLongestMaxSize and KeypointsPadIfNeeded to two samples individually
# and then apply KeypointsMixup to get a single sample.
train_dataset_params:
  transforms:
    - KeypointsLongestMaxSize:
        max_height: ${dataset_params.image_size}
        max_width: ${dataset_params.image_size}
    - KeypointsPadIfNeeded:
        min_height: ${dataset_params.image_size}
        min_width: ${dataset_params.image_size}
        image_pad_value: [127, 127, 127]
        mask_pad_value: 1
        padding_mode: center
    - KeypointsMixup:
        prob: 0.5
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prob` | `float` | Probability to apply the transform. | *required* |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_mixup.py`
__init__(prob)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prob` | `float` | Probability to apply the transform. | *required* |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_mixup.py`
apply_to_sample(sample)
Apply the transform to a single sample.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `PoseEstimationSample` | An input sample. It should have one additional sample in it. | *required* |

Returns:

| Type | Description |
|---|---|
| `PoseEstimationSample` | A new pose estimation sample that represents the mixup sample. |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_mixup.py`
KeypointsMosaic
Bases: AbstractKeypointTransform
Assemble 4 samples together to make a 2x2 grid. This transform stacks input samples together to make a square, with padding if necessary. The transform does not require the input samples to have the same size. If the input samples have different sizes (H1,W1), (H2,W2), (H3,W3), (H4,W4), the resulting mosaic has a height of max(H1,H2) + max(H3,H4) and a width of max(W1+W2, W3+W4), assuming the first sample is located in the top left corner, the second in the top right, the third in the bottom left, and the fourth in the bottom right corner of the mosaic.
The location of the mosaic transform in the transforms list matters: it determines which transforms are applied to all 4 samples.
In the example below, KeypointsMosaic goes after KeypointsRandomAffineTransform and KeypointsBrightnessContrast. This means that all 4 samples will be transformed with KeypointsRandomAffineTransform and KeypointsBrightnessContrast individually before being assembled:
```yaml
# This will apply KeypointsRandomAffineTransform and KeypointsBrightnessContrast to four samples individually
# and then assemble them together in a mosaic
train_dataset_params:
  transforms:
    - KeypointsRandomAffineTransform:
        min_scale: 0.75
        max_scale: 1.5
    - KeypointsBrightnessContrast:
        brightness_range: [ 0.8, 1.2 ]
        contrast_range: [ 0.8, 1.2 ]
        prob: 0.5
    - KeypointsMosaic:
        prob: 0.5
```
Conversely, if one puts KeypointsMosaic before KeypointsRandomAffineTransform and KeypointsBrightnessContrast, then the 4 original samples are assembled into a mosaic first, and the result is then transformed with KeypointsRandomAffineTransform and KeypointsBrightnessContrast:
```yaml
# This will first assemble 4 samples in a mosaic and then apply KeypointsRandomAffineTransform and KeypointsBrightnessContrast to the mosaic.
train_dataset_params:
  transforms:
    - KeypointsMosaic:
        prob: 0.5
    - KeypointsRandomAffineTransform:
        min_scale: 0.75
        max_scale: 1.5
    - KeypointsBrightnessContrast:
        brightness_range: [ 0.8, 1.2 ]
        contrast_range: [ 0.8, 1.2 ]
        prob: 0.5
```
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_mosaic.py`
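The mosaic canvas geometry described above can be checked with a small helper (hypothetical, for illustration):

```python
def mosaic_canvas_size(sizes):
    # sizes: [(H1, W1), (H2, W2), (H3, W3), (H4, W4)] laid out as
    # top-left, top-right, bottom-left, bottom-right.
    (h1, w1), (h2, w2), (h3, w3), (h4, w4) = sizes
    height = max(h1, h2) + max(h3, h4)   # tallest top row + tallest bottom row
    width = max(w1 + w2, w3 + w4)        # wider of the two rows
    return height, width

print(mosaic_canvas_size([(100, 200), (120, 180), (90, 210), (110, 190)]))
# -> (230, 400)
```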
__init__(prob, pad_value=(127, 127, 127))
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prob` | `float` | Probability to apply the transform. | *required* |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_mosaic.py`
apply_to_sample(sample)
Apply transformation to a given pose estimation sample.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `PoseEstimationSample` | A pose estimation sample. The sample must have 3 additional samples in it. | *required* |

Returns:

| Type | Description |
|---|---|
| `PoseEstimationSample` | A new pose estimation sample that represents the final mosaic. |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_mosaic.py`
KeypointsPadIfNeeded
Bases: AbstractKeypointTransform
Pad image and mask to ensure that the resulting image size is not less than `output_size` (rows, cols). Image and mask are padded from the right and bottom, thus joints remain unchanged.
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_pad_if_needed.py`
__init__(min_height, min_width, image_pad_value, mask_pad_value, padding_mode='bottom_right')
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `output_size` | | Desired image size (rows, cols) | *required* |
| `image_pad_value` | `int` | Padding value for the image | *required* |
| `mask_pad_value` | `float` | Padding value for the mask | *required* |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_pad_if_needed.py`
KeypointsRandomAffineTransform
Bases: AbstractKeypointTransform
Apply random affine transform to image, mask and joints.
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py`
__init__(max_rotation, min_scale, max_scale, max_translate, image_pad_value, mask_pad_value, interpolation_mode=cv2.INTER_LINEAR, prob=0.5)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `max_rotation` | `float` | Max rotation angle in degrees | *required* |
| `min_scale` | `float` | Lower bound for the scale change. For a ±20% size jitter this should be 0.8 | *required* |
| `max_scale` | `float` | Upper bound for the scale change. For a ±20% size jitter this should be 1.2 | *required* |
| `max_translate` | `float` | Max translation offset in percent of the image size | *required* |
| `image_pad_value` | `Union[int, float, List[int]]` | Value to pad the image during the affine transform. Can be a single scalar or a list. If a list is provided, it should have the same length as the number of channels in the image. | *required* |
| `mask_pad_value` | `float` | Value to pad the mask during the affine transform. | *required* |
| `interpolation_mode` | `Union[int, List[int]]` | A constant integer or list of integers specifying the interpolation mode to use. Possible values: `cv2.INTER_NEAREST = 0`, `cv2.INTER_LINEAR = 1`, `cv2.INTER_CUBIC = 2`, `cv2.INTER_AREA = 3`, `cv2.INTER_LANCZOS4 = 4`. To use random interpolation modes on each call, set `interpolation_mode = (0, 1, 2, 3, 4)` | `cv2.INTER_LINEAR` |
| `prob` | `float` | Probability to apply the transform. | `0.5` |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py`
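One plausible way to build the `[2,3]` matrix such a transform feeds to `cv2.warpAffine` is rotation and scale about the image center followed by translation. This parameterization is an assumption for illustration, not the library's exact code:

```python
import math
import numpy as np

def build_affine_matrix(rotation_deg, scale, translate_xy, center_xy):
    # Rotation + scale about center_xy, then translation by translate_xy.
    # Returns a [2,3] matrix mapping (x, y, 1) -> (x', y').
    cx, cy = center_xy
    a = math.radians(rotation_deg)
    cos, sin = scale * math.cos(a), scale * math.sin(a)
    tx, ty = translate_xy
    return np.array(
        [
            [cos, -sin, (1 - cos) * cx + sin * cy + tx],
            [sin, cos, -sin * cx + (1 - cos) * cy + ty],
        ],
        dtype=np.float32,
    )

mat = build_affine_matrix(rotation_deg=0.0, scale=2.0, translate_xy=(5.0, 0.0), center_xy=(10.0, 10.0))
# 2x scale about (10,10), shifted 5px right: the center point (10,10) maps to (15,10)
point = mat @ np.array([10.0, 10.0, 1.0])
```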
apply_to_areas(areas, mat)
classmethod
Apply affine transform to areas.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `areas` | `np.ndarray` | [N] Single-dimension array of areas | *required* |
| `mat` | `np.ndarray` | [2,3] Affine transformation matrix | *required* |

Returns:

| Type | Description |
|---|---|
| `np.ndarray` | [N] Single-dimension array of areas |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py`
apply_to_bboxes(bboxes_xywh, mat)
classmethod
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `bboxes` | | (N,4) array of bboxes in XYWH format | *required* |
| `mat` | `np.ndarray` | [2,3] Affine transformation matrix | *required* |

Returns:

| Type | Description |
|---|---|
| `np.ndarray` | (N,4) array of bboxes in XYWH format |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py`
apply_to_image(image, mat, interpolation, padding_value, padding_mode)
classmethod
Apply affine transform to image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `image` | `np.ndarray` | Input image | *required* |
| `mat` | `np.ndarray` | [2,3] Affine transformation matrix | *required* |
| `interpolation` | `int` | Interpolation mode. See `cv2.warpAffine` for details. | *required* |
| `padding_value` | `Union[int, float, Tuple]` | Value to pad the image during the affine transform. See `cv2.warpAffine` for details. | *required* |
| `padding_mode` | `int` | Padding mode. See `cv2.warpAffine` for details. | *required* |

Returns:

| Type | Description |
|---|---|
| `np.ndarray` | Transformed image of the same shape as the input image. |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py`
apply_to_keypoints(keypoints, mat, image_shape)
classmethod
Apply affine transform to keypoints.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `keypoints` | `np.ndarray` | [N,K,3] array of keypoints in (x, y, visibility) format | *required* |
| `mat` | `np.ndarray` | [2,3] Affine transformation matrix | *required* |
| `image_shape` | `Tuple[int, int]` | Image shape after applying the affine transform (height, width). Used to update the visibility status of keypoints. | *required* |

Returns:

| Type | Description |
|---|---|
| `np.ndarray` | [N,K,3] array of keypoints in (x, y, visibility) format |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py`
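The keypoint mapping plus visibility update can be sketched as follows (an assumption-level sketch of the documented behavior, not the library's exact code):

```python
import numpy as np

def affine_keypoints(keypoints, mat, image_shape):
    # keypoints: [N,K,3] (x, y, visibility); mat: [2,3] affine matrix.
    # Keypoints landing outside the output image get visibility set to 0.
    h, w = image_shape
    xy = keypoints[..., :2]
    ones = np.ones(xy.shape[:-1] + (1,))
    new_xy = np.concatenate([xy, ones], axis=-1) @ mat.T  # homogeneous transform
    out = keypoints.copy()
    out[..., :2] = new_xy
    inside = (new_xy[..., 0] >= 0) & (new_xy[..., 0] < w) & (new_xy[..., 1] >= 0) & (new_xy[..., 1] < h)
    out[..., 2] = np.where(inside, out[..., 2], 0)
    return out

shift = np.array([[1.0, 0.0, 90.0], [0.0, 1.0, 0.0]])  # shift 90px right
kps = np.array([[[5.0, 5.0, 2.0], [50.0, 5.0, 2.0]]])
out = affine_keypoints(kps, shift, image_shape=(100, 100))
# First keypoint stays visible at (95,5); second moves to x=140, outside -> visibility 0
```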
apply_to_sample(sample)
Apply transformation to a given pose estimation sample. Since this transformation applies an affine transform, some keypoints/bboxes may be moved outside the image. After applying the transform, the visibility status of joints is updated to reflect their new positions, and bounding boxes are clipped to the image borders. If the sample contains areas, they are scaled according to the applied affine transform.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `PoseEstimationSample` | A pose estimation sample | *required* |

Returns:

| Type | Description |
|---|---|
| `PoseEstimationSample` | A transformed pose estimation sample |
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py`
KeypointsRandomHorizontalFlip
Bases: AbstractKeypointTransform
Flip image, mask and joints horizontally with a given probability.
Source code in `src/super_gradients/training/transforms/keypoints/keypoints_random_horisontal_flip.py`
__init__(flip_index, prob=0.5)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `flip_index` | `List[int]` | Indexes of keypoints on the flipped image. When doing a left-right flip, the left hand becomes the right hand, so this array contains the order of keypoints on the flipped image. This is dataset specific and depends on how the keypoints are defined in the dataset. | *required* |
| `prob` | `float` | Probability of flipping | `0.5` |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_horisontal_flip.py
(lines 17-27)
apply_to_bboxes(bboxes, cols)
Flip boxes horizontally
Parameters:
Name | Type | Description | Default |
---|---|---|---|
bboxes |
np.ndarray
|
Input boxes of [N,4] shape in XYWH format |
required |
cols |
int
|
Image width |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
Flipped boxes of [N,4] shape in XYWH format |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_horisontal_flip.py
(lines 72-83)
apply_to_image(image)
Flip image horizontally
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image |
np.ndarray
|
Input image |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
Horizontally flipped image |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_horisontal_flip.py
(lines 50-57)
apply_to_keypoints(keypoints, cols)
Flip keypoints horizontally
Parameters:
Name | Type | Description | Default |
---|---|---|---|
keypoints |
np.ndarray
|
Input keypoints of [N,K,3] shape |
required |
cols |
int
|
Image width |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
Flipped keypoints of [N,K,3] shape |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_horisontal_flip.py
(lines 59-70)
apply_to_sample(sample)
Apply transformation to given pose estimation sample
Parameters:
Name | Type | Description | Default |
---|---|---|---|
sample |
PoseEstimationSample
|
Input pose estimation sample. |
required |
Returns:
Type | Description |
---|---|
PoseEstimationSample
|
A new pose estimation sample. |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_horisontal_flip.py
(lines 29-48)
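The three operations of this transform reduce to simple NumPy array manipulations. A minimal sketch; the `flip_index` here is a made-up 3-joint skeleton (nose, left, right) for illustration only:

```python
import numpy as np

# Horizontal flip of image, keypoints and XYWH boxes (illustrative sketch).
flip_index = [0, 2, 1]                  # made-up skeleton: nose, left, right

image = np.arange(12).reshape(3, 4)     # H=3, W=4
cols = image.shape[1]

flipped_image = image[:, ::-1]          # mirror columns

keypoints = np.array([[[1.0, 0.0, 2.0], [0.0, 1.0, 2.0], [3.0, 1.0, 2.0]]])
flipped_kps = keypoints.copy()
flipped_kps[..., 0] = cols - 1 - flipped_kps[..., 0]  # mirror x coordinate
flipped_kps = flipped_kps[:, flip_index, :]           # swap left/right joints

bboxes = np.array([[0.0, 0.0, 2.0, 2.0]])             # XYWH
flipped_boxes = bboxes.copy()
flipped_boxes[:, 0] = cols - (bboxes[:, 0] + bboxes[:, 2])  # new left edge
```

Note the asymmetry: pixel coordinates mirror with `cols - 1 - x`, while a box's left edge mirrors with `cols - (x + w)` because the box spans an extent rather than a single pixel.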
KeypointsRandomRotate90
Bases: AbstractKeypointTransform
Apply 90 degree rotations to the sample.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_rotate90.py
(lines 11-105)
__init__(prob=0.5)
Initialize transform
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prob |
float
|
Probability of applying the transform |
0.5
|
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_rotate90.py
(lines 17-27)
apply_to_bboxes(bboxes_xywh, factor, rows, cols)
classmethod
Parameters:
Name | Type | Description | Default |
---|---|---|---|
bboxes_xywh |
(N, 4) array of bboxes in XYWH format |
required | |
factor |
Number of 90 degree rotations to apply. Order of rotation matches np.rot90 |
required | |
rows |
int
|
Number of rows (image height) of the original (input) image |
required |
cols |
int
|
Number of cols (image width) of the original (input) image |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
Transformed bboxes in XYWH format |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_rotate90.py
(lines 60-74)
apply_to_image(image, factor)
classmethod
Rotate image by 90 degrees
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image |
np.ndarray
|
Input image |
required |
factor |
int
|
Number of 90 degree rotations to apply. Order of rotation matches np.rot90 |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
Rotated image |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_rotate90.py
(lines 49-58)
apply_to_keypoints(keypoints, factor, rows, cols)
classmethod
Parameters:
Name | Type | Description | Default |
---|---|---|---|
keypoints |
np.ndarray
|
Input keypoints array of [Num Instances, Num Joints, 3] shape. Keypoints have format (x, y, visibility) |
required |
factor |
Number of 90 degree rotations to apply. Order of rotation matches np.rot90 |
required | |
rows |
int
|
Number of rows (image height) of the original (input) image |
required |
cols |
int
|
Number of cols (image width) of the original (input) image |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
Transformed keypoints array of [Num Instances, Num Joints, 3] shape. |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_rotate90.py
(lines 76-99)
apply_to_sample(sample)
Returns:
Type | Description |
---|---|
PoseEstimationSample
|
Result of applying the transform |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_rotate90.py
(lines 29-47)
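For reference, the keypoint mapping for a single np.rot90 rotation (counter-clockwise, factor=1) can be sketched and self-checked as follows. This illustrates the convention only and is not the library code; repeated application covers factors 2 and 3:

```python
import numpy as np

# For one counter-clockwise 90-degree rotation (np.rot90 with k=1) of an
# image with `cols` columns, a point (x, y) maps to (y, cols - 1 - x).
def rot90_keypoint(x, y, cols):
    return y, cols - 1 - x

rows, cols = 3, 5
image = np.zeros((rows, cols), dtype=np.uint8)
x, y = 4, 1
image[y, x] = 255                       # mark a single pixel at (x=4, y=1)

rotated = np.rot90(image, k=1)          # shape becomes (cols, rows) = (5, 3)
new_x, new_y = rot90_keypoint(x, y, cols)
assert rotated[new_y, new_x] == 255     # keypoint tracks the marked pixel
```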
KeypointsRandomVerticalFlip
Bases: AbstractKeypointTransform
Flip image, mask and joints vertically with a given probability.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_vertical_flip.py
(lines 10-83)
__init__(prob=0.5)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prob |
float
|
Probability of flipping |
0.5
|
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_vertical_flip.py
(lines 16-22)
apply_to_bboxes(bboxes, rows)
Flip boxes vertically
Parameters:
Name | Type | Description | Default |
---|---|---|---|
bboxes |
np.ndarray
|
Input boxes of [N,4] shape in XYWH format |
required |
rows |
int
|
Image height |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
Flipped boxes of [N,4] shape in XYWH format |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_vertical_flip.py
(lines 66-77)
apply_to_image(image)
Flip image vertically
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image |
np.ndarray
|
Input image |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
Vertically flipped image |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_vertical_flip.py
(lines 45-52)
apply_to_keypoints(keypoints, rows)
Flip keypoints vertically
Parameters:
Name | Type | Description | Default |
---|---|---|---|
keypoints |
np.ndarray
|
Input keypoints of [N,K,3] shape |
required |
rows |
int
|
Image height |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
Flipped keypoints of [N,K,3] shape |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_vertical_flip.py
(lines 54-64)
apply_to_sample(sample)
Apply transformation to given pose estimation sample
Parameters:
Name | Type | Description | Default |
---|---|---|---|
sample |
PoseEstimationSample
|
Input pose estimation sample. |
required |
Returns:
Type | Description |
---|---|
PoseEstimationSample
|
A new pose estimation sample. |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_vertical_flip.py
(lines 24-43)
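The coordinate math behind the vertical flip mirrors the horizontal case, with image height (`rows`) taking the place of width. A minimal sketch with illustrative values:

```python
import numpy as np

# Vertical flip of keypoints and XYWH boxes (illustrative sketch).
rows = 10

keypoints = np.array([[[2.0, 3.0, 2.0]]])          # [N=1, K=1, 3]
flipped_kps = keypoints.copy()
flipped_kps[..., 1] = rows - 1 - flipped_kps[..., 1]   # mirror y coordinate

bboxes = np.array([[1.0, 2.0, 4.0, 5.0]])          # XYWH
flipped_boxes = bboxes.copy()
flipped_boxes[:, 1] = rows - (bboxes[:, 1] + bboxes[:, 3])  # new top edge
```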
KeypointsRemoveSmallObjects
Bases: AbstractKeypointTransform
Remove pose instances from data sample that are too small or have too few visible keypoints.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_remove_small_objects.py
(lines 11-58)
__init__(min_visible_keypoints=0, min_instance_area=0, min_bbox_area=0)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
min_visible_keypoints |
int
|
Minimum number of visible keypoints to keep the sample. Default value is 0 which means that all samples will be kept. |
0
|
min_instance_area |
int
|
Minimum instance area required to keep the sample. Default value is 0 which means that all samples will be kept. |
0
|
min_bbox_area |
int
|
Minimum bounding box area required to keep the sample. Default value is 0 which means that all samples will be kept. |
0
|
Source code in src/super_gradients/training/transforms/keypoints/keypoints_remove_small_objects.py
(lines 17-30)
apply_to_sample(sample)
Apply transformation to given pose estimation sample.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
sample |
PoseEstimationSample
|
Input sample to transform. |
required |
Returns:
Type | Description |
---|---|
PoseEstimationSample
|
Filtered sample. |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_remove_small_objects.py
(lines 37-50)
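The filtering logic can be sketched with boolean masks over the per-instance arrays. Thresholds and arrays below are illustrative, not the library's exact code:

```python
import numpy as np

# Keep only instances with enough visible keypoints and a large enough box.
min_visible_keypoints, min_bbox_area = 2, 10

joints = np.array([
    [[1, 1, 2], [2, 2, 2], [3, 3, 0]],   # 2 visible joints
    [[5, 5, 0], [6, 6, 0], [7, 7, 1]],   # 1 visible joint
])
bboxes_xywh = np.array([[0, 0, 5, 5], [0, 0, 2, 2]])  # areas 25 and 4

visible = (joints[:, :, 2] > 0).sum(axis=1)           # visible joints per instance
areas = bboxes_xywh[:, 2] * bboxes_xywh[:, 3]         # w * h
keep = (visible >= min_visible_keypoints) & (areas >= min_bbox_area)

joints, bboxes_xywh = joints[keep], bboxes_xywh[keep]
```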
KeypointsRescale
Bases: AbstractKeypointTransform
Resize image, mask and joints to target size without preserving aspect ratio.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
(lines 12-110)
__init__(height, width, interpolation=cv2.INTER_LINEAR, prob=1.0)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
height |
int
|
Target image height |
required |
width |
int
|
Target image width |
required |
interpolation |
int
|
Used interpolation method for image. See cv2.resize for details. |
cv2.INTER_LINEAR
|
prob |
float
|
Probability of applying this transform. Default value is 1, meaning that transform is always applied. |
1.0
|
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
(lines 18-29)
apply_to_areas(areas, sx, sy)
classmethod
Resize areas to target size.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
areas |
np.ndarray
|
[N] Array of instance areas |
required |
sx |
float
|
Scale factor along the horizontal axis |
required |
sy |
float
|
Scale factor along the vertical axis |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
[N] Array of resized instance areas |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
(lines 95-104)
apply_to_bboxes(bboxes, sx, sy)
classmethod
Resize bounding boxes to target size.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
bboxes |
np.ndarray
|
Input bounding boxes in XYWH format |
required |
sx |
float
|
Scale factor along the horizontal axis |
required |
sy |
float
|
Scale factor along the vertical axis |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
Resized bounding boxes in XYWH format |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
(lines 80-93)
apply_to_image(img, dsize, interpolation)
classmethod
Resize image to target size.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
img |
Input image |
required | |
dsize |
Tuple[int, int]
|
Target size (width, height) |
required |
interpolation |
int
|
OpenCV interpolation method |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
Resized image |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
(lines 54-64)
apply_to_keypoints(keypoints, sx, sy)
classmethod
Resize keypoints to target size.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
keypoints |
np.ndarray
|
[Num Instances, Num Joints, 3] Input keypoints |
required |
sx |
float
|
Scale factor along the horizontal axis |
required |
sy |
float
|
Scale factor along the vertical axis |
required |
Returns:
Type | Description |
---|---|
np.ndarray
|
[Num Instances, Num Joints, 3] Resized keypoints |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
(lines 66-78)
apply_to_sample(sample)
Apply transform to sample.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
sample |
PoseEstimationSample
|
Input sample |
required |
Returns:
Type | Description |
---|---|
PoseEstimationSample
|
Output sample |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
(lines 31-52)
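The rescale math above can be sketched as follows: per-axis scale factors are target/source, coordinates scale by (sx, sy), and areas by sx * sy. Values are illustrative:

```python
import numpy as np

# Rescale keypoints, boxes and areas from (100, 200) to (50, 50).
src_h, src_w = 100, 200
dst_h, dst_w = 50, 50
sx, sy = dst_w / src_w, dst_h / src_h     # 0.25 and 0.5

keypoints = np.array([[[40.0, 20.0, 2.0]]])   # [N, K, 3], visibility untouched
keypoints[..., 0] *= sx
keypoints[..., 1] *= sy

bboxes = np.array([[40.0, 20.0, 80.0, 40.0]])  # XYWH: x and w scale by sx
bboxes[:, [0, 2]] *= sx
bboxes[:, [1, 3]] *= sy

areas = np.array([3200.0])
areas = areas * sx * sy                   # areas scale by both factors
```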
AbstractKeypointTransform
Bases: abc.ABC
Base class for all transforms for keypoints augmentation. All transforms subclassing it should implement call method which takes image, mask and keypoints as input and returns transformed image, mask and keypoints.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
additional_samples_count |
int
|
Number of additional samples to generate for each image. This property is used for mixup & mosaic transforms that need extra samples. |
0
|
Source code in src/super_gradients/training/transforms/keypoints/abstract_keypoints_transform.py
(lines 10-60)
__call__(image, mask, joints, areas, bboxes)
Apply transformation to pose estimation sample passed as a tuple This method acts as a wrapper for apply_to_sample method to support old-style API.
Source code in src/super_gradients/training/transforms/keypoints/abstract_keypoints_transform.py
(lines 26-43)
__init__(additional_samples_count=0)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
additional_samples_count |
int
|
(int) Number of extra samples that must be fetched from the dataset. Default value is 0. |
0
|
Source code in src/super_gradients/training/transforms/keypoints/abstract_keypoints_transform.py
(lines 20-24)
apply_to_sample(sample)
abstractmethod
Apply transformation to given pose estimation sample. Important note - function call may return new object, may modify it in-place. This is implementation dependent and if you need to keep original sample intact it is recommended to make a copy of it BEFORE passing it to transform.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
sample |
PoseEstimationSample
|
Input sample to transform. |
required |
Returns:
Type | Description |
---|---|
PoseEstimationSample
|
Modified sample (It can be the same instance as input or a new object). |
Source code in src/super_gradients/training/transforms/keypoints/abstract_keypoints_transform.py
(lines 45-56)
KeypointsBrightnessContrast
Bases: AbstractKeypointTransform
Apply brightness and contrast change to the input image using the following formula: image = (image - mean_brightness) * contrast_gain + mean_brightness + brightness_gain. The transformation preserves the input image dtype. A saturation cast is performed at the end of the transformation.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_brightness_contrast.py
(lines 11-58)
__init__(prob, brightness_range, contrast_range)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prob |
float
|
Probability to apply the transform. |
required |
brightness_range |
Tuple[float, float]
|
Tuple of two elements, min and max brightness gain. Represents a relative range of brightness gain with respect to average image brightness. A brightness gain of 1.0 indicates no change in brightness. Therefore, optimal value for this parameter is somewhere inside (0, 2) range. |
required |
contrast_range |
Tuple[float, float]
|
Tuple of two elements, min and max contrast gain. Effective contrast_gain would be uniformly sampled from this range. Based on the definition of contrast gain, its optimal value is somewhere inside the (0, 2) range. |
required |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_brightness_contrast.py
(lines 19-38)
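A minimal sketch of the docstring formula, with the saturation cast back to the input dtype. The gains are fixed here for illustration instead of being sampled from the configured ranges:

```python
import numpy as np

def brightness_contrast(image, contrast_gain, brightness_gain):
    """Apply (image - mean) * contrast_gain + mean + brightness_gain,
    then saturate-cast back to the input dtype. Illustrative sketch."""
    mean_brightness = image.mean()
    out = (image.astype(np.float32) - mean_brightness) * contrast_gain
    out = out + mean_brightness + brightness_gain
    return np.clip(out, 0, 255).astype(image.dtype)   # saturation cast

image = np.array([[0, 64, 128, 64]], dtype=np.uint8)  # mean is exactly 64
result = brightness_contrast(image, contrast_gain=2.0, brightness_gain=0.0)
```

With contrast gain 2.0, pixels below the mean are pushed toward 0 (and clipped there) while pixels above it move further up: the first pixel saturates at 0 and the brightest becomes 192.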
KeypointsCompose
Bases: AbstractKeypointTransform
Composes several transforms together
Source code in src/super_gradients/training/transforms/keypoints/keypoints_compose.py
(lines 9-123)
__call__(image, mask, joints, areas, bboxes)
Apply transformation to pose estimation sample passed as a tuple This method acts as a wrapper for apply_to_sample method to support old-style API.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_compose.py
(lines 32-46)
__init__(transforms, load_sample_fn=None)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
transforms |
List[AbstractKeypointTransform]
|
List of keypoint-based transformations |
required |
load_sample_fn |
A method to load additional samples if needed (for mixup & mosaic augmentations). Default value is None, which would raise an error if additional samples are needed. |
None
|
Source code in src/super_gradients/training/transforms/keypoints/keypoints_compose.py
(lines 14-30)
apply_to_sample(sample)
Applies the series of transformations to the input sample. The function may modify the input sample inplace, so input sample should not be used after the call.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
sample |
PoseEstimationSample
|
Input sample |
required |
Returns:
Type | Description |
---|---|
PoseEstimationSample
|
Transformed sample. |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_compose.py
(lines 48-58)
KeypointsHSV
Bases: AbstractKeypointTransform
Apply color change in HSV color space to the input image.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prob |
float
|
Probability to apply the transform. |
required |
hgain |
float
|
Hue gain. |
required |
sgain |
float
|
Saturation gain. |
required |
vgain |
float
|
Value gain. |
required |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_hsv.py
(lines 10-46)
__init__(prob, hgain, sgain, vgain)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prob |
float
|
Probability to apply the transform. |
required |
hgain |
float
|
Hue gain. |
required |
sgain |
float
|
Saturation gain. |
required |
vgain |
float
|
Value gain. |
required |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_hsv.py
(lines 21-33)
KeypointsImageNormalize
Bases: AbstractKeypointTransform
Normalize image with mean and std using the formula (image - mean) / std.
Output image will always have dtype of np.float32.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_image_normalize.py
(lines 13-46)
__init__(mean, std)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
mean |
Union[float, List[float], ListConfig]
|
(float, List[float]) A constant bias to be subtracted from the image. If it is a list, it should have the same length as the number of channels in the image. |
required |
std |
Union[float, List[float], ListConfig]
|
(float, List[float]) A scaling factor to be applied to the image after subtracting mean. If it is a list, it should have the same length as the number of channels in the image. |
required |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_image_normalize.py
(lines 20-30)
apply_to_sample(sample)
Apply transformation to given pose estimation sample
Parameters:
Name | Type | Description | Default |
---|---|---|---|
sample |
PoseEstimationSample
|
A pose estimation sample |
required |
Returns:
Type | Description |
---|---|
PoseEstimationSample
|
Same pose estimation sample with normalized image |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_image_normalize.py
(lines 32-40)
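A minimal sketch of the normalization with per-channel mean/std. The ImageNet-style constants below are an example, not defaults of this class:

```python
import numpy as np

# Per-channel (image - mean) / std; output is float32 regardless of input dtype.
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32) * 255
std = np.array([0.229, 0.224, 0.225], dtype=np.float32) * 255

image = np.full((2, 2, 3), 128, dtype=np.uint8)       # H, W, C
normalized = (image.astype(np.float32) - mean) / std  # broadcasts over channels
```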
KeypointsImageStandardize
Bases: AbstractKeypointTransform
Standardize image pixel values with the img/max_value formula. Output image will always have dtype of np.float32.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
max_value |
float
|
Current maximum value of the image pixels. (usually 255) |
255.0
|
Source code in src/super_gradients/training/transforms/keypoints/keypoints_image_standardize.py
(lines 11-42)
__init__(max_value=255.0)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
max_value |
float
|
A constant value to divide the image by. |
255.0
|
Source code in src/super_gradients/training/transforms/keypoints/keypoints_image_standardize.py
(lines 20-26)
apply_to_sample(sample)
Apply transformation to given pose estimation sample
Parameters:
Name | Type | Description | Default |
---|---|---|---|
sample |
PoseEstimationSample
|
A pose estimation sample |
required |
Returns:
Type | Description |
---|---|
PoseEstimationSample
|
Same pose estimation sample with standardized image |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_image_standardize.py
(lines 28-36)
KeypointsImageToTensor
Convert image from numpy array to tensor and permute axes to [C,H,W].
This transform works only for old-style transform API and will raise an exception when used in strongly-typed data samples transform API.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_image_to_tensor.py
(lines 11-44)
__call__(image, mask, joints, areas, bboxes)
Convert image from numpy array to tensor and permute axes to [C,H,W].
Source code in src/super_gradients/training/transforms/keypoints/keypoints_image_to_tensor.py
(lines 23-30)
KeypointsLongestMaxSize
Bases: AbstractKeypointTransform
Resize data sample to guarantee that input image dimensions do not exceed maximum width & height
Source code in src/super_gradients/training/transforms/keypoints/keypoints_longest_max_size.py
(lines 13-82)
__init__(max_height, max_width, interpolation=cv2.INTER_LINEAR, prob=1.0)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
max_height |
int
|
(int) - Maximum image height |
required |
max_width |
int
|
(int) - Maximum image width |
required |
interpolation |
int
|
Used interpolation method for image |
cv2.INTER_LINEAR
|
prob |
float
|
Probability of applying this transform |
1.0
|
Source code in src/super_gradients/training/transforms/keypoints/keypoints_longest_max_size.py
(lines 19-31)
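The resize math can be sketched as a single scale factor that preserves aspect ratio. Whether images already smaller than the limits are upscaled is implementation-dependent; this sketch never upscales:

```python
# Longest-max-size logic: one scale factor keeps the aspect ratio and
# guarantees neither side exceeds its limit (sketch of the math only).
def compute_scale(height, width, max_height, max_width):
    return min(max_height / height, max_width / width, 1.0)

scale = compute_scale(height=480, width=640, max_height=320, max_width=320)
new_h, new_w = round(480 * scale), round(640 * scale)
```

Here the width is the limiting side, so scale = 320/640 = 0.5 and the output becomes 240x320.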
KeypointsMixup
Bases: AbstractKeypointTransform
Apply mixup augmentation and combine two samples into one. Images are averaged with equal weights. Targets are concatenated without any changes. This transform requires that both samples have the same image size. The easiest way to achieve this is to use resize + padding before this transform:
# This will apply KeypointsLongestMaxSize and KeypointsPadIfNeeded to two samples individually
# and then apply KeypointsMixup to get a single sample.
train_dataset_params:
transforms:
- KeypointsLongestMaxSize:
max_height: ${dataset_params.image_size}
max_width: ${dataset_params.image_size}
- KeypointsPadIfNeeded:
min_height: ${dataset_params.image_size}
min_width: ${dataset_params.image_size}
image_pad_value: [127, 127, 127]
mask_pad_value: 1
padding_mode: center
- KeypointsMixup:
prob: 0.5
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prob |
float
|
Probability to apply the transform. |
required |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_mixup.py
(lines 12-104)
__init__(prob)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prob |
float
|
Probability to apply the transform. |
required |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_mixup.py
(lines 42-48)
apply_to_sample(sample)
Apply the transform to a single sample.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
sample |
PoseEstimationSample
|
An input sample. It should have one additional sample in it. |
required |
Returns:
Type | Description |
---|---|
PoseEstimationSample
|
A new pose estimation sample that represents the mixup sample. |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_mixup.py
(lines 50-66)
KeypointsMosaic
Bases: AbstractKeypointTransform
Assemble 4 samples together to make a 2x2 grid. This transform stacks input samples together to make a square with padding if necessary. This transform does not require input samples to have the same size. If input samples have different sizes (H1,W1), (H2,W2), (H3,W3), (H4,W4), then the resulting mosaic will have a height of max(H1,H2) + max(H3,H4) and a width of max(W1+W2, W3+W4), assuming the first sample is located in the top left corner, the second sample in the top right corner, the third sample in the bottom left corner and the fourth sample in the bottom right corner of the mosaic.
The location of the mosaic transform in the transforms list matters. It affects what transforms will be applied to all 4 samples.
In the example below, KeypointsMosaic goes after KeypointsRandomAffineTransform and KeypointsBrightnessContrast. This means that all 4 samples will be transformed with KeypointsRandomAffineTransform and KeypointsBrightnessContrast.
# This will apply KeypointsRandomAffineTransform and KeypointsBrightnessContrast to four samples individually
# and then assemble them together in mosaic
train_dataset_params:
transforms:
- KeypointsRandomAffineTransform:
min_scale: 0.75
max_scale: 1.5
- KeypointsBrightnessContrast:
brightness_range: [ 0.8, 1.2 ]
contrast_range: [ 0.8, 1.2 ]
prob: 0.5
- KeypointsMosaic:
prob: 0.5
Conversely, if one puts KeypointsMosaic before KeypointsRandomAffineTransform and KeypointsBrightnessContrast, then the 4 original samples will be assembled into a mosaic first, and the mosaic then transformed with KeypointsRandomAffineTransform and KeypointsBrightnessContrast:
# This will first assemble 4 samples in mosaic and then apply KeypointsRandomAffineTransform and KeypointsBrightnessContrast to the mosaic.
train_dataset_params:
transforms:
- KeypointsMosaic:
prob: 0.5
- KeypointsRandomAffineTransform:
min_scale: 0.75
max_scale: 1.5
- KeypointsBrightnessContrast:
brightness_range: [ 0.8, 1.2 ]
contrast_range: [ 0.8, 1.2 ]
prob: 0.5
Source code in src/super_gradients/training/transforms/keypoints/keypoints_mosaic.py
(lines 12-226)
__init__(prob, pad_value=(127, 127, 127))
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prob |
float
|
Probability to apply the transform. |
required |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_mosaic.py
(lines 68-76)
apply_to_sample(sample)
Apply transformation to given pose estimation sample
Parameters:
Name | Type | Description | Default |
---|---|---|---|
sample |
PoseEstimationSample
|
A pose estimation sample. The sample must have 3 additional samples in it. |
required |
Returns:
Type | Description |
---|---|
PoseEstimationSample
|
A new pose estimation sample that represents the final mosaic. |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_mosaic.py
(lines 78-88)
KeypointsPadIfNeeded
Bases: AbstractKeypointTransform
Pad image and mask to ensure that resulting image size is not less than output_size
(rows, cols).
Image and mask are padded from the right and bottom, thus joints remain unchanged.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_pad_if_needed.py
(lines 12-99)
__init__(min_height, min_width, image_pad_value, mask_pad_value, padding_mode='bottom_right')
Parameters:
Name | Type | Description | Default |
---|---|---|---|
min_height |
int
|
Minimum height of the output image |
required |
min_width |
int
|
Minimum width of the output image |
required |
image_pad_value |
int
|
Padding value of image |
required |
mask_pad_value |
float
|
Padding value for mask |
required |
padding_mode |
str
|
Padding mode: 'bottom_right' or 'center' |
'bottom_right'
|
Source code in src/super_gradients/training/transforms/keypoints/keypoints_pad_if_needed.py
(lines 19-33)
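Bottom-right padding can be sketched with np.pad: because nothing is added on the top or left, keypoint coordinates need no adjustment. The helper name is made up for illustration:

```python
import numpy as np

def pad_bottom_right(image, min_height, min_width, pad_value):
    """Pad a 2D image on the bottom/right to reach the minimum size.
    Illustrative sketch; keypoints stay valid since the origin is unmoved."""
    pad_h = max(0, min_height - image.shape[0])
    pad_w = max(0, min_width - image.shape[1])
    return np.pad(image, ((0, pad_h), (0, pad_w)), constant_values=pad_value)

image = np.ones((3, 4), dtype=np.uint8)
padded = pad_bottom_right(image, min_height=5, min_width=5, pad_value=127)
```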
KeypointsRandomAffineTransform
Bases: AbstractKeypointTransform
Apply random affine transform to image, mask and joints.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py
(lines 14-230)
__init__(max_rotation, min_scale, max_scale, max_translate, image_pad_value, mask_pad_value, interpolation_mode=cv2.INTER_LINEAR, prob=0.5)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| max_rotation | float | Max rotation angle in degrees | required |
| min_scale | float | Lower bound for the scale change. For +- 20% size jitter this should be 0.8 | required |
| max_scale | float | Upper bound for the scale change. For +- 20% size jitter this should be 1.2 | required |
| max_translate | float | Max translation offset as a percentage of image size | required |
| image_pad_value | Union[int, float, List[int]] | Value to pad the image during affine transform. Can be a single scalar or a list. If a list is provided, it should have the same length as the number of channels in the image. | required |
| mask_pad_value | float | Value to pad the mask during affine transform. | required |
| interpolation_mode | Union[int, List[int]] | A constant integer or list of integers specifying the interpolation mode. Possible values: cv2.INTER_NEAREST = 0, cv2.INTER_LINEAR = 1, cv2.INTER_CUBIC = 2, cv2.INTER_AREA = 3, cv2.INTER_LANCZOS4 = 4. To use a random interpolation mode on each call, set interpolation_mode = (0, 1, 2, 3, 4). | cv2.INTER_LINEAR |
| prob | float | Probability to apply the transform. | 0.5 |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py
20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 |
|
apply_to_areas(areas, mat)
classmethod
Apply affine transform to areas.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| areas | np.ndarray | [N] Single-dimension array of areas | required |
| mat | np.ndarray | [2,3] Affine transformation matrix | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | [N] Single-dimension array of areas |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py
134 135 136 137 138 139 140 141 142 143 144 |
|
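An affine map scales every area by the absolute determinant of its 2x2 linear part, which is the natural way to implement this method. A sketch of that rule (illustrative, assuming this is how areas are rescaled; the function name is hypothetical):

```python
import numpy as np

def scale_areas_by_affine(areas: np.ndarray, mat: np.ndarray) -> np.ndarray:
    """Scale [N] areas by |det| of the 2x2 linear part of a [2,3] affine matrix."""
    det = abs(np.linalg.det(mat[:2, :2]))
    return areas * det
```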
apply_to_bboxes(bboxes_xywh, mat)
classmethod
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | (N,4) array of bboxes in XYWH format | required |
| mat | np.ndarray | [2,3] Affine transformation matrix | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | (N,4) array of bboxes in XYWH format |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py
146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 |
|
apply_to_image(image, mat, interpolation, padding_value, padding_mode)
classmethod
Apply affine transform to image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| image | np.ndarray | Input image | required |
| mat | np.ndarray | [2,3] Affine transformation matrix | required |
| interpolation | int | Interpolation mode. See cv2.warpAffine for details. | required |
| padding_value | Union[int, float, Tuple] | Value to pad the image during affine transform. See cv2.warpAffine for details. | required |
| padding_mode | int | Padding mode. See cv2.warpAffine for details. | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Transformed image of the same shape as input image. |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py
208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 |
|
apply_to_keypoints(keypoints, mat, image_shape)
classmethod
Apply affine transform to keypoints.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| keypoints | np.ndarray | [N,K,3] array of keypoints in (x,y,visibility) format | required |
| mat | np.ndarray | [2,3] Affine transformation matrix | required |
| image_shape | Tuple[int, int] | Image shape after applying affine transform (height, width). Used to update visibility status of keypoints. | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | [N,K,3] array of keypoints in (x,y,visibility) format |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py
176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 |
|
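Applying a [2,3] affine matrix to (x, y) coordinates and then demoting joints that fall outside the output image can be sketched as follows (an illustrative NumPy version, not the library code; `affine_keypoints` is a hypothetical name):

```python
import numpy as np

def affine_keypoints(keypoints: np.ndarray, mat: np.ndarray, image_shape) -> np.ndarray:
    """keypoints: [N, K, 3] in (x, y, visibility) format; mat: [2, 3] affine matrix."""
    out = keypoints.astype(float).copy()
    # Affine map: xy' = A @ xy + t, with A = mat[:2, :2] and t = mat[:2, 2].
    out[..., :2] = out[..., :2] @ mat[:2, :2].T + mat[:2, 2]
    h, w = image_shape
    # Joints transformed outside the new image are marked not visible.
    inside = (out[..., 0] >= 0) & (out[..., 0] < w) & (out[..., 1] >= 0) & (out[..., 1] < h)
    out[..., 2] = out[..., 2] * inside
    return out
```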
apply_to_sample(sample)
Apply transformation to given pose estimation sample. Since this transformation applies an affine transform, some keypoints/bboxes may be moved outside the image. After applying the transform, the visibility status of joints is updated to reflect their new positions, and bounding boxes are clipped to image borders. If the sample contains areas, they are scaled according to the applied affine transform.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sample | PoseEstimationSample | A pose estimation sample | required |

Returns:

| Type | Description |
|---|---|
| PoseEstimationSample | A transformed pose estimation sample |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_affine.py
89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 |
|
KeypointsRandomHorizontalFlip
Bases: AbstractKeypointTransform
Flip image, mask and joints horizontally with a given probability.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_horisontal_flip.py
11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 |
|
__init__(flip_index, prob=0.5)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| flip_index | List[int] | Indexes of keypoints on the flipped image. When doing a left-right flip, the left hand becomes the right hand. So this array contains the order of keypoints on the flipped image. This is dataset specific and depends on how keypoints are defined in the dataset. | required |
| prob | float | Probability of flipping | 0.5 |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_horisontal_flip.py
17 18 19 20 21 22 23 24 25 26 27 |
|
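The `flip_index` reordering can be sketched with a hypothetical three-joint skeleton. This is an illustrative NumPy version, not the library implementation, and the exact mirroring convention (`cols - x` vs `cols - 1 - x`) is an implementation detail; the pixel-index convention is used here:

```python
import numpy as np

# Hypothetical 3-joint skeleton: [nose, left_wrist, right_wrist].
# After a left-right flip, left and right labels swap; the nose keeps its index.
flip_index = [0, 2, 1]

def flip_keypoints(keypoints: np.ndarray, flip_index, cols: int) -> np.ndarray:
    """keypoints: [N, K, 3]; reorder joints per flip_index, then mirror x."""
    flipped = keypoints[:, flip_index].copy()
    flipped[..., 0] = cols - 1 - flipped[..., 0]
    return flipped
```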
apply_to_bboxes(bboxes, cols)
Flip boxes horizontally
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | np.ndarray | Input boxes of [N,4] shape in XYWH format | required |
| cols | int | Image width | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Flipped boxes of [N,4] shape in XYWH format |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_horisontal_flip.py
72 73 74 75 76 77 78 79 80 81 82 83 |
|
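For XYWH boxes, a horizontal flip only moves the left edge: the new x is the image width minus the old right edge, while width and height are unchanged. A sketch of that rule (illustrative; the function name is hypothetical):

```python
import numpy as np

def flip_bboxes_xywh(bboxes: np.ndarray, cols: int) -> np.ndarray:
    """Mirror [N,4] XYWH boxes horizontally: new x = cols - (x + w)."""
    flipped = bboxes.copy()
    flipped[:, 0] = cols - (bboxes[:, 0] + bboxes[:, 2])
    return flipped
```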
apply_to_image(image)
Flip image horizontally
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| image | np.ndarray | Input image | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Horizontally flipped image |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_horisontal_flip.py
50 51 52 53 54 55 56 57 |
|
apply_to_keypoints(keypoints, cols)
Flip keypoints horizontally
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| keypoints | np.ndarray | Input keypoints of [N,K,3] shape | required |
| cols | int | Image width | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Flipped keypoints of [N,K,3] shape |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_horisontal_flip.py
59 60 61 62 63 64 65 66 67 68 69 70 |
|
apply_to_sample(sample)
Apply transformation to given pose estimation sample
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sample | PoseEstimationSample | Input pose estimation sample. | required |

Returns:

| Type | Description |
|---|---|
| PoseEstimationSample | A new pose estimation sample. |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_horisontal_flip.py
29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 |
|
KeypointsRandomRotate90
Bases: AbstractKeypointTransform
Apply 90 degree rotations to the sample.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_rotate90.py
11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 |
|
__init__(prob=0.5)
Initialize transform
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prob | float | Probability of applying the transform | 0.5 |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_rotate90.py
17 18 19 20 21 22 23 24 25 26 27 |
|
apply_to_bboxes(bboxes_xywh, factor, rows, cols)
classmethod
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | (N, 4) array of bboxes in XYWH format | required |
| factor | | Number of 90 degree rotations to apply. Order of rotations matches np.rot90 | required |
| rows | int | Number of rows (image height) of the original (input) image | required |
| cols | int | Number of cols (image width) of the original (input) image | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Transformed bboxes in XYWH format |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_rotate90.py
60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 |
|
apply_to_image(image, factor)
classmethod
Rotate image by 90 degrees
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| image | np.ndarray | Input image | required |
| factor | int | Number of 90 degree rotations to apply. Order of rotations matches np.rot90 | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Rotated image |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_rotate90.py
49 50 51 52 53 54 55 56 57 58 |
|
apply_to_keypoints(keypoints, factor, rows, cols)
classmethod
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| keypoints | np.ndarray | Input keypoints array of [Num Instances, Num Joints, 3] shape. Keypoints have format (x, y, visibility) | required |
| factor | | Number of 90 degree rotations to apply. Order of rotations matches np.rot90 | required |
| rows | int | Number of rows (image height) of the original (input) image | required |
| cols | int | Number of cols (image width) of the original (input) image | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Transformed keypoints array of [Num Instances, Num Joints, 3] shape. |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_rotate90.py
76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 |
|
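One CCW `np.rot90` step sends a pixel at (x, y) to (y, cols - 1 - x), and the image dimensions swap after each step. A sketch of repeating that mapping `factor` times (illustrative NumPy, not the library code; off-by-one conventions may differ in the actual implementation):

```python
import numpy as np

def rot90_keypoints(keypoints: np.ndarray, factor: int, rows: int, cols: int) -> np.ndarray:
    """Apply `factor` CCW 90-degree rotations to [N, K, 3] (x, y, visibility) keypoints."""
    out = keypoints.astype(float).copy()
    for _ in range(factor % 4):
        x = out[..., 0].copy()
        out[..., 0] = out[..., 1]          # new x is the old y
        out[..., 1] = cols - 1 - x         # new y mirrors the old x (pixel-index convention)
        rows, cols = cols, rows            # image dims swap after each rotation
    return out
```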
apply_to_sample(sample)
Returns:

| Type | Description |
|---|---|
| PoseEstimationSample | Result of applying the transform |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_rotate90.py
29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 |
|
KeypointsRandomVerticalFlip
Bases: AbstractKeypointTransform
Flip image, mask and joints vertically with a given probability.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_vertical_flip.py
10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 |
|
__init__(prob=0.5)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prob | float | Probability of flipping | 0.5 |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_vertical_flip.py
16 17 18 19 20 21 22 |
|
apply_to_bboxes(bboxes, rows)
Flip boxes vertically
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | np.ndarray | Input boxes of [N,4] shape in XYWH format | required |
| rows | int | Image height | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Flipped boxes of [N,4] shape in XYWH format |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_vertical_flip.py
66 67 68 69 70 71 72 73 74 75 76 77 |
|
apply_to_image(image)
Flip image vertically
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| image | np.ndarray | Input image | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Vertically flipped image |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_vertical_flip.py
45 46 47 48 49 50 51 52 |
|
apply_to_keypoints(keypoints, rows)
Flip keypoints vertically
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| keypoints | np.ndarray | Input keypoints of [N,K,3] shape | required |
| rows | int | Image height | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Flipped keypoints of [N,K,3] shape |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_vertical_flip.py
54 55 56 57 58 59 60 61 62 63 64 |
|
apply_to_sample(sample)
Apply transformation to given pose estimation sample
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sample | PoseEstimationSample | Input pose estimation sample. | required |

Returns:

| Type | Description |
|---|---|
| PoseEstimationSample | A new pose estimation sample. |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_random_vertical_flip.py
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 |
|
KeypointsRemoveSmallObjects
Bases: AbstractKeypointTransform
Remove pose instances from data sample that are too small or have too few visible keypoints.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_remove_small_objects.py
11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 |
|
__init__(min_visible_keypoints=0, min_instance_area=0, min_bbox_area=0)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_visible_keypoints | int | Minimum number of visible keypoints to keep the sample. Default value is 0, which means that all samples will be kept. | 0 |
| min_instance_area | int | Minimum instance area to keep the sample. Default value is 0, which means that all samples will be kept. | 0 |
| min_bbox_area | int | Minimum bounding box area to keep the sample. Default value is 0, which means that all samples will be kept. | 0 |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_remove_small_objects.py
17 18 19 20 21 22 23 24 25 26 27 28 29 30 |
|
apply_to_sample(sample)
Apply transformation to given pose estimation sample.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sample | PoseEstimationSample | Input sample to transform. | required |

Returns:

| Type | Description |
|---|---|
| PoseEstimationSample | Filtered sample. |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_remove_small_objects.py
37 38 39 40 41 42 43 44 45 46 47 48 49 50 |
|
KeypointsRescale
Bases: AbstractKeypointTransform
Resize image, mask and joints to target size without preserving aspect ratio.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 |
|
__init__(height, width, interpolation=cv2.INTER_LINEAR, prob=1.0)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| height | int | Target image height | required |
| width | int | Target image width | required |
| interpolation | int | Interpolation method used for the image. See cv2.resize for details. | cv2.INTER_LINEAR |
| prob | float | Probability of applying this transform. Default value is 1, meaning that the transform is always applied. | 1.0 |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
18 19 20 21 22 23 24 25 26 27 28 29 |
|
apply_to_areas(areas, sx, sy)
classmethod
Resize areas to target size.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| areas | np.ndarray | [N] Array of instance areas | required |
| sx | float | Scale factor along the horizontal axis | required |
| sy | float | Scale factor along the vertical axis | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | [N] Array of resized instance areas |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
95 96 97 98 99 100 101 102 103 104 |
|
apply_to_bboxes(bboxes, sx, sy)
classmethod
Resize bounding boxes to target size.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | np.ndarray | Input bounding boxes in XYWH format | required |
| sx | float | Scale factor along the horizontal axis | required |
| sy | float | Scale factor along the vertical axis | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Resized bounding boxes in XYWH format |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
80 81 82 83 84 85 86 87 88 89 90 91 92 93 |
|
apply_to_image(img, dsize, interpolation)
classmethod
Resize image to target size.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| img | | Input image | required |
| dsize | Tuple[int, int] | Target size (width, height) | required |
| interpolation | int | OpenCV interpolation method | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Resized image |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
54 55 56 57 58 59 60 61 62 63 64 |
|
apply_to_keypoints(keypoints, sx, sy)
classmethod
Resize keypoints to target size.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| keypoints | np.ndarray | [Num Instances, Num Joints, 3] Input keypoints | required |
| sx | float | Scale factor along the horizontal axis | required |
| sy | float | Scale factor along the vertical axis | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | [Num Instances, Num Joints, 3] Resized keypoints |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
66 67 68 69 70 71 72 73 74 75 76 77 78 |
|
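The rescale math for keypoints, XYWH boxes, and areas follows directly from the per-axis scale factors sx and sy. A sketch of all three rules (illustrative NumPy helpers, not the library implementation):

```python
import numpy as np

def rescale_keypoints(keypoints: np.ndarray, sx: float, sy: float) -> np.ndarray:
    """[N, K, 3] keypoints: x scales with width, y with height; visibility untouched."""
    out = keypoints.astype(float).copy()
    out[..., 0] *= sx
    out[..., 1] *= sy
    return out

def rescale_bboxes_xywh(bboxes: np.ndarray, sx: float, sy: float) -> np.ndarray:
    """[N, 4] XYWH boxes: x and w scale with width, y and h with height."""
    return bboxes * np.array([sx, sy, sx, sy])

def rescale_areas(areas: np.ndarray, sx: float, sy: float) -> np.ndarray:
    """Area scales by the product of the per-axis factors."""
    return areas * (sx * sy)
```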
apply_to_sample(sample)
Apply transform to sample.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sample | PoseEstimationSample | Input sample | required |

Returns:

| Type | Description |
|---|---|
| PoseEstimationSample | Output sample |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_rescale.py
31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 |
|
KeypointsReverseImageChannels
Bases: AbstractKeypointTransform
Randomly reverse the channel order with a given probability. Given an image with RGB channels, when applied with probability 1, it returns an image with BGR channels. With probability 0.5 there is a 50/50 chance of returning a BGR or RGB image. This usually helps improve the model's ability to generalize across different channel orders.
Source code in src/super_gradients/training/transforms/keypoints/keypoints_reverse_image_channels.py
12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 |
|
__init__(prob)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prob | float | Probability to apply the transform. | required |
Source code in src/super_gradients/training/transforms/keypoints/keypoints_reverse_image_channels.py
21 22 23 24 25 26 27 |
|
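Reversing channel order is a single slice over the last axis in NumPy. A minimal sketch (illustrative helper, not the library implementation):

```python
import numpy as np

def reverse_channels(image: np.ndarray) -> np.ndarray:
    """RGB <-> BGR: reverse the last (channel) axis of an HWC image."""
    return image[..., ::-1]
```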
AbstractSegmentationTransform
Bases: abc.ABC
Base class for all transforms for segmentation sample augmentation.
Source code in src/super_gradients/training/transforms/segmentation/abstract_segmentation_transform.py
10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 |
|
apply_to_sample(sample)
abstractmethod
Apply transformation to given segmentation sample. Important note - function call may return new object, may modify it in-place. This is implementation dependent and if you need to keep original sample intact it is recommended to make a copy of it BEFORE passing it to transform.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sample | SegmentationSample | Input sample to transform. | required |

Returns:

| Type | Description |
|---|---|
| SegmentationSample | Modified sample (It can be the same instance as input or a new object). |
Source code in src/super_gradients/training/transforms/segmentation/abstract_segmentation_transform.py
15 16 17 18 19 20 21 22 23 24 25 26 |
|
LegacySegmentationTransformMixin
A mixin class to make legacy segmentation transforms compatible with new segmentation transforms that operate on SegmentationSample.
Source code in src/super_gradients/training/transforms/segmentation/legacy_segmentation_transform_mixin.py
11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 |
|
__call__(sample)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sample | Union[SegmentationSample, Dict[str, Any]] | Either a SegmentationSample or a dict with the following keys: image (numpy array of [H,W,C] or [H,W] format), mask (numpy array of [H,W] format) | required |
Source code in src/super_gradients/training/transforms/segmentation/legacy_segmentation_transform_mixin.py
16 17 18 19 20 21 22 23 24 25 26 27 28 29 |
|
convert_input_dict_to_segmentation_sample(sample_annotations)
classmethod
Convert old-style segmentation sample dict to the new SegmentationSample dataclass.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sample_annotations | Dict[str, Union[np.ndarray, Any]] | Input dictionary with the following keys: image - associated image, in [H,W,C] (or [H,W] for greyscale) format; mask - associated segmentation mask, in [H,W] format | required |

Returns:

| Type | Description |
|---|---|
| Tuple[SegmentationSample, bool] | A tuple of the SegmentationSample dataclass (filled with data from the input dictionary) and a boolean indicating whether the original input dict had images as PIL Image. |
Source code in src/super_gradients/training/transforms/segmentation/legacy_segmentation_transform_mixin.py
31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 |
|
convert_segmentation_sample_to_dict(sample, image_is_pil)
classmethod
Convert the new SegmentationSample dataclass to an old-style segmentation sample dict. This is the reverse operation of convert_input_dict_to_segmentation_sample and is used to make legacy transforms compatible with new segmentation transforms.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sample | SegmentationSample | Transformed sample | required |
| image_is_pil | bool | A boolean value indicating whether the original input dict had images as PIL Image. If True, the output dict will also have images as PIL Image, otherwise as numpy array. | required |
Source code in src/super_gradients/training/transforms/segmentation/legacy_segmentation_transform_mixin.py
47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 |
|
DetectionHSV
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Detection HSV transform.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prob | float | Probability to apply the transform. | required |
| hgain | float | Hue gain. | 0.5 |
| sgain | float | Saturation gain. | 0.5 |
| vgain | float | Value gain. | 0.5 |
| bgr_channels | | Channel indices of the BGR channels. Useful for images with more than 3 channels, or when BGR channels are in a different order. | (0, 1, 2) |
Source code in src/super_gradients/training/transforms/transforms.py
1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 |
|
DetectionHorizontalFlip
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Horizontal Flip for Detection
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prob | float | Probability of applying horizontal flip | required |
Source code in src/super_gradients/training/transforms/transforms.py
936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 |
|
apply_to_sample(sample)
Apply horizontal flip to sample
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sample | DetectionSample | Input detection sample | required |

Returns:

| Type | Description |
|---|---|
| DetectionSample | Transformed detection sample |
Source code in src/super_gradients/training/transforms/transforms.py
948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 |
|
DetectionImagePermute
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Permute image dims. Useful for converting image from HWC to CHW format.
Source code in src/super_gradients/training/transforms/transforms.py
854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 |
|
__init__(dims=(2, 0, 1))
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dims | Tuple[int, int, int] | Specify new order of dims. Default value (2, 0, 1) is suitable for converting from HWC to CHW format. | (2, 0, 1) |
Source code in src/super_gradients/training/transforms/transforms.py
860 861 862 863 864 865 866 |
|
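The permutation is a plain axis transpose. With the default `dims=(2, 0, 1)`, new axis 0 comes from old axis 2, and so on (illustrative sketch):

```python
import numpy as np

hwc = np.zeros((480, 640, 3), dtype=np.uint8)   # height, width, channels
chw = np.transpose(hwc, (2, 0, 1))              # dims=(2, 0, 1): HWC -> CHW
```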
DetectionMixup
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Mixup detection transform
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | Union[int, Tuple[int, int], None] | Input dimension. | required |
| mixup_scale | tuple | Scale range for the additional loaded image for mixup. | required |
| prob | float | Probability of applying mixup. | 1.0 |
| enable_mixup | bool | Whether to apply mixup at all (regardless of prob). | True |
| flip_prob | float | Probability to apply horizontal flip to the additional sample. | 0.5 |
| border_value | int | Value for filling borders after applying transform. | 114 |
Source code in src/super_gradients/training/transforms/transforms.py
660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 |
|
DetectionMosaic
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
DetectionMosaic detection transform
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | Union[int, Tuple[int, int]] | Input dimension. | required |
| prob | float | Probability of applying mosaic. | 1.0 |
| enable_mosaic | bool | Whether to apply mosaic at all (regardless of prob). | True |
| border_value | | Value for filling borders after applying transforms. | 114 |
Source code in src/super_gradients/training/transforms/transforms.py
491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 |
|
DetectionNormalize
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Normalize image by subtracting mean and dividing by std.
Source code in src/super_gradients/training/transforms/transforms.py
1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 |
|
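The normalization itself is the standard per-channel `(pixel - mean) / std` in float, broadcast over the channel axis. A minimal sketch (illustrative, not the library implementation):

```python
import numpy as np

def normalize(image: np.ndarray, mean, std) -> np.ndarray:
    """Per-channel (pixel - mean) / std in float32 for an HWC image."""
    image = image.astype(np.float32)
    return (image - np.asarray(mean, dtype=np.float32)) / np.asarray(std, dtype=np.float32)
```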
DetectionPadToSize
Bases: DetectionPadIfNeeded
Preprocessing transform to pad image and bboxes to `input_dim` shape (rows, cols).
The transform applies center padding, so that the input image and its bboxes are located in the center of the produced image.
Note: This transformation assumes that the dimensions of the input image are less than or equal to `output_size`.
This class exists for backward compatibility with previous versions of the library.
Use `DetectionPadIfNeeded` instead.
Source code in src/super_gradients/training/transforms/transforms.py
876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 |
|
__init__(output_size, pad_value)
Constructor for DetectionPadToSize transform.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| output_size | Union[int, Tuple[int, int], None] | Output image size (rows, cols) | required |
| pad_value | Union[int, Tuple[int, ...]] | Padding value for image | required |
Source code in src/super_gradients/training/transforms/transforms.py
887 888 889 890 891 892 893 894 895 896 897 898 |
|
DetectionPaddedRescale
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Preprocessing transform to be applied last of all transforms for validation.
Image: rescales and pads to self.input_dim. Targets: moves the class label to the first index and converts box format from xyxy to cxcywh.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | Union[int, Tuple[int, int], None] | Final input dimension (default=(640,640)) | required |
| swap | Tuple[int, ...] | Image axes to be rearranged. | (2, 0, 1) |
| pad_value | int | Padding value for image. | 114 |
Source code in src/super_gradients/training/transforms/transforms.py
901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 |
|
DetectionRGB2BGR
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Detection transform that swaps the Red and Blue channels of the image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prob | float | Probability to apply the transform. | 0.5 |
Source code in src/super_gradients/training/transforms/transforms.py
1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 |
|
DetectionRandomAffine
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
DetectionRandomAffine detection transform
:param degrees: Degrees for random rotation; when a float, random values are drawn uniformly from (-degrees, degrees).
:param translate: Translate size (in pixels) for random translation; when a float, random values are drawn uniformly from (center-translate, center+translate).
:param scales: Values for random rescale; when a float, random values are drawn uniformly from (1-scales, 1+scales).
:param shear: Degrees for random shear; when a float, random values are drawn uniformly from (-shear, shear).
:param target_size: Desired output shape.
:param filter_box_candidates: Whether to filter out transformed bboxes by edge size, area ratio, and aspect ratio (default=False).
:param wh_thr: Edge size threshold when filter_box_candidates = True. Bounding boxes with edges smaller than this value will be filtered out.
:param ar_thr: Aspect ratio threshold when filter_box_candidates = True. Bounding boxes with aspect ratio larger than this value will be filtered out.
:param area_thr: Threshold for the area ratio between the original image and the transformed one, when filter_box_candidates = True. Bounding boxes with such ratio smaller than this value will be filtered out.
:param border_value: Value for filling borders after applying transforms.
Source code in src/super_gradients/training/transforms/transforms.py, lines 570-657
DetectionRandomRotate90
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Source code in src/super_gradients/training/transforms/transforms.py, lines 1033-1103
apply_to_bboxes(bboxes, factor, image_shape)
Apply a factor number of 90-degree rotations to bounding boxes.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | np.ndarray | Input bounding boxes in XYXY format. | required |
| factor | int | Number of CCW rotations. Must be in {0, 1, 2, 3}; see np.rot90. | required |
| image_shape | Tuple[int, int] | Original image shape. | required |

Returns:

| Type | Description |
|---|---|
| | Rotated bounding boxes in XYXY format. |
Source code in src/super_gradients/training/transforms/transforms.py, lines 1062-1073
apply_to_image(image, factor)
Apply a factor number of 90-degree rotations to the image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| image | np.ndarray | Input image (HWC). | required |
| factor | int | Number of CCW rotations. Must be in {0, 1, 2, 3}; see np.rot90. | required |

Returns:

| Type | Description |
|---|---|
| np.ndarray | Rotated image (HWC). |
Source code in src/super_gradients/training/transforms/transforms.py, lines 1052-1060
xyxy_bbox_rot90(bboxes, factor, rows, cols)
classmethod
Rotates bounding boxes by 90 degrees CCW, factor times (see np.rot90).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | np.ndarray | Tensor made of bounding box tuples (x_min, y_min, x_max, y_max). | required |
| factor | int | Number of CCW rotations. Must be in {0, 1, 2, 3}; see np.rot90. | required |
| rows | int | Image rows of the original image. | required |
| cols | int | Image cols of the original image. | required |

Returns:

| Type | Description |
|---|---|
| | Rotated bounding box tuples (x_min, y_min, x_max, y_max). |
Source code in src/super_gradients/training/transforms/transforms.py, lines 1075-1100
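The rotation math behind xyxy_bbox_rot90 can be sketched in plain NumPy. This is a hypothetical re-implementation for illustration, not the library function:

```python
import numpy as np

def xyxy_rot90_sketch(bboxes: np.ndarray, factor: int, rows: int, cols: int) -> np.ndarray:
    # Rotate XYXY boxes CCW `factor` times, mirroring np.rot90 on an image of
    # shape (rows, cols). Each CCW turn maps a point (x, y) -> (y, cols - x).
    x_min, y_min, x_max, y_max = bboxes.T
    if factor == 0:
        return bboxes.copy()
    if factor == 1:   # 90 deg CCW
        return np.stack([y_min, cols - x_max, y_max, cols - x_min], axis=1)
    if factor == 2:   # 180 deg
        return np.stack([cols - x_max, rows - y_max, cols - x_min, rows - y_min], axis=1)
    if factor == 3:   # 270 deg CCW (= 90 deg CW)
        return np.stack([rows - y_max, x_min, rows - y_min, x_max], axis=1)
    raise ValueError("factor must be in {0, 1, 2, 3}")
```

Note how min/max roles swap on the axis whose coordinate is negated, so the output stays a valid (x_min, y_min, x_max, y_max) box.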
DetectionRescale
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Resize image and bounding boxes to the given image dimensions without preserving aspect ratio.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| output_shape | Union[int, Tuple[int, int]] | (rows, cols) | required |
Source code in src/super_gradients/training/transforms/transforms.py, lines 995-1030
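Since aspect ratio is not preserved, rows and columns get independent scale factors, and box coordinates scale the same way. A sketch of the underlying math (not the library code):

```python
import numpy as np

def rescale_xyxy(bboxes: np.ndarray, original_shape, output_shape) -> np.ndarray:
    # Shapes are (rows, cols): x coordinates scale by the cols ratio, y by the rows ratio.
    sy = output_shape[0] / original_shape[0]
    sx = output_shape[1] / original_shape[1]
    return bboxes * np.array([sx, sy, sx, sy])
```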
DetectionStandardize
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Standardize image pixel values with img/max_val.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| max_val | | Current maximum value of the image pixels (usually 255). | required |
Source code in src/super_gradients/training/transforms/transforms.py, lines 467-488
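The operation itself is a single division; a minimal sketch assuming uint8 input:

```python
import numpy as np

img = np.array([[0, 127, 255]], dtype=np.uint8)
standardized = img.astype(np.float32) / 255.0  # pixel values now in [0, 1]
```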
DetectionTargetsFormatTransform
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Detection targets format transform
Convert targets in input_format to output_format, filter small bboxes and pad targets.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | Union[int, Tuple[int, int], None] | Shape of the images to transform. | None |
| input_format | ConcatenatedTensorFormat | Format of the input targets. For instance, [xmin, ymin, xmax, ymax, cls_id] refers to XYXY_LABEL. | XYXY_LABEL |
| output_format | ConcatenatedTensorFormat | Format of the output targets. For instance, [xmin, ymin, xmax, ymax, cls_id] refers to XYXY_LABEL. | LABEL_CXCYWH |
| min_bbox_edge_size | float | Bounding boxes with an edge size smaller than this value will be removed. | 1 |
Source code in src/super_gradients/training/transforms/transforms.py, lines 1215-1299
apply_on_targets(targets)
Convert targets in input_format to output_format, filter small bboxes and pad targets
Source code in src/super_gradients/training/transforms/transforms.py, lines 1282-1286
filter_small_bboxes(targets)
Filter bboxes smaller than specified threshold.
Source code in src/super_gradients/training/transforms/transforms.py, lines 1288-1296
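For XYXY targets, the edge-size filter can be sketched as follows. The helper name is hypothetical; the library operates on whatever target format the transform is configured with:

```python
import numpy as np

def filter_small_xyxy(targets: np.ndarray, min_edge: float = 1.0) -> np.ndarray:
    # Keep only boxes whose width and height both reach min_edge.
    w = targets[:, 2] - targets[:, 0]
    h = targets[:, 3] - targets[:, 1]
    return targets[(w >= min_edge) & (h >= min_edge)]
```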
DetectionTransform
Detection transform base class. Complex transforms that require extra data loading can use the additional_samples_count attribute in a similar fashion to what's been done in COCODetectionDataset:
self._load_additional_inputs_for_transform(sample, transform)
After the above call, sample["additional_samples"] holds a list of additional inputs and targets.
sample = transform(sample)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| additional_samples_count | int | Number of additional samples to be loaded. | 0 |
| non_empty_targets | bool | Whether the additional samples may have empty targets. | False |
Source code in src/super_gradients/training/transforms/transforms.py, lines 422-464
apply_to_sample(sample)
Apply transformation to the input detection sample. This method exists here for compatibility reasons to ensure a custom transform that inherits from DetectionSample would still work.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sample | DetectionSample | Input detection sample. | required |

Returns:

| Type | Description |
|---|---|
| DetectionSample | Output detection sample. |
Source code in src/super_gradients/training/transforms/transforms.py, lines 447-458
DetectionVerticalFlip
Bases: AbstractDetectionTransform, LegacyDetectionTransformMixin
Vertical Flip for Detection
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prob | float | Probability of applying vertical flip. | required |
Source code in src/super_gradients/training/transforms/transforms.py, lines 968-992
SegConvertToTensor
Bases: AbstractSegmentationTransform, LegacySegmentationTransformMixin
Converts SegmentationSample images and masks to PyTorch tensors.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mask_output_dtype | Optional[str] | The desired output data type for the mask tensor. | required |
| add_mask_dummy_dim | bool | Whether to add a dummy channels dimension to the mask tensor. | required |
Source code in src/super_gradients/training/transforms/transforms.py, lines 776-807
SegCropImageAndMask
Bases: AbstractSegmentationTransform, LegacySegmentationTransformMixin
Crops image and mask (synchronously). In "center" mode a center crop is performed, while in "random" mode the crop is positioned around random coordinates.
Source code in src/super_gradients/training/transforms/transforms.py, lines 233-282
__init__(crop_size, mode)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| crop_size | Union[float, Tuple, List] | Tuple of (width, height) for the final crop size; if scalar, the crop is a square (crop_size, crop_size). | required |
| mode | str | How to choose the center of the crop: 'center' for the center of the input image, 'random' for a randomly chosen point. | required |
Source code in src/super_gradients/training/transforms/transforms.py, lines 241-253
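How the crop origin could be chosen in each mode, as an illustrative helper (assuming crop size ≤ image size):

```python
import random

def crop_origin(img_h: int, img_w: int, crop_h: int, crop_w: int, mode: str = "center"):
    # "center": crop is centered on the image; "random": origin drawn uniformly.
    if mode == "center":
        return (img_h - crop_h) // 2, (img_w - crop_w) // 2
    return random.randint(0, img_h - crop_h), random.randint(0, img_w - crop_w)
```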
SegNormalize
Bases: AbstractSegmentationTransform, LegacySegmentationTransformMixin
Normalization to be applied on the segmentation sample's image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mean | sequence | Sequence of means for each channel. | required |
Source code in src/super_gradients/training/transforms/transforms.py, lines 831-851
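Channel-wise normalization follows the usual (img - mean) / std pattern, broadcast over the channel axis. The ImageNet statistics below are illustrative values, not defaults documented here:

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
img = np.full((2, 2, 3), 0.5)    # HWC image already scaled to [0, 1]
normalized = (img - mean) / std  # broadcasts over the last (channel) axis
```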
SegPadShortToCropSize
Bases: AbstractSegmentationTransform, LegacySegmentationTransformMixin
Pads image to 'crop_size'. Should be called only after SegRescale or SegRandomRescale in the augmentation pipeline. Note that if the input image size is greater than the crop size, no change is made to the image; this transform only pads the image and mask up to 'crop_size' when it is larger than the image size.
Source code in src/super_gradients/training/transforms/transforms.py, lines 307-355
__init__(crop_size, fill_mask=0, fill_image=0)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| crop_size | Union[float, Tuple, List] | Tuple of (width, height) for the final crop size; if scalar, the crop is a square (crop_size, crop_size). | required |
| fill_mask | int | Value to fill mask labels background. | 0 |
| fill_image | Union[int, Tuple, List] | Grey value to fill image padded background. | 0 |
Source code in src/super_gradients/training/transforms/transforms.py, lines 316-327
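The padding amount is simply the positive gap between crop size and image size. A 2D sketch using np.pad (a mask would use fill_mask as the fill value, an image fill_image):

```python
import numpy as np

def pad_to_crop(arr: np.ndarray, crop_hw, fill=0) -> np.ndarray:
    # No-op in any dimension where the array already meets the crop size.
    h, w = arr.shape[:2]
    pad_h = max(crop_hw[0] - h, 0)
    pad_w = max(crop_hw[1] - w, 0)
    return np.pad(arr, ((pad_h // 2, pad_h - pad_h // 2),
                        (pad_w // 2, pad_w - pad_w // 2)),
                  constant_values=fill)
```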
SegRandomFlip
Bases: AbstractSegmentationTransform, LegacySegmentationTransformMixin
Randomly flips the image and mask (synchronously) with probability 'prob'.
Source code in src/super_gradients/training/transforms/transforms.py, lines 80-100
SegRandomGaussianBlur
Bases: AbstractSegmentationTransform, LegacySegmentationTransformMixin
Adds random Gaussian Blur to image with probability 'prob'.
Source code in src/super_gradients/training/transforms/transforms.py, lines 285-304
SegRandomRescale
Bases: AbstractSegmentationTransform, LegacySegmentationTransformMixin
Randomly rescales the image and mask (synchronously) while preserving aspect ratio. The scale factor is randomly picked between scales [min, max].
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| scales | Union[float, Tuple, List] | Scale range tuple (min, max). If scales is a float, the range is (1, scales) when scales > 1, otherwise (scales, 1). Must be positive. | (0.5, 2.0) |
Source code in src/super_gradients/training/transforms/transforms.py, lines 154-198
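The float-to-range convention for scales described above can be sketched as:

```python
def resolve_scale_range(scales):
    # float -> (1, scales) when scales > 1, else (scales, 1); tuples pass through.
    if isinstance(scales, (int, float)):
        return (1.0, float(scales)) if scales > 1 else (float(scales), 1.0)
    return tuple(scales)
```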
check_valid_arguments()
Check that the scale values are valid. If the order is wrong, flip it and return the corrected scale values.
Source code in src/super_gradients/training/transforms/transforms.py, lines 181-195
SegRandomRotate
Bases: AbstractSegmentationTransform, LegacySegmentationTransformMixin
Randomly rotates image and mask (synchronously) between 'min_deg' and 'max_deg'.
Source code in src/super_gradients/training/transforms/transforms.py, lines 201-230
SegRescale
Bases: AbstractSegmentationTransform, LegacySegmentationTransformMixin
Rescales the image and mask (synchronously) while preserving aspect ratio. The rescaling can be done according to scale_factor, short_size, or long_size. If more than one argument is given, the rescaling mode is determined in this order of priority: scale_factor, then short_size, then long_size.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| scale_factor | Optional[float] | Rescaling is done by multiplying the input size by scale_factor: out_size = (scale_factor * w, scale_factor * h). | None |
| short_size | Optional[int] | Rescaling is done with the scale factor short_size / min(h, w). | None |
| long_size | Optional[int] | Rescaling is done with the scale factor long_size / max(h, w). | None |
Source code in src/super_gradients/training/transforms/transforms.py, lines 103-151
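The priority order above resolves to a single scale factor; a sketch:

```python
def resolve_scale(h, w, scale_factor=None, short_size=None, long_size=None):
    # Priority as documented: scale_factor, then short_size, then long_size.
    if scale_factor is not None:
        return scale_factor
    if short_size is not None:
        return short_size / min(h, w)
    if long_size is not None:
        return long_size / max(h, w)
    return 1.0
```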
SegStandardize
Bases: AbstractSegmentationTransform, LegacySegmentationTransformMixin
Standardize image pixel values with img/max_val.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| max_value | | Current maximum value of the image pixels (usually 255). | 255 |
Source code in src/super_gradients/training/transforms/transforms.py, lines 810-828
Standardize
Bases: torch.nn.Module
Standardize image pixel values by dividing by max_val.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| max_val | float | Value to divide pixel values by. | 255 |
Source code in src/super_gradients/training/transforms/transforms.py, lines 1593-1608
get_affine_matrix(input_size, target_size, degrees=10, translate=0.1, scales=0.1, shear=10)
Return a random affine transform matrix.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_size | Tuple[int, int] | Input shape. | required |
| target_size | Tuple[int, int] | Desired output shape. | required |
| degrees | Union[tuple, float] | Degrees for random rotation; when float, values are drawn uniformly from (-degrees, degrees). | 10 |
| translate | Union[tuple, float] | Translate size (in pixels) for random translation; when float, values are drawn uniformly from (-translate, translate). | 0.1 |
| scales | Union[tuple, float] | Values for random rescale; when float, values are drawn uniformly from (1-scales, 1+scales). | 0.1 |
| shear | Union[tuple, float] | Degrees for random shear; when float, values are drawn uniformly from (-shear, shear). | 10 |

Returns:

| Type | Description |
|---|---|
| np.ndarray | affine_transform_matrix, drawn_scale |
Source code in src/super_gradients/training/transforms/transforms.py, lines 1323-1364
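One way such a matrix can be composed, as an illustrative sketch (the library's exact order of rotation, shear, scale, and translation may differ):

```python
import numpy as np

def affine_matrix_sketch(angle_deg, scale, shear_deg, tx, ty):
    a, s = np.deg2rad(angle_deg), np.deg2rad(shear_deg)
    R = np.array([[np.cos(a) * scale, -np.sin(a) * scale, 0.0],
                  [np.sin(a) * scale,  np.cos(a) * scale, 0.0],
                  [0.0, 0.0, 1.0]])   # rotation + isotropic scale
    S = np.array([[1.0, np.tan(s), 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])   # shear along x
    T = np.array([[1.0, 0.0, tx],
                  [0.0, 1.0, ty],
                  [0.0, 0.0, 1.0]])   # translation
    return (T @ S @ R)[:2]            # 2x3 matrix, cv2.warpAffine-style
```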
get_aug_params(value, center=0)
Generates a random value for augmentations as described below
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| value | Union[tuple, float] | Range of values for generation. When tuple, drawn uniformly from (value[0], value[1]); when float, from (center - value, center + value). | required |
| center | float | Center to subtract from when value is a float. | 0 |

Returns:

| Type | Description |
|---|---|
| float | Generated value. |
Source code in src/super_gradients/training/transforms/transforms.py, lines 1302-1320
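A sketch of that drawing rule:

```python
import random

def draw_aug_param(value, center=0.0):
    # float -> uniform in (center - value, center + value);
    # tuple -> uniform in (value[0], value[1]).
    if isinstance(value, (int, float)):
        return random.uniform(center - value, center + value)
    return random.uniform(value[0], value[1])
```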
random_affine(img, targets=(), targets_seg=None, target_size=(640, 640), degrees=10, translate=0.1, scales=0.1, shear=10, filter_box_candidates=False, wh_thr=2, ar_thr=20, area_thr=0.1, border_value=114, crowd_targets=None)
Performs a random affine transform on img and targets.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| img | np.ndarray | Input image of shape [h, w, c]. | required |
| targets | np.ndarray | Input targets. | () |
| targets_seg | np.ndarray | Targets derived from segmentation masks. | None |
| target_size | tuple | Desired output shape. | (640, 640) |
| degrees | Union[float, tuple] | Degrees for random rotation; when float, values are drawn uniformly from (-degrees, degrees). | 10 |
| translate | Union[float, tuple] | Translate size (in pixels) for random translation; when float, values are drawn uniformly from (-translate, translate). | 0.1 |
| scales | Union[float, tuple] | Values for random rescale; when float, values are drawn uniformly from (1-scales, 1+scales). | 0.1 |
| shear | Union[float, tuple] | Degrees for random shear; when float, values are drawn uniformly from (-shear, shear). | 10 |
| filter_box_candidates | bool | Whether to filter out transformed bboxes by edge size, area ratio, and aspect ratio. | False |
| wh_thr | float | Edge size threshold when filter_box_candidates = True. Bounding boxes with edges smaller than this value will be filtered out. | 2 |
| ar_thr | float | Aspect ratio threshold when filter_box_candidates = True. Bounding boxes with aspect ratio larger than this value will be filtered out. | 20 |
| area_thr | float | Threshold for the area ratio between the original and transformed bounding box, when filter_box_candidates = True. Bounding boxes with a ratio smaller than this value will be filtered out. | 0.1 |
| border_value | | Value for filling borders after applying transforms. | 114 |
| crowd_targets | np.ndarray | Optional array of crowd annotations. If provided, it is transformed in the same way as targets. | None |

Returns:

| Type | Description |
|---|---|
| | Image and targets with random affine applied. |
Source code in src/super_gradients/training/transforms/transforms.py, lines 1422-1490
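The three filters (edge size, area ratio, aspect ratio) can be sketched together. This mirrors the common YOLO-style box_candidates check and is not the library's exact code:

```python
import numpy as np

def box_candidates(orig: np.ndarray, new: np.ndarray,
                   wh_thr=2, ar_thr=20, area_thr=0.1) -> np.ndarray:
    # Boolean mask of XYXY boxes surviving all three filters.
    w1, h1 = orig[:, 2] - orig[:, 0], orig[:, 3] - orig[:, 1]
    w2, h2 = new[:, 2] - new[:, 0], new[:, 3] - new[:, 1]
    ar = np.maximum(w2 / (h2 + 1e-16), h2 / (w2 + 1e-16))  # aspect ratio
    area_ratio = (w2 * h2) / (w1 * h1 + 1e-16)
    return (w2 > wh_thr) & (h2 > wh_thr) & (area_ratio > area_thr) & (ar < ar_thr)
```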