Datasets
RandAugment
RandAugment is a variant of AutoAugment which randomly selects transformations from AutoAugment to apply to an image.
Implementation adapted from: https://github.com/rwightman/pytorch-image-models/blob/master/timm/data/auto_augment.py
Papers: RandAugment: Practical automated data augmentation... - https://arxiv.org/abs/1909.13719
AugmentOp
A single auto augment operation.
Source code in src/super_gradients/training/datasets/auto_augment.py
RandAugment
Random auto augment class; selects auto augment transforms according to probability weights for each op.
Source code in src/super_gradients/training/datasets/auto_augment.py
rand_augment_transform(config_str, crop_size, img_mean)
Create a RandAugment transform
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config_str | str | String defining the configuration of random augmentation. Consists of multiple sections separated by dashes ('-'). The first section defines the specific variant of rand augment (currently only 'rand'). The remaining sections, which are not order specific, determine: 'm' - integer magnitude of rand augment; 'n' - integer num layers (number of transform ops selected per image); 'w' - integer probability weight index (index of a set of weights to influence choice of op); 'mstd' - float std deviation of magnitude noise applied; 'inc' - integer (bool), use augmentations that increase in severity with magnitude (default: 0). Ex: 'rand-m9-n3-mstd0.5' results in RandAugment with magnitude 9, num_layers 3, magnitude_std 0.5; 'rand-mstd1-w0' results in magnitude_std 1.0, weights 0, default magnitude of 10 and num_layers 2 | required |
| crop_size | int | The size of the crop image | required |
| img_mean | List[float] | Average per channel | required |

Returns:

| Type | Description |
|---|---|
| | A PyTorch compatible Transform |
Source code in src/super_gradients/training/datasets/auto_augment.py
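A minimal usage sketch based on the config string documented above; the import path is inferred from the source location shown and may differ between versions:

```python
from super_gradients.training.datasets.auto_augment import rand_augment_transform

# "rand-m9-n3-mstd0.5": magnitude 9, 3 ops per image, magnitude noise std 0.5
tf = rand_augment_transform(
    config_str="rand-m9-n3-mstd0.5",
    crop_size=224,
    img_mean=[0.485, 0.456, 0.406],  # ImageNet per-channel mean
)
# `tf` is a PyTorch-compatible transform and can be composed with other transforms.
```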
Cifar10
Bases: CIFAR10, HasPreprocessingParams
CIFAR10 Dataset
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| root | str | Path for the data to be extracted | required |
| train | bool | Bool to load training (True) or validation (False) part of the dataset | True |
| transforms | Union[list, dict] | List of transforms to apply sequentially on sample. Wrapped internally with torchvision.Compose | None |
| target_transform | Optional[Callable] | Transform to apply to target output | None |
| download | bool | Download (True) the dataset from source | False |
Source code in src/super_gradients/training/datasets/classification_datasets/cifar.py
get_dataset_preprocessing_params()
Get the preprocessing params for the dataset. It infers preprocessing params from transforms used in the dataset & class names
Returns:

| Type | Description |
|---|---|
| Dict | Preprocessing params |
Source code in src/super_gradients/training/datasets/classification_datasets/cifar.py
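A short instantiation sketch using the documented constructor arguments (the import path follows the source location shown above and the data path is illustrative):

```python
from super_gradients.training.datasets.classification_datasets.cifar import Cifar10

train_set = Cifar10(root="./data/cifar10", train=True, download=True)
valid_set = Cifar10(root="./data/cifar10", train=False, download=True)

# Preprocessing params are inferred from the dataset's transforms & class names:
preprocessing = train_set.get_dataset_preprocessing_params()
```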
Cifar100
Bases: CIFAR100, HasPreprocessingParams
Source code in src/super_gradients/training/datasets/classification_datasets/cifar.py
__init__(root, train=True, transforms=None, target_transform=None, download=False)
CIFAR100 Dataset
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| root | str | Path for the data to be extracted | required |
| train | bool | Bool to load training (True) or validation (False) part of the dataset | True |
| transforms | Union[list, dict] | List of transforms to apply sequentially on sample. Wrapped internally with torchvision.Compose | None |
| target_transform | Optional[Callable] | Transform to apply to target output | None |
| download | bool | Download (True) the dataset from source | False |
Source code in src/super_gradients/training/datasets/classification_datasets/cifar.py
get_dataset_preprocessing_params()
Get the preprocessing params for the dataset. It infers preprocessing params from transforms used in the dataset & class names
Returns:

| Type | Description |
|---|---|
| Dict | Preprocessing params |
Source code in src/super_gradients/training/datasets/classification_datasets/cifar.py
ImageNetDataset
Bases: torch_datasets.ImageFolder, HasPreprocessingParams
ImageNetDataset dataset.
To use this Dataset you need to:

- Download the ImageNet dataset (https://image-net.org/download.php) and organize it as below:

Imagenet
├──train
│ ├──n02093991
│ │ ├──n02093991_1001.JPEG
│ │ ├──n02093991_1004.JPEG
│ │ └──...
│ ├──n02093992
│ └──...
└──val
  ├──n02093991
  ├──n02093992
  └──...

- Instantiate the dataset:

>> train_set = ImageNetDataset(root='.../Imagenet/train', ...)
>> valid_set = ImageNetDataset(root='.../Imagenet/val', ...)
Source code in src/super_gradients/training/datasets/classification_datasets/imagenet_dataset.py
get_dataset_preprocessing_params()
Get the preprocessing params for the dataset. It infers preprocessing params from transforms used in the dataset & class names
Returns:

| Type | Description |
|---|---|
| Dict | Preprocessing params |
Source code in src/super_gradients/training/datasets/classification_datasets/imagenet_dataset.py
get_torchvision_transforms_equivalent_processing(transforms)
Get the equivalent processing pipeline for torchvision transforms.
Returns:

| Type | Description |
|---|---|
| List[Dict[str, Any]] | List of Processing operations |
Source code in src/super_gradients/training/datasets/classification_datasets/torchvision_utils.py
Lighting
Bases: object
Lighting noise (AlexNet-style PCA-based noise). Taken from fastai ImageNet training - https://github.com/fastai/imagenet-fast/blob/faa0f9dfc9e8e058ffd07a248724bf384f526fae/imagenet_nv/fastai_imagenet.py#L103
To use: set training_params = {"imagenet_pca_aug": 0.1}. The default training_params value is 0.0 ("don't use"); 0.1 is the default in the original paper.
Source code in src/super_gradients/training/datasets/data_augmentation.py
RandomErase
Bases: RandomErasing
A simple class that translates the parameters supported in SuperGradients' code base to those of torchvision's RandomErasing.
Source code in src/super_gradients/training/datasets/data_augmentation.py
BoundingBoxFormat
Abstract class for describing a bounding box format. It exposes two methods, to_xyxy and from_xyxy, to convert whatever box format we are dealing with to the internal xyxy format and vice versa. Converting through the intermediate xyxy format has a subtle performance impact, but greatly reduces the amount of boilerplate code needed to support all pairwise conversions between xyxy, xywh, cxcywh and yxyx.
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/bbox_format.py
from_xyxy(bboxes, image_shape, inplace)
Convert XYXY boxes to target bboxes format
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | Input bounding boxes [..., 4] in XYXY format | required |
| image_shape | Tuple[int, int] | Dimensions (rows, cols) of the original image, to support normalized boxes or a non-top-left-origin coordinate system. | required |

Returns:

| Type | Description |
|---|---|
| | Converted bounding boxes [..., 4] in target format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/bbox_format.py
to_xyxy(bboxes, image_shape, inplace)
Convert input boxes to XYXY format
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | Input bounding boxes [..., 4] | required |
| image_shape | Tuple[int, int] | Dimensions (rows, cols) of the original image, to support normalized boxes or a non-top-left-origin coordinate system. | required |

Returns:

| Type | Description |
|---|---|
| | Converted bounding boxes [..., 4] in XYXY format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/bbox_format.py
convert_bboxes(bboxes, image_shape, source_format, target_format, inplace)
Convert bboxes from source to target format
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | Tensor of shape (..., 4) with input bounding boxes | required |
| image_shape | Tuple[int, int] | Tuple of (rows, cols) corresponding to image shape | required |
| source_format | BoundingBoxFormat | Format of the source bounding boxes | required |
| target_format | BoundingBoxFormat | Format of the output bounding boxes | required |

Returns:

| Type | Description |
|---|---|
| | Tensor of shape (..., 4) with resulting bounding boxes |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/bbox_format.py
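A small worked sketch of convert_bboxes; the import paths are assumptions inferred from the source locations shown above:

```python
import numpy as np

from super_gradients.training.datasets.data_formats.bbox_formats.bbox_format import convert_bboxes
from super_gradients.training.datasets.data_formats.bbox_formats.xyxy import XYXYCoordinateFormat
from super_gradients.training.datasets.data_formats.bbox_formats.normalized_xyxy import NormalizedXYXYCoordinateFormat

bboxes = np.array([[10.0, 20.0, 50.0, 80.0]])  # one box in pixel XYXY
out = convert_bboxes(
    bboxes=bboxes,
    image_shape=(100, 200),  # (rows, cols)
    source_format=XYXYCoordinateFormat(),
    target_format=NormalizedXYXYCoordinateFormat(),
    inplace=False,
)
# x coordinates are divided by cols=200, y coordinates by rows=100:
# out ~= [[0.05, 0.20, 0.25, 0.80]]
```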
cxcywh_to_xyxy(bboxes, image_shape)
Transforms bboxes from CX-CY-W-H format to XYXY format
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | BBoxes of shape (..., 4) in CX-CY-W-H format | required |

Returns:

| Type | Description |
|---|---|
| | BBoxes of shape (..., 4) in XYXY format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/cxcywh.py
cxcywh_to_xyxy_inplace(bboxes, image_shape)
Transforms bboxes from CX-CY-W-H format to XYXY format. This function operates in-place. Note that the bboxes dtype is preserved, which may lead to unwanted rounding errors when computing the center of a bbox.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | BBoxes of shape (..., 4) in CX-CY-W-H format | required |

Returns:

| Type | Description |
|---|---|
| | BBoxes of shape (..., 4) in XYXY format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/cxcywh.py
xyxy_to_cxcywh(bboxes, image_shape)
Transforms bboxes from xyxy format to CX-CY-W-H format
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | BBoxes of shape (..., 4) in XYXY format | required |

Returns:

| Type | Description |
|---|---|
| | BBoxes of shape (..., 4) in CX-CY-W-H format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/cxcywh.py
xyxy_to_cxcywh_inplace(bboxes, image_shape)
Transforms bboxes from XYXY format to CX-CY-W-H format. This function operates in-place. Note that the bboxes dtype is preserved, which may lead to unwanted rounding errors when computing the center of a bbox.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | BBoxes of shape (..., 4) in XYXY format | required |

Returns:

| Type | Description |
|---|---|
| | BBoxes of shape (..., 4) in CX-CY-W-H format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/cxcywh.py
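A quick numeric check of the CX-CY-W-H conversions; the import path is inferred from the source location shown above:

```python
import numpy as np

from super_gradients.training.datasets.data_formats.bbox_formats.cxcywh import cxcywh_to_xyxy, xyxy_to_cxcywh

box_xyxy = np.array([[10.0, 20.0, 50.0, 80.0]])
box_cxcywh = xyxy_to_cxcywh(box_xyxy, image_shape=(100, 200))
# cx=(10+50)/2=30, cy=(20+80)/2=50, w=50-10=40, h=80-20=60 -> [[30, 50, 40, 60]]

roundtrip = cxcywh_to_xyxy(box_cxcywh, image_shape=(100, 200))
assert np.allclose(roundtrip, box_xyxy)  # conversions invert each other
```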
NormalizedXYXYCoordinateFormat
Bases: BoundingBoxFormat
Normalized X1,Y1,X2,Y2 bounding boxes format
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/normalized_xyxy.py
normalized_xyxy_to_xyxy(bboxes, image_shape)
Convert unit-normalized XYXY bboxes to XYXY bboxes in pixel units.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | BBoxes of shape (..., 4) in XYXY (unit-normalized) format | required |
| image_shape | Tuple[int, int] | Image shape (rows, cols) | required |

Returns:

| Type | Description |
|---|---|
| | BBoxes of shape (..., 4) in XYXY (pixels) format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/normalized_xyxy.py
normalized_xyxy_to_xyxy_inplace(bboxes, image_shape)
Convert unit-normalized XYXY bboxes to XYXY bboxes in pixel units. This function operates in-place.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | BBoxes of shape (..., 4) in XYXY (unit-normalized) format | required |
| image_shape | Tuple[int, int] | Image shape (rows, cols) | required |

Returns:

| Type | Description |
|---|---|
| | BBoxes of shape (..., 4) in XYXY (pixels) format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/normalized_xyxy.py
xyxy_to_normalized_xyxy(bboxes, image_shape)
Convert bboxes from XYXY (pixels) format to XYXY (unit-normalized) format
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | Tensor | BBoxes of shape (..., 4) in XYXY (pixels) format | required |
| image_shape | Tuple[int, int] | Image shape (rows, cols) | required |

Returns:

| Type | Description |
|---|---|
| Tensor | BBoxes of shape (..., 4) in XYXY (unit-normalized) format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/normalized_xyxy.py
xyxy_to_normalized_xyxy_inplace(bboxes, image_shape)
Convert bboxes from XYXY (pixels) format to XYXY (unit-normalized) format. This function operates in-place.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | BBoxes of shape (..., 4) in XYXY (pixels) format | required |
| image_shape | Tuple[int, int] | Image shape (rows, cols) | required |

Returns:

| Type | Description |
|---|---|
| | BBoxes of shape (..., 4) in XYXY (unit-normalized) format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/normalized_xyxy.py
xywh_to_xyxy(bboxes, image_shape)
Transforms bboxes from XYWH format to XYXY format
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | BBoxes of shape (..., 4) in XYWH format | required |

Returns:

| Type | Description |
|---|---|
| | BBoxes of shape (..., 4) in XYXY format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/xywh.py
xywh_to_xyxy_inplace(bboxes, image_shape)
Transforms bboxes from XYWH format to XYXY format. This function operates in-place.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | BBoxes of shape (..., 4) in XYWH format | required |

Returns:

| Type | Description |
|---|---|
| | BBoxes of shape (..., 4) in XYXY format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/xywh.py
xyxy_to_xywh(bboxes, image_shape)
Transforms bboxes from XYXY format to XYWH format.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | BBoxes of shape (..., 4) in XYXY format | required |

Returns:

| Type | Description |
|---|---|
| | BBoxes of shape (..., 4) in XYWH format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/xywh.py
xyxy_to_xywh_inplace(bboxes, image_shape)
Transforms bboxes from XYXY format to XYWH format. This function operates in-place.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bboxes | | BBoxes of shape (..., 4) in XYXY format | required |

Returns:

| Type | Description |
|---|---|
| | BBoxes of shape (..., 4) in XYWH format |
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/xywh.py
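A quick numeric check of the XYWH conversions; the import path is inferred from the source location shown above:

```python
import numpy as np

from super_gradients.training.datasets.data_formats.bbox_formats.xywh import xywh_to_xyxy, xyxy_to_xywh

box_xyxy = np.array([[10.0, 20.0, 50.0, 80.0]])
box_xywh = xyxy_to_xywh(box_xyxy, image_shape=(100, 200))
# x=10, y=20, w=50-10=40, h=80-20=60 -> [[10, 20, 40, 60]]

assert np.allclose(xywh_to_xyxy(box_xywh, image_shape=(100, 200)), box_xyxy)
```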
XYXYCoordinateFormat
Bases: BoundingBoxFormat
Bounding boxes format X1, Y1, X2, Y2
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/xyxy.py
YXYXCoordinateFormat
Bases: BoundingBoxFormat
Bounding boxes format Y1, X1, Y2, X2
Source code in src/super_gradients/training/datasets/data_formats/bbox_formats/yxyx.py
ConcatenatedTensorFormatConverter
Source code in src/super_gradients/training/datasets/data_formats/format_converter.py
__init__(input_format, output_format, image_shape)
Converts concatenated tensors from input format to output format.
Example:

>>> from super_gradients.training.datasets.data_formats import ConcatenatedTensorFormatConverter
>>> from super_gradients.training.datasets.data_formats.default_formats import LABEL_CXCYWH, LABEL_NORMALIZED_XYXY
>>> h, w = 100, 200
>>> input_target = np.array([[10, 20 / w, 30 / h, 40 / w, 50 / h]], dtype=np.float32)
>>> expected_output_target = np.array([[10, 30, 40, 20, 20]], dtype=np.float32)
>>>
>>> transform = ConcatenatedTensorFormatConverter(input_format=LABEL_NORMALIZED_XYXY, output_format=LABEL_CXCYWH, image_shape=(h, w))
>>>
>>> # np.float32 approximation of multiplication/division can lead to uncertainty of up to 1e-7 in precision
>>> assert np.allclose(transform(input_target), expected_output_target, atol=1e-6)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_format | ConcatenatedTensorFormat | Format definition of the inputs | required |
| output_format | ConcatenatedTensorFormat | Format definition of the outputs | required |
| image_shape | Union[Tuple[int, int], None] | Shape of the input image (rows, cols), used for converting bbox coordinates from/to normalized format. If you're not using normalized coordinates you can set this to None | required |
Source code in src/super_gradients/training/datasets/data_formats/format_converter.py
ConcatenatedTensorFormat
Bases: DetectionOutputFormat
Defines an output format that returns a single tensor of shape [N, M] (N - number of detections, M - sum of bbox attributes), concatenated from bbox coordinates and other fields. A layout defines the order of the concatenated tensors. For instance:
- layout: (bboxes, scores, labels) gives a Tensor that is the product of torch.cat([bboxes, scores, labels], dim=1)
- layout: (labels, bboxes) produces a Tensor from torch.cat([labels, bboxes], dim=1)

from super_gradients.training.datasets.data_formats.formats import ConcatenatedTensorFormat, BoundingBoxesTensorSliceItem, TensorSliceItem
from super_gradients.training.datasets.data_formats.bbox_formats import XYXYCoordinateFormat, NormalizedXYWHCoordinateFormat

custom_format = ConcatenatedTensorFormat(
    layout=(
        BoundingBoxesTensorSliceItem(name="bboxes", format=XYXYCoordinateFormat()),
        TensorSliceItem(name="label", length=1),
        TensorSliceItem(name="distance", length=1),
        TensorSliceItem(name="attributes", length=4),
    )
)
Source code in src/super_gradients/training/datasets/data_formats/formats.py
apply_on_bboxes(fn, tensor, tensor_format)
Apply a function in-place, only on the bboxes of a concatenated tensor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| fn | Callable[[Union[np.ndarray, Tensor]], Union[np.ndarray, Tensor]] | Function to apply on the bboxes. | required |
| tensor | Union[np.ndarray, Tensor] | Concatenated tensor that includes - among others - the bboxes. | required |
| tensor_format | ConcatenatedTensorFormat | Format of the tensor, required to know the indexes of the bboxes. | required |

Returns:

| Type | Description |
|---|---|
| Union[np.ndarray, Tensor] | Tensor, after applying the fn in-place on the bboxes |
Source code in src/super_gradients/training/datasets/data_formats/formats.py
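A usage sketch of apply_on_bboxes, built from the documented layout classes; import paths are assumptions inferred from the source locations shown above:

```python
import numpy as np

from super_gradients.training.datasets.data_formats.formats import (
    ConcatenatedTensorFormat,
    BoundingBoxesTensorSliceItem,
    TensorSliceItem,
    apply_on_bboxes,
)
from super_gradients.training.datasets.data_formats.bbox_formats import XYXYCoordinateFormat

fmt = ConcatenatedTensorFormat(
    layout=(
        BoundingBoxesTensorSliceItem(name="bboxes", format=XYXYCoordinateFormat()),
        TensorSliceItem(name="label", length=1),
    )
)
targets = np.array([[10.0, 20.0, 50.0, 80.0, 3.0]])  # (x1, y1, x2, y2, label)

# Scale only the bbox columns; the label column is left untouched.
scaled = apply_on_bboxes(fn=lambda boxes: boxes * 2.0, tensor=targets, tensor_format=fmt)
```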
apply_on_layout(fn, tensor, tensor_format, layout_name)
Apply a function in-place, only on a specific layout of a concatenated tensor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| fn | Callable[[Union[np.ndarray, Tensor]], Union[np.ndarray, Tensor]] | Function to apply on the layout. | required |
| tensor | Union[np.ndarray, Tensor] | Concatenated tensor that includes - among others - the layout of interest. | required |
| tensor_format | ConcatenatedTensorFormat | Format of the tensor, required to know the indexes of the layout. | required |
| layout_name | str | Name of the layout of interest. It has to be defined in the tensor_format. | required |

Returns:

| Type | Description |
|---|---|
| Union[np.ndarray, Tensor] | Tensor, after applying the fn in-place on the layout |
Source code in src/super_gradients/training/datasets/data_formats/formats.py
filter_on_bboxes(fn, tensor, tensor_format)
Filter the tensor according to a condition on the bboxes.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| fn | Callable[[Union[np.ndarray, Tensor]], Union[np.ndarray, Tensor]] | Function to filter the bboxes (keep only True elements). | required |
| tensor | Union[np.ndarray, Tensor] | Concatenated tensor that includes - among others - the bboxes. | required |
| tensor_format | ConcatenatedTensorFormat | Format of the tensor, required to know the indexes of the bboxes. | required |

Returns:

| Type | Description |
|---|---|
| Union[np.ndarray, Tensor] | Tensor, after filtering the bboxes according to fn. |
Source code in src/super_gradients/training/datasets/data_formats/formats.py
filter_on_layout(fn, tensor, tensor_format, layout_name)
Filter the tensor according to a condition on a specific layout.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| fn | Callable[[Union[np.ndarray, Tensor]], Union[np.ndarray, Tensor]] | Function to filter the bboxes (keep only True elements). | required |
| tensor | Union[np.ndarray, Tensor] | Concatenated tensor that includes - among others - the layout of interest. | required |
| tensor_format | ConcatenatedTensorFormat | Format of the tensor, required to know the indexes of the layout. | required |
| layout_name | str | Name of the layout of interest. It has to be defined in the tensor_format. | required |

Returns:

| Type | Description |
|---|---|
| Union[np.ndarray, Tensor] | Tensor, after filtering the bboxes according to fn. |
Source code in src/super_gradients/training/datasets/data_formats/formats.py
get_permutation_indexes(input_format, output_format)
Compute the permutations required to change the format layout order.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_format | ConcatenatedTensorFormat | Input format to transform from | required |
| output_format | ConcatenatedTensorFormat | Output format to transform to | required |

Returns:

| Type | Description |
|---|---|
| List[int] | Permutation indexes to go from input to output format. |
Source code in src/super_gradients/training/datasets/data_formats/formats.py
ConvertBoundingBoxes
Bases: nn.Module
Source code in src/super_gradients/training/datasets/data_formats/output_adapters/detection_adapter.py
forward(x)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | | required |
| image_shape | | | required |

Returns:

| Type | Description |
|---|---|
| Tensor | |
Source code in src/super_gradients/training/datasets/data_formats/output_adapters/detection_adapter.py
DetectionOutputAdapter
Bases: nn.Module
Adapter class for converting model's predictions for object detection to a desired format. This adapter supports torch.jit tracing & scripting & onnx conversion.
from super_gradients.training.datasets.data_formats.formats import ConcatenatedTensorFormat, BoundingBoxesTensorSliceItem, TensorSliceItem
from super_gradients.training.datasets.data_formats.bbox_formats import XYXYCoordinateFormat, NormalizedXYWHCoordinateFormat

class CustomDetectionHead(nn.Module):
    num_classes: int = 123

    @property
    def format(self):
        '''
        Describe the semantics of the model's output. In this example, the model's output consists of
        - Bounding boxes in XYXY format [4]
        - Predicted probas of N classes [N]
        - A distance prediction [1]
        - K additional labels [K]
        '''
        return ConcatenatedTensorFormat(
            layout=(
                BoundingBoxesTensorSliceItem(name="bboxes", format=XYXYCoordinateFormat()),
                TensorSliceItem(name="label", length=1),
                TensorSliceItem(name="distance", length=1),
                TensorSliceItem(name="attributes", length=4),
            )
        )

yolox = YoloX(head=CustomDetectionHead)

Suppose we want to return predictions in another format.
Let it be:
- Bounding boxes in normalized XYWH [4]
- Predicted attributes [4]
- Predicted label [1]

output_format = ConcatenatedTensorFormat(
    layout=(
        # Note: For the output format it is not required to specify the location attribute, as it is
        # computed with respect to the size of the "source name"; the order of items in the layout
        # describes their order in the output tensor.
        BoundingBoxesTensorSliceItem(name="bboxes", format=NormalizedXYWHCoordinateFormat()),
        TensorSliceItem(name="attributes", length=4),
        TensorSliceItem(name="label", length=1),
    )
)

Now we can construct the output adapter and attach it to the model:

output_adapter = DetectionOutputAdapter(
    input_format=yolox.head.format,
    output_format=output_format,
    image_shape=(640, 640),
)

yolox = nn.Sequential(yolox, output_adapter)
Source code in src/super_gradients/training/datasets/data_formats/output_adapters/detection_adapter.py
__init__(input_format, output_format, image_shape)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_format | ConcatenatedTensorFormat | Format definition of the inputs | required |
| output_format | ConcatenatedTensorFormat | Format definition of the outputs | required |
| image_shape | Union[Tuple[int, int], None] | Shape of the input image (rows, cols), used for converting bbox coordinates from/to normalized format. If you're not using normalized coordinates you can set this to None | required |
Source code in src/super_gradients/training/datasets/data_formats/output_adapters/detection_adapter.py
forward(predictions)
Convert output detections to the user-specified format
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| predictions | Tensor | | required |

Returns:

| Type | Description |
|---|---|
| Tensor | |
Source code in src/super_gradients/training/datasets/data_formats/output_adapters/detection_adapter.py
RearrangeOutput
Bases: nn.Module
Rearrange elements in last dimension of input tensor with respect to index argument
Source code in src/super_gradients/training/datasets/data_formats/output_adapters/detection_adapter.py
forward(x)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Input tensor of [..., N] shape | required |

Returns:

| Type | Description |
|---|---|
| Tensor | Output tensor of [..., N[index]] shape |
Source code in src/super_gradients/training/datasets/data_formats/output_adapters/detection_adapter.py
AbstractCollateFunction
Bases: ABC
A collate function (for torch DataLoader)
Source code in src/super_gradients/training/datasets/datasets_utils.py
AbstractPrePredictionCallback
Bases: ABC
Abstract class for forward pass preprocessing function, to be used by passing its inheritors through training_params pre_prediction_callback keyword arg.
Should implement __call__ and return images, targets after applying the desired preprocessing.
Source code in src/super_gradients/training/datasets/datasets_utils.py
ComposedCollateFunction
Bases: AbstractCollateFunction
A function (for torch DataLoader) which executes a sequence of sub collate functions
Source code in src/super_gradients/training/datasets/datasets_utils.py
DatasetStatisticsTensorboardLogger
Source code in src/super_gradients/training/datasets/datasets_utils.py
analyze(data_loader, title, all_classes, anchors=None)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| data_loader | torch.utils.data.DataLoader | the dataset data loader | required |
| dataset_params | | the dataset parameters | required |
| title | str | the title for this dataset (i.e. Coco 2017 test set) | required |
| anchors | list | the list of anchors used by the model. Applicable only for detection datasets | None |
| all_classes | List[str] | the list of all class names | required |
Source code in src/super_gradients/training/datasets/datasets_utils.py
DetectionMultiscalePrePredictionCallback
Bases: MultiscalePrePredictionCallback
Multiscale pre-prediction callback for object detection.
When passed through train_params, the transform below is applied to images and targets to support multi-scaling on the fly.
After each self.frequency forward passes, the input size is changed randomly to one of (input_size - self.multiscale_range * self.image_size_steps, input_size - (self.multiscale_range - 1) * self.image_size_steps, ..., input_size + self.multiscale_range * self.image_size_steps), and the same rescaling is applied to the box coordinates.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| multiscale_range | | Range of values for resize sizes as discussed above (default=5) | required |
| image_size_steps | | Image step sizes as discussed above (default=32) | required |
| change_frequency | | The frequency to apply change in input size. | required |
Source code in src/super_gradients/training/datasets/datasets_utils.py
MultiScaleCollateFunction
Bases: AbstractCollateFunction
A collate function implementing multi-scale data augmentation according to https://arxiv.org/pdf/1612.08242.pdf
Source code in src/super_gradients/training/datasets/datasets_utils.py
__init__(target_size=None, min_image_size=None, max_image_size=None, image_size_steps=32, change_frequency=10)
Set parameters for the multi-scale collate function. The possible image sizes are in the range [min_image_size, max_image_size], in steps of image_size_steps; a new size is randomly selected every change_frequency calls to collate_fn().
:param target_size: scales will be [0.66 * target_size, 1.5 * target_size]
:param min_image_size: the minimum size to scale down to (in pixels)
:param max_image_size: the maximum size to scale up to (in pixels)
:param image_size_steps: typically, the stride of the net, which defines the possible image size multiplications
:param change_frequency: how often (in calls to collate_fn()) a new size is selected
Source code in src/super_gradients/training/datasets/datasets_utils.py
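A usage sketch with a DataLoader, using the documented constructor arguments (`train_set` is assumed to be any existing Dataset instance):

```python
from torch.utils.data import DataLoader

from super_gradients.training.datasets.datasets_utils import MultiScaleCollateFunction

collate_fn = MultiScaleCollateFunction(target_size=640, image_size_steps=32, change_frequency=10)
# Per the docs above, scales will span roughly [0.66 * 640, 1.5 * 640], rounded to multiples of 32.
loader = DataLoader(train_set, batch_size=16, collate_fn=collate_fn)
```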
MultiscalePrePredictionCallback
Bases: AbstractPrePredictionCallback
Multiscale pre-prediction callback pass function.
When passed through train_params, the transform below is applied to images and targets to support multi-scaling on the fly.
After each self.frequency forward passes, the input size is changed randomly to one of (input_size - self.multiscale_range * self.image_size_steps, input_size - (self.multiscale_range - 1) * self.image_size_steps, ..., input_size + self.multiscale_range * self.image_size_steps).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| multiscale_range | int | Range of values for resize sizes as discussed above (default=5) | 5 |
| image_size_steps | int | Image step sizes as discussed above (default=32) | 32 |
| change_frequency | int | The frequency to apply change in input size. | 10 |
Source code in src/super_gradients/training/datasets/datasets_utils.py
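A sketch of wiring the callback into training_params, per the pre_prediction_callback usage described above:

```python
from super_gradients.training.datasets.datasets_utils import MultiscalePrePredictionCallback

training_params = {
    # Every 10 forward passes, pick a new size in
    # [input_size - 5 * 32, input_size + 5 * 32], in steps of 32.
    "pre_prediction_callback": MultiscalePrePredictionCallback(
        multiscale_range=5,
        image_size_steps=32,
        change_frequency=10,
    ),
}
```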
RandomResizedCropAndInterpolation
Bases: RandomResizedCrop
Crop the given PIL Image to random size and aspect ratio with explicitly chosen or random interpolation.
A crop of random size (default: of 0.08 to 1.0) of the original size and a random aspect ratio (default: of 3/4 to 4/3) of the original aspect ratio is made. This crop is finally resized to given size. This is popularly used to train the Inception networks.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| size | | Expected output size of each edge | required |
| scale | | Range of size of the origin size cropped | (0.08, 1.0) |
| ratio | | Range of aspect ratio of the origin aspect ratio cropped | (3.0 / 4.0, 4.0 / 3.0) |
| interpolation | | Default: PIL.Image.BILINEAR | 'default' |
Source code in src/super_gradients/training/datasets/datasets_utils.py
forward(img)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| img | Image | Image to be cropped and resized. | required |

Returns:

| Type | Description |
|---|---|
| Image | Randomly cropped and resized image. |
Source code in src/super_gradients/training/datasets/datasets_utils.py
get_color_augmentation(rand_augment_config_string, color_jitter, crop_size=224, img_mean=[0.485, 0.456, 0.406])
Returns a color augmentation class. As these augmentations cannot be stacked on top of one another, only one is returned, according to rand_augment_config_string.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| rand_augment_config_string | str | String which defines the auto augment configurations. If None, color jitter will be returned. For possible values see auto_augment.py | required |
| color_jitter | tuple | Tuple for color jitter value. | required |
| crop_size | | Relevant only for auto augment | 224 |
| img_mean | | Relevant only for auto augment | [0.485, 0.456, 0.406] |

Returns:

| Type | Description |
|---|---|
| | RandAugment transform or ColorJitter |
Source code in src/super_gradients/training/datasets/datasets_utils.py
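A usage sketch of both documented branches; the import path is inferred from the source location shown above:

```python
from super_gradients.training.datasets.datasets_utils import get_color_augmentation

# With a RandAugment config string, a RandAugment transform is returned:
aug = get_color_augmentation(
    rand_augment_config_string="rand-m9-mstd0.5",
    color_jitter=(0.4, 0.4, 0.4),
    crop_size=224,
    img_mean=[0.485, 0.456, 0.406],
)

# With None, a ColorJitter transform is returned instead:
jitter = get_color_augmentation(rand_augment_config_string=None, color_jitter=(0.4, 0.4, 0.4))
```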
get_mean_and_std_torch(data_dir=None, dataloader=None, num_workers=4, RandomResizeSize=224)
A function for getting the mean and std of large datasets using pytorch dataloader and gpu functionality.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| data_dir | | String, path to a non-library dataset folder. For example "/data/Imagenette" or "/data/TinyImagenet" | None |
| dataloader | | A torch DataLoader, as it would feed the data into the trainer (including transforms etc). | None |
| RandomResizeSize | | Int, the size of the RandomResizeCrop as it appears in the DataInterface (for example, for Imagenet, this value should be 224). | 224 |

Returns:

| Type | Description |
|---|---|
| | Two lists, mean and std, each of length 3 (one per channel) |
Source code in src/super_gradients/training/datasets/datasets_utils.py
worker_init_reset_seed(worker_id)
Make sure each process has different random seed, especially for 'fork' method. Check https://github.com/pytorch/pytorch/issues/63311 for more details.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| worker_id | | Placeholder (needs to be passed to DataLoader init). | required |
Source code in src/super_gradients/training/datasets/datasets_utils.py
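A usage sketch of passing the function to a DataLoader (`train_set` is assumed to be any existing Dataset instance):

```python
from torch.utils.data import DataLoader

from super_gradients.training.datasets.datasets_utils import worker_init_reset_seed

loader = DataLoader(
    train_set,
    batch_size=16,
    num_workers=4,
    worker_init_fn=worker_init_reset_seed,  # each worker process gets a distinct random seed
)
```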
AbstractDepthEstimationDataset
Bases: Dataset
Abstract class for datasets for depth estimation task.
Attempting to follow principles provided in pose_estimation_dataset.
Source code in src/super_gradients/training/datasets/depth_estimation_datasets/abstract_depth_estimation_dataset.py
__getitem__(index)
Get a transformed depth estimation sample from the dataset.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| index | int | Index of the sample to retrieve. | required |

Returns:

| Type | Description |
|---|---|
| Tuple[np.ndarray, np.ndarray] | Tuple containing the transformed image and depth map as np.ndarrays. After applying the transforms pipeline, the image is expected to be in HWC format, and the depth map should be a 2D array (e.g., Height x Width). Before returning, the image's channels are moved to CHW format and an additional dummy dimension is added to the depth map, resulting in a 1HW shape. |
Source code in src/super_gradients/training/datasets/depth_estimation_datasets/abstract_depth_estimation_dataset.py
load_random_sample()
Return a random sample from the dataset
Returns:

| Type | Description |
|---|---|
| DepthEstimationSample | Instance of DepthEstimationSample |
Source code in src/super_gradients/training/datasets/depth_estimation_datasets/abstract_depth_estimation_dataset.py
load_sample(index)
abstractmethod
Load a depth estimation sample from the dataset.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| index | int | Index of the sample to load. | required |

Returns:

| Type | Description |
|---|---|
| DepthEstimationSample | Instance of DepthEstimationSample. If your dataset contains non-labeled regions with a specific value (e.g., -100) representing ignored areas, ensure that the same value is used as the … |
Source code in src/super_gradients/training/datasets/depth_estimation_datasets/abstract_depth_estimation_dataset.py
plot(max_samples_per_plot=8, n_plots=1, plot_transformed_data=True, color_scheme=None, drop_extreme_percentage=0, inverse=False)
Combine samples of images with depth maps into plots and display the result.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| max_samples_per_plot | int | Maximum number of samples (image with depth map) to be displayed per plot. | 8 |
| n_plots | int | Number of plots to display. | 1 |
| plot_transformed_data | bool | If True, the plot will be over samples after applying transforms (i.e., on __getitem__). If False, the plot will be over the raw samples (i.e., on load_sample). | True |
| color_scheme | Optional[int] | OpenCV color scheme for the depth map visualization. If not specified: … | None |
| drop_extreme_percentage | float | Percentage of extreme values to drop on both ends of the depth spectrum. | 0 |
| inverse | bool | Apply inversion (1 / depth) to the depth map if True. | False |

Returns:

| Type | Description |
|---|---|
| | None |
Source code in src/super_gradients/training/datasets/depth_estimation_datasets/abstract_depth_estimation_dataset.py
NYUv2DepthEstimationDataset
Bases: AbstractDepthEstimationDataset
Dataset class for NYU Depth V2 dataset for depth estimation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| root | str | Root directory containing the dataset. | required |
| df_path | str | Path to the CSV file containing image and depth map file paths, relative to root. | required |
| transforms | | Transforms to be applied to the samples. | None |

To use the NYUv2Dataset class, ensure that your dataset directory is organized as follows:

- Root directory (specified as 'root' when initializing the dataset)
  - nyu2_train (or any other split)
    - scene_category_1
      - image_1.jpg
      - image_2.png
      - ...
    - scene_category_2
      - image_1.jpg
      - image_2.png
      - ...
    - ...
  - nyu2_test (or any other split)
    - 00000_colors.png
    - 00001_colors.png
    - 00002_colors.png
    - ...

The CSV file (specified as 'df_path' when initializing the dataset) should contain two columns: path to the color images and path to the depth maps (both relative to the root). Example CSV content:

data/nyu2_train/scene_category_1/image_1.jpg, data/nyu2_train/scene_category_1/image_1_depth.png
data/nyu2_train/scene_category_1/image_2.jpg, data/nyu2_train/scene_category_1/image_2_depth.png
data/nyu2_train/scene_category_2/image_1.jpg, data/nyu2_train/scene_category_2/image_1_depth.png

Note: As of 14/12/2023 the official download link is broken. Data can be obtained at https://www.kaggle.com/code/shreydan/monocular-depth-estimation-nyuv2/input
Source code in src/super_gradients/training/datasets/depth_estimation_datasets/nyuv2_dataset.py
__init__(root, df_path, transforms=None)
Initialize NYUv2Dataset.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| root | str | Root directory containing the dataset. | required |
| df_path | str | Path to the CSV file containing image and depth map file paths. | required |
| transforms | | Transforms to be applied to the samples. | None |
Source code in src/super_gradients/training/datasets/depth_estimation_datasets/nyuv2_dataset.py
__len__()
Get the number of samples in the dataset.
Returns:

| Type | Description |
|---|---|
| | Number of samples in the dataset. |
Source code in src/super_gradients/training/datasets/depth_estimation_datasets/nyuv2_dataset.py
load_sample(index)
Load a depth estimation sample at the specified index.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| index | int | Index of the sample. | required |

Returns:

| Type | Description |
|---|---|
| DepthEstimationSample | Loaded depth estimation sample. |
Source code in src/super_gradients/training/datasets/depth_estimation_datasets/nyuv2_dataset.py
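An end-to-end sketch using the documented constructor and plot arguments (paths are illustrative; the import path is inferred from the source location shown above):

```python
from super_gradients.training.datasets.depth_estimation_datasets.nyuv2_dataset import NYUv2DepthEstimationDataset

train_set = NYUv2DepthEstimationDataset(root="/data/nyuv2", df_path="nyu2_train.csv")

image, depth = train_set[0]  # image in CHW format, depth map with shape 1HW (see __getitem__ above)

# Visual sanity check over raw (un-transformed) samples:
train_set.plot(max_samples_per_plot=4, n_plots=1, plot_transformed_data=False)
```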
COCODetectionDataset
Bases: COCOFormatDetectionDataset
Dataset for COCO object detection.
To use this Dataset you need to:
- Download coco dataset:
annotations: http://images.cocodataset.org/annotations/annotations_trainval2017.zip
train2017: http://images.cocodataset.org/zips/train2017.zip
val2017: http://images.cocodataset.org/zips/val2017.zip
- Unzip and organize it as below:
coco
├── annotations
│ ├─ instances_train2017.json
│ ├─ instances_val2017.json
│ └─ ...
└── images
├── train2017
│ ├─ 000000000001.jpg
│ └─ ...
└── val2017
└─ ...
- Install the COCO API: https://github.com/pdollar/coco/tree/master/PythonAPI
- Instantiate the dataset:
>> train_set = COCODetectionDataset(data_dir='.../coco', subdir='images/train2017', json_file='instances_train2017.json', ...)
>> valid_set = COCODetectionDataset(data_dir='.../coco', subdir='images/val2017', json_file='instances_val2017.json', ...)
Source code in src/super_gradients/training/datasets/detection_datasets/coco_detection.py
__init__(json_file='instances_train2017.json', subdir='images/train2017', *args, **kwargs)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| json_file | str | Name of the coco json file, which resides in data_dir/annotations/json_file. | 'instances_train2017.json' |
| subdir | str | Sub directory of data_dir containing the data. | 'images/train2017' |
| with_crowd | | Add the crowd groundtruths to __getitem__ | required |

kwargs: all_classes_list - all classes list, default is COCO_DETECTION_CLASSES_LIST.
Source code in src/super_gradients/training/datasets/detection_datasets/coco_detection.py
COCOFormatDetectionDataset
Bases: DetectionDataset
Base dataset to load ANY dataset that has a structure similar to the COCO dataset:
- Annotation file (.json). It has to respect the exact same format as COCO, for both the json schema and the bbox format (xywh).
- One folder with all the images.

Output format: (x, y, x, y, class_id)
Source code in src/super_gradients/training/datasets/detection_datasets/coco_format_detection.py
__init__(data_dir, json_annotation_file, images_dir, with_crowd=True, class_ids_to_ignore=None, tight_box_rotation=None, *args, **kwargs)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| data_dir | str | Where the data is stored. | required |
| json_annotation_file | str | Name of the coco json file. Path can be either absolute, or relative to data_dir. | required |
| images_dir | str | Name of the directory that includes all the images. Path relative to data_dir. | required |
| with_crowd | bool | Add the crowd groundtruths to __getitem__ | True |
| class_ids_to_ignore | Optional[List[int]] | List of class ids to ignore in the dataset. By default, doesn't ignore any class. | None |
| tight_box_rotation | | This parameter is deprecated and will be removed in SuperGradients 3.8. | None |
Source code in src/super_gradients/training/datasets/detection_datasets/coco_format_detection.py
parse_coco_into_detection_annotations(ann, exclude_classes=None, include_classes=None, class_ids_to_ignore=None, image_path_prefix=None)
Load COCO detection dataset from annotation file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ann | str | A path to the JSON annotation file in COCO format. | required |
| exclude_classes | Optional[List[str]] | List of classes to exclude from the dataset. All other classes will be included. This parameter is mutually exclusive with include_classes and class_ids_to_ignore. | None |
| include_classes | Optional[List[str]] | List of classes to include in the dataset. All other classes will be excluded. This parameter is mutually exclusive with exclude_classes and class_ids_to_ignore. | None |
| class_ids_to_ignore | Optional[List[int]] | List of category ids to ignore in the dataset. All other classes will be included. This parameter was added for backward compatibility with the class_ids_to_ignore argument of COCOFormatDetectionDataset, but will be removed in the future in favor of include_classes/exclude_classes. It is mutually exclusive with exclude_classes and include_classes. | None |
| image_path_prefix | | A prefix to add to the image paths in the annotation file. | None |

Returns:

| Type | Description |
|---|---|
| Tuple[List[str], List[DetectionAnnotation]] | Tuple (class_names, annotations) where class_names is a list of class names (respecting include_classes/exclude_classes/class_ids_to_ignore) and annotations is a list of DetectionAnnotation objects. |
Source code in src/super_gradients/training/datasets/detection_datasets/coco_format_detection.py
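A usage sketch with the documented arguments (paths are illustrative; the import path is inferred from the source location shown above):

```python
from super_gradients.training.datasets.detection_datasets.coco_format_detection import (
    parse_coco_into_detection_annotations,
)

class_names, annotations = parse_coco_into_detection_annotations(
    ann="/data/coco/annotations/instances_val2017.json",
    include_classes=["person", "car"],  # mutually exclusive with exclude_classes / class_ids_to_ignore
    image_path_prefix="/data/coco/images/val2017",
)
```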
DetectionDataset
Bases: Dataset, HasPreprocessingParams, HasClassesInformation
Detection dataset.
This is a boilerplate class to facilitate the implementation of datasets.
HOW TO CREATE A DATASET THAT INHERITS FROM DetectionDataset?
- Inherit from DetectionDataset
- Implement the method self._load_annotation to return at least the fields "target" and "img_path"
- Call super().__init__ with the required params.
  /!\ super().__init__ will call self._load_annotation, so make sure that every required attribute is set up before calling super().__init__ (ideally just call it last)

WORKFLOW:
- On instantiation:
  - All annotations are cached. If class_inclusion_list was specified, there is also subclassing at this step.
- On call (__getitem__) for a specific image index:
  - The image and annotations are grouped together in a dict called SAMPLE
  - The sample is processed according to the transforms
  - Only the specified fields are returned by __getitem__

TERMINOLOGY:
- TARGET: Ground truth, made of bboxes. The format can vary from one dataset to another.
- ANNOTATION: Combination of targets (ground truth) and metadata of the image, but without the image itself.
  > Has to include the fields "target" and "img_path"
  > Can include other fields like "crowd_target", "image_info", "segmentation", ...
- SAMPLE: Output of the dataset:
  > Has to include the fields "target" and "image"
  > Can include other fields like "crowd_target", "image_info", "segmentation", ...
- INDEX: Index of the sample in the dataset, AFTER filtering (if relevant). 0 <= index <= len(dataset) - 1
- SAMPLE ID: Index of the sample in the dataset, WITHOUT considering any filtering. 0 <= sample_id <= len(source) - 1
Source code in src/super_gradients/training/datasets/detection_datasets/detection_dataset.py
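A minimal sketch of the inheritance recipe above. The annotation-loading hook is rendered as load_annotation on this page (likely _load_annotation with the leading underscore eaten by markdown), so the hook name below is an assumption; the attribute setup order follows the warning above:

```python
from super_gradients.training.datasets.detection_datasets.detection_dataset import DetectionDataset

class MyDetectionDataset(DetectionDataset):
    def __init__(self, data_dir, annotation_records, **kwargs):
        # Everything the annotation hook needs must exist BEFORE super().__init__,
        # because super().__init__ caches annotations by calling the hook.
        self.annotation_records = annotation_records
        super().__init__(data_dir=data_dir, **kwargs)

    def _load_annotation(self, sample_id: int) -> dict:  # hypothetical hook name, see lead-in
        record = self.annotation_records[sample_id]
        # Must return at least the fields "target" and "img_path".
        return {"target": record["target"], "img_path": record["img_path"]}
```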
__getitem__(index)
Get the sample post transforms at a specific index of the dataset. The output of this function will be collated to form batches.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| index | int | Index refers to the index of the sample in the dataset, AFTER filtering (if relevant). 0 <= index <= len(dataset) - 1 | required |

Returns:

| Type | Description |
|---|---|
| Tuple | Sample, i.e. a dictionary including at least "image" and "target" |
Source code in src/super_gradients/training/datasets/detection_datasets/detection_dataset.py
367 368 369 370 371 372 373 374 375 376 377 378 379 |
|
__init__(data_dir, original_target_format, max_num_samples=None, cache_annotations=True, input_dim=None, transforms=[], all_classes_list=[], class_inclusion_list=None, ignore_empty_annotations=True, target_fields=None, output_fields=None, verbose=True, show_all_warnings=False, cache=None, cache_dir=None)
Detection dataset.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| data_dir | str | Where the data is stored | required |
| input_dim | Union[int, Tuple[int, int], None] | Image size (when loaded, before transforms). Can be None, a scalar, or a tuple (rows, cols). None means that the image will be loaded as is. Scalar (size) - image will be resized to (size, size). Tuple (rows, cols) - image will be resized to (rows, cols). | None |
| original_target_format | Union[ConcatenatedTensorFormat, DetectionTargetsFormat] | Format of targets stored on disk. Raw data format; the output format might differ based on transforms. | required |
| max_num_samples | int | If not None, set the maximum size of the dataset by only indexing the first n annotations/images. | None |
| cache_annotations | bool | Whether to cache annotations or not. This reduces training time by pre-loading all the annotations, but requires more RAM and more time to instantiate the dataset when working on very large datasets. | True |
| transforms | List[AbstractDetectionTransform] | List of transforms to apply sequentially on sample. | [] |
| all_classes_list | Optional[List[str]] | All the class names. | [] |
| class_inclusion_list | Optional[List[str]] | If not None, defines the subset of classes to be included as targets. Classes not in this list will be excluded from training. Thus, the number of classes in the model must be adjusted accordingly. | None |
| ignore_empty_annotations | bool | If True and class_inclusion_list not None, images without any target will be ignored. | True |
| target_fields | List[str] | List of the target fields. It has to include at least "target", but can also include others like crowd target, segmentation target, ... | None |
| output_fields | List[str] | Fields that will be output by __getitem__. It has to include at least "image" and "target", but can include others. | None |
| verbose | bool | Whether to show additional information or not, such as loading progress. (doesn't include warnings) | True |
| show_all_warnings | bool | Whether to show all warnings or not. | False |
| cache | | Deprecated. This parameter is not used and setting it has no effect. It will be removed in 3.8 | None |
| cache_dir | | Deprecated. This parameter is not used and setting it has no effect. It will be removed in 3.8 | None |
Source code in src/super_gradients/training/datasets/detection_datasets/detection_dataset.py
__len__()
Get the length of the dataset. Note that this is the number of samples AFTER filtering (if relevant).
Source code in src/super_gradients/training/datasets/detection_datasets/detection_dataset.py
apply_transforms(sample)
Applies self.transforms sequentially to sample
If a transform has the attribute 'additional_samples_count', additional samples will be loaded and stored in sample["additional_samples"] prior to applying it. Combining it with the attribute "non_empty_annotations" will load only additional samples that contain objects.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
sample | Dict[str, Union[np.ndarray, Any]] | Sample to apply the transforms to (loaded with self.get_sample) | required |
Returns:
Type | Description |
---|---|
Dict[str, Union[np.ndarray, Any]] | Transformed sample |
Source code in src/super_gradients/training/datasets/detection_datasets/detection_dataset.py
get_dataset_preprocessing_params()
Return any hardcoded preprocessing + adaptation for PIL.Image image reading (RGB). image_processor is returned as a list of dicts to be resolved by the processing factory.
Source code in src/super_gradients/training/datasets/detection_datasets/detection_dataset.py
get_random_samples(count, ignore_empty_annotations=False)
Load random samples.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
count | int | The number of samples wanted | required |
ignore_empty_annotations | bool | If True, only return samples with at least 1 annotation | False |
Returns:
Type | Description |
---|---|
List[Dict[str, Union[np.ndarray, Any]]] | A list of samples satisfying the input params |
Source code in src/super_gradients/training/datasets/detection_datasets/detection_dataset.py
get_sample(index, ignore_empty_annotations=False)
Get the raw sample, before any transform (besides subclassing).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
index | int | Index refers to the index of the sample in the dataset, AFTER filtering (if relevant). 0<=index<=len(dataset)-1 | required |
ignore_empty_annotations | bool | If True, empty annotations will be ignored | False |
Returns:
Type | Description |
---|---|
Dict[str, Union[np.ndarray, Any]] | Sample, i.e. a dictionary including at least "image" and "target" |
Source code in src/super_gradients/training/datasets/detection_datasets/detection_dataset.py
plot(max_samples_per_plot=16, n_plots=1, plot_transformed_data=True, box_thickness=2)
Combine samples of images with bbox into plots and display the result.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
max_samples_per_plot | int | Maximum number of images to be displayed per plot | 16 |
n_plots | int | Number of plots to display (each plot being a combination of img with bbox) | 1 |
plot_transformed_data | bool | If True, the plot will be over samples after applying transforms (i.e. on __getitem__). If False, the plot will be over the raw samples (i.e. on get_sample) | True |
box_thickness | int | Thickness of the bounding-box lines drawn on the images | 2 |
Source code in src/super_gradients/training/datasets/detection_datasets/detection_dataset.py
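Example (a usage sketch; `dataset` stands for any instantiated DetectionDataset subclass):
dataset.plot(max_samples_per_plot=9, n_plots=2, plot_transformed_data=True, box_thickness=2)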
PascalVOCDetectionDataset
Bases: PascalVOCFormatDetectionDataset
Dataset for Pascal VOC object detection
Parameters:
data_dir (str): Base directory where the dataset is stored.
images_dir (str, optional): Directory containing all the images, relative to `data_dir`. Defaults to None.
labels_dir (str, optional): Directory containing all the labels, relative to `data_dir`. Defaults to None.
images_sub_directory (str, optional): Deprecated. Subdirectory within data_dir that includes images. Defaults to None.
download (bool, optional): If True, download the dataset to `data_dir`. Defaults to False.
Dataset structure:
./data/pascal_voc
├─images
│ ├─ train2012
│ ├─ val2012
│ ├─ VOCdevkit
│ │ ├─ VOC2007
│ │ │ ├──JPEGImages
│ │ │ ├──SegmentationClass
│ │ │ ├──ImageSets
│ │ │ ├──ImageSets/Segmentation
│ │ │ ├──ImageSets/Main
│ │ │ ├──ImageSets/Layout
│ │ │ ├──Annotations
│ │ │ └──SegmentationObject
│ │ └──VOC2012
│ │ ├──JPEGImages
│ │ ├──SegmentationClass
│ │ ├──ImageSets
│ │ ├──ImageSets/Segmentation
│ │ ├──ImageSets/Main
│ │ ├──ImageSets/Action
│ │ ├──ImageSets/Layout
│ │ ├──Annotations
│ │ └──SegmentationObject
│ ├─train2007
│ ├─test2007
│ └─val2007
└─labels
├─train2012
├─val2012
├─train2007
├─test2007
└─val2007
Note: If both 'images_sub_directory' and ('images_dir', 'labels_dir') are provided, a warning will be raised.
Usage: voc_2012_train = PascalVOCDetectionDataset(data_dir="./data/pascal_voc", images_dir="images/train2012/JPEGImages", labels_dir="labels/train2012/Annotations", download=True)
Source code in src/super_gradients/training/datasets/detection_datasets/pascal_voc_detection.py
__init__(data_dir, images_sub_directory=None, images_dir=None, labels_dir=None, download=False, max_num_samples=None, cache_annotations=True, input_dim=None, transforms=[], class_inclusion_list=None, ignore_empty_annotations=True, verbose=True, show_all_warnings=False, cache=None, cache_dir=None)
Initialize the Pascal VOC Detection Dataset.
Source code in src/super_gradients/training/datasets/detection_datasets/pascal_voc_detection.py
download(data_dir)
staticmethod
Download Pascal dataset in XYXY_LABEL format.
Data extracted from http://host.robots.ox.ac.uk/pascal/VOC/
Source code in src/super_gradients/training/datasets/detection_datasets/pascal_voc_detection.py
PascalVOCUnifiedDetectionTrainDataset
Bases: ConcatDataset
Unified dataset class for training on Pascal VOC object detection datasets. This class combines datasets from multiple years (e.g., 2007, 2012) into a single dataset for training purposes.
Parameters:
data_dir (str): Base directory where the dataset is stored.
input_dim (tuple): Input dimension that the images should be resized to.
cache (optional): Cache configuration.
cache_dir (optional): Directory for cache.
transforms (List[AbstractDetectionTransform], optional): List of transforms to apply.
class_inclusion_list (Optional[List[str]], optional): List of classes to include.
max_num_samples (int, optional): Maximum number of samples to include from each dataset part.
download (bool, optional): If True, downloads the dataset parts to `data_dir`. Defaults to False.
images_dir (Optional[str], optional): Directory containing all the images, relative to `data_dir`. Should only be used without 'images_sub_directory'.
labels_dir (Optional[str], optional): Directory containing all the labels, relative to `data_dir`. Should only be used without 'images_sub_directory'.
images_sub_directory (Optional[str], optional): Deprecated. Use 'images_dir' and 'labels_dir' instead for future compatibility.
Example Dataset structure:
./data/pascal_voc/
├─images
│ ├─ train2012
│ ├─ val2012
│ ├─ VOCdevkit
│ │ ├─ VOC2007
│ │ │ ├──JPEGImages
│ │ │ ├──SegmentationClass
│ │ │ ├──ImageSets
│ │ │ ├──ImageSets/Segmentation
│ │ │ ├──ImageSets/Main
│ │ │ ├──ImageSets/Layout
│ │ │ ├──Annotations
│ │ │ └──SegmentationObject
│ │ └──VOC2012
│ │ ├──JPEGImages
│ │ ├──SegmentationClass
│ │ ├──ImageSets
│ │ ├──ImageSets/Segmentation
│ │ ├──ImageSets/Main
│ │ ├──ImageSets/Action
│ │ ├──ImageSets/Layout
│ │ ├──Annotations
│ │ └──SegmentationObject
│ ├─train2007
│ ├─test2007
│ └─val2007
└─labels
├─train2012
├─val2012
├─train2007
├─test2007
└─val2007
Usage:
unified_dataset = PascalVOCUnifiedDetectionTrainDataset(data_dir="./data/pascal_voc",
input_dim=(512, 512),
download=True,
images_dir="images",
labels_dir="labels")
Source code in src/super_gradients/training/datasets/detection_datasets/pascal_voc_detection.py
PascalVOCFormatDetectionDataset
Bases: DetectionDataset
Dataset for Pascal VOC object detection
Parameters:
data_dir (str): Base directory where the dataset is stored.
images_dir (Optional[str]): Directory containing all the images, relative to `data_dir`. Defaults to None.
labels_dir (Optional[str]): Directory containing all the labels, relative to `data_dir`. Defaults to None.
max_num_samples (Optional[int]): If not None, sets the maximum size of the dataset by only indexing the first
n annotations/images. Defaults to None.
cache_annotations (bool): Whether to cache annotations. Reduces training time by pre-loading all annotations
but requires more RAM. Defaults to True.
input_dim (Optional[Union[int, Tuple[int, int]]]): Image size when loaded, before transforms. Can be None, a scalar,
or a tuple (height, width). Defaults to None.
transforms (List[AbstractDetectionTransform]): List of transforms to apply sequentially on each sample.
Defaults to an empty list.
all_classes_list (Optional[List[str]]): All class names in the dataset. Defaults to an empty list.
class_inclusion_list (Optional[List[str]]): Subset of classes to include. Classes not in this list will be excluded.
Adjust the number of model classes accordingly. Defaults to None.
ignore_empty_annotations (bool): If True and class_inclusion_list is not None, images without any target will be
ignored. Defaults to True.
verbose (bool): If True, displays additional information (does not include warnings). Defaults to True.
show_all_warnings (bool): If True, displays all warnings. Defaults to False.
cache (Optional): Deprecated. This parameter is not used and setting it has no effect. Will be removed in a
future version.
cache_dir (Optional): Deprecated. This parameter is not used and setting it has no effect. Will be removed in
a future version.
Dataset structure:
./data/pascal_voc
├─images
│ ├─ train2012
│ ├─ val2012
│ ├─ VOCdevkit
│ │ ├─ VOC2007
│ │ │ ├──JPEGImages
│ │ │ ├──SegmentationClass
│ │ │ ├──ImageSets
│ │ │ ├──ImageSets/Segmentation
│ │ │ ├──ImageSets/Main
│ │ │ ├──ImageSets/Layout
│ │ │ ├──Annotations
│ │ │ └──SegmentationObject
│ │ └──VOC2012
│ │ ├──JPEGImages
│ │ ├──SegmentationClass
│ │ ├──ImageSets
│ │ ├──ImageSets/Segmentation
│ │ ├──ImageSets/Main
│ │ ├──ImageSets/Action
│ │ ├──ImageSets/Layout
│ │ ├──Annotations
│ │ └──SegmentationObject
│ ├─train2007
│ ├─test2007
│ └─val2007
└─labels
├─train2012
├─val2012
├─train2007
├─test2007
└─val2007
Note: If both 'images_sub_directory' and ('images_dir', 'labels_dir') are provided, a warning will be raised.
Usage: voc_2012_train = PascalVOCDetectionDataset(data_dir="./data/pascal_voc", images_dir="images/train2012/JPEGImages", labels_dir="labels/train2012/Annotations", download=True)
Source code in src/super_gradients/training/datasets/detection_datasets/pascal_voc_format_detection.py
__init__(data_dir, images_dir, labels_dir, max_num_samples=None, cache_annotations=True, input_dim=None, transforms=[], all_classes_list=[], class_inclusion_list=None, ignore_empty_annotations=True, verbose=True, show_all_warnings=False, cache=None, cache_dir=None)
Initialize the Pascal VOC Detection Dataset.
Source code in src/super_gradients/training/datasets/detection_datasets/pascal_voc_format_detection.py
RoboflowDetectionDataset
Bases: COCOFormatDetectionDataset
Dataset that can be used with ANY of the Roboflow100 benchmark datasets for object detection. Check out the datasets at https://universe.roboflow.com/roboflow-100?ref=blog.roboflow.com
To use this Dataset you need to:
- Follow the official instructions to download Roboflow100: https://github.com/roboflow/roboflow-100-benchmark?ref=roboflow-blog
/!\ To use this dataset, you have to download the "coco" format, NOT the yolov5 format.
- Your dataset should look like this:
rf100
├── 4-fold-defect
│ ├─ train
│ │ ├─ 000000000001.jpg
│ │ ├─ ...
│ │ └─ _annotations.coco.json
│ ├─ valid
│ │ └─ ...
│ └─ test
│ └─ ...
├── abdomen-mri
│ └─ ...
└── ...
- Install the COCO API: https://github.com/pdollar/coco/tree/master/PythonAPI
- Instantiate the dataset (in this case we load the dataset called "digits-t2eg6"):
>> train_set = RoboflowDetectionDataset(data_dir='<path-to>/rf100', dataset_name="digits-t2eg6", split="train")
>> valid_set = RoboflowDetectionDataset(data_dir='<path-to>/rf100', dataset_name="digits-t2eg6", split="valid")
Note: `dataset_name` refers to the official name of the dataset. You can run RoboflowDetectionDataset.list_datasets() to see all available datasets, or you can find it in the URL of the dataset: https://universe.roboflow.com/roboflow-100/digits-t2eg6 -> digits-t2eg6
Source code in src/super_gradients/training/datasets/detection_datasets/roboflow/roboflow100.py
metadata: Optional[Dict[str, Union[str, int]]]
property
Category of the dataset. Note that each dataset has one and only one category.
__init__(data_dir, dataset_name, split, *args, **kwargs)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
data_dir | str | Where the data is stored. | required |
dataset_name | str | One of the 100 dataset names. (You can run RoboflowDetectionDataset.list_datasets() to see all available datasets) | required |
split | str | train, valid or test. | required |
Source code in src/super_gradients/training/datasets/detection_datasets/roboflow/roboflow100.py
list_datasets(categories=None)
staticmethod
List all available datasets of specified categories. By default, list all the datasets.
Source code in src/super_gradients/training/datasets/detection_datasets/roboflow/roboflow100.py
get_dataset_metadata(dataset_name)
Get the metadata of a specific roboflow dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
dataset_name | str | Name of the dataset, as listed in the official repo - https://github.com/roboflow/roboflow-100-benchmark/blob/main/metadata/datasets_stats.csv | required |
Returns:
Type | Description |
---|---|
Optional[Dict[str, Union[str, int]]] | Metadata of the dataset |
Source code in src/super_gradients/training/datasets/detection_datasets/roboflow/utils.py
get_dataset_num_classes(dataset_name)
Get the number of classes of a specific roboflow dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
dataset_name | str | Name of the dataset, as listed in the official repo - https://github.com/roboflow/roboflow-100-benchmark/blob/main/metadata/datasets_stats.csv | required |
Returns:
Type | Description |
---|---|
int | Number of classes of the dataset. Note that the number of classes in the official documentation differs from the actual one. |
Source code in src/super_gradients/training/datasets/detection_datasets/roboflow/utils.py
list_datasets(categories=None)
List all available datasets of specified categories. By default, list all the datasets.
Source code in src/super_gradients/training/datasets/detection_datasets/roboflow/utils.py
YoloDarknetFormatDetectionDataset
Bases: DetectionDataset
Base dataset to load ANY dataset that has a structure similar to the Yolo/Darknet dataset.
Note: For compatibility reasons, the dataset returns labels in Coco format (XYXY_LABEL) and NOT in Yolo format (LABEL_CXCYWH).
The dataset can have any structure, as long as `images_dir` and `labels_dir` are inside `data_dir`.
Each image is expected to have a label file with the same name.
Example1:
data_dir
├── images
│ ├─ 0001.jpg
│ ├─ 0002.jpg
│ └─ ...
└── labels
├─ 0001.txt
├─ 0002.txt
└─ ...
>> data_set = YoloDarknetFormatDetectionDataset(data_dir='<path-to>/data_dir', images_dir="images", labels_dir="labels", classes=[<to-fill>])
Example2:
data_dir
├── train
│   ├── images
│   │   ├─ 0001.jpg
│   │   ├─ 0002.jpg
│   │   └─ ...
│   └── labels
│       ├─ 0001.txt
│       ├─ 0002.txt
│       └─ ...
└── val
    ├── images
    │   ├─ 434343.jpg
    │   ├─ 434344.jpg
    │   └─ ...
    └── labels
        ├─ 434343.txt
        ├─ 434344.txt
        └─ ...
>> train_set = YoloDarknetFormatDetectionDataset(
data_dir='<path-to>/data_dir', images_dir="train/images", labels_dir="train/labels", classes=[<to-fill>]
)
>> val_set = YoloDarknetFormatDetectionDataset(
data_dir='<path-to>/data_dir', images_dir="val/images", labels_dir="val/labels", classes=[<to-fill>]
)
Example3:
data_dir
├── train
│   ├─ 0001.jpg
│   ├─ 0001.txt
│   ├─ 0002.jpg
│   ├─ 0002.txt
│   └─ ...
└── val
    ├─ 4343.jpg
    ├─ 4343.txt
    ├─ 4344.jpg
    ├─ 4344.txt
    └─ ...
>> train_set = YoloDarknetFormatDetectionDataset(data_dir='<path-to>/data_dir', images_dir="train", labels_dir="train", classes=[<to-fill>])
>> val_set = YoloDarknetFormatDetectionDataset(data_dir='<path-to>/data_dir', images_dir="val", labels_dir="val", classes=[<to-fill>])
Each label file is in LABEL_NORMALIZED_CXCYWH format:
0 0.33 0.33 0.50 0.44
1 0.21 0.54 0.30 0.60
...
Output format: XYXY_LABEL (x, y, x, y, class_id)
Source code in src/super_gradients/training/datasets/detection_datasets/yolo_format_detection.py
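To make the conversion concrete, here is a small standalone sketch (not the library's actual parser; the helper name is hypothetical) of how one LABEL_NORMALIZED_CXCYWH label file maps to the XYXY_LABEL output format:

import numpy as np

def yolo_txt_to_xyxy_label(label_path, img_w, img_h):
    # Hypothetical helper: parse one Darknet label file (class_id cx cy w h,
    # normalized to [0, 1]) and convert it to XYXY_LABEL pixel coordinates.
    rows = np.loadtxt(label_path, ndmin=2)  # [N, 5]
    class_id, cx, cy, w, h = rows[:, 0], rows[:, 1], rows[:, 2], rows[:, 3], rows[:, 4]
    x1, y1 = (cx - w / 2) * img_w, (cy - h / 2) * img_h
    x2, y2 = (cx + w / 2) * img_w, (cy + h / 2) * img_h
    return np.stack([x1, y1, x2, y2, class_id], axis=1)  # [N, 5] in XYXY_LABEL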
__init__(data_dir, images_dir, labels_dir, classes, class_ids_to_ignore=None, ignore_invalid_labels=True, show_all_warnings=False, *args, **kwargs)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
data_dir | str | Where the data is stored. | required |
images_dir | str | Local path to the directory that includes all the images. Path relative to data_dir. | required |
labels_dir | str | Local path to the directory that includes all the labels. Path relative to data_dir. | required |
classes | List[str] | List of class names. | required |
class_ids_to_ignore | Optional[List[int]] | List of class ids to ignore in the dataset. By default, no class is ignored. | None |
ignore_invalid_labels | bool | Whether to ignore labels that fail to be parsed. If True, ignores them and logs a warning; otherwise raises an error. | True |
show_all_warnings | bool | Whether to show every yolo format parser warning. | False |
Source code in src/super_gradients/training/datasets/detection_datasets/yolo_format_detection.py
Mixup and Cutmix
Papers: mixup: Beyond Empirical Risk Minimization (https://arxiv.org/abs/1710.09412)
CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features (https://arxiv.org/abs/1905.04899)
Code References:
CutMix: https://github.com/clovaai/CutMix-PyTorch
CutMix by timm: https://github.com/rwightman/pytorch-image-models/timm
CollateMixup
Collate with Mixup/Cutmix that applies different params to each element or to the whole batch. A Mixup impl that's performed while collating the batches.
Source code in src/super_gradients/training/datasets/mixup.py
__init__(mixup_alpha=1.0, cutmix_alpha=0.0, cutmix_minmax=None, prob=1.0, switch_prob=0.5, mode='batch', correct_lam=True, label_smoothing=0.1, num_classes=1000)
Mixup/Cutmix that applies different params to each element or whole batch
Parameters:
Name | Type | Description | Default |
---|---|---|---|
mixup_alpha | float | mixup alpha value, mixup is active if > 0. | 1.0 |
cutmix_alpha | float | cutmix alpha value, cutmix is active if > 0. | 0.0 |
cutmix_minmax | List[float] | cutmix min/max image ratio, cutmix is active and uses this vs alpha if not None. | None |
prob | float | probability of applying mixup or cutmix per batch or element | 1.0 |
switch_prob | float | probability of switching to cutmix instead of mixup when both are active | 0.5 |
mode | str | how to apply mixup/cutmix params: per 'batch', 'pair' (pair of elements), or 'elem' (element) | 'batch' |
correct_lam | bool | apply lambda correction when the cutmix bbox is clipped by image borders | True |
label_smoothing | float | apply label smoothing to the mixed target tensor | 0.1 |
num_classes | int | number of classes for the target | 1000 |
Source code in src/super_gradients/training/datasets/mixup.py
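Usage sketch (the import path follows the source location above; `train_set` is assumed to be any classification dataset yielding (image, integer_label) pairs, and the alpha/prob values are illustrative):

from torch.utils.data import DataLoader
from super_gradients.training.datasets.mixup import CollateMixup

collate_fn = CollateMixup(mixup_alpha=0.2, cutmix_alpha=1.0, prob=0.5, switch_prob=0.5, num_classes=10)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, collate_fn=collate_fn)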
cutmix_bbox_and_lam(img_shape, lam, ratio_minmax=None, correct_lam=True, count=None)
Generate bbox and apply lambda correction.
Source code in src/super_gradients/training/datasets/mixup.py
mixup_target(target, num_classes, lam=1.0, smoothing=0.0, device='cuda')
generate a smooth target (label) two-hot tensor to support the mixed images with different labels
Parameters:
Name | Type | Description | Default |
---|---|---|---|
target | torch.Tensor | the targets tensor | required |
num_classes | int | number of classes (to set the final tensor size) | required |
lam | float | percentage of label a, in range [0, 1], in the mixing | 1.0 |
smoothing | float | the smoothing multiplier | 0.0 |
device | str | usable device ['cuda', 'cpu'] | 'cuda' |
Source code in src/super_gradients/training/datasets/mixup.py
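For intuition, a minimal sketch of the two-hot construction (modelled on the timm implementation this module is adapted from; the flip(0) pairing of mixed samples is an assumption of that scheme, not confirmed by the docstring above):

import torch

def mixup_target_sketch(target, num_classes, lam=1.0, smoothing=0.0):
    # Smoothed one-hot values: the true class gets on_value, all others get off_value.
    off_value = smoothing / num_classes
    on_value = 1.0 - smoothing + off_value

    def one_hot(t):
        out = torch.full((t.size(0), num_classes), off_value)
        return out.scatter_(1, t.long().view(-1, 1), on_value)

    # Each sample is assumed mixed with the sample at the mirrored batch position
    # (flip), so its target is a convex combination of two smoothed one-hot vectors.
    return one_hot(target) * lam + one_hot(target.flip(0)) * (1.0 - lam)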
rand_bbox(img_shape, lam, margin=0.0, count=None)
Standard CutMix bounding-box. Generates a random square bbox based on the lambda value. This impl includes support for enforcing a border margin as a percent of bbox dimensions.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
img_shape | tuple | Image shape as tuple | required |
lam | float | Cutmix lambda value | required |
margin | float | Percentage of bbox dimension to enforce as margin (reduce amount of box outside image) | 0.0 |
count | int | Number of bboxes to generate | None |
Source code in src/super_gradients/training/datasets/mixup.py
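The key relationship is that the cut patch should cover a (1 - lam) fraction of the image area, so the side ratio is sqrt(1 - lam). A minimal sketch of this scheme (illustrative, mirroring timm-style implementations; not this module's exact code):

import numpy as np

def rand_bbox_sketch(img_shape, lam, margin=0.0):
    img_h, img_w = img_shape[-2:]
    ratio = np.sqrt(1.0 - lam)  # side ratio so that cut area ~= (1 - lam) * image area
    cut_h, cut_w = int(img_h * ratio), int(img_w * ratio)
    # Optionally keep the box center away from the borders by a margin of the cut size.
    margin_y, margin_x = int(margin * cut_h), int(margin * cut_w)
    cy = np.random.randint(margin_y, img_h - margin_y)
    cx = np.random.randint(margin_x, img_w - margin_x)
    y1, y2 = np.clip(cy - cut_h // 2, 0, img_h), np.clip(cy + cut_h // 2, 0, img_h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, img_w), np.clip(cx + cut_w // 2, 0, img_w)
    return y1, y2, x1, x2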
rand_bbox_minmax(img_shape, minmax, count=None)
Min-Max CutMix bounding-box. Inspired by the Darknet cutmix impl, generates a random rectangular bbox based on min/max percent values applied to each dimension of the input image.
Typical values for minmax are in the 0.2-0.3 range for min and the 0.8-0.9 range for max.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
img_shape | tuple | Image shape as tuple | required |
minmax | Union[tuple, list] | Min and max bbox ratios (as percent of image size) | required |
count | int | Number of bboxes to generate | None |
Source code in src/super_gradients/training/datasets/mixup.py
AbstractPoseEstimationDataset
Bases: Dataset
, HasPreprocessingParams
Abstract class for strongly typed dataset classes for the pose estimation task. This new concept was introduced in SG 3.3 and will be used in the future to replace the old BaseKeypointsDataset. The reasoning behind strongly typed datasets includes:
1. Introduction of a new concept of "data sample" that has a clear definition (via @dataclass), thus reducing the chance of bugs/confusion.
2. The data sample becomes a central concept in data augmentation transforms and metrics.
3. The dataset implementation is decoupled from the model & loss - the dataset returns data sample objects, and model/loss specific conversion happens only in the collate function.
Descendants should implement the load_sample method to read a sample from the disk and return a PoseEstimationSample object.
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/abstract_pose_estimation_dataset.py
__init__(transforms, num_joints, edge_links, edge_colors, keypoint_colors)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
transforms | List[AbstractKeypointTransform] | Transforms to be applied to the image & keypoints | required |
num_joints | int | Number of joints to be predicted | required |
edge_links | Union[ListConfig, List[Tuple[int, int]], np.ndarray] | Edge links between joints | required |
edge_colors | Union[ListConfig, List[Tuple[int, int, int]], np.ndarray, None] | Color of the edge links. If None, the color will be generated randomly. | required |
keypoint_colors | Union[ListConfig, List[Tuple[int, int, int]], np.ndarray, None] | Color of the keypoints. If None, the color will be generated randomly. | required |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/abstract_pose_estimation_dataset.py
get_dataset_preprocessing_params()
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/abstract_pose_estimation_dataset.py
load_random_sample()
Return a random sample from the dataset
Returns:
Type | Description |
---|---|
PoseEstimationSample | Instance of PoseEstimationSample |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/abstract_pose_estimation_dataset.py
load_sample(index)
abstractmethod
Read a sample from the disk and return a PoseEstimationSample
Parameters:
Name | Type | Description | Default |
---|---|---|---|
index | int | Sample index | required |
Returns:
Type | Description |
---|---|
PoseEstimationSample | Returns an instance of PoseEstimationSample that holds the complete sample (image and annotations) |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/abstract_pose_estimation_dataset.py
BaseKeypointsDataset
Bases: Dataset
, HasPreprocessingParams
Base class for pose estimation datasets. Descendants should implement the load_sample method to read a sample from the disk and return an (image, mask, joints, extras) tuple.
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/base_keypoints.py
__init__(target_generator, transforms, min_instance_area, num_joints, edge_links, edge_colors, keypoint_colors)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
target_generator | KeypointsTargetsGenerator | Target generator that will be used to generate the targets for the model. See DEKRTargetsGenerator for an example. | required |
transforms | List[KeypointTransform] | Transforms to be applied to the image & keypoints | required |
min_instance_area | float | Minimum area of an instance to be included in the dataset | required |
num_joints | int | Number of joints to be predicted | required |
edge_links | Union[ListConfig, List[Tuple[int, int]], np.ndarray] | Edge links between joints | required |
edge_colors | Union[ListConfig, List[Tuple[int, int, int]], np.ndarray, None] | Color of the edge links. If None, the color will be generated randomly. | required |
keypoint_colors | Union[ListConfig, List[Tuple[int, int, int]], np.ndarray, None] | Color of the keypoints. If None, the color will be generated randomly. | required |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/base_keypoints.py
compute_area(joints)
Compute area of a bounding box for each instance.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
joints | np.ndarray | [Num Instances, Num Joints, 3] | required |
Returns:
Type | Description |
---|---|
np.ndarray | [Num Instances] |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/base_keypoints.py
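The area here is simply that of the tight axis-aligned box around each instance's joints; a minimal sketch (the library code may additionally account for joint visibility, which this sketch ignores):

import numpy as np

def compute_area_sketch(joints):
    # joints: [Num Instances, Num Joints, 3], last axis is (x, y, visibility).
    w = joints[:, :, 0].max(axis=-1) - joints[:, :, 0].min(axis=-1)
    h = joints[:, :, 1].max(axis=-1) - joints[:, :, 1].min(axis=-1)
    return w * h  # [Num Instances]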
filter_joints(joints, image)
Filter instances that are either too small or do not have visible keypoints
Parameters:
Name | Type | Description | Default |
---|---|---|---|
joints | np.ndarray | Array of shape [Num Instances, Num Joints, 3] | required |
image | np.ndarray | | required |
Returns:
Type | Description |
---|---|
np.ndarray | [New Num Instances, Num Joints, 3], New Num Instances <= Num Instances |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/base_keypoints.py
get_dataset_preprocessing_params()
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/base_keypoints.py
load_sample(index)
abstractmethod
Read a sample from the disk and return (image, mask, joints, extras) tuple
Parameters:
Name | Type | Description | Default |
---|---|---|---|
index | | Sample index | required |
Returns:
Type | Description |
---|---|
Tuple[np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]] | Tuple of (image, mask, joints, extras): image - numpy array of [H,W,3] shape, which represents the input RGB image; mask - numpy array of [H,W] shape, which represents a binary mask with zero values corresponding to an ignored region which should not be used for training (contribute to loss); joints - numpy array of [Num Instances, Num Joints, 3] shape, which represents the skeletons of the instances; extras - dictionary of extra information about the sample |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/base_keypoints.py
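A minimal toy descendant, assuming only what the docstring above specifies about the returned tuple (constructor arguments come from the base class; the dataset content is random and for illustration only):

import numpy as np
from super_gradients.training.datasets.pose_estimation_datasets.base_keypoints import BaseKeypointsDataset

class RandomKeypointsDataset(BaseKeypointsDataset):
    # Toy descendant: 8 random 256x256 images, one 17-joint instance each.

    def __len__(self):
        return 8

    def load_sample(self, index):
        image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # [H,W,3] RGB
        mask = np.ones((256, 256), dtype=np.float32)  # 1 = pixel contributes to the loss
        joints = np.random.rand(1, 17, 3).astype(np.float32)  # [Instances, Joints, (x, y, visibility)]
        joints[..., :2] *= 256  # scale x, y into pixel coordinates
        joints[..., 2] = 2  # mark all joints as visible (COCO convention)
        return image, mask, joints, {}  # extras dict is empty here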
KeypointsCollate
Collate image & targets, return extras as is.
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/base_keypoints.py
COCOKeypointsDataset
Bases: BaseKeypointsDataset
Dataset class for training pose estimation models on the COCO Keypoints dataset. Users should pass a target generator class that is model-specific and generates the targets for the model.
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/coco_keypoints.py
__init__(data_dir, images_dir, json_file, include_empty_samples, target_generator, transforms, min_instance_area, edge_links, edge_colors, keypoint_colors)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
data_dir | str | Root directory of the COCO dataset | required |
images_dir | str | Path suffix to the images directory inside the dataset_root | required |
json_file | str | Path suffix to the json file inside the dataset_root | required |
include_empty_samples | bool | If True, images without any annotations will be included in the dataset. Otherwise, they will be filtered out. | required |
target_generator | | Target generator that will be used to generate the targets for the model. See DEKRTargetsGenerator for an example. | required |
transforms | List[KeypointTransform] | Transforms to be applied to the image & keypoints | required |
min_instance_area | float | Minimum area of an instance to be included in the dataset | required |
edge_links | Union[List[Tuple[int, int]], np.ndarray] | Edge links between joints | required |
edge_colors | Union[List[Tuple[int, int, int]], np.ndarray, None] | Color of the edge links. If None, the color will be generated randomly. | required |
keypoint_colors | Union[List[Tuple[int, int, int]], np.ndarray, None] | Color of the keypoints. If None, the color will be generated randomly. | required |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/coco_keypoints.py
filter_joints(image_shape, joints, areas, bboxes, is_crowd)
Filter instances that are either too small or do not have visible keypoints.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image | | Image of [H,W,C] shape. Used to infer image boundaries | required |
joints | np.ndarray | Array of shape [Num Instances, Num Joints, 3] | required |
areas | np.ndarray | Array of shape [Num Instances] with the area of each instance. Instance areas come from the segmentation mask in the COCO annotation file. | required |
bboxes | np.ndarray | Array of shape [Num Instances, 4] for bounding boxes in XYWH format. Bounding boxes come from the segmentation mask in the COCO annotation file. | required |
Returns:
Type | Description |
---|---|
| [New Num Instances, Num Joints, 3], New Num Instances <= Num Instances |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/coco_keypoints.py
get_dataset_preprocessing_params()
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/coco_keypoints.py
COCOPoseEstimationDataset
Bases: AbstractPoseEstimationDataset
Dataset class for training pose estimation models using COCO format dataset. Please note that COCO annotations must have exactly one category (e.g. "person") and keypoints must be defined for this category.
Compatible datasets are:
- COCO2017 dataset
- CrowdPose dataset
- Any other dataset that is compatible with COCO format
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/coco_pose_estimation_dataset.py
__init__(data_dir, images_dir, json_file, include_empty_samples, transforms, edge_links, edge_colors, keypoint_colors, remove_duplicate_annotations=False, crowd_annotations_action=CrowdAnnotationActionEnum.NO_ACTION)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
data_dir | str | Root directory of the COCO dataset | required |
images_dir | str | Path suffix to the images directory inside the dataset_root | required |
json_file | str | Path suffix to the json file inside the dataset_root | required |
include_empty_samples | bool | If True, images without any annotations will be included in the dataset. Otherwise, they will be filtered out. | required |
transforms | List[AbstractKeypointTransform] | Transforms to be applied to the image & keypoints | required |
edge_links | Union[List[Tuple[int, int]], np.ndarray] | Edge links between joints | required |
edge_colors | Union[List[Tuple[int, int, int]], np.ndarray, None] | Color of the edge links. If None, the color will be generated randomly. | required |
keypoint_colors | Union[List[Tuple[int, int, int]], np.ndarray, None] | Color of the keypoints. If None, the color will be generated randomly. | required |
remove_duplicate_annotations | bool | If True, duplicate instances will be removed from the dataset. It is a known issue of the COCO dataset that it contains duplicate annotations which greatly affect the AP metric on validation; this option allows removing them. It is disabled by default to preserve backward compatibility with COCO evaluation. When False, no action is taken and duplicate instances are left unchanged. | False |
crowd_annotations_action | CrowdAnnotationActionEnum | Action to take for annotations with iscrowd=1. Can be one of the following: "drop_sample" - samples with crowd annotations will be dropped from the dataset; "drop_annotation" - crowd annotations will be dropped from the dataset; "mask_as_normal" - these annotations will be treated as normal (non-crowd) annotations; "no_action" - no action will be taken for crowd annotations. | CrowdAnnotationActionEnum.NO_ACTION |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/coco_pose_estimation_dataset.py
get_dataset_preprocessing_params()
This method returns a dictionary of parameters describing preprocessing steps to be applied to the dataset.
Returns:
Type | Description |
---|---|
dict | |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/coco_pose_estimation_dataset.py
load_sample(index)
Read a sample from the disk and return a PoseEstimationSample
Parameters:
Name | Type | Description | Default |
---|---|---|---|
index | int | Sample index | required |
Returns:
Type | Description |
---|---|
PoseEstimationSample | Returns an instance of PoseEstimationSample that holds the complete sample (image and annotations) |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/coco_pose_estimation_dataset.py
CrowdAnnotationActionEnum
Bases: str
, Enum
Enum that contains possible actions to take for crowd annotations.
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/coco_utils.py
parse_coco_into_keypoints_annotations(ann, image_path_prefix=None, crowd_annotations_action=CrowdAnnotationActionEnum.NO_ACTION, remove_duplicate_annotations=False)
Load COCO keypoints dataset from annotation file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ann | str | A path to the JSON annotation file in COCO format. | required |
image_path_prefix | | A prefix to add to the image paths in the annotation file. | None |
Returns:
Type | Description |
---|---|
Tuple[str, Dict, List[KeypointsAnnotation]] | Tuple (class_names, annotations) where class_names is a list of class names (respecting include_classes/exclude_classes/class_ids_to_ignore) and annotations is a list of DetectionAnnotation objects. |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/coco_utils.py
rle2mask(rle, image_shape)
Convert RLE to binary mask
Parameters:
Name | Type | Description | Default |
---|---|---|---|
rle | np.ndarray | A string containing the RLE-encoded mask | required |
image_shape | Tuple[int, int] | Output image shape (rows, cols) | required |
Returns:
Type | Description |
---|---|
| A decoded binary mask |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/coco_utils.py
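For reference, a minimal sketch of decoding uncompressed COCO-style RLE (alternating background/foreground run lengths, counted down the columns first; this mirrors the common COCO convention rather than this module's exact code):

import numpy as np

def rle2mask_sketch(counts, image_shape):
    rows, cols = image_shape
    flat = np.zeros(rows * cols, dtype=np.uint8)
    pos, value = 0, 0
    for run in counts:  # runs alternate: background, foreground, background, ...
        flat[pos:pos + run] = value
        pos += run
        value = 1 - value
    return flat.reshape((rows, cols), order="F")  # column-major (Fortran) layout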
segmentation2mask(segmentation, image_shape)
Decode segmentation annotation into binary mask
Parameters:
Name | Type | Description | Default |
---|---|---|---|
segmentation | | Input segmentation annotation. Can come in many forms. | required |
image_shape | Tuple[int, int] | | required |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/coco_utils.py
TrainRescoringDataset
Bases: RescoringDataset
Implementation of the dataset for training the rescoring network. In this implementation, the dataset is a list of individual poses and DataLoader randomly samples them to form a batch during training.
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/rescoring_dataset.py
ValTrainRescoringDataset
Bases: RescoringDataset
Implementation of the dataset for validating the rescoring model. It differs from the training dataset implementation: each sample represents a single image with all the poses on it, which enables us to compute pose estimation metrics after rescoring.
This dataset is intended to be used with a DataLoader with batch_size=1. In this case we don't need to pad poses in collate_fn.
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/rescoring_dataset.py
DEKRTargetsGenerator
Bases: KeypointsTargetsGenerator
Target generator for pose estimation task tailored for the DEKR paper (https://arxiv.org/abs/2104.02300)
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/target_generators.py
__call__(image, joints, mask)
Encode the keypoints into dense targets that participate in loss computation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image | Tensor | Image tensor [3, H, W] | required |
joints | np.ndarray | [Instances, NumJoints, 3] | required |
mask | np.ndarray | [H,W] A mask that indicates which pixels should be included (1) or excluded (0) from loss computation. | required |
Returns:
Type | Description |
---|---|
Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray] | Tuple of (heatmap, mask, offset, offset_weight): heatmap - [NumJoints+1, H // Output Stride, W // Output Stride]; mask - [NumJoints+1, H // Output Stride, W // Output Stride]; offset - [NumJoints*2, H // Output Stride, W // Output Stride]; offset_weight - [NumJoints*2, H // Output Stride, W // Output Stride] |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/target_generators.py
__init__(output_stride, sigma, center_sigma, bg_weight, offset_radius)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
output_stride | int | Downsampling factor for target maps (w.r.t. the input image resolution) | required |
sigma | float | Sigma of the gaussian kernel used to generate the heatmap (effective radius of the heatmap would be 3*sigma) | required |
center_sigma | float | Sigma of the gaussian kernel used to generate the instance "center" heatmap (effective radius of the heatmap would be 3*sigma) | required |
bg_weight | float | Weight assigned to all background pixels (used to re-weight the heatmap loss) | required |
offset_radius | float | Radius for the offset encoding (in pixels) | required |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/target_generators.py
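An illustrative instantiation (the values below are typical DEKR-style choices, not prescribed defaults; the import path follows the source location above):

from super_gradients.training.datasets.pose_estimation_datasets.target_generators import DEKRTargetsGenerator

target_generator = DEKRTargetsGenerator(
    output_stride=4,  # target maps at 1/4 of the input resolution
    sigma=2.0,  # keypoint heatmap gaussian
    center_sigma=4.0,  # instance "center" heatmap gaussian
    bg_weight=0.1,  # down-weight background pixels in the heatmap loss
    offset_radius=4.0,  # pixels around each keypoint that carry offset targets
)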
augment_with_center_joint(joints)
Augment set of joints with additional center joint. Returns a new array with shape [Instances, Joints+1, 3] where the last joint is the center joint. Only instances with at least one visible joint are returned.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
joints | np.ndarray | [Num Instances, Num Joints, 3] Last channel represents (x, y, visibility) | required |
Returns:
Type | Description |
---|---|
np.ndarray | [Num Instances, Num Joints + 1, 3] |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/target_generators.py
compute_area(joints)
Compute area of a bounding box for each instance
Parameters:
Name | Type | Description | Default |
---|---|---|---|
joints | np.ndarray | [Num Instances, Num Joints, 3] | required |
Returns:
Type | Description |
---|---|
np.ndarray | [Num Instances] |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/target_generators.py
sort_joints_by_area(joints)
Rearrange joints in descending order of area of bounding box around them
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/target_generators.py
KeypointsTargetsGenerator
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/target_generators.py
__call__(image, joints, mask)
abstractmethod
Encode input joints into target tensors
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image | Tensor | [C,H,W] Input image tensor | required |
joints | np.ndarray | [Num Instances, Num Joints, 3] Last channel represents (x, y, visibility) | required |
mask | np.ndarray | [H,W] Mask representing valid image areas. For instance, in the COCO dataset crowd targets are not used during training and the corresponding instances will be zero-masked. Your implementation may use this mask when generating targets. | required |
Returns:
Type | Description |
---|---|
Union[Tensor, Tuple[Tensor, ...], Dict[str, Tensor]] | Encoded targets |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/target_generators.py
YoloNASPoseCollateFN
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/yolo_nas_pose_collate_fn.py
__call__(batch)
Collate samples into a batch. This collate function is compatible with the YoloNASPose model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
batch | List[PoseEstimationSample] | A list of samples from the dataset. Each sample is a tuple of (image, (boxes, joints), extras) | required |
Returns:
Type | Description |
---|---|
Tuple[Tensor, Tuple[Tensor, Tensor, Tensor], Dict] | Tuple of (images, (boxes, joints), extras): images - [Batch, 3, H, W]; boxes - [NumInstances, 5], last dimension represents (batch_index, x1, y1, x2, y2) of all boxes in the batch; joints - [NumInstances, NumJoints, 4] of all poses in the batch, last dimension represents (batch_index, x, y, visibility); extras - a dict of extra information per image needed for metric computation |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/yolo_nas_pose_collate_fn.py
__init__(set_image_to_none=True)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
set_image_to_none | bool | If True, the image and mask properties of each sample will be set to None after collation; if False, they will be converted to torch tensors and kept in the sample. After images are collated into a batch, the individual images are not needed anymore; keeping them in the sample slows down data transfer time and slows training 2X. | True |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/yolo_nas_pose_collate_fn.py
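Usage sketch (the import path follows the source location above; `train_set` is assumed to be any pose estimation dataset yielding PoseEstimationSample objects):

from torch.utils.data import DataLoader
from super_gradients.training.datasets.pose_estimation_datasets.yolo_nas_pose_collate_fn import YoloNASPoseCollateFN

train_loader = DataLoader(train_set, batch_size=16, shuffle=True, collate_fn=YoloNASPoseCollateFN())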
flat_collate_tensors_with_batch_index(labels_batch)
Concatenate tensors along the first dimension and add a sample index as the first element in the last dimension.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
labels_batch | List[Tensor] | A list of targets per image (each of arbitrary length: [N1, ..., C], [N2, ..., C], [N3, ..., C], ...) | required |
Returns:
Type | Description |
---|---|
Tensor | A single tensor of shape [N1+N2+N3+..., ..., C+1]. |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/yolo_nas_pose_collate_fn.py, lines 93-105
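An illustrative re-implementation of the described behavior (a sketch, not the library code): prepend each tensor's batch index along the last dimension, then concatenate along the first.

```python
from typing import List

import torch
from torch import Tensor

def flat_collate_sketch(labels_batch: List[Tensor]) -> Tensor:
    labeled = []
    for batch_index, labels in enumerate(labels_batch):
        # Index column with the same leading shape as `labels`, but a single trailing element.
        index_column = torch.full((*labels.shape[:-1], 1), float(batch_index), dtype=labels.dtype)
        labeled.append(torch.cat([index_column, labels], dim=-1))
    return torch.cat(labeled, dim=0)  # [N1 + N2 + ..., ..., C + 1]
```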
undo_flat_collate_tensors_with_batch_index(flat_tensor, batch_size)
Unrolls the flat tensor into a list of tensors, one per batch item. As the name suggests, it undoes what flat_collate_tensors_with_batch_index does.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
flat_tensor | Tensor | Tensor of shape [N1+N2+N3+..., ..., C+1]. | required |
batch_size | int | The batch size (number of items in the batch) | required |

Returns:

Type | Description |
---|---|
List[Tensor] | List of tensors [N1, ..., C], [N2, ..., C], [N3, ..., C], ... |
Source code in src/super_gradients/training/datasets/pose_estimation_datasets/yolo_nas_pose_collate_fn.py, lines 108-123
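Continuing the sketch above, the inverse filters rows by their batch index and drops the index column; a quick round-trip check for the 2-D case:

```python
def undo_flat_collate_sketch(flat_tensor: Tensor, batch_size: int) -> List[Tensor]:
    # 2-D case: each row is [batch_index, ...]; select the rows per image, then drop the index.
    return [flat_tensor[flat_tensor[:, 0] == i][:, 1:] for i in range(batch_size)]

boxes_per_image = [torch.rand(3, 4), torch.rand(0, 4), torch.rand(5, 4)]
flat = flat_collate_sketch(boxes_per_image)               # shape [8, 5]
restored = undo_flat_collate_sketch(flat, batch_size=3)
assert all(torch.equal(a, b) for a, b in zip(boxes_per_image, restored))
```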
ClassBalancedSampler
Bases: WeightedRandomSampler
Source code in src/super_gradients/training/datasets/samplers/class_balanced_sampler.py, lines 120-152
__init__(dataset=None, precomputed_factors_file=None, oversample_threshold=None, oversample_aggressiveness=0.5, num_samples=None, generator=None)
Wraps WeightedRandomSampler with weights computed from the class frequencies of the dataset.
Source code in src/super_gradients/training/datasets/samplers/class_balanced_sampler.py, lines 122-152
ClassBalancer
Source code in src/super_gradients/training/datasets/samplers/class_balanced_sampler.py, lines 40-117
from_precomputed_sample_repeat_factors(precomputed_path)
staticmethod
Loads the repeat factors from a precomputed file.
Source code in src/super_gradients/training/datasets/samplers/class_balanced_sampler.py, lines 106-117
get_sample_repeat_factors(class_information_provider, oversample_threshold=None, oversample_aggressiveness=0.5)
staticmethod
Oversamples scarce classes of a detection dataset, following the sampling strategy described in https://arxiv.org/pdf/1908.03195.pdf.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
class_information_provider | HasClassesInformation | An object (typically a dataset) that provides the class information. | required |
oversample_threshold | Optional[float] | A frequency threshold (fraction, 0-1). Classes less frequent than this threshold are oversampled. Defaults to None; if None, the median of the class frequencies is used. | None |
oversample_aggressiveness | float | How aggressive the oversampling is; the higher the value, the more aggressive. The default of 0.5 corresponds to the implementation in the paper, and 0.0 corresponds to no oversampling. The repeat factor is computed as follows: (1) for each class c, compute the fraction of images that contain it (its frequency) f(c); (2) compute the class-level repeat factor r(c) = max(1, (t / f(c))^a), where t is the oversample threshold and a the aggressiveness (a = 0.5 reproduces the paper's square root); (3) each image is repeated according to the maximum r(c) over the classes it contains. | 0.5 |
Source code in src/super_gradients/training/datasets/samplers/class_balanced_sampler.py, lines 41-88
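A compact sketch of the class-level repeat factor described above (illustrative; it assumes class frequencies are already computed as fractions of images):

```python
import numpy as np

def class_repeat_factors_sketch(class_freqs: np.ndarray, threshold: float, aggressiveness: float = 0.5) -> np.ndarray:
    # r(c) = max(1, (t / f(c)) ** a); a = 0.5 reproduces the paper's square root.
    return np.maximum(1.0, (threshold / class_freqs) ** aggressiveness)

# Example: a class present in 1% of images with a 4% threshold gives
# r = max(1, (0.04 / 0.01) ** 0.5) = 2.0, so images containing it are sampled
# twice as often; an image's factor is the max over the classes it contains.
```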
DatasetFromSampler
Bases: Dataset
Source code in src/super_gradients/training/datasets/samplers/distributed_sampler_wrapper.py, lines 7-22
__len__()
Returns: int: length of the dataset
Source code in src/super_gradients/training/datasets/samplers/distributed_sampler_wrapper.py, lines 17-22
DistributedSamplerWrapper
Bases: DistributedSampler
Wrapper over Sampler for distributed training. Allows you to use any sampler in distributed mode. It is especially useful in conjunction with torch.nn.parallel.DistributedDataParallel. In such a case, each process can pass a DistributedSamplerWrapper instance as a DataLoader sampler, and load a subset of subsampled data of the original dataset that is exclusive to it.

.. note:: Sampler is assumed to be of constant size.
Source code in src/super_gradients/training/datasets/samplers/distributed_sampler_wrapper.py, lines 25-71
__init__(sampler, num_replicas=None, rank=None, shuffle=True)
Args:
    sampler: Sampler used for subsampling
    num_replicas (int, optional): Number of processes participating in distributed training
    rank (int, optional): Rank of the current process within num_replicas
    shuffle (bool, optional): If true (default), the sampler will shuffle the indices
Source code in src/super_gradients/training/datasets/samplers/distributed_sampler_wrapper.py, lines 40-64
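A sketch of typical usage under DDP (the dataset and weights here are placeholders, not library defaults):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler
from super_gradients.training.datasets.samplers.distributed_sampler_wrapper import DistributedSamplerWrapper

dataset = TensorDataset(torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,)))  # placeholder data
weights = torch.ones(len(dataset))                        # placeholder per-sample weights
base_sampler = WeightedRandomSampler(weights, num_samples=len(weights))
sampler = DistributedSamplerWrapper(base_sampler)         # num_replicas/rank inferred from the process group
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(10):
    sampler.set_epoch(epoch)  # re-seed shuffling each epoch, as with any DistributedSampler
    for images, labels in loader:
        ...
```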
RepeatAugSampler
Bases: Sampler
Sampler that restricts data loading to a subset of the dataset for distributed training, with repeated augmentation. It ensures that each augmented version of a sample is visible to a different process (GPU). Heavily based on torch.utils.data.DistributedSampler. This sampler was taken from https://github.com/facebookresearch/deit/blob/0c4b8f60/samplers.py (Copyright (c) 2015-present, Facebook, Inc.)
Below code is modified from: https://github.com/rwightman/pytorch-image-models/blame/master/timm/data/distributed_sampler.py
Note this sampler is currently supported only for DDP training.
Arguments:
    dataset (torch.utils.data.Dataset): dataset to sample from.
    num_replicas (int): Number of dataset replicas; equals world_size when set to 0 (default=0).
    shuffle (bool): whether to shuffle the dataset indices (default=True).
    num_repeats (int): number of repetitions for each example.
    selected_round (int): When > 0, the number of samples to select per epoch for each rank is determined by int(math.floor(len(self.dataset) // selected_round * selected_round / selected_ratio)) (default=256).
    selected_ratio (int): ratio to reduce selected samples by; num_replicas if 0.
Source code in src/super_gradients/training/datasets/samplers/repeated_augmentation_sampler.py, lines 12-112
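To make the selected_round arithmetic concrete, a worked example (the numbers are illustrative, not defaults):

```python
import math

dataset_len, selected_round, selected_ratio = 50_000, 256, 8  # selected_ratio falls back to num_replicas when 0
num_selected = int(math.floor(dataset_len // selected_round * selected_round / selected_ratio))
# 50_000 // 256 = 195; 195 * 256 = 49_920; 49_920 / 8 = 6_240 samples per rank per epoch
print(num_selected)  # 6240
```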
CityscapesConcatDataset
Bases: ConcatDataset
Supports building a Cityscapes dataset that includes multiple groups of samples from several list files, e.g. to instantiate a trainval dataset:

>> trainval_set = CityscapesConcatDataset(
    root_dir='/data',
    list_files=['lists/train.lst', 'lists/val.lst'],
    labels_csv_path='lists/labels.csv',
    ...
)

or a combination of the train set with the AutoLabelling set:

>> train_al_set = CityscapesConcatDataset(
    root_dir='/data',
    list_files=['lists/train.lst', 'lists/auto_labelling.lst'],
    labels_csv_path='lists/labels.csv',
    ...
)
Source code in src/super_gradients/training/datasets/segmentation_datasets/cityscape_segmentation.py, lines 150-183
__init__(root_dir, list_files, labels_csv_path, **kwargs)
Parameters:

Name | Type | Description | Default |
---|---|---|---|
root_dir | str | Absolute path to the root directory of the dataset. | required |
list_files | List[str] | List of list files that contain the names of the images to load, line format: | required |
labels_csv_path | str | Path to a csv file with labels metadata and mapping. The path is relative to root. | required |
kwargs | | Any hyper params required for the dataset, e.g. img_size, crop_size, cache_images | {} |
Source code in src/super_gradients/training/datasets/segmentation_datasets/cityscape_segmentation.py, lines 165-183
CityscapesDataset
Bases: SegmentationDataSet
CityscapesDataset - Segmentation Data Set Class for the Cityscapes Segmentation Data Set; main resolution of the dataset: (2048 x 1024). Not all of the original labels are used for training and evaluation; according to the Cityscapes paper: "Classes that are too rare are excluded from our benchmark, leaving 19 classes for evaluation". For more details about the dataset's label format see: https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/helpers/labels.py
To use this Dataset you need to:
-
Download the Cityscapes dataset (https://www.cityscapes-dataset.com/downloads/)
root_dir (in recipe default to /data/cityscapes)
├─── gtFine
│    ├── test
│    │   ├── berlin
│    │   │   ├── berlin_000000_000019_gtFine_color.png
│    │   │   ├── berlin_000000_000019_gtFine_instanceIds.png
│    │   │   └── ...
│    │   ├── bielefeld
│    │   │   └── ...
│    │   └── ...
│    ├─── train
│    │    └── ...
│    └─── val
│         └── ...
└─── leftImg8bit
     ├── test
     │    └── ...
     ├─── train
     │    └── ...
     └─── val
          └── ...
-
Download the metadata folder (https://deci-pretrained-models.s3.amazonaws.com/cityscape_lists.zip)
lists
├── labels.csv
├── test.lst
├── train.lst
├── trainval.lst
└── val.lst
-
Move the metadata folder into the Cityscapes root folder
root_dir (in recipe default to /data/cityscapes)
├─── gtFine
│    └── ...
├─── leftImg8bit
│    └── ...
└─── lists
     └── ...
Example: >> CityscapesDataset(root_dir='.../root_dir', list_file='lists/train.lst', labels_csv_path='lists/labels.csv', ...)
Source code in src/super_gradients/training/datasets/segmentation_datasets/cityscape_segmentation.py, lines 17-147
__init__(root_dir, list_file, labels_csv_path, **kwargs)
Parameters:

Name | Type | Description | Default |
---|---|---|---|
root_dir | str | Absolute path to the root directory of the dataset. | required |
list_file | str | List file that contains the names of the images to load, line format: | required |
labels_csv_path | str | Path to a csv file with labels metadata and mapping. The path is relative to root. | required |
kwargs | | Any hyper params required for the dataset, e.g. img_size, crop_size, cache_images | {} |
Source code in src/super_gradients/training/datasets/segmentation_datasets/cityscape_segmentation.py, lines 76-93
target_loader(label_path)
Override target_loader function, load the labels mask image.

:param label_path: Path to the label image.
:return: The mask image created from the array, with converted class labels.
Source code in src/super_gradients/training/datasets/segmentation_datasets/cityscape_segmentation.py, lines 106-118
CoCoSegmentationDataSet
Bases: SegmentationDataSet
Segmentation Data Set Class for COCO 2017 Segmentation Data Set
To use this Dataset you need to:
- Download coco dataset:
annotations: http://images.cocodataset.org/annotations/annotations_trainval2017.zip
train2017: http://images.cocodataset.org/zips/train2017.zip
val2017: http://images.cocodataset.org/zips/val2017.zip
- Unzip and organize it as below:
coco
├── annotations
│ ├─ instances_train2017.json
│ ├─ instances_val2017.json
│ └─ ...
└── images
├── train2017
│ ├─ 000000000001.jpg
│ └─ ...
└── val2017
└─ ...
- Instantiate the dataset:
>> train_set = CoCoSegmentationDataSet(data_dir='.../coco', subdir='images/train2017', json_file='instances_train2017.json', ...)
>> valid_set = CoCoSegmentationDataSet(data_dir='.../coco', subdir='images/val2017', json_file='instances_val2017.json', ...)
Source code in src/super_gradients/training/datasets/segmentation_datasets/coco_segmentation.py, lines 25-167
target_loader(mask_metadata_tuple)
target_loader

:param mask_metadata_tuple: A tuple of (coco_image_id, original_image_height, original_image_width)
:return: The mask image created from the array
Source code in src/super_gradients/training/datasets/segmentation_datasets/coco_segmentation.py, lines 89-99
MapillaryDataset
Bases: SegmentationDataSet
Mapillary Vistas is a large-scale urban street-view dataset. It contains 18k, 2k, and 5k images for training, validation, and testing, with a variety of image resolutions ranging from 1024 × 768 to 4000 × 6000. Paper: "Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulò, and Peter Kontschieder. The Mapillary Vistas dataset for semantic understanding of street scenes. In ICCV, 2017." https://openaccess.thecvf.com/content_ICCV_2017/papers/Neuhold_The_Mapillary_Vistas_ICCV_2017_paper.pdf Official site: https://www.mapillary.com/ (register for free, then download the Vistas dataset)
Source code in src/super_gradients/training/datasets/segmentation_datasets/mapillary_dataset.py, lines 12-118
apply_color_map(target)
Convert a greyscale target PIL image to an RGB numpy array according to the official Mapillary color map.
Source code in src/super_gradients/training/datasets/segmentation_datasets/mapillary_dataset.py, lines 107-118
PascalAUG2012SegmentationDataSet
Bases: PascalVOC2012SegmentationDataSet
Segmentation Data Set Class for Pascal AUG 2012 Data Set
- Download pascal AUG 2012 dataset:
https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz
- Unzip and organize it as below:
pascal_voc_2012
└──VOCaug
├── aug.txt
└── dataset
├──inst
├──img
└──cls
- Instantiate the dataset:
>> train_set = PascalAUG2012SegmentationDataSet(
root='.../pascal_voc_2012',
list_file='VOCaug/dataset/aug.txt',
samples_sub_directory='VOCaug/dataset/img',
targets_sub_directory='VOCaug/dataset/cls',
...
)
NOTE: this dataset is only available for training. To test, please use PascalVOC2012SegmentationDataSet.
Source code in src/super_gradients/training/datasets/segmentation_datasets/pascal_voc_segmentation.py, lines 176-219
target_loader(target_path)
staticmethod
target_loader

:param target_path: The path to the target data
:return: The loaded target
Source code in src/super_gradients/training/datasets/segmentation_datasets/pascal_voc_segmentation.py, lines 210-219
PascalVOC2012SegmentationDataSet
Bases: SegmentationDataSet
Segmentation Data Set Class for Pascal VOC 2012 Data Set.
To use this Dataset you need to:
- Download pascal VOC 2012 dataset:
http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
- Unzip and organize it as below:
pascal_voc_2012
└──VOCdevkit
└──VOC2012
├──JPEGImages
├──SegmentationClass
├──ImageSets
│ ├──Segmentation
│ │ └── train.txt
│ ├──Main
│ ├──Action
│ └──Layout
├──Annotations
└──SegmentationObject
- Instantiate the dataset:
>> train_set = PascalVOC2012SegmentationDataSet(
root='.../pascal_voc_2012',
list_file='VOCdevkit/VOC2012/ImageSets/Segmentation/train.txt',
samples_sub_directory='VOCdevkit/VOC2012/JPEGImages',
targets_sub_directory='VOCdevkit/VOC2012/SegmentationClass',
...
)
>> valid_set = PascalVOC2012SegmentationDataSet(
root='.../pascal_voc_2012',
list_file='VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt',
samples_sub_directory='VOCdevkit/VOC2012/JPEGImages',
targets_sub_directory='VOCdevkit/VOC2012/SegmentationClass',
...
)
Source code in src/super_gradients/training/datasets/segmentation_datasets/pascal_voc_segmentation.py, lines 41-173
decode_segmentation_mask(label_mask)
decode_segmentation_mask - Decodes the colors for the segmentation mask.

:param label_mask: an (M, N) array of integer values denoting the class label at each spatial location.
Returns:
Type | Description |
---|---|
Source code in src/super_gradients/training/datasets/segmentation_datasets/pascal_voc_segmentation.py, lines 98-120
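The decoding amounts to a palette lookup; an illustrative sketch with a hypothetical, abbreviated palette (the real PASCAL color map has 21 entries; the colors below are placeholders):

```python
import numpy as np

# Hypothetical 3-entry palette: class index -> RGB.
PALETTE = np.array([[0, 0, 0], [128, 0, 0], [0, 128, 0]], dtype=np.uint8)

def decode_mask_sketch(label_mask: np.ndarray) -> np.ndarray:
    # (M, N) integer labels -> (M, N, 3) RGB image via fancy indexing.
    return PALETTE[label_mask]

rgb = decode_mask_sketch(np.array([[0, 1], [2, 1]]))  # shape (2, 2, 3)
```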
PascalVOCAndAUGUnifiedDataset
Bases: ConcatDataset
Pascal VOC + AUG train dataset, a.k.a. the SBD dataset contributed in "Semantic contours from inverse detectors". This class implements the common usage of the SBD and PascalVOC datasets as a unified, augmented train set. The unified dataset includes a total of 10,582 samples and contains no duplicates of samples from the PascalVOC validation set.
To use this Dataset you need to:
- Download pascal datasets:
VOC 2012: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
AUG 2012: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz
- Unzip and organize it as below:
pascal_voc_2012
├─VOCdevkit
│ └──VOC2012
│ ├──JPEGImages
│ ├──SegmentationClass
│ ├──ImageSets
│ │ ├──Segmentation
│ │ │ └── train.txt
│ │ ├──Main
│ │ ├──Action
│ │ └──Layout
│ ├──Annotations
│ └──SegmentationObject
└──VOCaug
├── aug.txt
└── dataset
├──inst
├──img
└──cls
- Instantiate the dataset:
>> train_set = PascalVOCAndAUGUnifiedDataset(root='.../pascal_voc_2012', ...)
NOTE: this dataset is only available for training. To test, please use PascalVOC2012SegmentationDataSet.
Source code in src/super_gradients/training/datasets/segmentation_datasets/pascal_voc_segmentation.py, lines 222-283
SegmentationDataSet
Bases: DirectoryDataSet, ListDataset, HasPreprocessingParams
Source code in src/super_gradients/training/datasets/segmentation_datasets/segmentation_dataset.py, lines 16-208
__init__(root, list_file=None, samples_sub_directory=None, targets_sub_directory=None, cache_labels=False, cache_images=False, collate_fn=None, target_extension='.png', transforms=None)
SegmentationDataSet

:param root: Root folder of the Data Set
:param list_file: Path to the file with the samples list
:param samples_sub_directory: name of the samples sub-directory
:param targets_sub_directory: name of the targets sub-directory
:param cache_labels: "Caches" the labels -> pre-loads them to memory as a list
:param cache_images: "Caches" the images -> pre-loads them to memory as a list
:param collate_fn: collate_fn func to process batches for the Data Loader
:param target_extension: file extension of the targets (default is .png for PASCAL VOC 2012)
:param transforms: transforms to be applied on image and mask
Source code in src/super_gradients/training/datasets/segmentation_datasets/segmentation_dataset.py, lines 18-72
get_dataset_preprocessing_params()
Returns the hardcoded preprocessing params, adapted for PIL.Image image reading (RGB). image_processor is returned as a list of dicts to be resolved by the processing factory.
Returns:
Type | Description |
---|---|
Source code in src/super_gradients/training/datasets/segmentation_datasets/segmentation_dataset.py, lines 193-208
sample_loader(sample_path)
staticmethod
sample_loader - Loads a dataset image from path using PIL

:param sample_path: The path to the sample image
:return: The loaded Image
Source code in src/super_gradients/training/datasets/segmentation_datasets/segmentation_dataset.py, lines 93-101
target_loader(target_path)
staticmethod
target_loader

:param target_path: The path to the sample image
:return: The loaded Image
Source code in src/super_gradients/training/datasets/segmentation_datasets/segmentation_dataset.py, lines 103-111
SuperviselyPersonsDataset
Bases: SegmentationDataSet
SuperviselyPersonsDataset - Segmentation Data Set Class for the Supervisely Persons Segmentation Data Set; main resolution of the dataset: (600 x 800). This dataset is a subset of the original dataset (see below) and contains filtered samples. For more details about the ORIGINAL dataset see: https://app.supervise.ly/ecosystem/projects/persons For more details about the FILTERED dataset see: https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.3/contrib/PP-HumanSeg
To use this Dataset you need to:
- Download supervisely dataset:
https://deci-pretrained-models.s3.amazonaws.com/supervisely-persons.zip
- Unzip:
supervisely-persons
├──images
│ ├──image-name.png
│ └──...
├──images_600x800
│ ├──image-name.png
│ └──...
├──masks
└──masks_600x800
- Instantiate the dataset:
>> train_set = SuperviselyPersonsDataset(root_dir='.../supervisely-persons', list_file='train.csv', ...)
>> valid_set = SuperviselyPersonsDataset(root_dir='.../supervisely-persons', list_file='val.csv', ...)
Source code in src/super_gradients/training/datasets/segmentation_datasets/supervisely_persons_segmentation.py, lines 9-65
__init__(root_dir, list_file, **kwargs)
Parameters:

Name | Type | Description | Default |
---|---|---|---|
root_dir | str | Root directory of the dataset. | required |
list_file | str | List file that contains the names of the images to load, line format: | required |
kwargs | | Any hyper params required for the dataset, e.g. img_size, crop_size, etc. | {} |
Source code in src/super_gradients/training/datasets/segmentation_datasets/supervisely_persons_segmentation.py, lines 43-51
BaseSgVisionDataset
Bases: VisionDataset
BaseSgVisionDataset
Source code in src/super_gradients/training/datasets/sg_dataset.py, lines 13-107
__getitem__(item)
Parameters:

Name | Type | Description | Default |
---|---|---|---|
item | | | required |

Returns:

Type | Description |
---|---|
Source code in src/super_gradients/training/datasets/sg_dataset.py, lines 50-56
__init__(root, sample_loader=default_loader, target_loader=None, collate_fn=None, valid_sample_extensions=IMG_EXTENSIONS, sample_transform=None, target_transform=None)
Ctor

:param root:
:param sample_loader:
:param target_loader:
:param collate_fn:
:param valid_sample_extensions:
:param sample_transform:
:param target_transform:
Source code in src/super_gradients/training/datasets/sg_dataset.py, lines 18-48
__len__()
Returns:
Type | Description |
---|---|
Source code in src/super_gradients/training/datasets/sg_dataset.py, lines 58-63
numpy_loader_func(path)
staticmethod
_numpy_loader_func - Uses numpy load func

:param path:
:return:
Source code in src/super_gradients/training/datasets/sg_dataset.py, lines 83-90
text_file_loader_func(text_file_path, inline_splitter=' ')
staticmethod
text_file_loader_func - Uses line-by-line parsing to get vectorized data from a text-based file

:param text_file_path: Input text file
:param inline_splitter: The char to use to separate different VALUES of the SAME vector; note that DIFFERENT VECTORS should be placed on SEPARATE LINES (i.e., separated by '\n')
:return: a list of tuples, where each tuple is a vector of target values
Source code in src/super_gradients/training/datasets/sg_dataset.py, lines 92-107
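A minimal re-implementation of the described parsing (a sketch, not the library code):

```python
from typing import List, Tuple

def text_file_loader_sketch(text_file_path: str, inline_splitter: str = " ") -> List[Tuple[float, ...]]:
    vectors = []
    with open(text_file_path) as f:
        for line in f:  # one vector per line
            parts = line.strip().split(inline_splitter)
            if parts and parts[0]:
                vectors.append(tuple(float(value) for value in parts))
    return vectors
```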
DirectoryDataSet
Bases: BaseSgVisionDataset
DirectoryDataSet - A PyTorch Vision Data Set extension that receives a root dir and two separate sub-directories:
- a sub-directory for samples
- a sub-directory for targets
Source code in src/super_gradients/training/datasets/sg_dataset.py, lines 110-212
__getitem__(item)
getter method for iteration

:param item:
:return:
Source code in src/super_gradients/training/datasets/sg_dataset.py, lines 163-177
__init__(root, samples_sub_directory, targets_sub_directory, target_extension, sample_loader=default_loader, target_loader=None, collate_fn=None, sample_extensions=IMG_EXTENSIONS, sample_transform=None, target_transform=None)
CTOR

:param root: root directory that contains all of the Data Set
:param samples_sub_directory: name of the samples sub-directory
:param targets_sub_directory: name of the targets sub-directory
:param sample_extensions: file extensions for samples
:param target_extension: file extension of the targets
:param sample_loader: Func to load samples
:param target_loader: Func to load targets
:param collate_fn: collate_fn func to process batches for the Data Loader
:param sample_transform: Func to pre-process samples for data loading
:param target_transform: Func to pre-process targets for data loading
Source code in src/super_gradients/training/datasets/sg_dataset.py, lines 118-161
ListDataset
Bases: BaseSgVisionDataset
ListDataset - A PyTorch Vision Data Set extension that receives a file with the FULL PATH to each of the samples. The assumption is that for every sample there is a *matching target* at the same path but with a different extension; i.e. for the sample paths (which appear in the list file):
    /root/dataset/class_x/sample1.png
    /root/dataset/class_y/sample123.png
the matching label paths (which DO NOT appear in the list file) are:
    /root/dataset/class_x/sample1.ext
    /root/dataset/class_y/sample123.ext
A one-line sketch of this matching appears below the source reference.
Source code in src/super_gradients/training/datasets/sg_dataset.py, lines 215-301
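As promised above, the sample-to-target matching can be sketched in one line: swap the sample's extension for target_extension (the path values here are illustrative):

```python
import os

def matching_target_path(sample_path: str, target_extension: str = ".npy") -> str:
    # '/root/dataset/class_x/sample1.png' -> '/root/dataset/class_x/sample1.npy'
    return os.path.splitext(sample_path)[0] + target_extension
```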
__getitem__(item)
Parameters:

Name | Type | Description | Default |
---|---|---|---|
item | int | Index | required |

Returns:

Type | Description |
---|---|
Tuple[Any, Any] | Tuple (sample, target) where target is class_index of the target class. |
Source code in src/super_gradients/training/datasets/sg_dataset.py, lines 273-286
__init__(root, file, sample_loader=default_loader, target_loader=None, collate_fn=None, sample_extensions=IMG_EXTENSIONS, sample_transform=None, target_transform=None, target_extension='.npy')
CTOR

:param root: root directory that contains all of the Data Set
:param file: Path to the file with the samples list
:param sample_extensions: file extension for samples
:param target_extension: file extension of the targets
:param sample_loader: Func to load samples
:param target_loader: Func to load targets
:param collate_fn: collate_fn func to process batches for the Data Loader
:param sample_transform: Func to pre-process samples for data loading
:param target_transform: Func to pre-process targets for data loading
Source code in src/super_gradients/training/datasets/sg_dataset.py, lines 229-271