Metrics
ToyTestClassificationMetric
Bases: Metric
Dummy classification Metric object that always returns 0 (for testing).
Source code in src/super_gradients/training/metrics/classification_metrics.py, lines 81-93
accuracy(output, target, topk=(1,))

Computes the precision@k for the specified values of k.

Parameters:

Name | Type | Description | Default
---|---|---|---
output | Tensor / Numpy / List | The prediction | required
target | Tensor / Numpy / List | The corresponding labels | required
topk | tuple | The type of accuracy to calculate, e.g. topk=(1, 5) returns accuracy for top-1 and top-5 | (1,)
Source code in src/super_gradients/training/metrics/classification_metrics.py, lines 10-37
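A minimal usage sketch for the helper above; the batch shapes and the unpacking of the returned per-k values are illustrative assumptions, not taken from the source:

```python
import torch

from super_gradients.training.metrics.classification_metrics import accuracy

# Hypothetical batch: logits for 4 samples over 10 classes
output = torch.randn(4, 10)
target = torch.tensor([1, 0, 3, 7])

# One value per requested k (top-1 and top-5 accuracy here)
top1, top5 = accuracy(output, target, topk=(1, 5))
```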
DetectionMetrics
Bases: Metric
Metric class for computing F1, Precision, Recall and Mean Average Precision.
Parameters:

Name | Type | Description | Default
---|---|---|---
num_cls | int | Number of classes. | required
post_prediction_callback | DetectionPostPredictionCallback | DetectionPostPredictionCallback to be applied on the net's output prior to the metric computation (NMS). | required
normalize_targets | bool | Whether to normalize bbox coordinates by image size. | False
iou_thres | Union[IouThreshold, Tuple[float, float], float] | IoU threshold to compute the mAP. Could be either an instance of IouThreshold, a tuple (lower bound, upper bound) or a single scalar. | IouThreshold.MAP_05_TO_095
recall_thres | torch.Tensor | Recall threshold to compute the mAP. | None
score_thres | float | Score threshold to compute Recall, Precision and F1. | 0.1
top_k_predictions | int | Number of predictions per class used to compute metrics, ordered by confidence score. | 100
dist_sync_on_step | bool | Synchronize metric state across processes at each forward() before returning the value at the step. | False
accumulate_on_cpu | bool | Run on CPU regardless of the device used in other parts. This is to avoid the "CUDA out of memory" error that might happen on GPU. | True
include_classwise_ap | bool | Whether to include the class-wise average precision in the returned metrics dictionary. If enabled, the output metrics dictionary will look similar to: {'Precision0.5:0.95': 0.5, 'Recall0.5:0.95': 0.5, 'F10.5:0.95': 0.5, 'mAP0.5:0.95': 0.5, 'AP0.5:0.95_person': 0.5, 'AP0.5:0.95_car': 0.5, 'AP0.5:0.95_bicycle': 0.5, 'AP0.5:0.95_motorcycle': 0.5, ...}. Class names are either provided via the class_names parameter or generated automatically. | False
class_names | List[str] | Array of class names. When include_classwise_ap=True, these names are used to build the per-class AP keys in the output metrics dictionary. If None, dummy names are used. | None
Source code in src/super_gradients/training/metrics/detection_metrics.py, lines 20-243
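A hedged construction sketch that uses only the parameters documented above; `my_nms_callback` and the class-name list are placeholders you would supply from your own model and dataset:

```python
from super_gradients.training.metrics import DetectionMetrics

metric = DetectionMetrics(
    num_cls=3,
    post_prediction_callback=my_nms_callback,  # placeholder DetectionPostPredictionCallback
    normalize_targets=True,
    iou_thres=0.5,                             # a single scalar; a tuple or IouThreshold also works
    include_classwise_ap=True,
    class_names=["person", "car", "bicycle"],
)
```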
compute()
Compute the metrics for all the accumulated results.
Returns:

Type | Description
---|---
Dict[str, Union[float, torch.Tensor]] | Metrics of interest
Source code in src/super_gradients/training/metrics/detection_metrics.py, lines 169-215
update(preds, target, device, inputs, crowd_targets=None)
Apply NMS and match all the predictions and targets of a given batch, and update the metric state accordingly.
Parameters:

Name | Type | Description | Default
---|---|---|---
preds | | Raw output of the model. The format might change from one model to another, but it has to fit the input format of the post_prediction_callback (cx, cy, w, h). | required
target | torch.Tensor | Targets for all images, of shape (total_num_targets, 6) in LABEL_CXCYWH format: (index, label, cx, cy, w, h). | required
device | str | Device to run on. | required
inputs | torch.tensor | Input image tensor of shape (batch_size, n_img, height, width). | required
crowd_targets | Optional[torch.Tensor] | Crowd targets for all images, of shape (total_num_targets, 6) in LABEL_CXCYWH format. | None
Source code in src/super_gradients/training/metrics/detection_metrics.py, lines 134-167
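Tying `update()` and `compute()` together, a sketch of a typical evaluation loop (continuing the construction sketch above; `model` and `val_loader` are placeholders):

```python
for inputs, targets in val_loader:  # placeholder dataloader
    preds = model(inputs)           # raw output in the post_prediction_callback's expected format
    metric.update(preds, targets, device="cuda", inputs=inputs)

results = metric.compute()          # dict of Precision/Recall/F1/mAP values
```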
flatten_metrics_dict(metrics_dict)
Returns:

Type | Description
---|---
| Flattened dict of metric values, i.e. {metric1_name: metric1_value, ...}
Source code in src/super_gradients/training/metrics/metric_utils.py, lines 58-76
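An illustrative call; the nested input and the exact flattening behavior shown in the comment are assumptions based on the description above:

```python
from super_gradients.training.metrics.metric_utils import flatten_metrics_dict

metrics = {"mAP@0.50": 0.41, "per_class": {"person": 0.55, "car": 0.38}}
flat = flatten_metrics_dict(metrics)
# Assumed result: {"mAP@0.50": 0.41, "person": 0.55, "car": 0.38}
```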
get_logging_values(loss_loggings, metrics, criterion=None)
Parameters:

Name | Type | Description | Default
---|---|---|---
loss_loggings | AverageMeter | AverageMeter running average for the loss items | required
metrics | MetricCollection | MetricCollection object for running user-specified metrics | required

Returns:

Type | Description
---|---
| Tuple of the computed values
Source code in src/super_gradients/training/metrics/metric_utils.py, lines 6-23
get_metrics_dict(metrics_tuple, metrics_collection, loss_logging_item_names)
Returns a dictionary with the epoch results as values and their names as keys.
Parameters:

Name | Type | Description | Default
---|---|---|---
metrics_tuple | | The result tuple | required
metrics_collection | MetricCollection | | required
loss_logging_item_names | | The loss components' names | required

Returns:

Type | Description
---|---
| dict
Source code in src/super_gradients/training/metrics/metric_utils.py, lines 79-89
get_metrics_results_tuple(metrics_collection)
Parameters:

Name | Type | Description | Default
---|---|---|---
metrics_collection | MetricCollection | Metrics collection of the user-specified metrics | required

Returns:

Type | Description
---|---
| Tuple of metric values
Source code in src/super_gradients/training/metrics/metric_utils.py, lines 44-55
get_metrics_titles(metrics_collection)
Parameters:

Name | Type | Description | Default
---|---|---|---
metrics_collection | MetricCollection | MetricCollection object for running user-specified metrics | required

Returns:

Type | Description
---|---
| List of the names of the computed values, as list(str)
Source code in src/super_gradients/training/metrics/metric_utils.py, lines 26-41
get_train_loop_description_dict(metrics_tuple, metrics_collection, loss_logging_item_names, **log_items)
Returns a dictionary with the epoch's logging items as values and their names as keys, with the purpose of passing it as a description to tqdm's progress bar.
Parameters:

Name | Type | Description | Default
---|---|---|---
metrics_tuple | | The result tuple | required
metrics_collection | MetricCollection | | required
loss_logging_item_names | | The loss components' names | required

Returns:

Type | Description
---|---
| dict
Source code in src/super_gradients/training/metrics/metric_utils.py, lines 92-108
PoseEstimationMetrics
Bases: Metric
Implementation of the COCO Keypoint evaluation metric. When instantiated with default parameters, it defaults to COCO params. By default, only AP and AR metrics are computed:

```python
from super_gradients.training.metrics import PoseEstimationMetrics

metric = PoseEstimationMetrics(...)
metric.update(...)
metrics = metric.compute()  # {"AP": 0.123, "AR": 0.456}
```

If you wish to get AP/AR at specific thresholds, you can specify them using the iou_thresholds_to_report argument:

```python
from super_gradients.training.metrics import PoseEstimationMetrics

metric = PoseEstimationMetrics(iou_thresholds_to_report=[0.5, 0.75], ...)
metric.update(...)
metrics = metric.compute()
# {"AP": 0.123, "AP_0.5": 0.222, "AP_0.75": 0.111, "AR": 0.456, "AR_0.5": 0.212, "AR_0.75": 0.443}
```
Source code in src/super_gradients/training/metrics/pose_estimation_metrics.py, lines 24-381
__init__(post_prediction_callback, num_joints, max_objects_per_image=20, oks_sigmas=None, iou_thresholds=None, recall_thresholds=None, iou_thresholds_to_report=None)
Compute the AP & AR metrics for pose estimation. By default, this class returns only AP and AR values.
If you need to get additional metrics (AP at specific threshold), pass these thresholds via iou_thresholds_to_report
argument.
Parameters:

Name | Type | Description | Default
---|---|---|---
post_prediction_callback | AbstractPoseEstimationPostPredictionCallback | A callback to decode model predictions to poses. This should be a callable that takes input (model predictions) and returns a tuple of (poses, scores). | required
num_joints | int | Number of joints per pose. | required
max_objects_per_image | int | Maximum number of predicted poses to include in the evaluation (the top-K poses will be used). | 20
oks_sigmas | Optional[Iterable] | OKS sigma factors for a custom keypoint detection dataset. If None, the metric will use the default OKS sigmas from COCO and expect num_joints to equal 17. | None
recall_thresholds | Optional[Iterable] | List of recall thresholds to compute AP at. If None, the default 101 recall thresholds from COCO in the range [0..1] are used. | None
iou_thresholds | Optional[Iterable] | List of IoU thresholds to use. If None, the COCO range of IoU thresholds is used (0.5 ... 0.95). | None
iou_thresholds_to_report | Optional[Iterable] | IoU thresholds at which to additionally report AP/AR values (see the class docstring above). | None
Source code in src/super_gradients/training/metrics/pose_estimation_metrics.py, lines 45-129
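A construction sketch using the documented parameters; the decode callback is a placeholder:

```python
from super_gradients.training.metrics import PoseEstimationMetrics

metric = PoseEstimationMetrics(
    post_prediction_callback=my_pose_decode_callback,  # placeholder callback returning (poses, scores)
    num_joints=17,                                     # COCO layout; matches the default OKS sigmas
    iou_thresholds_to_report=[0.5, 0.75],              # also report AP/AR at these thresholds
)
```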
compute()
Compute the metrics for all the accumulated results.
Returns:

Type | Description
---|---
Dict[str, Union[float, torch.Tensor]] | Metrics of interest
Source code in src/super_gradients/training/metrics/pose_estimation_metrics.py, lines 335-381
update(preds, target, gt_joints=None, gt_iscrowd=None, gt_bboxes=None, gt_areas=None, gt_samples=None)
Decode the predictions and update the metric.
The signature of this method is a bit complicated because we want to support both the old style of passing groundtruth information (gt_joints, gt_iscrowd, gt_bboxes, gt_areas) and the new style of passing groundtruth information as a list of PoseEstimationSample objects.
Passing PoseEstimationSample objects is the more convenient and default way to go with the sample-centric datasets introduced in SuperGradients 3.3. The two options are mutually exclusive: if you pass gt_samples, all other groundtruth arguments are ignored, and vice versa.
Parameters:

Name | Type | Description | Default
---|---|---|---
preds | | Raw output of the model | required
target | Any | Targets for the model training (not used for evaluation) | required
gt_joints | List[np.ndarray] | List of ground-truth joints for each image in the batch. Each element is a numpy array of shape (num_instances, num_joints, 3). Note that augmentation/preprocessing transformations (affine transforms specifically) must also be applied to gt_joints, to ensure the joint coordinates are transformed identically to the image. This differs from COCO evaluation, where predictions are rescaled back to the original size of the image; that would make the code much more (and unnecessarily) complicated, so we do it differently and evaluate joints in the coordinate system of the predicted image. | None
gt_iscrowd | List[np.ndarray] | Optional argument indicating which instances are annotated with the iscrowd flag. | None
gt_bboxes | List[np.ndarray] | Bounding boxes of the groundtruth instances (XYWH). This is COCO-specific and is used in the OKS computation for instances without visible keypoints. If not provided, the bounding box is computed as the minimum bounding box that contains all visible keypoints. | None
gt_areas | List[np.ndarray] | Areas of the groundtruth instances. In COCO this is the area of the corresponding segmentation mask, not the bounding box, so it cannot be computed programmatically. Its value is used in the object-keypoint similarity (OKS) computation. If not provided, the area is computed as the product of the width and height of the bounding box. (This is used, for instance, in the CrowdPose dataset.) | None
gt_samples | List[PoseEstimationSample] | List of ground-truth samples | None
Source code in src/super_gradients/training/metrics/pose_estimation_metrics.py, lines 134-184
update_single_image(predicted_poses, predicted_scores, gt_joints, gt_bboxes, gt_areas, gt_iscrowd)
Update the internal state of the metric with a single image's predictions and the corresponding groundtruth. The method computes OKS for the predicted poses, matches them to the groundtruth poses, and updates the internal state of the metric.
Parameters:

Name | Type | Description | Default
---|---|---|---
predicted_poses | Union[Tensor, np.ndarray] | Predicted poses of shape (num_instances, num_joints, 3) | required
predicted_scores | Union[Tensor, np.ndarray] | Predicted scores of shape (num_instances,) | required
gt_joints | np.ndarray | Groundtruth joints of shape (num_instances, num_joints, 3) | required
gt_bboxes | Optional[np.ndarray] | Groundtruth bounding boxes of shape (num_instances, 4) in XYWH format | required
gt_areas | Optional[np.ndarray] | Groundtruth areas of shape (num_instances,) | required
gt_iscrowd | Optional[np.ndarray] | Groundtruth is_crowd flags of shape (num_instances,) | required
Source code in src/super_gradients/training/metrics/pose_estimation_metrics.py, lines 237-314
compute_img_keypoint_matching(preds, pred_scores, targets, targets_visibilities, targets_areas, targets_bboxes, targets_ignored, crowd_targets, crowd_visibilities, crowd_targets_areas, crowd_targets_bboxes, iou_thresholds, sigmas, top_k)
Match predictions and the targets (ground truth) with respect to IoU and confidence score for a given image.
Parameters:

Name | Type | Description | Default
---|---|---|---
preds | Tensor | Tensor of shape (K, NumJoints, 3) - array of predicted skeletons. The last dimension encodes X, Y and the confidence score of each joint. | required
pred_scores | Tensor | Tensor of shape (K) - confidence scores for each pose. | required
targets | Tensor | Target joints of shape (M, NumJoints, 2) - array of groundtruth skeletons. | required
targets_visibilities | Tensor | Visibility status for each keypoint, of shape (M, NumJoints). Values are: 0 - invisible, 1 - occluded, 2 - fully visible. | required
targets_areas | Tensor | Tensor of shape (M) - areas of the target objects. | required
targets_bboxes | Tensor | Tensor of shape (M, 4) - bounding boxes (XYWH) of the targets. | required
targets_ignored | Tensor | Tensor of shape (M) - array of targets marked as ignored (e.g., all keypoints are invisible or the target does not fit the desired area range). | required
crowd_targets | Tensor | Crowd target joints of shape (Mc, NumJoints, 3) - array of groundtruth skeletons. The last dimension encodes X, Y and the visibility score of each joint: (0 - invisible, 1 - occluded, 2 - fully visible). | required
crowd_visibilities | Tensor | Visibility status for each keypoint of the crowd targets, of shape (Mc, NumJoints). Values are: 0 - invisible, 1 - occluded, 2 - fully visible. | required
crowd_targets_areas | Tensor | Tensor of shape (Mc) - areas of the crowd target objects. | required
crowd_targets_bboxes | Tensor | Tensor of shape (Mc, 4) - bounding boxes (XYWH) of the crowd targets. | required
iou_thresholds | torch.Tensor | IoU thresholds at which to compute the mAP. | required
sigmas | Tensor | Tensor of shape (NumJoints) with a sigma for each joint. The sigma value represents how 'hard' it is to locate the exact groundtruth position of the joint. | required
top_k | int | Number of predictions to keep, ordered by confidence score. | required
Returns:

Type | Description
---|---
ImageKeypointMatchingResult | An object with the following fields: preds_matched - tensor of shape (min(top_k, len(preds)), n_iou_thresholds), True when prediction (i) is matched with a target with respect to the (j)th IoU threshold; preds_to_ignore - tensor of shape (min(top_k, len(preds)), n_iou_thresholds), True when prediction (i) is matched with a crowd target with respect to the (j)th IoU threshold; preds_scores - tensor of shape (min(top_k, len(preds))) with the scores of the top-k predictions; num_targets - number of groundtruth targets (total number of targets minus the number of ignored ones).
Source code in src/super_gradients/training/metrics/pose_estimation_utils.py, lines 107-263
compute_oks(pred_joints, gt_joints, gt_keypoint_visibility, sigmas, gt_areas=None, gt_bboxes=None)
Parameters:

Name | Type | Description | Default
---|---|---|---
pred_joints | Tensor | [K, NumJoints, 2] or [K, NumJoints, 3] | required
gt_joints | Tensor | [M, NumJoints, 2] | required
gt_keypoint_visibility | Tensor | [M, NumJoints] | required
sigmas | Tensor | [NumJoints] | required
gt_areas | Tensor | [M] Area of each ground truth instance. COCOEval uses the area of the instance mask to scale OKS, so it must be provided separately. If None, the area of the bounding box of each instance computed from gt_joints is used. | None
gt_bboxes | Tensor | [M, 4] Bounding box (X, Y, W, H) of each ground truth instance. If None, the bounding box of each instance computed from gt_joints is used. | None
Returns:

Type | Description
---|---
np.ndarray | IoU matrix [K, M]
Source code in src/super_gradients/training/metrics/pose_estimation_utils.py, lines 35-96
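A runnable sketch of the call with random data; the shapes follow the table above, and the uniform sigma value is an arbitrary choice:

```python
import torch

from super_gradients.training.metrics.pose_estimation_utils import compute_oks

K, M, num_joints = 3, 2, 17
pred_joints = torch.rand(K, num_joints, 2) * 100
gt_joints = torch.rand(M, num_joints, 2) * 100
gt_visibility = torch.ones(M, num_joints)  # all keypoints fully visible
sigmas = torch.full((num_joints,), 0.05)   # arbitrary per-joint sigma

# gt_areas/gt_bboxes omitted: per the docs above, both fall back to values derived from gt_joints
oks = compute_oks(pred_joints, gt_joints, gt_visibility, sigmas)
print(oks.shape)  # (K, M) similarity matrix
```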
compute_visible_bbox_xywh(joints, visibility_mask)
Compute the bounding box (X,Y,W,H) of the visible joints for each instance.
Parameters:

Name | Type | Description | Default
---|---|---|---
joints | Tensor | [Num Instances, Num Joints, 2+]; the last channel must have a dimension of at least 2, which is considered to contain the (X, Y) coordinates of the keypoint | required
visibility_mask | Tensor | [Num Instances, Num Joints] | required

Returns:

Type | Description
---|---
np.ndarray | A numpy array [Num Instances, 4] whose last dimension contains the bbox in XYWH format
Source code in src/super_gradients/training/metrics/pose_estimation_utils.py, lines 8-32
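A small sketch of the documented behavior (the coordinates are made up):

```python
import torch

from super_gradients.training.metrics.pose_estimation_utils import compute_visible_bbox_xywh

# One instance with three joints; the third joint is not visible
joints = torch.tensor([[[10.0, 20.0], [30.0, 40.0], [90.0, 90.0]]])
visibility = torch.tensor([[1.0, 1.0, 0.0]])

bboxes = compute_visible_bbox_xywh(joints, visibility)
# XYWH box covering only the two visible joints, i.e. roughly [[10., 20., 20., 20.]]
```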
AbstractMetricsArgsPrepFn
Bases: ABC
Abstract preprocess metrics arguments class.
Source code in src/super_gradients/training/metrics/segmentation_metrics.py, lines 156-166
__call__(preds, target)
abstractmethod
All subclasses must implement this function and return a tuple of torch tensors (predictions, target).
Source code in src/super_gradients/training/metrics/segmentation_metrics.py, lines 161-166
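As a sketch, a hypothetical subclass for binary segmentation that thresholds single-channel logits (the class name and threshold are illustrative):

```python
import torch

from super_gradients.training.metrics.segmentation_metrics import AbstractMetricsArgsPrepFn


class BinarizePrepFn(AbstractMetricsArgsPrepFn):
    """Hypothetical prep fn: sigmoid + 0.5 threshold on predictions."""

    def __call__(self, preds: torch.Tensor, target: torch.Tensor):
        # Return the (predictions, target) tuple required by the contract above
        return (preds.sigmoid() > 0.5).long(), target.long()
```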
Dice
Bases: torchmetrics.JaccardIndex
Dice Coefficient Metric
Args:

- num_classes: Number of classes in the dataset.
- ignore_index: Optional[Union[int, List[int]]], specifying a target class(es) to ignore. If given, this class index does not contribute to the returned score, regardless of the reduction method. Has no effect if given an int that is not in the range [0, num_classes-1]. By default, no index is ignored, and all classes are used. IMPORTANT: reduction="none" together with a list of ignored indices is not supported and will raise an error.
- threshold: Threshold value for binary or multi-label probabilities.
- reduction: a method to reduce the metric score over labels:
  - ``'elementwise_mean'``: takes the mean (default)
  - ``'sum'``: takes the sum
  - ``'none'``: no reduction will be applied
- metrics_args_prep_fn: Callable, inputs preprocess function applied on preds and target before updating the metrics. By default set to PreprocessSegmentationMetricsArgs(apply_arg_max=True).
Source code in src/super_gradients/training/metrics/segmentation_metrics.py, lines 334-388
compute()
Computes the Dice coefficient.
Source code in src/super_gradients/training/metrics/segmentation_metrics.py, lines 386-388
IoU
Bases: torchmetrics.JaccardIndex
IoU Metric
Args:

- num_classes: Number of classes in the dataset.
- ignore_index: Optional[Union[int, List[int]]], specifying a target class(es) to ignore. If given, this class index does not contribute to the returned score, regardless of the reduction method. Has no effect if given an int that is not in the range [0, num_classes-1]. By default, no index is ignored, and all classes are used. IMPORTANT: reduction="none" together with a list of ignored indices is not supported and will raise an error.
- threshold: Threshold value for binary or multi-label probabilities.
- reduction: a method to reduce the metric score over labels:
  - ``'elementwise_mean'``: takes the mean (default)
  - ``'sum'``: takes the sum
  - ``'none'``: no reduction will be applied
- metrics_args_prep_fn: Callable, inputs preprocess function applied on preds and target before updating the metrics. By default set to PreprocessSegmentationMetricsArgs(apply_arg_max=True).
Source code in src/super_gradients/training/metrics/segmentation_metrics.py, lines 280-331
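A usage sketch for IoU (the same update/compute pattern applies to Dice above and PixelAccuracy below); the tensor shapes are illustrative:

```python
import torch

from super_gradients.training.metrics import IoU

metric = IoU(num_classes=19)

preds = torch.randn(2, 19, 64, 64)          # raw logits; argmax is applied by the default prep fn
target = torch.randint(0, 19, (2, 64, 64))  # integer class labels

metric.update(preds, target)
print(metric.compute())
```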
PixelAccuracy
Bases: Metric
Pixel Accuracy
Args:

- ignore_label: Optional[Union[int, List[int]]], specifying a target class(es) to ignore. If given, this class index does not contribute to the returned score, regardless of the reduction method. Has no effect if given an int that is not in the range [0, num_classes-1]. By default, no index is ignored, and all classes are used. IMPORTANT: reduction="none" together with a list of ignored indices is not supported and will raise an error.
- reduction: a method to reduce the metric score over labels:
  - ``'elementwise_mean'``: takes the mean (default)
  - ``'sum'``: takes the sum
  - ``'none'``: no reduction will be applied
- metrics_args_prep_fn: Callable, inputs preprocess function applied on preds and target before updating the metrics. By default set to PreprocessSegmentationMetricsArgs(apply_arg_max=True).
Source code in src/super_gradients/training/metrics/segmentation_metrics.py, lines 196-248
PreprocessSegmentationMetricsArgs
Bases: AbstractMetricsArgsPrepFn
Default preprocess function for segmentation inputs, applied before updating segmentation metrics; handles multiple inputs and applies normalizations.
Source code in src/super_gradients/training/metrics/segmentation_metrics.py, lines 169-193
__init__(apply_arg_max=False, apply_sigmoid=False)
Parameters:

Name | Type | Description | Default
---|---|---|---
apply_arg_max | bool | Whether to apply argmax on the predictions tensor. | False
apply_sigmoid | bool | Whether to apply sigmoid on the predictions tensor. | False
Source code in src/super_gradients/training/metrics/segmentation_metrics.py, lines 175-181
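For example, a binary-segmentation setup that applies sigmoid instead of the default argmax (a sketch; wiring it into PixelAccuracy via metrics_args_prep_fn follows the Args documented above):

```python
from super_gradients.training.metrics import PixelAccuracy
from super_gradients.training.metrics.segmentation_metrics import PreprocessSegmentationMetricsArgs

prep_fn = PreprocessSegmentationMetricsArgs(apply_sigmoid=True)
metric = PixelAccuracy(metrics_args_prep_fn=prep_fn)
```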
batch_intersection_union(predict, target, nclass)
Batch Intersection over Union

Parameters:

Name | Type | Description | Default
---|---|---|---
predict | torch.Tensor | Input 4D tensor | required
target | torch.Tensor | Label 3D tensor | required
nclass | int | Number of categories | required
Source code in src/super_gradients/training/metrics/segmentation_metrics.py, lines 31-53
batch_pix_accuracy(predict, target)
Batch Pixel Accuracy
Parameters:

Name | Type | Description | Default
---|---|---|---
predict | torch.Tensor | Input 4D tensor | required
target | torch.Tensor | Label 3D tensor | required
Source code in src/super_gradients/training/metrics/segmentation_metrics.py, lines 16-28
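A joint sketch of both helpers with the documented tensor ranks; the returned pairs in the comments are assumptions based on the conventional (correct, labeled) and (intersection, union) outputs of such functions:

```python
import torch

from super_gradients.training.metrics.segmentation_metrics import (
    batch_intersection_union,
    batch_pix_accuracy,
)

predict = torch.randn(2, 19, 32, 32)        # 4D prediction tensor (B, C, H, W)
target = torch.randint(0, 19, (2, 32, 32))  # 3D label tensor (B, H, W)

correct, labeled = batch_pix_accuracy(predict, target)               # assumed return pair
inter, union = batch_intersection_union(predict, target, nclass=19)  # assumed return pair

pix_acc = correct / (labeled + 1e-10)
miou = (inter / (union + 1e-10)).mean()
```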