Data types

EvaluationType

Bases: str, Enum

EvaluationType

Passed to Trainer.evaluate(...), and controls which phase callbacks should be triggered (if at all).

Source code in src/super_gradients/common/data_types/enum/evaluation_type.py
class EvaluationType(str, Enum):
    """
    EvaluationType

    Passed to Trainer.evaluate(...), and controls which phase callbacks should be triggered (if at all).
    """

    TEST = "TEST"
    """Evaluate on Test set."""

    VALIDATION = "VALIDATION"
    """Evaluate on Validation set."""

TEST = 'TEST' class-attribute

Evaluate on Test set.

VALIDATION = 'VALIDATION' class-attribute

Evaluate on Validation set.
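Because EvaluationType subclasses str, its members compare equal to their plain-string values, so callers can pass either form. A minimal sketch (the enum is redefined locally here so the example is self-contained):

```python
from enum import Enum


class EvaluationType(str, Enum):
    """Local mirror of super_gradients' EvaluationType, for illustration only."""

    TEST = "TEST"
    VALIDATION = "VALIDATION"


# str subclassing means members compare equal to plain strings...
assert EvaluationType.TEST == "TEST"
# ...and raw strings convert back into members via the enum constructor.
assert EvaluationType("VALIDATION") is EvaluationType.VALIDATION
```

This str-mixin pattern is why configuration files can specify the phase as a bare string while the trainer still works with enum members internally.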

MultiGPUMode

Bases: str, Enum

MultiGPUMode: Enumeration of the different ways to use GPUs.

Source code in src/super_gradients/common/data_types/enum/multi_gpu_mode.py
class MultiGPUMode(str, Enum):
    """MultiGPUMode: Enumeration of different ways to use gpu."""

    OFF = "Off"
    """Single GPU Mode / CPU Mode"""

    DATA_PARALLEL = "DP"
    """Multiple GPUs, Synchronous"""

    DISTRIBUTED_DATA_PARALLEL = "DDP"
    """Multiple GPUs, Asynchronous"""

    AUTO = "AUTO"
    """Runs "DDP" if more than 1 GPU available. Otherwise, runs "Off"."""

    @classmethod
    def dict(cls) -> Dict[str, "MultiGPUMode"]:
        """
        Return dictionary mapping from the mode name (in call string cases) to the enum value
        """
        out_dict = dict()
        for mode in MultiGPUMode:
            out_dict[mode.value] = mode
            out_dict[mode.name] = mode
            out_dict[stringcase.capitalcase(mode.name)] = mode
            out_dict[stringcase.camelcase(mode.name)] = mode
            out_dict[stringcase.lowercase(mode.name)] = mode
        out_dict[False] = MultiGPUMode.OFF
        return out_dict

AUTO = 'AUTO' class-attribute

Runs "DDP" if more than 1 GPU available. Otherwise, runs "Off".

DATA_PARALLEL = 'DP' class-attribute

Multiple GPUs, Synchronous

DISTRIBUTED_DATA_PARALLEL = 'DDP' class-attribute

Multiple GPUs, Asynchronous

OFF = 'Off' class-attribute

Single GPU Mode / CPU Mode

dict() classmethod

Return a dictionary mapping from the mode name (in all string casings) to the enum value.

Source code in src/super_gradients/common/data_types/enum/multi_gpu_mode.py
@classmethod
def dict(cls) -> Dict[str, "MultiGPUMode"]:
    """
    Return dictionary mapping from the mode name (in call string cases) to the enum value
    """
    out_dict = dict()
    for mode in MultiGPUMode:
        out_dict[mode.value] = mode
        out_dict[mode.name] = mode
        out_dict[stringcase.capitalcase(mode.name)] = mode
        out_dict[stringcase.camelcase(mode.name)] = mode
        out_dict[stringcase.lowercase(mode.name)] = mode
    out_dict[False] = MultiGPUMode.OFF
    return out_dict
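The lookup table that dict() builds can be exercised as follows. This sketch re-creates the class without the stringcase dependency, registering only the value, name, and lowercased-name aliases (the real implementation also adds Capital- and camel-cased forms):

```python
from enum import Enum
from typing import Dict


class MultiGPUMode(str, Enum):
    """Minimal replica for illustration; stringcase-derived aliases are omitted."""

    OFF = "Off"
    DATA_PARALLEL = "DP"
    DISTRIBUTED_DATA_PARALLEL = "DDP"
    AUTO = "AUTO"

    @classmethod
    def dict(cls) -> Dict[str, "MultiGPUMode"]:
        out_dict = {}
        for mode in cls:
            out_dict[mode.value] = mode         # e.g. "DP"
            out_dict[mode.name] = mode          # e.g. "DATA_PARALLEL"
            out_dict[mode.name.lower()] = mode  # e.g. "data_parallel"
        out_dict[False] = cls.OFF               # a falsy config value maps to OFF
        return out_dict


modes = MultiGPUMode.dict()
assert modes["DDP"] is MultiGPUMode.DISTRIBUTED_DATA_PARALLEL
assert modes["data_parallel"] is MultiGPUMode.DATA_PARALLEL
assert modes[False] is MultiGPUMode.OFF
```

Registering several spellings of each mode lets user-facing configuration accept whichever casing is natural, while the trainer resolves all of them to a single enum member.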

StrictLoad

Bases: Enum

Wrapper for adding more functionality to torch's strict parameter in load_state_dict().

Source code in src/super_gradients/common/data_types/enum/strict_load.py
class StrictLoad(Enum):
    """Wrapper for adding more functionality to torch's strict parameter in load_state_dict()."""

    OFF = False
    """Native torch "strict=False" behaviour. See nn.Module.load_state_dict() documentation for more details."""

    ON = True
    """Native torch "strict=True" behaviour. See nn.Module.load_state_dict() documentation for more details."""

    NO_KEY_MATCHING = "no_key_matching"
    """Allows the usage of SuperGradients' adapt_checkpoint function, which loads a checkpoint by matching each
    layer's shape and bypasses strict name matching (i.e. disregards the state_dict keys).
    This implementation assumes the order of layers in the state_dict and the model is the same, since it goes
    layer by layer and, as the name suggests, relies only on the index of each weight rather than on key matching.
    """

    KEY_MATCHING = "key_matching"
    """Loose loading strategy that copies the checkpoint's state dict into the model only for keys they share,
    skipping layers whose tensor shapes differ between the two for the same key."""

KEY_MATCHING = 'key_matching' class-attribute

Loose loading strategy that copies the checkpoint's state dict into the model only for keys they share, skipping layers whose tensor shapes differ between the two for the same key.

NO_KEY_MATCHING = 'no_key_matching' class-attribute

Allows the usage of SuperGradients' adapt_checkpoint function, which loads a checkpoint by matching each layer's shape and bypasses strict name matching (i.e. disregards the state_dict keys). This implementation assumes the order of layers in the state_dict and the model is the same, since it goes layer by layer and, as the name suggests, relies only on the index of each weight rather than on key matching.

OFF = False class-attribute

Native torch "strict=False" behaviour. See nn.Module.load_state_dict() documentation for more details.

ON = True class-attribute

Native torch "strict=True" behaviour. See nn.Module.load_state_dict() documentation for more details.
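The two loose strategies can be contrasted with a toy sketch in which tuples stand in for tensor shapes; the helper names below are hypothetical illustrations, not part of SuperGradients:

```python
# Toy "state dicts": key -> tensor shape; tuples stand in for real tensors.
model_sd = {"conv1.weight": (16, 3, 3, 3), "fc.weight": (10, 128)}
ckpt_sd = {"backbone.conv.weight": (16, 3, 3, 3), "fc.weight": (10, 128)}


def load_key_matching(model_sd, ckpt_sd):
    """KEY_MATCHING sketch: copy only keys present in both dicts whose
    shapes agree; everything else is skipped."""
    return {k: v for k, v in ckpt_sd.items() if model_sd.get(k) == v}


def load_no_key_matching(model_sd, ckpt_sd):
    """NO_KEY_MATCHING sketch: pair weights purely by position, so a
    renamed layer still loads, provided the layer order matches."""
    return dict(zip(model_sd.keys(), ckpt_sd.values()))


# Key matching only recovers the layer whose name and shape both agree.
assert load_key_matching(model_sd, ckpt_sd) == {"fc.weight": (10, 128)}
# Index matching recovers both layers despite the renamed conv key.
assert load_no_key_matching(model_sd, ckpt_sd) == {
    "conv1.weight": (16, 3, 3, 3),
    "fc.weight": (10, 128),
}
```

This is why NO_KEY_MATCHING is only safe when the checkpoint and model define their layers in the same order, while KEY_MATCHING tolerates reordering but silently skips renamed or reshaped layers.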