helios.core.cuda

Classes

DisableCuDNNBenchmarkContext

Allow disabling CuDNN benchmark within a scope.

Functions

requires_cuda_support() → None

Ensure that CUDA support is found, or raise an exception otherwise.

Module Contents

helios.core.cuda.requires_cuda_support() → None

Ensure that CUDA support is found, or raise an exception otherwise.

Raises:

RuntimeError – if no CUDA support is found.
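The behavior described above can be sketched as follows. This is not the actual implementation: the `cuda_available` parameter is a hypothetical stand-in for the real CUDA check (presumably `torch.cuda.is_available()`), used here so the sketch is self-contained:

```python
def requires_cuda_support(cuda_available: bool) -> None:
    """Raise RuntimeError if no CUDA support is found (sketch).

    The ``cuda_available`` parameter is a hypothetical stand-in for
    the real check, likely ``torch.cuda.is_available()``.
    """
    if not cuda_available:
        raise RuntimeError("no CUDA support is found")
```

A typical use is to call this once at the start of a GPU-only code path so the failure is immediate and explicit rather than a later, harder-to-trace CUDA error.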

class helios.core.cuda.DisableCuDNNBenchmarkContext

Allow disabling CuDNN benchmark within a scope.

The intention is to temporarily disable CuDNN benchmark for specific purposes (such as validation or testing) and then restore it to its previous state upon leaving the scope. Note that if CUDA is not available, the scope does nothing.

Example

torch.backends.cudnn.benchmark = True # Enable CuDNN benchmark.
...
with DisableCuDNNBenchmarkContext():
    # Benchmark is disabled.
    print(torch.backends.cudnn.benchmark) # <- Prints False
    ...

print(torch.backends.cudnn.benchmark) # <- Prints True
__enter__() → None

Disable CuDNN benchmark.

__exit__(exc_type, exc_value, exc_traceback) → None

Restore CuDNN benchmark to its starting state.
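The save-and-restore pattern behind `__enter__`/`__exit__` can be sketched as follows. This is not the actual Helios implementation: the `backend` object below is a hypothetical stand-in for `torch.backends.cudnn` (with `cuda_available` standing in for `torch.cuda.is_available()`) so the sketch runs without PyTorch:

```python
from types import SimpleNamespace

# Hypothetical stand-in for torch.backends.cudnn; the real class
# presumably reads and writes torch.backends.cudnn.benchmark.
backend = SimpleNamespace(benchmark=True, cuda_available=True)

class DisableCuDNNBenchmarkContext:
    """Disable CuDNN benchmark within a scope, restoring it on exit (sketch)."""

    def __enter__(self) -> None:
        if backend.cuda_available:
            self._prev = backend.benchmark  # Remember the current state.
            backend.benchmark = False       # Disable benchmark inside the scope.

    def __exit__(self, exc_type, exc_value, exc_traceback) -> None:
        if backend.cuda_available:
            backend.benchmark = self._prev  # Restore the previous state.
```

Because restoration happens in `__exit__`, the previous benchmark state is restored even if the body of the `with` block raises an exception.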