fairseq2.device¶
This module provides abstractions for managing PyTorch devices and handling CUDA contexts.
Interfaces¶
- class fairseq2.device.CudaContext[source]¶
  Bases: ABC
  Represents an interface for interacting with the CUDA runtime and device information.
Classes¶
- final class fairseq2.device.StandardCudaContext[source]¶
  Bases: CudaContext
  Represents the standard implementation of CudaContext.
Functions¶
- fairseq2.device.get_default_device() device [source]¶
Returns the default device of this process.
The default device is determined by the following precedence:
1. If the FAIRSEQ2_DEVICE environment variable is set, the specified device will be used.
2. If CUDA is enabled and the CUDA_VISIBLE_DEVICES environment variable contains a single device, the specified device will be used.
3. If CUDA is enabled and the LOCAL_RANK environment variable is set, the CUDA device at the specified index will be used.
4. Otherwise, CPU will be used.
- Raises:
  LocalRankOutOfRangeError – If the LOCAL_RANK environment variable is less than zero or exceeds the number of available devices.
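The precedence above can be sketched as a small decision function. This is a minimal illustration only: the helper name pick_default_device and its parameters are hypothetical, and the sketch mirrors the documented order rather than fairseq2's actual implementation (which inspects the real CUDA runtime).

```python
class LocalRankOutOfRangeError(ValueError):
    """Mirrors the documented error for an out-of-range LOCAL_RANK."""


def pick_default_device(env, cuda_enabled, cuda_device_count):
    """Hypothetical sketch of the documented device-selection precedence.

    ``env`` is a mapping of environment variables; returns a device
    string such as "cpu" or "cuda:1".
    """
    # 1. FAIRSEQ2_DEVICE always wins when set.
    device = env.get("FAIRSEQ2_DEVICE")
    if device is not None:
        return device

    if cuda_enabled:
        # 2. A single visible CUDA device maps to CUDA index 0.
        visible = env.get("CUDA_VISIBLE_DEVICES")
        if visible is not None and "," not in visible:
            return "cuda:0"

        # 3. LOCAL_RANK selects the CUDA device at that index.
        local_rank = env.get("LOCAL_RANK")
        if local_rank is not None:
            idx = int(local_rank)
            if idx < 0 or idx >= cuda_device_count:
                raise LocalRankOutOfRangeError(idx)
            return f"cuda:{idx}"

    # 4. Fall back to CPU.
    return "cpu"
```

For example, under this sketch, pick_default_device({"LOCAL_RANK": "1"}, True, 4) yields "cuda:1", while pick_default_device({"LOCAL_RANK": "9"}, True, 4) raises LocalRankOutOfRangeError.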
- fairseq2.device.get_current_device() device [source]¶
Returns the current device of the calling thread.
The current device of a thread can be changed by using a device as a context manager, as shown in the example below.
Note
PyTorch does not currently expose a public API to retrieve the current device of a thread. If such an API becomes available in the future, this function will act as an alias for it.
Warning
This function might impose a slight performance cost. Avoid calling it in hot code paths.
import torch

from fairseq2.device import get_current_device

# Default device used by PyTorch. Typically CPU.
default_device = torch.get_default_device()

assert get_current_device() == default_device

device = torch.device("cuda:0")

# Instruct PyTorch to use the specified device instead of the default
# device for tensor factory operations.
with device:
    assert get_current_device() == device