PyTorch CUDA

You can accelerate the training of your model by using hardware accelerators such as Graphics Processing Units (GPUs). If you have one or more NVIDIA GPUs, PyTorch can use them through the Compute Unified Device Architecture (CUDA) API. In this chapter of the PyTorch tutorial, you will learn how to use CUDA to accelerate training on a GPU.

Checking CUDA Availability

Before you start using CUDA, you need to check whether it is available in your environment. You can do this with the torch.cuda.is_available() function.

import torch

# Checking if CUDA is available
torch.cuda.is_available()

The function returns True if CUDA is available, otherwise it returns False.
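If CUDA is available, you can also query basic details about the visible GPUs using the standard torch.cuda helpers; a small sketch (the printed values depend on your hardware):

if torch.cuda.is_available():
    # Number of CUDA-capable GPUs visible to PyTorch
    print(torch.cuda.device_count())
    # Name of the GPU at index 0
    print(torch.cuda.get_device_name(0))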


Moving to CUDA

If CUDA is available in your environment, you can move your tensors and models to the GPU by calling the to() method on them and specifying the device parameter.

Example

In this example, we move a tensor from the CPU to the GPU. Tensor.to() returns a copy of the tensor on the target device, so assigning the result back to the same variable name effectively moves the tensor to the GPU.

# Create a tensor
tensor_1 = torch.tensor([1, 2, 3, 4])

# Copy the tensor to the GPU, reusing the same variable name
tensor_1 = tensor_1.to(device='cuda')

Similarly, you can move your Neural Network model to the GPU.
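The examples below assume a model named mynet already exists; any torch.nn.Module works. As a hypothetical stand-in, it could be as simple as:

import torch.nn as nn

# A hypothetical stand-in for the mynet model used in these examples
mynet = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 1)
)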

Example

In this example, we move our Neural Network model mynet to the GPU. Unlike Tensor.to(), Module.to() modifies the model in place and returns it, so there is no need to assign the result to a new variable.

mynet.to(device='cuda')

You can check the device on which a tensor or model currently resides.

Example

For a tensor, this can be achieved by checking the device attribute of your tensor.

tensor_2 = torch.tensor([5, 6, 7, 8])

# Print the device on which the tensor is stored
print(tensor_2.device)
# Output: device(type='cpu')

# Moving the tensor to the GPU
tensor_2 = tensor_2.to(device='cuda')

# Print the device on which the tensor is stored
print(tensor_2.device)
# Output: device(type='cuda', index=0)
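Keep in mind that an operation requires all of its tensor operands to live on the same device; mixing a CPU tensor with a CUDA tensor raises a RuntimeError. For example:

cpu_tensor = torch.tensor([1, 2, 3, 4])
gpu_tensor = torch.tensor([1, 2, 3, 4], device='cuda')

# cpu_tensor + gpu_tensor   # RuntimeError: operands are on different devices

# Move both operands to the same device first
result = cpu_tensor.to(device='cuda') + gpu_tensor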

Example

A model consists of many layers, each with its own set of parameters, so you check the device on which the model's parameters exist. In this example, we check the device on which our Neural Network model mynet resides by inspecting the device of its parameters.

# Print the device on which mynet exists
print(next(mynet.parameters()).device)
# Output: device(type='cpu')

# Moving the model to the GPU
mynet.to(device='cuda')

# Print the device on which mynet exists
print(next(mynet.parameters()).device)
# Output: device(type='cuda', index=0)
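You can move a model or tensor back to the CPU in the same way. This is necessary, for instance, before converting a CUDA tensor to a NumPy array, since .numpy() only works on CPU tensors:

# Move the model back to the CPU (Module.to() works in place)
mynet.to(device='cpu')

# A CUDA tensor must be moved to the CPU before calling .numpy()
array = tensor_2.cpu().numpy()
print(array)
# Output: [5 6 7 8]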

Notice that PyTorch stores tensors and models on the CPU by default; to use a GPU, you have to move them there explicitly. However, writing code that checks whether CUDA is available and moves your tensors and models to the GPU accordingly can get repetitive. This step can be simplified; take a look at the example below.

Example

In this example, we set device to 'cuda' if CUDA is available and to 'cpu' otherwise. We can then move all tensors and models to device: if CUDA is available they end up on the GPU, otherwise they stay on the CPU.

# Assigning the value of device
if torch.cuda.is_available():
    device = 'cuda'
else:
    device = 'cpu'

# Moving the Neural Network model to the available device
mynet.to(device=device)

# Moving a tensor to the available device
tensor_3 = torch.tensor([9, 10, 11, 12])
tensor_3 = tensor_3.to(device=device)
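The same device selection is often written as a one-liner using torch.device, which works anywhere a device is expected:

# Equivalent one-line device selection
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')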

Now you can move all your models and tensors to device before training without worrying about whether CUDA is available in the environment where your code runs. This way, training will take advantage of a GPU whenever one is available.
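Putting it all together, a typical training loop moves the model to device once before training and moves each batch of data inside the loop. Below is a minimal sketch using the hypothetical mynet stand-in from earlier; the synthetic dataset, loss function, and optimizer are placeholders you would replace with your own:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hypothetical synthetic data; replace with your own dataset
dataset = TensorDataset(torch.randn(100, 4), torch.randn(100, 1))
dataloader = DataLoader(dataset, batch_size=10)

# Move the model to the available device once, before training
mynet.to(device=device)

criterion = nn.MSELoss()
optimizer = torch.optim.SGD(mynet.parameters(), lr=0.01)

for inputs, labels in dataloader:
    # Move each batch to the same device as the model
    inputs = inputs.to(device=device)
    labels = labels.to(device=device)

    optimizer.zero_grad()
    outputs = mynet(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()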