PyTorch Loss Functions

A loss function is used to calculate the loss, a measure of how well (or rather, how badly) our model is performing. The loss function, together with an optimizer, is used to adjust the parameters of the neural network model. In this chapter of the PyTorch tutorial, you will learn about the built-in loss functions available in the PyTorch library and how to use them.

During training, the objective is to fine-tune the parameters of the model to minimize the loss. During validation and testing, the loss function is used to measure how well the model is performing.

Importing Loss Functions

In the PyTorch library, the loss functions are located in the torch.nn module.

from torch import nn

You need to create an instance of the loss function that you want to use.

Example

In this example, we create an instance of the MSELoss class, which is used to calculate the Mean Squared Error loss.

loss_function = nn.MSELoss()
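Loss classes also accept a reduction argument that controls how the per-element losses are aggregated: 'mean' (the default), 'sum', or 'none'. A quick sketch (the tensors here are made-up example values):

```python
import torch
from torch import nn

pred = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.0, 2.0, 5.0])

# Default reduction averages the squared errors: (0 + 0 + 4) / 3
mean_loss = nn.MSELoss()(pred, target)

# reduction="sum" adds them up instead: 0 + 0 + 4
sum_loss = nn.MSELoss(reduction="sum")(pred, target)

print(mean_loss.item())  # 1.3333...
print(sum_loss.item())   # 4.0
```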

The torch.nn module contains classes for all the commonly used losses and many more. The most commonly used ones are:

CrossEntropyLoss: Used to calculate the loss for a classification problem. It is also known as log loss.
MSELoss: Used to calculate the loss for a regression problem when there are few outliers. It calculates the Mean Squared Error, also known as the squared L2 norm.
L1Loss: Used to calculate the loss for a regression problem when there are many outliers. It calculates the Mean Absolute Error, also known as the L1 norm.
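The three losses above expect different inputs: CrossEntropyLoss takes raw logits of shape (batch, num_classes) with integer class indices as targets, while MSELoss and L1Loss compare two tensors of the same shape. A short sketch with made-up example values:

```python
import torch
from torch import nn

# Classification: raw (unnormalized) logits and an integer class index
logits = torch.tensor([[2.0, 0.5, 0.1]])
label = torch.tensor([0])
ce = nn.CrossEntropyLoss()(logits, label)

# Regression: predictions and targets of the same shape
pred = torch.tensor([2.5, 0.0])
target = torch.tensor([3.0, -1.0])
mse = nn.MSELoss()(pred, target)   # mean of squared errors: (0.25 + 1) / 2
mae = nn.L1Loss()(pred, target)    # mean of absolute errors: (0.5 + 1) / 2

print(ce.item(), mse.item(), mae.item())
```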

Forward Pass

In the forward pass, the loss function is used to calculate the loss. You can calculate the loss by calling the loss object and passing it the model output and the actual labels.

Example

We will calculate the loss using the loss function we created earlier. Here, output is the model output and target is the actual label for the input.

loss = loss_function(output, target)
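Put together, a minimal end-to-end sketch of the forward pass might look like this (the linear model, tensor shapes, and random data are illustrative assumptions, not prescribed by the tutorial):

```python
import torch
from torch import nn

torch.manual_seed(0)               # reproducibility for this sketch

model = nn.Linear(3, 1)            # a tiny illustrative model
loss_function = nn.MSELoss()

inputs = torch.randn(4, 3)         # a batch of 4 samples
target = torch.randn(4, 1)         # ground-truth labels

output = model(inputs)             # forward pass through the model
loss = loss_function(output, target)

print(loss.item())                 # the loss is a single scalar value
```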

Backward Pass

In the backward pass, we calculate the gradient of the loss with respect to each parameter of the model. You can compute these gradients by calling the backward() method on the loss returned by loss_function.

Example

loss.backward()
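Calling backward() populates the .grad attribute of every parameter that contributed to the loss. A small sketch (again using an illustrative linear model and random data):

```python
import torch
from torch import nn

model = nn.Linear(3, 1)
loss_function = nn.MSELoss()

output = model(torch.randn(4, 3))
loss = loss_function(output, torch.randn(4, 1))

assert model.weight.grad is None   # no gradients before backward()

loss.backward()                    # compute d(loss)/d(parameter)

# Each parameter now holds a gradient of the same shape as itself
print(model.weight.grad.shape)     # torch.Size([1, 3])
print(model.bias.grad.shape)       # torch.Size([1])
```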

More on Loss

Loss is a numeric value that is a function of the predicted output of the model and the ground truth, for a particular set of model parameters. Put simply, it is a way to measure how different the actual output is from the target output. Therefore, a loss function can be defined mathematically as:

loss = ƒ(actual output, target output)
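For MSELoss, for example, the function ƒ is the mean of the squared differences, which we can verify by computing it by hand (the tensors are made-up example values):

```python
import torch
from torch import nn

output = torch.tensor([1.0, 4.0])
target = torch.tensor([2.0, 2.0])

builtin = nn.MSELoss()(output, target)
manual = ((output - target) ** 2).mean()   # f(actual, target) for MSE

print(builtin.item(), manual.item())       # both 2.5
```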

The aim when training the model is to iteratively update the model parameters so that the loss is minimized. This is achieved through the backpropagation algorithm. The loss is back-propagated from the output layer to the input layer, and gradients are calculated for each parameter. These gradients are then used to update the model parameters.
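This forward pass / backward pass / update cycle can be sketched as a minimal training loop. The toy model, data, optimizer choice, and learning rate below are illustrative assumptions, not part of the original text:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Toy data: learn the mapping y = 2x (illustrative)
x = torch.randn(16, 1)
y = 2 * x

model = nn.Linear(1, 1)
loss_function = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    optimizer.zero_grad()            # clear gradients from the last step
    output = model(x)                # forward pass
    loss = loss_function(output, y)  # compute the loss
    loss.backward()                  # backward pass: compute gradients
    optimizer.step()                 # update the model parameters

print(loss.item())                   # the loss should now be close to 0
```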