PyTorch Layers

A layer is the fundamental building block of any neural network model; a neural network can be more or less considered a stack of layers. PyTorch has built-in classes for the most commonly used layers. In this chapter of the PyTorch tutorial, you will learn about the various layers available in the PyTorch library and how to use them.

Importing torch.nn

All the layers in the PyTorch library live in the torch.nn module, so you must import torch.nn before using these layers to build your neural network.

from torch import nn

Now you are ready to use any layer PyTorch provides by creating an instance of its class and passing the required arguments. We will take a look at some of the most basic layers in PyTorch and how to use them.


Dense Layer

A fully connected (dense) layer is one of the most commonly used layers in neural networks. You can create a dense layer by instantiating the nn.Linear class, providing the number of inputs to the layer and the number of outputs from it.

Example

In this example, we create a dense layer that takes 300 inputs and produces 100 outputs. Equivalently, this layer has 100 neurons, each receiving input from a previous layer of 300 neurons.

# dense_layer1 is a dense layer with 300 inputs and 100 outputs
dense_layer1 = nn.Linear(300, 100)
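
As a quick check (a minimal sketch; the batch size here is illustrative), you can pass a batch of inputs through the layer and inspect the output shape. nn.Linear expects the last dimension of its input to match the declared number of inputs:

import torch

# a batch of 16 samples, each with 300 features
x = torch.randn(16, 300)
y = dense_layer1(x)
print(y.shape)  # torch.Size([16, 100])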

Activation Layer

An activation layer applies an activation function to each of its inputs, so the number of outputs is the same as the number of inputs.

Example

In this example, we create an activation layer that uses the ReLU activation function.

# creates a ReLU activation layer that is applied to the outputs of the preceding layer
relu = nn.ReLU()
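
To see its effect (an illustrative sketch), apply the layer to a tensor; ReLU replaces every negative value with zero and leaves non-negative values unchanged:

import torch

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])
print(relu(x))  # negative values are clamped to zero: [0.0, 0.0, 0.0, 1.5]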

Note: Activation layers are typically used together with the Sequential (nn.Sequential) API in PyTorch, as in the sketch below.
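
Here is a minimal sketch of that pattern (the layer sizes are illustrative): a dense layer whose outputs are passed through a ReLU activation layer inside nn.Sequential:

# a dense layer with 300 inputs and 100 outputs, followed by ReLU
model = nn.Sequential(
    nn.Linear(300, 100),
    nn.ReLU(),
)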


Convolutional Layer

Convolutional layers are, along with pooling layers, the most commonly used layers in Convolutional Neural Networks (CNNs). They extract features from local regions of the input. PyTorch provides various classes to perform convolutions over inputs of different dimensionalities.

Example

In this example, we create a 2D convolutional layer with a kernel size of 5×5 that takes 32 feature maps from the previous convolutional layer and produces 64 feature maps. Equivalently, the layer contains 64 trainable kernels of size 5×5, each of which is applied across all 32 input feature maps. This convolutional layer has a stride of 1 and padding of 0.

# Creates a convolutional layer with a kernel size of 5x5
# The layer takes 32 input feature maps and outputs 64 feature maps
# stride is set to 1 and padding to 0
conv_layer_1 = nn.Conv2d(32, 64, (5, 5), stride=1, padding=0)
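
To illustrate (a sketch; the input size is assumed for demonstration), pass a 32-channel input through the layer. With a 5x5 kernel, a stride of 1, and no padding, each spatial dimension shrinks by 4:

import torch

# one 32-channel input of size 28x28 (illustrative)
x = torch.randn(1, 32, 28, 28)
y = conv_layer_1(x)
print(y.shape)  # torch.Size([1, 64, 24, 24])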

Pooling Layer

Pooling layers are, along with convolutional layers, the most commonly used layers in Convolutional Neural Networks (CNNs). They reduce the dimensions of the feature maps while preserving as much information as possible. PyTorch provides various classes to perform different types of pooling over inputs of different dimensionalities.

Example

In this example, we create a 2D max-pooling layer with a kernel size of 3×3, a stride of 3, and no padding.

# Creates a max-pooling layer with a kernel size of 3x3, a stride of 3, and no padding
max_pool_1 = nn.MaxPool2d((3, 3), stride=3, padding=0)
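
As a quick sketch (the input size is assumed for demonstration), a 3x3 window with a stride of 3 reduces each spatial dimension by a factor of 3:

import torch

# 64 feature maps of size 24x24 (illustrative)
x = torch.randn(1, 64, 24, 24)
y = max_pool_1(x)
print(y.shape)  # torch.Size([1, 64, 8, 8])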

Commonly used Layers

The table below lists some commonly used layers apart from the ones we have already discussed. It does not include the activation layers, which were covered above.

Layer Class    | Brief Description
nn.Dropout     | Randomly zeroes some of the input elements with a given probability during training.
nn.AvgPool2d   | Applies average pooling over the input.
nn.BatchNorm2d | Applies batch normalization over a batch of multi-channel images, e.g. a batch of RGB images.
nn.RNN         | Creates a recurrent neural network layer.
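
To briefly illustrate one entry from the table (a minimal sketch; the probability and input are illustrative), nn.Dropout zeroes elements only in training mode and scales the surviving elements by 1/(1 - p):

import torch

drop = nn.Dropout(p=0.5)  # modules are created in training mode by default
x = torch.ones(4)
print(drop(x))  # roughly half the elements are zeroed; survivors are scaled to 2.0
drop.eval()     # switch to evaluation mode
print(drop(x))  # dropout is a no-op in eval mode: tensor([1., 1., 1., 1.])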

You have learned about the various layers available in PyTorch. In the next chapter, you will learn how to use them to build neural networks.