Neural Network

Pretext Training Tasks for Self Supervised Learning in Computer Vision

In one of my recent posts, I talked about pretext training, in which the Machine Learning model is trained to perform an upstream task before being trained on the actual, downstream task. This helps reduce the amount of labeled data required to train a model and is widely used in Self Supervised Learning. …
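As a rough illustration (not taken from the post itself), here is a minimal PyTorch sketch of one common pretext task, rotation prediction: the backbone is first trained to predict which rotation was applied to an unlabeled image, and can then be reused with a separate head for the labeled downstream task. The architecture and rotation task here are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Shared backbone, reused for both the pretext and the downstream task.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
pretext_head = nn.Linear(16, 4)      # 4 classes: 0, 90, 180, 270 degree rotations
downstream_head = nn.Linear(16, 10)  # e.g. 10 labeled classes, trained later

images = torch.randn(8, 3, 32, 32)   # a batch of unlabeled images
k = torch.randint(0, 4, (8,))        # rotation applied to each image
rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                       for img, r in zip(images, k)])

# Pretext (self-supervised) step: the labels come from the rotation itself,
# so no human annotation is needed to train the backbone.
loss = nn.functional.cross_entropy(pretext_head(backbone(rotated)), k)
loss.backward()
```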


Leaky ReLU as an Activation Function in Neural Networks

The Rectified Linear Unit, also known as ReLU, overcame some of the serious disadvantages of earlier activation functions such as the Sigmoid and Hyperbolic Tangent, including the exploding and vanishing gradient problems, while being fast and simple to compute. However, despite all this, it was far …
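As a quick illustration (not from the post itself), a minimal NumPy sketch comparing ReLU with Leaky ReLU, assuming the commonly used slope of 0.01 for negative inputs:

```python
import numpy as np

def relu(x):
    # Plain ReLU: zeroes out all negative inputs.
    return np.maximum(0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: keeps a small slope (alpha) for negative inputs,
    # so the gradient never becomes exactly zero.
    return np.where(x > 0, x, alpha * x)

x = np.array([-3.0, -0.5, 0.0, 2.0])
print(relu(x))        # [0.  0.  0.  2.]
print(leaky_relu(x))  # [-0.03  -0.005  0.     2.   ]
```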
