Pretext Training Tasks for Self Supervised Learning in Computer Vision
In one of my recent posts, I talked about pretext training, in which a machine learning model is first trained on an upstream task before being trained on the actual, downstream task. This reduces the amount of labeled data required to train the model and is widely used in Self Supervised Learning. …
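To make the idea concrete, one well-known pretext task in computer vision is rotation prediction: each image is rotated by a random multiple of 90 degrees, and the model learns to predict the rotation, so the labels come for free from the data itself. The sketch below (function name and shapes are illustrative, using NumPy stand-ins for real images) shows how such self-labeled training pairs can be generated.

```python
import numpy as np

def make_rotation_pretext_batch(images, rng=None):
    """Build a self-labeled batch for the rotation-prediction
    pretext task: each image is rotated by 0, 90, 180, or 270
    degrees, and the rotation index serves as a free label."""
    rng = rng or np.random.default_rng(0)
    rotated, labels = [], []
    for img in images:
        k = int(rng.integers(0, 4))      # number of quarter-turns
        rotated.append(np.rot90(img, k)) # rotate the image k times
        labels.append(k)                 # the label is the rotation itself
    return np.stack(rotated), np.array(labels)

# Example: 8 random 32x32 grayscale "images" as a stand-in dataset
imgs = np.random.default_rng(1).random((8, 32, 32))
x, y = make_rotation_pretext_batch(imgs)
```

A classifier pretrained to predict `y` from `x` learns useful visual features without any human annotation; its backbone can then be fine-tuned on the downstream task with far fewer labels.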