Supervised Contrastive Learning
Created: 20 Apr 2023, 11:00 AM | Modified: =dateformat(this.file.mtime,"dd MMM yyyy, hh:mm a")
Tags: knowledge, GeneralDL
- The loss function is either the contrastive loss or the triplet loss.
- These are not the same loss, but they are often confused because many people use the term “contrastive” to refer to the triplet loss.
- Contrastive Loss is defined in the paper “Dimensionality Reduction by Learning an Invariant Mapping” (link) and works with similarity labels to learn a distance mapping.
- Triplet Loss is defined in the paper “FaceNet: A Unified Embedding for Face Recognition and Clustering” (link), and this is where the triplet concept appears, with an anchor and negative/positive samples defined with respect to the anchor. This is a form of contrastive learning, but it’s not the same as the contrastive loss.
- Many papers mistakenly use the name of one loss when they mean the other; the two are distinct.
- Note that these solve different problems: if you have known similarity labels for pairs, the contrastive loss is appropriate; if you have negative/positive samples defined relative to an anchor (as in face recognition, where a person’s identity defines the anchor), use the triplet loss.
- Both of these losses can be used to train a Siamese neural network, but they are mostly used for embedding learning.
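To make the distinction concrete, here is a minimal NumPy sketch of the two losses from the cited papers. Constant factors and batching are omitted, and the `y = 1` means-similar convention and the margin values are illustrative assumptions, not fixed by the papers:

```python
import numpy as np

def contrastive_loss(x1, x2, y, margin=1.0):
    # Contrastive loss (Hadsell et al., 2006), pairwise: similar pairs
    # (y=1) are pulled together; dissimilar pairs (y=0) are pushed apart
    # until they are at least `margin` away from each other.
    d = np.linalg.norm(x1 - x2)
    return y * d**2 + (1 - y) * max(0.0, margin - d)**2

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Triplet loss (FaceNet, 2015): the anchor-positive squared distance
    # should be smaller than the anchor-negative squared distance by at
    # least `margin`; the loss is hinged at zero once that holds.
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + margin)
```

Note the structural difference: the contrastive loss takes a pair plus a similarity label, while the triplet loss takes three embeddings and needs no label, because the anchor/positive/negative roles encode the relationship.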
- https://ai.stackexchange.com/a/36054
- https://towardsdatascience.com/triplet-loss-advanced-intro-49a07b7d8905