
On Compositions of Transformations in Contrastive Self-Supervised Learning


Abstract:

In the image domain, excellent representations can be learned by inducing invariance to content-preserving transformations via noise contrastive learning. In this paper, we generalize contrastive learning to a wider set of transformations, and their compositions, for which either invariance or distinctiveness is sought. We show that it is not immediately obvious how existing methods such as SimCLR can be extended to do so. Instead, we introduce a number of formal requirements that all contrastive formulations must satisfy, and propose a practical construction which satisfies these requirements. In order to maximise the reach of this analysis, we express all components of noise contrastive formulations as the choice of certain generalized data transformations (GDTs), including data sampling. We then consider videos as an example of data in which a large variety of transformations are applicable, accounting for the extra modalities (we analyze audio and text) and the dimension of time. We find that being invariant to certain transformations and distinctive to others is critical to learning effective video representations, improving the state of the art on multiple benchmarks by a large margin, and even surpassing supervised pretraining. Code and pretrained models are available.
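
To make the invariance-versus-distinctiveness idea concrete, below is a minimal InfoNCE-style sketch in PyTorch. It is illustrative only, not the authors' released implementation: the function name gdt_nce_loss, the grouping of views into a single positive and a set of negatives, and all shapes are assumptions made for this example.

```python
# Minimal sketch (not the paper's released code): an InfoNCE-style loss in
# which each transformation is declared either "invariant" (its view is a
# positive the anchor is pulled toward) or "distinctive" (its view is
# pushed away as a negative). Names and shapes are illustrative.
import torch
import torch.nn.functional as F

def gdt_nce_loss(anchor, positive, negatives, temperature=0.07):
    """anchor:    (N, D) embeddings of the original clips.
    positive:  (N, D) views under transformations we seek invariance to
               (e.g. a small temporal shift of the same clip).
    negatives: (N, K, D) views under transformations we seek
               distinctiveness to (e.g. time reversal), plus other clips.
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    pos = (a * p).sum(dim=-1, keepdim=True) / temperature   # (N, 1)
    neg = torch.einsum('nd,nkd->nk', a, n) / temperature    # (N, K)
    logits = torch.cat([pos, neg], dim=1)                   # (N, 1+K)
    # The positive occupies column 0, so the target class is 0 per row.
    labels = torch.zeros(a.size(0), dtype=torch.long, device=a.device)
    return F.cross_entropy(logits, labels)

# Toy usage: 8 clips with 128-d embeddings, one invariant view each and
# 4 distinctive/negative views each.
z  = torch.randn(8, 128)
zp = torch.randn(8, 128)     # e.g. time-shifted view (invariance sought)
zn = torch.randn(8, 4, 128)  # e.g. time-reversed view + other clips
loss = gdt_nce_loss(z, zp, zn)
```

Deciding which transformed views land in positive versus negatives is precisely the per-transformation invariance-or-distinctiveness choice that the abstract argues is critical for learning effective video representations.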
Date of Conference: 10-17 October 2021
Date Added to IEEE Xplore: 28 February 2022

Conference Location: Montreal, QC, Canada
