Understanding Self-supervised Learning with Dual Deep Networks

Abstract

We propose a novel theoretical framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks (e.g., SimCLR). First, we prove that in each SGD update of SimCLR with various loss functions, including simple contrastive loss, soft Triplet loss and InfoNCE loss, the weights at each layer are updated by a covariance operator that specifically amplifies initial random selectivities that vary across data samples but survive averages over data augmentations. To further study what role the covariance operator plays and which features are learned in such a process, we model the data generation and augmentation processes through a hierarchical latent tree model (HLTM) and prove that the hidden neurons of deep ReLU networks can learn the latent variables in the HLTM, despite the fact that the network receives no direct supervision from these unobserved latent variables. This leads to a provable emergence of hierarchical features through the amplification of initially random selectivities by contrastive SSL. Extensive numerical studies justify our theoretical findings. Code is released at https://github.com/facebookresearch/luckmatters/tree/master/ssl.
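
For reference, below is a minimal NumPy sketch of the InfoNCE (NT-Xent) loss that the abstract mentions as one of the SimCLR-style objectives. The function name info_nce_loss, the temperature value, and the implementation details are illustrative assumptions for exposition only, not the paper's released code.

# Illustrative InfoNCE (NT-Xent) loss for SimCLR-style contrastive SSL.
# Not taken from the paper's repository; a minimal sketch under the usual
# two-augmented-views setup described in the abstract.
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same batch."""
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)          # (2N, dim)
    sim = z @ z.T / temperature                   # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-pairs
    n = z1.shape[0]
    # The positive for sample i is its other augmented view.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Example: random embeddings for a batch of 4 samples.
rng = np.random.default_rng(0)
print(info_nce_loss(rng.normal(size=(4, 16)), rng.normal(size=(4, 16))))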


Publication: arXiv e-prints
Pub Date: October 2020
DOI: 10.48550/arXiv.2010.00578
arXiv: arXiv:2010.00578
Bibcode: 2020arXiv201000578T
Keywords:
  • Computer Science - Machine Learning
  • Computer Science - Artificial Intelligence
  • Statistics - Machine Learning
Full Text Sources: Preprint
