Self-Supervised Video Representation Learning via Latent Time Navigation

Authors

  • Di Yang (Inria, 2004 Rte des Lucioles, Valbonne, France; Université Côte d'Azur, 28 Av. de Valrose, Nice, France)
  • Yaohui Wang (Inria, 2004 Rte des Lucioles, Valbonne, France; Université Côte d'Azur, 28 Av. de Valrose, Nice, France; Shanghai AI Laboratory, 701 Yunjin Road, Shanghai, China)
  • Quan Kong (Woven Planet Holdings, 3-2-1 Nihonbashimuromachi, Chuo-ku, Tokyo, Japan)
  • Antitza Dantcheva (Inria, 2004 Rte des Lucioles, Valbonne, France; Université Côte d'Azur, 28 Av. de Valrose, Nice, France)
  • Lorenzo Garattoni (Toyota Motor Europe, 60 Av. du Bourget, Brussels, Belgium)
  • Gianpiero Francesca (Toyota Motor Europe, 60 Av. du Bourget, Brussels, Belgium)
  • François Brémond (Inria, 2004 Rte des Lucioles, Valbonne, France; Université Côte d'Azur, 28 Av. de Valrose, Nice, France)

DOI:

https://doi.org/10.1609/aaai.v37i3.25416

Keywords:

CV: Video Understanding & Activity Analysis

Abstract

Self-supervised video representation learning has typically aimed at maximizing the similarity between different temporal segments of one video, in order to enforce feature persistence over time. This discards pertinent information about temporal relationships, rendering actions such as 'enter' and 'leave' indistinguishable. To mitigate this limitation, we propose Latent Time Navigation (LTN), a time-parameterized contrastive learning strategy that is streamlined to capture fine-grained motions. Specifically, we maximize the representation similarity between different segments of one video, while keeping their representations time-aware along a subspace of the latent code spanned by an orthogonal basis that represents temporal changes. Our extensive experimental analysis suggests that learning video representations with LTN consistently improves the performance of action classification on fine-grained and human-oriented tasks (e.g., on the Toyota Smarthome dataset). In addition, we demonstrate that our proposed model, when pre-trained on Kinetics-400, generalizes well to the unseen real-world video benchmarks UCF101 and HMDB51, achieving state-of-the-art performance in action recognition.
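
To make the abstract's core mechanism concrete, below is a minimal, self-contained sketch of the idea in PyTorch: a latent code is shifted along a learned orthogonal "time" subspace by an amount depending on the temporal offset between two segments, and a standard InfoNCE loss is applied to the navigated pair. All names (TimeNavigator, coeff_mlp, n_basis) and the QR-based orthogonalization are illustrative assumptions for exposition, not the authors' implementation.

    # Illustrative sketch of a time-parameterized contrastive objective
    # in the spirit of Latent Time Navigation (LTN); not the paper's code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TimeNavigator(nn.Module):
        """Shifts a latent code along a learned orthogonal 'time' subspace."""

        def __init__(self, dim=128, n_basis=8):
            super().__init__()
            # Unconstrained parameters; an orthonormal basis is recovered
            # below via QR decomposition at each forward pass (assumption).
            self.raw_basis = nn.Parameter(torch.randn(dim, n_basis))
            # Small MLP mapping a scalar time offset to per-basis coefficients.
            self.coeff_mlp = nn.Sequential(
                nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, n_basis)
            )

        def forward(self, z, dt):
            # Columns of `basis` are orthonormal and span the time subspace.
            basis, _ = torch.linalg.qr(self.raw_basis)      # (dim, n_basis)
            coeffs = self.coeff_mlp(dt.unsqueeze(1))        # (B, n_basis)
            # Navigate: add a time-dependent displacement inside the subspace,
            # so temporal change is represented rather than collapsed away.
            return z + coeffs @ basis.t()                   # (B, dim)

    def ltn_contrastive_loss(z_a, z_b, dt, navigator, tau=0.1):
        """InfoNCE between a time-navigated anchor and the other segment."""
        z_a = F.normalize(navigator(z_a, dt), dim=1)
        z_b = F.normalize(z_b, dim=1)
        logits = z_a @ z_b.t() / tau                        # (B, B)
        targets = torch.arange(z_a.size(0), device=z_a.device)
        return F.cross_entropy(logits, targets)

    # Toy usage: embeddings of two segments of the same clip, offset by dt.
    nav = TimeNavigator(dim=128, n_basis=8)
    z1, z2 = torch.randn(4, 128), torch.randn(4, 128)
    dt = torch.rand(4)                                      # normalized offsets
    loss = ltn_contrastive_loss(z1, z2, dt, nav)
    loss.backward()

Under these assumptions, the matched pairs are still pulled together (feature persistence), while the time-conditioned displacement keeps temporally ordered actions such as 'enter' and 'leave' separable in the latent space.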
Published

2023-06-26

How to Cite

Yang, D., Wang, Y., Kong, Q., Dantcheva, A., Garattoni, L., Francesca, G., & Brémond, F. (2023). Self-Supervised Video Representation Learning via Latent Time Navigation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3118-3126. https://doi.org/10.1609/aaai.v37i3.25416

Section

AAAI Technical Track on Computer Vision III