Computer Science > Computer Vision and Pattern Recognition

arXiv:2406.18070 (cs)
[Submitted on 26 Jun 2024 (v1), last revised 1 Jul 2024 (this version, v4)]

Title: EgoVideo: Exploring Egocentric Foundation Model and Downstream Adaptation

Abstract: In this report, we present our solutions to the EgoVis Challenges in CVPR 2024, covering five tracks in the Ego4D challenge and three tracks in the EPIC-Kitchens challenge. Building upon the video-language two-tower model and leveraging our meticulously organized egocentric video data, we introduce a novel foundation model called EgoVideo. This model is specifically designed to cater to the unique characteristics of egocentric videos and provides strong support for our competition submissions. In the Ego4D challenges, we tackle tasks including Natural Language Queries, Step Grounding, Moment Queries, Short-term Object Interaction Anticipation, and Long-term Action Anticipation. In addition, we participate in the EPIC-Kitchens challenge, where we engage in the Action Recognition, Multiple Instance Retrieval, and Domain Adaptation for Action Recognition tracks. By adapting EgoVideo to these diverse tasks, we showcase its versatility and effectiveness across egocentric video analysis scenarios, demonstrating its powerful representation ability as an egocentric foundation model. Our codebase and pretrained models are publicly available at this https URL.
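
The abstract describes EgoVideo as being built on a video-language two-tower architecture. The sketch below is an illustrative, minimal PyTorch example of the general two-tower pattern (separate video and text encoders aligned with a symmetric contrastive loss), not the authors' implementation; all class names, toy encoders, and dimensions are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VideoTower(nn.Module):
    """Toy video encoder: per-frame linear projection + temporal mean pooling."""
    def __init__(self, frame_dim: int, embed_dim: int):
        super().__init__()
        self.proj = nn.Linear(frame_dim, embed_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, frame_dim)
        return self.proj(frames).mean(dim=1)  # (batch, embed_dim)


class TextTower(nn.Module):
    """Toy text encoder: token embeddings + mean pooling."""
    def __init__(self, vocab_size: int, embed_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len)
        return self.embed(token_ids).mean(dim=1)  # (batch, embed_dim)


class TwoTowerModel(nn.Module):
    """Video-language two-tower model with a CLIP-style symmetric contrastive loss."""
    def __init__(self, frame_dim=512, vocab_size=10000, embed_dim=256):
        super().__init__()
        self.video_tower = VideoTower(frame_dim, embed_dim)
        self.text_tower = TextTower(vocab_size, embed_dim)
        self.logit_scale = nn.Parameter(torch.tensor(1.0))

    def forward(self, frames, token_ids):
        v = F.normalize(self.video_tower(frames), dim=-1)
        t = F.normalize(self.text_tower(token_ids), dim=-1)
        logits = self.logit_scale.exp() * v @ t.t()   # (batch, batch) similarities
        targets = torch.arange(logits.size(0), device=logits.device)
        # matched video-text pairs sit on the diagonal of the similarity matrix
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2


if __name__ == "__main__":
    model = TwoTowerModel()
    frames = torch.randn(4, 8, 512)               # 4 clips, 8 frames each
    token_ids = torch.randint(0, 10000, (4, 12))  # 4 captions, 12 tokens each
    print(model(frames, token_ids).item())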
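```

In practice, downstream adaptation of such a model typically keeps the pretrained towers and fine-tunes them (or lightweight heads on top of them) per task; the toy encoders above would be replaced by full video and text backbones.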
Comments: Champion solutions in the EgoVis CVPR 2024 workshop
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2406.18070 [cs.CV]
 (or arXiv:2406.18070v4 [cs.CV] for this version)
 https://doi.org/10.48550/arXiv.2406.18070
arXiv-issued DOI via DataCite

Submission history

From: Yifei Huang
[v1] Wed, 26 Jun 2024 05:01:37 UTC (26 KB)
[v2] Thu, 27 Jun 2024 06:26:12 UTC (116 KB)
[v3] Fri, 28 Jun 2024 03:50:19 UTC (116 KB)
[v4] Mon, 1 Jul 2024 02:44:11 UTC (116 KB)