Computer Science > Computer Vision and Pattern Recognition
arXiv:2211.11733 (cs)
[Submitted on 21 Nov 2022 (v1), last revised 30 May 2023 (this version, v2)]
Title: Teaching Structured Vision&Language Concepts to Vision&Language Models
Authors: Sivan Doveh, Assaf Arbelle, Sivan Harary, Rameswar Panda, Roei Herzig, Eli Schwartz, Donghyun Kim, Raja Giryes, Rogerio Feris, Shimon Ullman, Leonid Karlinsky
Abstract: Vision and Language (VL) models have demonstrated remarkable zero-shot performance on a variety of tasks. However, some aspects of complex language understanding remain a challenge. We introduce the collective notion of Structured Vision&Language Concepts (SVLC), which includes object attributes, relations, and states that are present in the text and visible in the image. Recent studies have shown that even the best VL models struggle with SVLC. One possible way to fix this issue is to collect dedicated datasets for teaching each SVLC type, yet this may be expensive and time-consuming. Instead, we propose a more elegant data-driven approach to enhancing VL models' understanding of SVLCs that makes more effective use of existing VL pre-training datasets and does not require any additional data. While automatic understanding of image structure remains largely unsolved, language structure is much better modeled and understood, allowing for its effective use in teaching VL models. In this paper, we propose several techniques based on language-structure understanding that manipulate the textual part of off-the-shelf paired VL datasets. VL models trained with the updated data exhibit a significant improvement of up to 15% in their SVLC understanding, with only a mild degradation in their zero-shot capabilities, both when training from scratch and when fine-tuning a pre-trained model.
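To make the text-manipulation idea concrete, here is a minimal Python sketch of one such rule-based manipulation: swapping an attribute word in a caption to produce a "negative" text that no longer matches the image, which could then be paired with the original in a contrastive loss. The vocabulary and the helper name (ATTRIBUTE_CLASSES, make_negative_caption) are hypothetical illustrations, not the authors' implementation; the paper also explores more sophisticated language-structure and LLM-based manipulations.

```python
import random

# Illustrative, hand-written attribute vocabulary grouped by SVLC type.
# Hypothetical for this sketch; not taken from the authors' released code.
ATTRIBUTE_CLASSES = {
    "color": ["red", "blue", "green", "yellow", "black", "white"],
    "size": ["small", "large", "tiny", "huge"],
    "state": ["open", "closed", "full", "empty"],
}

def make_negative_caption(caption, rng=random.Random(0)):
    """Swap the first recognized attribute word for another word of the
    same class, yielding a caption that no longer matches the image."""
    tokens = caption.split()
    for i, tok in enumerate(tokens):
        word = tok.lower().strip(".,")
        for vocab in ATTRIBUTE_CLASSES.values():
            if word in vocab:
                alternatives = [w for w in vocab if w != word]
                tokens[i] = tok.replace(word, rng.choice(alternatives))
                return " ".join(tokens)
    return None  # no known attribute found; skip this sample

if __name__ == "__main__":
    pos = "a small red car parked next to an open door"
    print("positive:", pos)
    print("negative:", make_negative_caption(pos))  # e.g. "a huge red car ..."
```

Because such negatives differ from the original caption in exactly one structured detail, a model trained to prefer the original text over the manipulated one is pushed to attend to attributes, relations, and states rather than to a bag-of-words match.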
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2211.11733 [cs.CV] (or arXiv:2211.11733v2 [cs.CV] for this version)
DOI: https://doi.org/10.48550/arXiv.2211.11733
Journal reference: CVPR 2023
Submission history
From: Sivan Doveh
[v1] Mon, 21 Nov 2022 18:54:10 UTC (20,509 KB)
[v2] Tue, 30 May 2023 17:08:43 UTC (20,522 KB)