Pre-trained language models have contributed to significant recent advances in NLP tasks. Multi-modal versions of BERT have since been developed, relying on heavy pre-training over vast corpora of aligned textual and visual data, and applied primarily to classification tasks such as VQA. In this paper, we are interested in evaluating the visual capabilities of BERT out-of-the-box, avoiding pre-training on supplementary data. We choose to study Visual Question Generation, a task of great interest for grounded dialog, which allows us to measure the impact of each modality (since the input can be visual and/or textual). Moreover, the generative aspect of the task requires an adaptation, since BERT is primarily designed as an encoder. We introduce BERT-gen, a BERT-based architecture for text generation that can leverage either mono- or multi-modal representations. The results reported under different configurations indicate an innate capacity of BERT-gen to adapt to multi-modal data and text generation, even with little data available, avoiding expensive pre-training. The proposed model obtains substantial improvements over the state of the art on two established VQG datasets.
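As a concrete illustration of the adaptation the abstract alludes to, the sketch below shows one generic way to repurpose an off-the-shelf BERT encoder as an autoregressive text generator with the Hugging Face transformers library. This is a minimal, hedged example: the model class (BertLMHeadModel), the checkpoint (bert-base-uncased), and the caption prompt are assumptions chosen for illustration, not the BERT-gen architecture itself, and the projection of visual features into the token-embedding space used in the multi-modal setting is not shown. Producing sensible questions would additionally require fine-tuning on VQG data.

```python
# Illustrative sketch (assumption, not the paper's BERT-gen implementation):
# repurposing BERT as a left-to-right generator via a causal attention mask.
import torch
from transformers import BertTokenizer, BertLMHeadModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# is_decoder=True makes the self-attention causal so the model can be used
# autoregressively; out of the box, BERT was not trained this way, so outputs
# are only meaningful after fine-tuning (e.g., on question generation data).
model = BertLMHeadModel.from_pretrained("bert-base-uncased", is_decoder=True)

# Textual context, e.g. an image caption; in the multi-modal configurations
# described in the abstract, visual region features would also be mapped into
# the same embedding space before decoding.
prompt = "a man riding a wave on top of a surfboard"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Greedy decoding of a short continuation (ideally a question about the scene).
    output_ids = model.generate(**inputs, max_new_tokens=20)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```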
Thomas Scialom, Patrick Bordes, Paul-Alexis Dray, Jacopo Staiano, and Patrick Gallinari. 2020. What BERT Sees: Cross-Modal Transfer for Visual Question Generation. In Proceedings of the 13th International Conference on Natural Language Generation, pages 327–337, Dublin, Ireland. Association for Computational Linguistics.
@inproceedings{scialom-etal-2020-bert,
    title = "What {BERT} Sees: Cross-Modal Transfer for Visual Question Generation",
    author = "Scialom, Thomas and
      Bordes, Patrick and
      Dray, Paul-Alexis and
      Staiano, Jacopo and
      Gallinari, Patrick",
    editor = "Davis, Brian and
      Graham, Yvette and
      Kelleher, John and
      Sripada, Yaji",
    booktitle = "Proceedings of the 13th International Conference on Natural Language Generation",
    month = dec,
    year = "2020",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.inlg-1.39/",
    doi = "10.18653/v1/2020.inlg-1.39",
    pages = "327--337",
}