
Computer Science > Computation and Language

arXiv:2401.03590 (cs)
[Submitted on 7 Jan 2024 (v1), last revised 5 Jun 2024 (this version, v2)]

Title: Building Efficient and Effective OpenQA Systems for Low-Resource Languages

Abstract: Question answering (QA) is the task of answering questions posed in natural language with free-form natural language answers extracted from a given passage. In the OpenQA variant, only the question text is given; the system must retrieve relevant passages from an unstructured knowledge source and use them to produce answers, as mainstream QA systems on the Web do. QA systems are currently mostly limited to English because large-scale labeled QA datasets are scarce in non-English languages. In this paper, we show that effective, low-cost OpenQA systems can be developed for low-resource contexts. The key ingredients are (1) weak supervision using machine-translated labeled datasets and (2) a relevant unstructured knowledge source in the target language context. Furthermore, we show that only a few hundred gold assessment examples are needed to reliably evaluate these systems. We apply our method to Turkish as a challenging case study, since English and Turkish are typologically very distinct and Turkish has limited resources for QA. We present SQuAD-TR, a machine translation of SQuAD2.0, and we build our OpenQA system by adapting ColBERT-QA and retraining it over Turkish resources and SQuAD-TR, using two versions of Wikipedia dumps spanning two years. We obtain a performance improvement of 24-32% in the Exact Match (EM) score and 22-29% in the F1 score compared to the BM25-based and DPR-based baseline QA reader models. Our results show that SQuAD-TR makes OpenQA feasible for Turkish, and we hope this encourages researchers to build OpenQA systems in other low-resource languages. We make all the code, models, and the dataset publicly available at this https URL.
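The retrieve-then-read pipeline the abstract describes can be illustrated with the BM25-based baseline it mentions: rank passages from an unstructured knowledge source against the question, then hand the top passages to a reader model for span extraction. This is a minimal sketch on toy data, not the paper's implementation; the function names and the simple whitespace-plus-regex tokenizer are illustrative assumptions, and the learned ColBERT retriever and reader are replaced here by Okapi BM25 scoring and a placeholder comment.

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Lowercase word tokenizer (illustrative; real systems use subword tokenizers)."""
    return re.findall(r"\w+", text.lower())


def bm25_score(query_terms, doc_terms, all_docs, k1=1.5, b=0.75):
    """Okapi BM25 score of one document for a query."""
    n_docs = len(all_docs)
    avgdl = sum(len(d) for d in all_docs) / n_docs
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in all_docs if term in d)
        if df == 0:
            continue  # term absent from the collection contributes nothing
        idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
        denom = tf[term] + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * tf[term] * (k1 + 1) / denom
    return score


def retrieve(question, passages, top_k=1):
    """Return the top_k passages for a question; a trained reader model
    would then extract an answer span from these passages."""
    query = tokenize(question)
    docs = [tokenize(p) for p in passages]
    ranked = sorted(range(len(docs)),
                    key=lambda i: bm25_score(query, docs[i], docs),
                    reverse=True)
    return [passages[i] for i in ranked[:top_k]]


passages = [
    "Ankara is the capital of Turkey.",
    "Paris is the capital of France.",
]
print(retrieve("What is the capital of Turkey?", passages))
```

In the paper's setting, the passage collection would be a Turkish Wikipedia dump, the retriever a retrained ColBERT model, and the reader a model fine-tuned on the machine-translated SQuAD-TR data.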
Subjects: Computation and Language (cs.CL)
Report number: KNOSYS_112243
Cite as: arXiv:2401.03590 [cs.CL]
 (or arXiv:2401.03590v2 [cs.CL] for this version)
 https://doi.org/10.48550/arXiv.2401.03590
Journal reference: Knowledge-Based Systems, Vol. 302, p. 112243, 2024
Related DOI: https://doi.org/10.1016/j.knosys.2024.112243

Submission history

From: Emrah Budur
[v1] Sun, 7 Jan 2024 22:11:36 UTC (289 KB)
[v2] Wed, 5 Jun 2024 03:13:31 UTC (310 KB)



