ConClue: Conditional Clue Extraction for Multiple Choice Question Answering

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14809)

Included in the following conference series: International Conference on Document Analysis and Recognition (ICDAR 2024)


Abstract

The task of Multiple Choice Question Answering (MCQA) aims to identify the correct answer from a set of candidates, given a background passage and an associated question. Considerable research effort has been dedicated to this task, leveraging a diversity of semantic matching techniques to estimate the alignment among the answer, passage, and question. However, a key challenge arises because not all sentences in the passage contribute to answering the question; only a few supporting sentences (clues) are useful. Existing clue extraction methods are inefficient at identifying these supporting sentences, relying on resource-intensive algorithms or pseudo labels, or overlooking the semantic coherence of the original passage. Addressing this gap, this paper introduces a novel extraction approach, termed Conditional Clue extractor (ConClue), for MCQA. ConClue leverages the principles of Conditional Optimal Transport to identify clues by transporting the semantic meaning of one or several words from the original passage to selected words within the identified clues, under the prior condition of the question and answer. Empirical studies on several competitive benchmarks consistently demonstrate the superiority of the proposed method over traditional approaches, with a substantial average improvement of 1.1–2.5 absolute percentage points in answering accuracy.
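For readers less familiar with optimal transport, the sketch below illustrates the general principle behind such clue extraction: passage sentences are scored by how cheaply their words' semantic mass can be transported onto the question and answer words, using entropic (Sinkhorn) optimal transport. This is only a rough illustration, not the authors' ConClue model; the pre-computed word embeddings, uniform word masses, cosine-distance cost, and per-sentence aggregation are assumptions made here for clarity, and the conditioning on the question and answer is approximated simply by transporting toward their tokens.

# Illustrative sketch only (not the authors' ConClue model): score passage
# sentences as candidate clues via entropic optimal transport between passage
# word embeddings and question+answer word embeddings.
import numpy as np

def sinkhorn_plan(cost, a, b, epsilon=0.1, n_iters=200):
    """Entropic optimal-transport plan between source masses a and target masses b."""
    K = np.exp(-cost / epsilon)                 # Gibbs kernel, shape (n, m)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                       # scale to match column marginals b
        u = a / (K @ v)                         # scale to match row marginals a
    return u[:, None] * K * v[None, :]          # transport plan, shape (n, m)

def rank_clue_sentences(passage_emb, qa_emb, sentence_ids):
    """Rank passage sentences by how similar the question/answer words are
    onto which their words' semantic mass is transported (higher = more clue-like)."""
    p = passage_emb / np.linalg.norm(passage_emb, axis=1, keepdims=True)
    q = qa_emb / np.linalg.norm(qa_emb, axis=1, keepdims=True)
    sim = p @ q.T                               # cosine similarity, shape (n, m)
    cost = 1.0 - sim                            # cosine distance as transport cost
    a = np.full(len(p), 1.0 / len(p))           # uniform mass on passage words
    b = np.full(len(q), 1.0 / len(q))           # uniform mass on question/answer words
    plan = sinkhorn_plan(cost, a, b)
    # Average similarity of the targets each passage word's mass flows to.
    word_relevance = (plan * sim).sum(axis=1) / a
    scores = {}
    for sid, rel in zip(sentence_ids, word_relevance):
        scores.setdefault(sid, []).append(rel)
    return sorted(scores, key=lambda s: np.mean(scores[s]), reverse=True)

For example, calling rank_clue_sentences with per-word embeddings from any pretrained encoder (e.g. BERT) and a list mapping each passage word to its sentence index returns the sentence indices ordered from most to least clue-like; the top-ranked sentences would play the role of extracted clues in this simplified setting.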

This work is partially supported by the Australian Research Council Discovery Project (DP210101426), the Australian Research Council Linkage Project (LP200201035), the AEGiS Advance Grant (888/008/268, University of Wollongong), and the Telstra-UOW Hub for AIOT Solutions Seed Funding (2024).



Author information

Authors and Affiliations

  1. School of Computing and Information Technology, University of Wollongong, Wollongong, Australia

    Wangli Yang, Jie Yang & Wanqing Li

  2. School of Computer, Data and Mathematical Sciences, Western Sydney University, Penrith, Australia

    Yi Guo


Corresponding author

Correspondence to Jie Yang.

Editor information

Editors and Affiliations

  1. Luleå Tekniska Universitet, Luleå, Sweden

    Elisa H. Barney Smith

  2. Luleå Tekniska Universitet, Luleå, Sweden

    Marcus Liwicki

  3. Tsinghua University, Beijing, China

    Liangrui Peng

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yang, W., Yang, J., Li, W., Guo, Y. (2024). ConClue: Conditional Clue Extraction for Multiple Choice Question Answering. In: Barney Smith, E.H., Liwicki, M., Peng, L. (eds) Document Analysis and Recognition - ICDAR 2024. ICDAR 2024. Lecture Notes in Computer Science, vol 14809. Springer, Cham. https://doi.org/10.1007/978-3-031-70552-6_11
