JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension (2022, Skelter Labs)

Overview

Japanese Question Answering Dataset (JaQuAD), released in 2022, is a human-annotated dataset created for Japanese Machine Reading Comprehension. JaQuAD was developed to provide a SQuAD-like QA dataset in Japanese. JaQuAD contains 39,696 question-answer pairs. Questions and answers are manually curated by human annotators. Contexts are collected from Japanese Wikipedia articles.

For more information on how the dataset was created, refer to our paper, JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension.

Data

JaQuAD consists of three sets: train, validation, and test. They were created from disjoint sets of Wikipedia articles. The following table shows statistics for each set:

| Set | Number of Articles | Number of Contexts | Number of Questions |
| --- | --- | --- | --- |
| Train | 691 | 9713 | 31748 |
| Validation | 101 | 1431 | 3939 |
| Test | 109 | 1479 | 4009 |

You can also download our dataset here. (The test set is not publicly released yet.)

```python
from datasets import load_dataset

jaquad_data = load_dataset('SkelterLabsInc/JaQuAD')
```
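
The splits follow a SQuAD-style layout, so a quick way to sanity-check the download is to print one training example. The field names used below (`question`, `context`, `answers`) are an assumption based on that format rather than something stated in this README; verify them against the loaded dataset's `features` attribute.

```python
from datasets import load_dataset

# Field names below assume the usual SQuAD-style schema; check
# jaquad_data['train'].features if they differ.
jaquad_data = load_dataset('SkelterLabsInc/JaQuAD')
print(jaquad_data)                  # available splits and their sizes
sample = jaquad_data['train'][0]
print(sample['question'])
print(sample['context'])
print(sample['answers'])            # gold answer text and its start offset
```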

Baseline

We also provide a baseline model for JaQuAD for comparison. We created this model by fine-tuning a publicly available Japanese BERT model on JaQuAD. You can see the performance of the baseline model in the table below.

For more information on the model's creation, refer to JaQuAD.ipynb.

| Pre-trained LM | Dev F1 | Dev EM | Test F1 | Test EM |
| --- | --- | --- | --- | --- |
| BERT-Japanese | 77.35 | 61.01 | 78.92 | 63.38 |

You can download the baseline model here.
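
The F1 and EM columns above are the usual SQuAD-style answer metrics: EM is exact string match between the predicted and gold answer, and F1 is token-level overlap between the two spans. The snippet below is a minimal sketch of those two metrics for a single prediction; it is not the evaluation script used to produce the numbers above, and for Japanese you would normally tokenize with a morphological analyzer rather than the character-level split used here for illustration.

```python
from collections import Counter

def exact_match(prediction: str, ground_truth: str) -> float:
    """1.0 if the predicted answer string matches the gold answer exactly."""
    return float(prediction == ground_truth)

def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1; characters stand in for tokens in this sketch."""
    pred_tokens = list(prediction)
    gold_tokens = list(ground_truth)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match('スコットランド', 'スコットランド'))   # 1.0
print(f1_score('スコットランド生まれ', 'スコットランド'))  # partial credit
```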

Usage

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

question = 'アレクサンダー・グラハム・ベルは、どこで生まれたの?'
context = ('アレクサンダー・グラハム・ベルは、スコットランド生まれの科学者、発明家、工学者である。'
           '世界初の実用的電話の発明で知られている。')

model = AutoModelForQuestionAnswering.from_pretrained(
    'SkelterLabsInc/bert-base-japanese-jaquad')
tokenizer = AutoTokenizer.from_pretrained(
    'SkelterLabsInc/bert-base-japanese-jaquad')

inputs = tokenizer(question, context, add_special_tokens=True, return_tensors="pt")
input_ids = inputs["input_ids"].tolist()[0]
outputs = model(**inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits

# Get the most likely start of the answer with the argmax of the score.
answer_start = torch.argmax(answer_start_scores)
# Get the most likely end of the answer with the argmax of the score.
# 1 is added to `answer_end` because the index of the score is inclusive.
answer_end = torch.argmax(answer_end_scores) + 1

answer = tokenizer.convert_tokens_to_string(
    tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
# answer = 'スコットランド'
```
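
For quick experiments, the same checkpoint should also work with the transformers question-answering pipeline, which wraps the tokenization, forward pass, and span decoding shown above. This is a sketch of standard transformers usage, not something documented in this repository.

```python
from transformers import pipeline

# The pipeline bundles tokenization, model inference, and answer-span decoding.
qa = pipeline(
    'question-answering',
    model='SkelterLabsInc/bert-base-japanese-jaquad',
    tokenizer='SkelterLabsInc/bert-base-japanese-jaquad')

result = qa(
    question='アレクサンダー・グラハム・ベルは、どこで生まれたの?',
    context='アレクサンダー・グラハム・ベルは、スコットランド生まれの科学者、発明家、工学者である。'
            '世界初の実用的電話の発明で知られている。')
print(result['answer'], result['score'])
```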

Limitations

This dataset is not yet complete. The social biases of this dataset have not yet been investigated.

If you find any errors in JaQuAD, please contact jaquad@skelterlabs.com.

Reference

If you use our dataset or code, please cite our paper:

```bibtex
@misc{so2022jaquad,
    title={{JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension}},
    author={ByungHoon So and Kyuhong Byun and Kyungwon Kang and Seongjin Cho},
    year={2022},
    eprint={2202.01764},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

LICENSE

The JaQuAD dataset is licensed under the [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.

Have Questions?

Ask us at jaquad@skelterlabs.com.
