BERT based pretrained model using SQuAD 2.0 Dataset for Question-Answering


alexaapo/BERT-based-pretrained-model-using-SQuAD-2.0-dataset


Here I built a BERT-based model which, given a user question and a passage containing the answer, returns that answer. For this question answering task, I used the SQuAD 2.0 dataset.

You should read the notebooks in this order:

  1. Fine_Tuning_Bert
  2. Evaluate_Fine_Tuned_Bert
  3. Evaluate_Existed_Fine_Tuned_Bert

I started with the pretrained BERT-base model bert-base-uncased and fine-tuned it for the question answering task, using an implementation of my own.
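To make that step concrete, here is a minimal sketch of what one fine-tuning step can look like, assuming the Hugging Face transformers library; the function name, learning rate, and max_length are illustrative choices and not necessarily what the notebook uses:

```python
import torch
from transformers import BertTokenizerFast, BertForQuestionAnswering

# Load bert-base-uncased with a fresh span-prediction (start/end) head on top.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # illustrative lr

def train_step(question, passage, start_pos, end_pos):
    # start_pos/end_pos are the answer span as *token* indices in the
    # encoded [CLS] question [SEP] passage [SEP] sequence.
    inputs = tokenizer(question, passage, return_tensors="pt",
                       truncation=True, max_length=384)
    outputs = model(**inputs,
                    start_positions=torch.tensor([start_pos]),
                    end_positions=torch.tensor([end_pos]))
    outputs.loss.backward()  # cross-entropy over start and end positions
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```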

Then, I evaluated the model on different passages of increasing difficulty.
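The decoding step behind that evaluation is extracting the highest-scoring answer span from the model's start/end logits; a sketch of how that can be done (the answer helper below is illustrative, not the notebook's code):

```python
import torch

def answer(model, tokenizer, question, passage):
    # Encode question and passage as one [CLS] q [SEP] p [SEP] sequence.
    inputs = tokenizer(question, passage, return_tensors="pt",
                       truncation=True, max_length=384)
    with torch.no_grad():
        outputs = model(**inputs)
    # Take the most likely start and end token positions.
    start = int(torch.argmax(outputs.start_logits))
    end = int(torch.argmax(outputs.end_logits))
    # SQuAD 2.0 includes unanswerable questions; a common convention is that
    # a span at the [CLS] token (index 0) or an invalid span means "no answer".
    if start == 0 or end < start:
        return ""
    tokens = inputs["input_ids"][0][start:end + 1]
    return tokenizer.decode(tokens, skip_special_tokens=True)
```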

Finally, as a benchmark I used the pretrained and fine-tuned model from Hugging Face named bert-large-uncased-whole-word-masking-finetuned-squad. You will see that my model's performance is pretty decent.
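For reference, that benchmark model can be loaded and queried through the library's question-answering pipeline; a small example (the sample question and context here are made up for illustration):

```python
from transformers import pipeline

# Pull the fine-tuned benchmark model from the Hugging Face hub.
qa = pipeline("question-answering",
              model="bert-large-uncased-whole-word-masking-finetuned-squad")

result = qa(question="Who created SQuAD?",
            context="SQuAD was created by researchers at Stanford University.")
print(result["answer"], result["score"])
```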

Note: My solution is implemented in PyTorch and the report is well documented. For running the notebooks, I used Google Colab with its GPU.

You can check the Google Colab Notebooks here:

  • Fine Tuned Bert: Open In Colab
  • Evaluate Fine Tuned Bert: Open In Colab
  • Evaluate Existed Fine Tuned Bert: Open In Colab

