# BERT-based pretrained model using the SQuAD 2.0 dataset
Here I built a BERT-based model which, given a user question and a passage containing the answer, returns the answer to that question. For this question answering task, I used the SQuAD 2.0 dataset.
The project is organized into three notebooks:

- Fine_Tuning_Bert
- Evaluate_Fine_Tuned_Bert
- Evaluate_Existed_Fine_Tuned_Bert
I started with the pretrained BERT-base model `bert-base-uncased` and fine-tuned it for the question answering task, with the fine-tuning procedure implemented by myself.
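A minimal sketch of what one such fine-tuning step can look like, assuming the Hugging Face `transformers` library; the question, context, and answer strings below are made-up examples, not data from the notebook:

```python
import torch
from transformers import BertTokenizerFast, BertForQuestionAnswering

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased").to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Toy SQuAD-style example; real training iterates over the SQuAD 2.0 train set.
question = "Which dataset was used for fine-tuning?"
context = "The model was fine-tuned on the SQuAD 2.0 dataset."
answer = "SQuAD 2.0"

inputs = tokenizer(question, context, return_tensors="pt",
                   truncation=True, max_length=384).to(device)

# Map the gold answer's character span in `context` to token indices
# (sequence_index=1 selects the context in the question/context pair).
char_start = context.find(answer)
start_tok = inputs.char_to_token(0, char_start, sequence_index=1)
end_tok = inputs.char_to_token(0, char_start + len(answer) - 1, sequence_index=1)

model.train()
outputs = model(**inputs,
                start_positions=torch.tensor([start_tok], device=device),
                end_positions=torch.tensor([end_tok], device=device))
outputs.loss.backward()   # sum of start- and end-position cross-entropy losses
optimizer.step()
```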
Then, I evaluated the fine-tuned model on different passages of increasing difficulty.
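A minimal sketch of how an answer span can be extracted at evaluation time, again assuming `transformers`; the question and passage are illustrative, and the checkpoint would be the fine-tuned model rather than the raw base model shown here:

```python
import torch
from transformers import BertTokenizerFast, BertForQuestionAnswering

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")  # or a fine-tuned checkpoint
model.eval()

question = "Where is the Eiffel Tower located?"
passage = "The Eiffel Tower is a wrought-iron lattice tower located in Paris, France."
inputs = tokenizer(question, passage, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start and end tokens and decode the span between them.
# A fuller evaluation also enforces end >= start and handles the no-answer
# case, which SQuAD 2.0 models signal via the [CLS] position.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start:end + 1])
print(answer)
```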
Finally, as a benchmark, I used the already pretrained and fine-tuned model from Hugging Face, `bert-large-uncased-whole-word-masking-finetuned-squad`. You will see that my model's performance is quite decent in comparison.
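A minimal sketch of querying that benchmark model through the `transformers` question-answering pipeline; the question and context strings are illustrative:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```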
Note: The solution is implemented in PyTorch and the report is well documented. To run the notebooks, I used Google Colab with its GPU.
You can check the Google Colab Notebooks here: