Here I built a BERT-based model that, given a user question and a passage containing the answer, returns the answer. For this question answering task I used the SQuAD 2.0 dataset.
- Fine_Tuning_Bert
- Evaluate_Fine_Tuned_Bert
- Evaluate_Existed_Fine_Tuned_Bert
I started with the pretrained bert-base-uncased model and fine-tuned it for question answering, with the fine-tuning loop implemented by myself.
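At inference time, a fine-tuned extractive QA model scores each passage token as a possible answer start and end, and the predicted answer is the highest-scoring valid span. Below is a minimal, self-contained sketch of that span-selection step; the function name and the toy scores are illustrative, not taken from the notebooks.

```python
# Illustrative sketch of extractive QA span selection: the model is assumed
# to output one start score and one end score per passage token, and we
# pick the span (start <= end, bounded length) with the highest total score.

def best_span(start_scores, end_scores, max_len=30):
    """Return (start, end) token indices maximizing start+end score."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_score in enumerate(start_scores):
        # Only consider end positions at or after the start, up to max_len.
        for e in range(s, min(s + max_len, len(end_scores))):
            score = s_score + end_scores[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

# Toy example: scores strongly favor the last token.
tokens = ["bert", "was", "released", "in", "2018"]
start = [0.1, 0.0, 0.2, 0.0, 3.0]
end   = [0.0, 0.1, 0.0, 0.2, 4.0]
s, e = best_span(start, end)
print(" ".join(tokens[s:e + 1]))  # → 2018
```

In the real model the scores come from BERT's start/end logits over the tokenized question+passage pair, but the span-decoding logic is the same.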
Then I evaluated the model on different passages of increasing difficulty.
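SQuAD-style evaluation typically compares a predicted answer string against the reference using exact match and token-level F1. Here is a simplified sketch of those two metrics (lowercasing and whitespace tokenization only; the official script also strips punctuation and articles):

```python
# Simplified SQuAD-style metrics: exact match (EM) and token-overlap F1.
# This sketch normalizes only by lowercasing and whitespace splitting.

def exact_match(prediction, truth):
    """1 if the normalized strings are identical, else 0."""
    return int(prediction.strip().lower() == truth.strip().lower())

def f1_score(prediction, truth):
    """Harmonic mean of token precision and recall between the two answers."""
    pred_tokens = prediction.lower().split()
    true_tokens = truth.lower().split()
    common = 0
    remaining = list(true_tokens)
    for t in pred_tokens:
        if t in remaining:      # count each reference token at most once
            remaining.remove(t)
            common += 1
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(true_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("in 2018", "In 2018"))                  # → 1
print(round(f1_score("released in 2018", "in 2018"), 2))  # → 0.8
```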
Finally, as a benchmark, I used the pretrained and fine-tuned model from Hugging Face named bert-large-uncased-whole-word-masking-finetuned-squad. You will see that my model's performance is pretty decent.
Note: My solution is implemented in PyTorch and the report is well documented. I ran the notebooks on Google Colab with its GPU.
You can check the Google Colab Notebooks here: