A large-scale (194k), Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.
Updated Nov 28, 2022 - Jupyter Notebook
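For readers unfamiliar with the MCQA format, here is a minimal sketch of what a record in such a dataset typically looks like and how accuracy is scored. The field names (`question`, `options`, `answer_idx`) and the `predict` stub are illustrative assumptions, not this dataset's actual schema.

```python
# Minimal sketch of the MCQA setup; the field names and the predict()
# stub are illustrative assumptions, not this dataset's actual schema.

# One MCQA record: a question stem, a fixed set of options, and the
# index of the correct option.
example = {
    "question": "Which vitamin deficiency causes scurvy?",
    "options": ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"],
    "answer_idx": 2,
}

def predict(record: dict) -> int:
    """Placeholder model: always picks the first option."""
    return 0

def accuracy(records: list[dict]) -> float:
    """Fraction of records where the predicted index matches the gold index."""
    correct = sum(predict(r) == r["answer_idx"] for r in records)
    return correct / len(records)

if __name__ == "__main__":
    print(f"accuracy: {accuracy([example]):.2f}")
```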
Source code for our "MMM" paper (Multi-stage Multi-task Learning for Multi-choice Reading Comprehension) at AAAI 2020
Repository for My HuggingFace Natural Language Processing Projects
[COLING 2020] BERT-based Models for Chengyu (Chinese four-character idioms)
Hong Kong Polytechnic University Master's degree Natural Language Processing (COMP5523) course project
A web app for generating multiple-choice questions from text input. Users can type text directly or upload `.txt` files, and the generated questions can be downloaded in PDF or Word format.
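As a rough illustration of the export step only, the sketch below writes already-generated questions to a `.docx` file with the python-docx package; this is one assumed way to implement the download feature, not the app's actual code.

```python
# Sketch of exporting generated questions to Word with python-docx
# (pip install python-docx). One possible implementation of the
# download feature, not the app's actual code.
from docx import Document

def export_to_docx(questions: list[dict], path: str = "questions.docx") -> None:
    doc = Document()
    doc.add_heading("Generated Multiple-Choice Questions", level=1)
    for i, q in enumerate(questions, start=1):
        doc.add_paragraph(f"{i}. {q['question']}")
        for letter, option in zip("ABCD", q["options"]):
            doc.add_paragraph(f"   {letter}) {option}")
    doc.save(path)

export_to_docx([{
    "question": "Which data structure uses FIFO ordering?",
    "options": ["Stack", "Queue", "Tree", "Graph"],
}])
```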
ThinkBench is an LLM benchmarking tool focused on evaluating the effectiveness of chain-of-thought (CoT) prompting for answering multiple-choice questions.
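To make that kind of comparison concrete, here is a sketch that contrasts a direct-answer prompt with a chain-of-thought prompt for the same multiple-choice question. `query_llm` is a hypothetical stand-in for whatever model client is used, and the prompt templates are illustrative, not ThinkBench's actual templates.

```python
# Sketch of comparing direct vs. chain-of-thought (CoT) prompting on an
# MCQ item. query_llm() is a hypothetical stand-in for a real model
# client; the prompt templates are illustrative, not ThinkBench's.
import re

def query_llm(prompt: str) -> str:
    """Hypothetical model call; replace with a real API client."""
    raise NotImplementedError

def build_prompts(question: str, options: list[str]) -> dict[str, str]:
    choices = "\n".join(f"{letter}) {opt}" for letter, opt in zip("ABCD", options))
    base = f"{question}\n{choices}\n"
    return {
        "direct": base + "Answer with a single letter (A-D).",
        "cot": base + "Think step by step, then give the final answer as a single letter (A-D).",
    }

def extract_choice(response: str) -> str | None:
    """Take the last standalone A-D letter in the response as the answer."""
    matches = re.findall(r"\b([A-D])\b", response)
    return matches[-1] if matches else None

def compare(question: str, options: list[str], gold: str) -> dict[str, bool]:
    """Return per-prompt correctness for one question."""
    prompts = build_prompts(question, options)
    return {name: extract_choice(query_llm(p)) == gold for name, p in prompts.items()}
```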