arXiv:1606.05250

SQuAD: 100,000+ Questions for Machine Comprehension of Text

Published on Jun 16, 2016
Authors: Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang

AI-generated summary

The Stanford Question Answering Dataset (SQuAD) pairs crowd-sourced questions with answer spans drawn from Wikipedia articles; answering them requires several types of reasoning, and a logistic regression baseline reaches only moderate accuracy.

Abstract

We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com
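
The abstract describes the task format (a question paired with a Wikipedia passage, with the answer given as a span of that passage) and reports span-level F1 scores. As a rough illustration only, the Python sketch below loads the dataset through the Hugging Face datasets library (the "squad" dataset id is an assumption about a Hub mirror, not something named in the paper) and computes a simplified token-overlap F1; it is not the official evaluation script, which additionally normalizes punctuation and articles.

from collections import Counter

from datasets import load_dataset  # Hugging Face `datasets` library

def f1_score(prediction: str, gold: str) -> float:
    # Token-overlap F1 between a predicted span and a gold answer span.
    # Simplified: the official SQuAD script also strips punctuation and articles.
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    # The "squad" dataset id is an assumed community mirror on the Hub,
    # not something stated in the paper.
    squad = load_dataset("squad", split="validation")
    example = squad[0]
    # Each example pairs a question with a Wikipedia passage ("context");
    # the answer is a span of that passage, given as text plus a character offset.
    print(example["question"])
    print(example["answers"])  # {'text': [...], 'answer_start': [...]}
    gold_answer = example["answers"]["text"][0]
    print(f1_score(gold_answer, gold_answer))  # a perfect prediction scores 1.0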

Community

the seminal dataset for QA

Models citing this paper 33

Datasets citing this paper 34

Spaces citing this paper 1,738

Collections including this paper 1