Many recent neural models have shown remarkable empirical results in machine reading comprehension, but evidence suggests that these models sometimes exploit dataset biases to make predictions and fail to generalize to out-of-sample data. While many other approaches have been proposed to address this issue from the computational perspective, such as new architectures or training procedures, we believe that a method allowing researchers to discover biases and adjust the data or the model at an earlier stage would be beneficial. We therefore introduce MRCLens, a toolkit that detects whether biases exist before users train the full model. To ease the introduction of the toolkit, we also provide a categorization of common biases in MRC.
translated by Google Translate
The problem of shortcut learning is widely known in NLP and has been an important research focus in recent years. Unintended correlations in the data enable models to easily solve tasks that were meant to exhibit advanced language understanding and reasoning capabilities. In this survey paper, we focus on the field of machine reading comprehension (MRC), an important task for demonstrating high-level language understanding that also suffers from a range of shortcuts. We summarize the available techniques for measuring and mitigating shortcuts and conclude with suggestions for further progress in shortcut research. Most importantly, we highlight two primary concerns for shortcut mitigation in MRC: the lack of public challenge sets, a necessary component of effective and reusable evaluation, and the absence in this area of certain mitigation techniques explored in other fields.
Many efforts have been made to understand what grammatical knowledge (e.g., the ability to identify a token's part of speech) is encoded in large pre-trained language models (LMs). This is done through "edge probing" (EP) tests: supervised classification tasks that predict a syntactic property of a span (such as whether it has a particular part of speech) using only the token representations from the LM encoder. However, most NLP applications fine-tune these LM encoders for specific tasks. Here we ask: if an LM is fine-tuned, does its encoding of linguistic information change, as measured by EP tests? Specifically, we focus on the task of question answering (QA) and conduct experiments on multiple datasets. We find that EP test results do not change significantly when the fine-tuned model performs well, nor in adversarial situations where the model is forced to learn wrong correlations. From similar findings, some recent papers conclude that fine-tuning does not change the linguistic knowledge in the encoder, but they offer no explanation. We find that the EP models themselves are prone to exploiting spurious correlations in the EP datasets. When this dataset bias is corrected, we do see an improvement in EP test results.
We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com.
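The F1 figures quoted above are token-overlap scores between a predicted answer string and the gold answer. A minimal sketch of such a metric in Python (the official SQuAD evaluator additionally normalizes articles, punctuation, and whitespace; `token_f1` is our own illustrative name, not part of the SQuAD release):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Count tokens shared between prediction and gold (with multiplicity).
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("Denver Broncos", "the Denver Broncos"))  # 0.8
```

A prediction that covers the gold span but adds extra tokens is penalized on precision, while a prediction that misses gold tokens is penalized on recall, so partial credit is possible in both directions.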
We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task. Collected from the English exams for middle and high school Chinese students aged between 12 and 18, RACE consists of nearly 28,000 passages and nearly 100,000 questions generated by human experts (English instructors), and covers a variety of topics carefully designed for evaluating the students' ability in understanding and reasoning. In particular, the proportion of questions that require reasoning is much larger in RACE than in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of state-of-the-art models (43%) and ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at https://github.com/qizhex/RACE_AR_baselines
Multi-hop machine reading comprehension is a challenging task that aims to answer a question based on disjoint pieces of information across different passages. Evaluation metrics and datasets are a vital part of multi-hop MRC, because it is not possible to train and evaluate models without them; moreover, the challenges posed by datasets are often an important motivation for improving existing models. Given the increasing attention to this field, it is necessary and worthwhile to review them in detail. This study aims to present a comprehensive survey of recent advances in multi-hop MRC evaluation metrics and datasets. In this regard, the multi-hop MRC problem definition is presented first, and the evaluation metrics are then investigated with respect to their multi-hop aspect. In addition, 15 multi-hop datasets from 2017 to 2022 are reviewed in detail, and a comprehensive analysis is provided at the end. Finally, open issues in this field are discussed.
To enable building and testing models on long-document comprehension, we introduce QuALITY, a multiple-choice QA dataset with context passages in English that have an average length of about 5,000 tokens, much longer than typical current models can process. Unlike prior work with passages, our questions are written and validated by contributors who have read the entire passage, rather than relying on summaries or excerpts. In addition, only half of the questions are answerable by annotators working under tight time constraints, indicating that skimming and simple search are not enough to consistently perform well. Current models perform poorly on this task (55.4%) and lag far behind human performance (93.5%).
Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. We introduce CoQA, a novel dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. We analyze CoQA in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets, e.g., coreference and pragmatic reasoning. We evaluate strong dialogue and reading comprehension models on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating there is ample room for improvement. We present CoQA as a challenge to the community at https://stanfordnlp.github.io/coqa.
Two of the most fundamental challenges in natural language understanding (NLU) at present are: (a) how to establish whether deep-learning-based models score highly on NLU benchmarks for the "right" reasons; and (b) to understand what those reasons would even be. We investigate the behavior of reading comprehension models with respect to two linguistic "skills": coreference resolution and comparison. We propose a definition of the reasoning steps expected from a system that would be "reading slowly", and compare them with the behavior of five models of the BERT family of various sizes, observed through saliency scores and counterfactual explanations. We find that for comparison (but not coreference) the systems based on larger encoders are more likely to rely on the "right" information, but even they struggle with generalization, indicating that they still learn specific lexical patterns rather than the general principles of comparison.
This paper presents a deep neural architecture for natural language sentence matching (NLSM) that adds a deep recursive encoder on top of BERT, called BERT with Deep Recursive Encoder (BERT-DRE). Our analysis of model behavior shows that BERT still does not capture the full complexity of the text, which is why a deep recursive encoder is applied on top of BERT. Three Bi-LSTM layers with residual connections are used to design the recursive encoder, and an attention module is used on top of this encoder. To obtain the final vector, a pooling layer consisting of mean and max pooling is used. We evaluate the model on four benchmarks, SNLI, FarsTail, MultiNLI, and SciTail, as well as a novel Persian religious questions dataset. This paper focuses on improving BERT's results on the NLSM task. In this regard, a comparison between BERT-DRE and BERT is performed, showing that BERT-DRE outperforms BERT in all cases. The BERT algorithm achieves an accuracy of 89.70% on the religious dataset, and the BERT-DRE architecture improves this to 90.29% on the same dataset.
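The final pooling step described above concatenates a mean-pooled and a max-pooled summary of the encoder's token vectors. A toy re-implementation of just that step, using plain Python lists in place of real tensors (this is our own sketch, not the authors' code):

```python
def mean_max_pool(token_vectors):
    """Concatenate mean pooling and max pooling over a sequence of
    token vectors, yielding a single fixed-size sentence vector."""
    n = len(token_vectors)
    dims = len(token_vectors[0])
    # Per-dimension mean over the sequence.
    mean = [sum(v[d] for v in token_vectors) / n for d in range(dims)]
    # Per-dimension max over the sequence.
    mx = [max(v[d] for v in token_vectors) for d in range(dims)]
    return mean + mx  # concatenation doubles the dimensionality

vecs = [[1.0, 4.0], [3.0, 2.0]]
print(mean_max_pool(vecs))  # [2.0, 3.0, 3.0, 4.0]
```

Mean pooling summarizes the whole sequence while max pooling preserves the strongest per-feature activation; concatenating both gives the matcher access to each signal.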
Answering natural language questions using information from tables (TableQA) has been of considerable recent interest. In many applications, tables occur not in isolation but embedded in unstructured text. Often, a question is best answered by matching parts of it to table cell contents or to unstructured text spans, and extracting the answer from either source. This leads to a new space of TextTableQA problems, introduced by the HybridQA dataset. Existing adaptations of table representations to transformer-based reading comprehension (RC) architectures fail to tackle the diverse modalities of the two representations through a single system. Training such systems is further challenged by the need for distant supervision. To reduce cognitive burden, training instances usually include only the question and the answer, the latter matching multiple table rows and text passages. This leads to a noisy multi-instance training regime involving not only rows of the table but also spans of linked text. We respond to these challenges by proposing MITQA, a new TextTableQA system that explicitly models the distinct but closely related probability spaces of table row selection and text span selection. Our experiments demonstrate the superiority of our approach compared to recent baselines. The method currently tops the HybridQA leaderboard with a held-out test set, achieving an absolute improvement of 21% in both EM and F1 over previously published results.
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high-quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross-sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study.
In recent years, impressive progress has been made on the challenging multi-hop QA task. However, these QA models may fail when faced with some disturbance in the input text, and the interpretability of their multi-hop reasoning remains uncertain. Previous adversarial attack works usually edit the whole question sentence, which has limited ability to test entity-based multi-hop reasoning. In this paper, we propose a multi-hop reasoning chain based adversarial attack method. We formulate the multi-hop reasoning chain from the query entity to the answer entity over a constructed graph, which allows us to align the question to each reasoning hop and thereby attack any hop. We categorize the questions into different reasoning types and adversarially modify the part of the question corresponding to the selected reasoning hop to generate distracting sentences. We test our adversarial scheme on three QA models on the HotpotQA dataset. The results demonstrate significant performance reduction on both answer and supporting-facts prediction, verifying the effectiveness of our reasoning-chain-based attack method against multi-hop reasoning models, as well as their vulnerability. Our adversarial re-training further improves the performance and robustness of these models.
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system greatly, by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art results on multiple open-domain QA benchmarks. The code and trained models have been released at https://github.com/facebookresearch/DPR.
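At retrieval time, a dual-encoder system of this kind scores each passage by the inner product of its embedding with the question embedding and returns the highest-scoring candidates. A minimal sketch with toy hand-written vectors standing in for the learned encoders (the function name is ours, not from the DPR code base):

```python
def retrieve_top_k(question_vec, passage_vecs, k=2):
    """Rank passages by inner product with the question embedding,
    as a dual-encoder retriever does; return indices of the top-k."""
    scores = [sum(q * p for q, p in zip(question_vec, vec))
              for vec in passage_vecs]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

# Toy 4-dimensional embeddings standing in for encoder outputs.
passages = [[0.1, 0.9, 0.0, 0.0],
            [0.8, 0.1, 0.1, 0.0],
            [0.0, 0.0, 0.5, 0.5]]
question = [0.9, 0.0, 0.1, 0.0]
print(retrieve_top_k(question, passages))  # [1, 0]
```

In practice the passage embeddings are precomputed offline and indexed (e.g., with an approximate nearest-neighbor library), so only the question is encoded at query time.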
Question answering (QA) is one of the most challenging yet important problems in natural language processing (NLP). A QA system attempts to produce an answer for a given question. These answers can be generated from unstructured or structured text. QA is therefore considered an important research area that can be used to evaluate text-understanding systems. A large volume of QA research has been devoted to the English language, investigating the most advanced techniques and achieving state-of-the-art results. However, research on Arabic question answering has progressed at a much slower pace, due to the scarcity of research efforts in Arabic QA and the lack of large benchmark datasets. Recently, many pre-trained language models have delivered high performance on many Arabic NLP problems. In this work, we evaluate state-of-the-art pre-trained transformer models for Arabic QA using four reading comprehension datasets: Arabic-SQuAD, ARCD, AQAD, and TyDiQA-GoldP. We fine-tune and compare the performance of the AraBERTv2-base model, the AraBERTv0.2-large model, and the AraELECTRA model. Finally, we provide an analysis to understand and interpret the low-performance results obtained by some of the models.
In the last year, new models and methods for pretraining and transfer learning have driven striking performance improvements across a range of language understanding tasks. The GLUE benchmark, introduced a little over one year ago, offers a single-number metric that summarizes progress on a diverse set of such tasks, but performance on the benchmark has recently surpassed the level of non-expert humans, suggesting limited headroom for further research. In this paper we present SuperGLUE, a new benchmark styled after GLUE with a new set of more difficult language understanding tasks, a software toolkit, and a public leaderboard. SuperGLUE is available at super.gluebenchmark.com.
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the question-answer pairs as training data for a language model for a state-of-the-art QA system based on BERT. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed with subjects (or objects) and predicates while objects (or subjects) are considered as answers. Experimenting on five extractive QA datasets demonstrates that our technique achieves on-par performance with existing state-of-the-art QA systems with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
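The triple-to-question step can be sketched as follows; the question templates here are our own illustrative stand-ins (the actual PIE-QG pipeline also paraphrases passages before extraction and uses OpenIE rather than pre-segmented triples):

```python
def triples_to_qa(triples):
    """Turn OpenIE-style (subject, predicate, object) triples into
    synthetic QA pairs: a question built from subject + predicate with
    the object as answer, and the symmetric direction with the subject
    as answer."""
    pairs = []
    for subj, pred, obj in triples:
        pairs.append((f"{subj} {pred} what?", obj))   # object is the answer
        pairs.append((f"Who {pred} {obj}?", subj))    # subject is the answer
    return pairs

for question, answer in triples_to_qa([("Marie Curie", "discovered", "radium")]):
    print(question, "->", answer)
```

Each passage thus yields multiple training pairs with no human labeling, which is what lets the QA model be trained on far fewer documents.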
Research on question answering datasets and models has received a lot of attention in the research community. Many researchers have released their own question answering datasets as well as models, and we have seen tremendous progress in this area. The aim of this survey is to identify, summarize, and analyze these existing datasets, especially non-English ones, along with resources such as research code and evaluation metrics. In this paper, we review question answering datasets available in French, German, Japanese, Chinese, Arabic, and Russian, as well as multilingual and cross-lingual question answering datasets, in addition to English.
The possible consequences of the same context may vary depending on the situation we refer to. However, current studies in natural language processing do not focus on commonsense reasoning under multiple possible scenarios. This study frames the task by asking multiple questions with the same set of possible endings as candidate answers, given a short story text. Our resulting dataset, Possible Stories, consists of more than 4.5K questions over 1.3K story texts. We find that even current strong pre-trained language models struggle to answer the questions consistently, with the highest accuracy in an unsupervised setting (60.2%) far behind human accuracy (92.5%). Through a comparison with existing datasets, we observe that the questions in our dataset contain minimal annotation artifacts in the answer options. In addition, our dataset includes examples that require counterfactual reasoning, as well as ones requiring readers' reactions and fictional information, suggesting that it can serve as a challenging testbed for future studies on commonsense reasoning.
Current studies in extractive question answering (EQA) model the single-span extraction setting, where a single answer span is the label to predict for a given question-passage pair. This setting is natural for general-domain EQA, as most questions in the general domain can be answered with a single span. Following general-domain EQA models, current biomedical EQA (BioEQA) models adopt the single-span extraction setting together with post-processing steps. In this paper, we investigate the question distribution across the general and biomedical domains and find that biomedical questions are more likely to require list-type answers (multiple answers) than factoid-type answers (a single answer). This calls for models that can provide multiple answers for a question. Based on this preliminary study, we propose a sequence-tagging approach for BioEQA, a multi-span extraction setting. Our approach directly tackles questions whose answers consist of a variable number of phrases, and can learn from training data to decide the number of answers for a question. Our experimental results on the BioASQ 7b and 8b list-type questions outperform the best-performing existing models without requiring post-processing steps. Source code and resources are freely available for download at https://github.com/dmis-lab/seqtagqa
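Decoding a tag sequence into a variable number of answer spans is the core of such a multi-span setting. A minimal sketch using a BIO-style scheme (the paper's exact tag set and tokenization may differ; this is our own illustration):

```python
def decode_spans(tokens, tags):
    """Collect every B-/I- tagged run as one answer span, so a single
    sequence-labelled passage can yield a list-type (multi-span) answer."""
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                 # a new answer span starts here
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:   # continue the currently open span
            current.append(token)
        else:                          # "O" (or stray "I") closes any open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:                        # flush a span that ends the sequence
        spans.append(" ".join(current))
    return spans

tokens = ["aspirin", ",", "ibuprofen", "and", "naproxen", "reduce", "fever"]
tags   = ["B",       "O", "B",         "O",   "B",        "O",      "O"]
print(decode_spans(tokens, tags))  # ['aspirin', 'ibuprofen', 'naproxen']
```

Because the number of B-tagged runs is unconstrained, the same model handles factoid questions (one span) and list questions (many spans) without a separate post-processing step.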