Triplet extraction aims to extract entities and their corresponding relations from unstructured text. Most existing methods train an extraction model on high-quality training data, and are hence incapable of extracting relations that were not observed during training. Generalizing the model to unseen relations typically requires fine-tuning on synthetic training data, which is often noisy and unreliable. In this paper, we argue that reducing triplet extraction to a template-infilling task over a pre-trained language model can equip the model with zero-shot learning capabilities and enable it to leverage the implicit knowledge in the language model. Embodying these ideas, we propose a novel framework, ZETT (ZEro-shot Triplet extraction by Template infilling), based on end-to-end generative transformers. Our experiments show that, without any data augmentation or pipeline systems, ZETT outperforms previous state-of-the-art models with 25% fewer parameters. We further show that ZETT is more robust in detecting entities and can be combined with automatically generated templates for relations.
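To make the template-infilling reduction concrete, here is a minimal sketch that uses T5's span-infilling format to score candidate (head, tail) fills for a relation template; the template text, model size, and scoring details are illustrative assumptions, not ZETT's exact configuration.

```python
# A minimal sketch of scoring triplet candidates by template infilling with T5.
# The relation template and candidates are illustrative, not ZETT's exact ones.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tok = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

def infill_score(context: str, template: str, head: str, tail: str) -> float:
    """Log-likelihood of filling the template's blanks with (head, tail)."""
    source = f"{context} {template}"
    target = f"<extra_id_0> {head} <extra_id_1> {tail}"  # T5 span-infilling target
    enc = tok(source, return_tensors="pt")
    labels = tok(target, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**enc, labels=labels).loss  # mean per-token negative log-likelihood
    return -loss.item()

context = "Marie Curie was born in Warsaw in 1867."
template = "<extra_id_0> was born in <extra_id_1>."  # hypothetical "place of birth" template
print(infill_score(context, template, "Marie Curie", "Warsaw"))   # plausible fill
print(infill_score(context, template, "Warsaw", "Marie Curie"))   # expected to score lower
```

Because the relation is expressed only through the template text, an unseen relation just means a new template, which is what gives the reduction its zero-shot character.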
Zero-shot relation triplet extraction (ZeroRTE) aims to extract relation triplets from unstructured text under the zero-shot setting, where the relation sets at the training and testing stages are disjoint. The previous state-of-the-art method handles this challenging task by leveraging pretrained language models to generate additional training samples, which increases the training cost and severely constrains model performance. To address these issues, we propose a novel method named PCRED for ZeroRTE with Potential Candidate Relation selection and Entity boundary Detection. A remarkable characteristic of PCRED is that it does not rely on additional data yet still achieves promising performance. The model adopts a relation-first paradigm, recognizing unseen relations through candidate relation selection, which naturally infuses the semantics of relations into the context. Entities are subsequently extracted based on the context and the semantics of the relations. We evaluate our model on two ZeroRTE datasets. The experimental results show that our method consistently outperforms previous works. Our code will be available at https://anonymous.4open.science/r/PCRED.
The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot performance solely by leveraging a natural-language prompt and a few task demonstrations as input context. Inspired by their findings, we study few-shot learning in a more practical scenario, where we use smaller language models for which fine-tuning is computationally efficient. We present LM-BFF (better few-shot fine-tuning of language models; alternatively, language models' best friends forever), a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples. Our approach includes (1) prompt-based fine-tuning together with a novel pipeline for automating prompt generation, and (2) a refined strategy for dynamically and selectively incorporating demonstrations into each context. Finally, we present a systematic evaluation for analyzing few-shot performance on a range of NLP tasks, including classification and regression. Our experiments demonstrate that our methods combine to dramatically outperform standard fine-tuning procedures in this low-resource setting, achieving up to 30% absolute improvement, and 11% on average across all tasks. Our approach makes minimal assumptions about task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning. Our implementation is publicly available at https://github.com/princeton-nlp/LM-BFF.
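As a concrete illustration of point (2), the sketch below builds a prompt that appends one verbalized demonstration per class to the masked query; the template and label words are illustrative stand-ins for the ones LM-BFF searches for automatically.

```python
# Sketch of prompt construction with demonstrations, in the LM-BFF spirit.
# Template and label words are illustrative; the paper generates them automatically.
TEMPLATE = "{sent} It was {word}."
LABEL_WORDS = {"positive": "great", "negative": "terrible"}

def build_prompt(query: str, demos: list[tuple[str, str]], mask_token: str = "<mask>") -> str:
    """Concatenate the masked query with one filled-in demonstration per class."""
    parts = [TEMPLATE.format(sent=query, word=mask_token)]
    parts += [TEMPLATE.format(sent=s, word=LABEL_WORDS[y]) for s, y in demos]
    return " ".join(parts)

print(build_prompt(
    "A gripping, beautifully shot film.",
    demos=[("An instant classic.", "positive"), ("A dull, lifeless mess.", "negative")],
))
```

A masked LM then scores the label words at the mask position, so the demonstrations steer the prediction without any in-context learning at GPT-3 scale.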
To reduce human annotation effort for relation extraction (RE) tasks, distantly supervised approaches have been proposed, yet they struggle with low performance. In this work, we propose a novel DSRE-NLI framework, which considers both distant supervision from existing knowledge bases and indirect supervision from pretrained language models for other tasks. DSRE-NLI energizes an off-the-shelf natural language inference (NLI) engine with a semi-automatic relation verbalization (SARV) mechanism to provide indirect supervision, and further consolidates the distant annotations to benefit multi-class RE models. The NLI-based indirect supervision acquires only one relation verbalization template from humans as a semantically general template for each relation; the template set is then enriched with high-quality textual patterns automatically mined from the distantly annotated corpus. With two simple and effective data consolidation strategies, the quality of the training data is substantially improved. Extensive experiments show that the proposed framework significantly improves state-of-the-art performance on distantly supervised RE benchmark datasets (by up to 7.73% F1).
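A minimal sketch of the NLI-based indirect supervision, assuming an off-the-shelf MNLI model and a toy template set; the relations and templates here are hypothetical, and DSRE-NLI additionally enriches the templates with mined textual patterns.

```python
# Sketch: score relation verbalization templates as NLI hypotheses.
# The model choice, relations, and templates are illustrative assumptions.
from transformers import pipeline

nli = pipeline("zero-shot-classification", model="roberta-large-mnli")

sentence = "Steve Jobs co-founded Apple in 1976."
e1, e2 = "Steve Jobs", "Apple"
templates = {
    "org:founded_by": "{e2} was founded by {e1}.",
    "per:employee_of": "{e1} is an employee of {e2}.",
}
hypotheses = {rel: t.format(e1=e1, e2=e2) for rel, t in templates.items()}

result = nli(sentence, candidate_labels=list(hypotheses.values()))
print(result["labels"][0], round(result["scores"][0], 3))  # best-entailed hypothesis
```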
We present Pre-trained Machine Reader (PMR), a novel method to retrofit Pre-trained Language Models (PLMs) into Machine Reading Comprehension (MRC) models without acquiring labeled data. PMR resolves the discrepancy between model pre-training and downstream fine-tuning of existing PLMs, and provides a unified solver for tackling various extraction tasks. To achieve this, we construct a large volume of general-purpose, high-quality MRC-style training data with the help of Wikipedia hyperlinks and design a Wiki Anchor Extraction task to guide the MRC-style pre-training process. Although conceptually simple, PMR is particularly effective at solving extraction tasks, including Extractive Question Answering and Named Entity Recognition, where it shows tremendous improvements over previous approaches, especially in low-resource settings. Moreover, viewing the sequence classification task as a special case of extraction in our MRC formulation, PMR is even capable of extracting high-quality rationales to explain the classification process, providing greater explainability for its predictions.
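The MRC formulation can be illustrated with a generic extractive reader: each label is verbalized as a query, and answers are extracted as spans. The queries and the SQuAD-tuned checkpoint below are assumptions for illustration; PMR pre-trains its own reader on Wikipedia-anchor data rather than using a QA model.

```python
# Sketch of extraction tasks cast as MRC: verbalize each label as a query.
# Queries and the reader checkpoint are illustrative; PMR pre-trains its own reader.
from transformers import pipeline

reader = pipeline("question-answering", model="deepset/roberta-base-squad2")
context = "Ada Lovelace worked with Charles Babbage in London."
label_queries = {
    "PER": "Which person is mentioned in the text?",
    "LOC": "Which location is mentioned in the text?",
}
for label, query in label_queries.items():
    ans = reader(question=query, context=context)
    print(label, "->", ans["answer"], round(ans["score"], 3))
```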
Recent progress in natural language understanding (NLU) has been driven, in part, by benchmarks such as GLUE, SuperGLUE, and SQuAD. Indeed, many NLU models now match or exceed "human-level" performance on many tasks in these benchmarks. Most of these benchmarks, however, give models access to relatively large amounts of labeled data for training; the models are thus provided with far more data than humans require to achieve strong performance. This has motivated work focused on improving the few-shot learning performance of NLU models. However, there is a lack of standardized evaluation benchmarks for few-shot NLU, resulting in different experimental settings across papers. To help accelerate this line of work, we introduce CLUES (Constrained Language Understanding Evaluation Standard), a benchmark for evaluating the few-shot learning capabilities of NLU models. We demonstrate that while recent models reach human performance when they have access to large amounts of labeled data, a huge gap in performance remains in the few-shot setting for most tasks. We also show differences between alternative model families and adaptation techniques in the few-shot setting. Finally, we discuss several principles for designing the experimental settings used to evaluate true few-shot learning performance and propose a unified, standardized approach to few-shot learning evaluation. Our goal is to encourage research on NLU models that can generalize to new tasks with a small number of examples. Code and data for CLUES are available at https://github.com/microsoft/clues.
For many tasks in natural language processing, transferring knowledge from one domain to another is crucial, especially when the amount of available data in the target domain is limited. In this work, we propose a novel domain adaptation approach in the context of named entity recognition (NER). We propose a two-step approach consisting of a variable base module and a template module that leverages the knowledge captured in pre-trained language models with the help of simple descriptive patterns. Our approach is simple yet versatile and can be applied in both few-shot and zero-shot settings. Evaluating our lightweight approach across a number of different datasets shows that it can boost the performance of state-of-the-art baselines by a 2-5% F1 score.
We study the problem of few-shot fine-grained entity typing (FET), where only a few annotated entity mentions with contexts are given for each entity type. Recently, prompt-based tuning has demonstrated superior performance in few-shot scenarios by formulating the entity-typing task as a "fill-in-the-blank" problem, which allows effective utilization of the strong language-modeling capability of pre-trained language models (PLMs). Despite the success of current prompt-based tuning approaches, two major challenges remain: (1) the verbalizer in the prompt is either manually designed or constructed from external knowledge bases, without considering the target corpus and label hierarchy information, and (2) current approaches mainly utilize the representation power of PLMs but have not explored their generation power acquired through extensive general-domain pre-training. In this work, we propose a novel framework for few-shot FET consisting of two modules: (1) an entity type label interpretation module that automatically learns to relate type labels to the vocabulary by jointly leveraging the few-shot instances and the label hierarchy, and (2) a type-based contextualized instance generator that generates new instances based on given instances to enlarge the training set for better generalization. On three benchmark datasets, our model outperforms existing methods by significant margins. Code can be found at https://github.com/teapot123/fine-graining-entity-typing.
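For reference, a minimal fill-in-the-blank entity-typing sketch with a masked LM; the prompt wording and label words are hand-picked assumptions here, whereas the paper's first module learns the label-to-vocabulary mapping automatically.

```python
# Sketch of cloze-style entity typing: score type label words at a mask slot.
# Prompt wording and label words are illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-cased").eval()

sentence, mention = "Marie Curie studied in Paris.", "Paris"
prompt = f"{sentence} In this sentence, {mention} is a {tok.mask_token}."
label_words = {"/location/city": "city", "/person": "person"}

enc = tok(prompt, return_tensors="pt")
mask_pos = (enc.input_ids == tok.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = mlm(**enc).logits[0, mask_pos]
scores = {t: logits[tok.convert_tokens_to_ids(w)].item() for t, w in label_words.items()}
print(max(scores, key=scores.get))  # expected: /location/city
```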
Few-shot named entity recognition (NER) is imperative for entity tagging in limited-resource domains and has thus received proper attention in recent years. Existing few-shot methods are mostly evaluated under in-domain settings. In contrast, little is known about how these inherently faithful models perform in cross-domain NER using a few labeled in-domain examples. This paper proposes a two-step rationale-centric data augmentation method to improve the model's generalization ability. Results on several datasets show that our model-agnostic method significantly improves performance on cross-domain NER tasks compared to previous state-of-the-art methods, including counterfactual data augmentation and prompt-tuning methods. Our code is available at https://github.com/lifan-yuan/factmix.
Relation extraction is an important yet challenging task that aims to extract all hidden relational facts from text. With the development of deep language models, relation extraction methods have achieved good performance on various benchmarks. However, we observe two shortcomings of previous methods: first, there is no unified framework that works well under various relation extraction settings; second, effective utilization of external knowledge as background information is absent. In this work, we propose a knowledge-enhanced generative model to mitigate these two issues. Our generative model is a unified framework that sequentially generates relational triplets under various relation extraction settings and explicitly utilizes relevant knowledge from a knowledge graph (KG) to resolve ambiguities. Our model achieves superior performance on multiple benchmarks and settings, including WebNLG, NYT10, and TACRED.
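A small sketch of the sequential triplet-generation idea; the linearization markers are illustrative assumptions, and the knowledge-graph context is shown only as a simple input concatenation.

```python
# Sketch of linearizing triplets as a generation target (markers are illustrative).
def linearize(triplets: list[tuple[str, str, str]]) -> str:
    return " ".join(f"<head> {h} <rel> {r} <tail> {t}" for h, r, t in triplets)

source = "Barack Obama was born in Honolulu, Hawaii."
# Hypothetical KG facts could be appended to the source for disambiguation:
source += " [KG] Honolulu: city in Hawaii."
target = linearize([("Barack Obama", "place of birth", "Honolulu")])
print(source)
print(target)  # a seq2seq model (e.g., T5/BART) is fine-tuned on such pairs
```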
This paper focuses on text data augmentation for few-shot NLP tasks. Existing data augmentation algorithms either use a small training set to generate new synthetic data, leverage task-independent heuristic rules (e.g., synonym replacement), or fine-tune general-purpose pre-trained language models (e.g., GPT-2). Consequently, these methods lack task-specific knowledge and are limited to producing low-quality synthetic data for weak baselines on simple tasks. To combat this issue, we propose the Knowledge Mixture Data Augmentation model (KnowDA): an encoder-decoder LM pre-trained on a mixture of diverse NLP tasks using Knowledge Mixture Training (KoMT). KoMT is a training procedure that reframes input examples from various heterogeneous NLP tasks into a unified text-to-text format and employs objectives of different granularities to learn to generate partial or complete samples. With the help of KoMT, KnowDA can implicitly combine the required task-specific knowledge from the mixture of tasks and quickly grasp the inherent synthesis law of the target task from a few given instances. To the best of our knowledge, ours is the first attempt to scale up the number of tasks in multi-task co-training for data augmentation. Extensive experiments show that (i) KnowDA successfully improves the performance of ALBERT and DeBERTa by a large margin on few-shot benchmarks, outperforming previous state-of-the-art data augmentation baselines, and (ii) KnowDA can also improve model performance on few-shot tasks of a held-out task type not included in KoMT.
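The unified text-to-text reframing at the heart of KoMT can be sketched as below; the field names and serialization scheme are assumptions for illustration, not KoMT's exact format.

```python
# Sketch of reframing heterogeneous task examples into one text-to-text format.
# The serialization scheme is an illustrative assumption, not KoMT's exact one.
def to_text_to_text(task: str, example: dict) -> tuple[str, str]:
    if task == "sentiment":
        return f"task: sentiment | text: {example['text']}", example["label"]
    if task == "ner":
        spans = "; ".join(f"{s} is {t}" for s, t in example["entities"])
        return f"task: ner | text: {example['text']}", spans
    raise ValueError(f"unknown task: {task}")

print(to_text_to_text("sentiment", {"text": "Great movie!", "label": "positive"}))
print(to_text_to_text("ner", {"text": "Apple hired Tim.",
                              "entities": [("Apple", "ORG"), ("Tim", "PER")]}))
```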
Powerful generative models have led to recent progress in question generation (QG). However, it is difficult to measure advances in QG research since there are no standardized resources that allow a uniform comparison among approaches. In this paper, we introduce QG-Bench, a multilingual and multi-domain benchmark for QG that unifies existing question answering datasets by converting them to a standard QG setting. It includes general-purpose datasets such as SQuAD for English, datasets from ten domains and two styles, as well as datasets in eight different languages. Using QG-Bench as a reference, we perform an extensive analysis of the capabilities of language models for the task. First, we propose robust QG baselines based on fine-tuning generative language models. Then, we complement automatic evaluation based on standard metrics with an extensive manual evaluation, which in turn sheds light on the difficulty of evaluating QG models. Finally, we analyse both the domain adaptability of these models and the effectiveness of multilingual models in languages other than English. QG-Bench is released along with the fine-tuned models presented in the paper (https://github.com/asahi417/lm-question-generation), which are also available through a demo (https://autoqg.net/).
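Converting a QA instance into a QG instance amounts to swapping the roles of question and answer; the highlight-token serialization below is an illustrative assumption (consult the repository for the exact format used).

```python
# Sketch of converting a QA example to the QG setting: the answer is marked in
# the passage and the question becomes the target. Serialization is illustrative.
def qa_to_qg(example: dict) -> dict:
    ctx = example["context"].replace(example["answer"],
                                     f"<hl> {example['answer']} <hl>", 1)
    return {"input": f"generate question: {ctx}", "output": example["question"]}

squad_like = {
    "context": "Beyonce was born in Houston, Texas.",
    "question": "Where was Beyonce born?",
    "answer": "Houston, Texas",
}
print(qa_to_qg(squad_like))
```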
Large transformer-based pre-trained language models have achieved impressive performance on a variety of knowledge-intensive tasks and can capture factual knowledge in their parameters. We argue that storing large amounts of knowledge in the model parameters is sub-optimal given the ever-growing amounts of knowledge and the resource requirements. We posit that a more efficient alternative is to provide the model with explicit access to contextually relevant structured knowledge and to train it to use that knowledge. We present LM-CORE, a general framework to achieve this, which allows decoupling language model training from the external knowledge source and allows the latter to be updated without affecting the already-trained model. Experimental results show that LM-CORE, having access to external knowledge, achieves significant and robust outperformance over state-of-the-art knowledge-enhanced language models on knowledge-probing tasks, can effectively handle knowledge updates, and performs well on two downstream tasks. We also present a thorough error analysis highlighting the successes and failures of LM-CORE.
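The decoupling idea can be sketched as retrieval over an external store that is serialized into the model's input; the toy knowledge base and string-matching rule below are assumptions, not LM-CORE's actual retriever.

```python
# Sketch: fetch contextually relevant structured knowledge and prepend it to
# the input. The toy KB and string matching are illustrative assumptions.
KB = [("Paris", "capital_of", "France"), ("Seine", "flows_through", "Paris")]

def knowledge_context(question: str) -> str:
    facts = [f"{h} {r.replace('_', ' ')} {t}."
             for h, r, t in KB if h in question or t in question]
    return " ".join(facts)

question = "Which river flows through Paris?"
model_input = f"{knowledge_context(question)} Question: {question}"
print(model_input)  # the KB can be updated without retraining the model
```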
The strong few-shot in-context learning capability of large pre-trained language models (PLMs) such as GPT-3 is highly appealing for application domains such as biomedicine, which feature high and diverse demands of language technologies but also high data annotation costs. In this paper, we present the first systematic and comprehensive study to compare the few-shot performance of GPT-3 in-context learning with fine-tuning smaller (i.e., BERT-sized) PLMs on two highly representative biomedical information extraction tasks, named entity recognition and relation extraction. We follow the true few-shot setting to avoid overestimating models' few-shot performance by model selection over a large validation set. We also optimize GPT-3's performance with known techniques such as contextual calibration and dynamic in-context example retrieval. However, our results show that GPT-3 still significantly underperforms compared to simply fine-tuning a smaller PLM. In addition, GPT-3 in-context learning also yields smaller gains in accuracy when more training data becomes available. Our in-depth analyses further reveal issues of the in-context learning setting that may be detrimental to information extraction tasks in general. Given the high cost of experimenting with GPT-3, we hope our study provides guidance for biomedical researchers and practitioners towards more promising directions such as fine-tuning small PLMs.
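Of the techniques mentioned, dynamic in-context example retrieval is easy to sketch: pick the labeled training examples nearest to the test input in embedding space and prepend them to the prompt. The encoder checkpoint, the toy examples, and k are illustrative assumptions.

```python
# Sketch of dynamic in-context example retrieval by embedding similarity.
# The sentence encoder, examples, and k are illustrative choices.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
train_examples = [
    "Aspirin inhibits COX-1.  =>  (Aspirin, inhibitor, COX-1)",
    "TP53 is mutated in many cancers.  =>  no relation",
]
train_emb = encoder.encode(train_examples, normalize_embeddings=True)

def retrieve_demos(query: str, k: int = 1) -> list[str]:
    q = encoder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(-(train_emb @ q))[:k]  # cosine similarity on unit vectors
    return [train_examples[i] for i in top]

print(retrieve_demos("Ibuprofen inhibits COX-2."))  # demos prepended to the prompt
```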
Natural Language Processing (NLP) has been revolutionized by the use of Pre-trained Language Models (PLMs) such as BERT. Despite setting new records in nearly every NLP task, PLMs still face a number of challenges, including poor interpretability, weak reasoning capability, and the need for large amounts of expensive annotated data when applied to downstream tasks. By integrating external knowledge into PLMs, Knowledge-Enhanced Pre-trained Language Models (KEPLMs) have the potential to overcome these limitations. In this paper, we examine KEPLMs systematically through a series of studies. Specifically, we outline the common types and different formats of knowledge to be integrated into KEPLMs, detail the existing methods for building and evaluating KEPLMs, present the applications of KEPLMs in downstream tasks, and discuss future research directions. Researchers will benefit from this survey by gaining a quick and comprehensive overview of the latest developments in this field.
This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning, i.e., finetuning language models on a collection of tasks described via instructions, substantially improves zero-shot performance on unseen tasks. We take a 137B-parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural-language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of the 25 tasks we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenBookQA, and StoryCloze. Ablation studies reveal that the number of tasks and model scale are key components of the success of instruction tuning.
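Instruction tuning amounts to verbalizing each dataset with several natural-language instruction templates; the NLI templates below are illustrative stand-ins for FLAN's own.

```python
# Sketch of verbalizing one NLI example with instruction templates (illustrative).
TEMPLATES = [
    "Premise: {premise}\nHypothesis: {hypothesis}\n"
    "Does the premise entail the hypothesis? OPTIONS: yes, no",
    "{premise}\nBased on the paragraph above, can we conclude that "
    "\"{hypothesis}\"? OPTIONS: yes, no",
]
example = {"premise": "A dog is running in the park.",
           "hypothesis": "An animal is outdoors.", "answer": "yes"}
pairs = [(t.format(**example), example["answer"]) for t in TEMPLATES]
for src, tgt in pairs:
    print(src, "->", tgt)
```

Finetuning on many such verbalized tasks is what lets the model follow an unseen task's instructions at zero-shot time.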
Prompt learning has become a new paradigm in modern natural language processing, directly adapting pre-trained language models (PLMs) to cloze-style prediction, autoregressive modeling, or sequence-to-sequence generation, resulting in promising performance on various tasks. However, no standard implementation framework of prompt learning has yet been proposed, and most existing prompt-learning codebases, often unregulated, only provide limited implementations for specific scenarios. Since many details, such as templating strategy, initializing strategy, and verbalizing strategy, need to be considered in prompt learning, practitioners face impediments to quickly adapting the desired prompt-learning methods to their applications. In this paper, we present OpenPrompt, a unified, easy-to-use toolkit for conducting prompt learning over PLMs. OpenPrompt is a research-friendly framework equipped with efficiency, modularity, and extensibility, and its combinability allows the freedom to combine different PLMs, task formats, and prompting modules in a unified paradigm. Users can expediently deploy prompt-learning frameworks and evaluate their generalization on different NLP tasks without constraints. OpenPrompt is publicly released at https://github.com/thunlp/openprompt.
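For a flavor of the modular design, here is a classification setup roughly following the project's README; the exact API may have evolved, so treat this as a sketch rather than a definitive usage example.

```python
# Sketch of OpenPrompt's unified pieces: PLM + template + verbalizer.
# Based on the project's README at the time; consult the repo for the current API.
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer
from openprompt import PromptForClassification

plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")
template = ManualTemplate(
    text='{"placeholder":"text_a"} It was {"mask"}.', tokenizer=tokenizer)
verbalizer = ManualVerbalizer(
    classes=["negative", "positive"],
    label_words={"negative": ["bad"], "positive": ["good", "wonderful"]},
    tokenizer=tokenizer)
model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer)
```

Swapping any one of the three components (PLM, template, verbalizer) while keeping the others fixed is exactly the combinability the toolkit advertises.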
Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key to such results, as it is for tasks such as question answering and fact checking, massive parameter counts seem to be needed to store that knowledge. Retrieval-augmented models are known to excel at knowledge-intensive tasks without requiring as many parameters, but it is unclear whether they work in few-shot settings. In this work, we present Atlas, a carefully designed and pre-trained retrieval-augmented language model able to learn knowledge-intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT, and Natural Questions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B-parameter model despite having 50x fewer parameters.
How can we extend a pre-trained model to many language understanding tasks without labeled or additional unlabeled data? Pre-trained language models (PLMs) have been effective for a wide range of NLP tasks. However, existing approaches either require fine-tuning on downstream labeled datasets or manually constructing proper prompts. In this paper, we propose nonparametric prompting PLM (NPPrompt) for fully zero-shot language understanding. Unlike previous methods, NPPrompt uses only pre-trained language models and does not require any labeled data or additional raw corpus for further fine-tuning, nor does it rely on humans to construct a comprehensive set of prompt label words. We evaluate NPPrompt against previous major few-shot and zero-shot learning methods on diverse NLP tasks, including text classification, text entailment, similar text retrieval, and paraphrasing. Experimental results demonstrate that NPPrompt outperforms the previous best fully zero-shot method by large margins, with absolute gains of 12.8% in accuracy on text classification and 18.9% on the GLUE benchmark.
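The nonparametric step can be sketched as nearest-neighbor search around a label name in the PLM's input-embedding space, which yields label words without any human-built verbalizer; k and the sub-token aggregation below are illustrative assumptions.

```python
# Sketch: collect label words for "sports" by nearest neighbors in the PLM's
# input-embedding space (no fine-tuning, no manual verbalizer). k is illustrative.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base")
emb = mlm.get_input_embeddings().weight  # vocab_size x hidden_dim

def neighbor_label_words(label_name: str, k: int = 10) -> list[str]:
    ids = tok(f" {label_name}", add_special_tokens=False).input_ids
    anchor = emb[ids].mean(dim=0)  # average sub-token embeddings
    sims = torch.nn.functional.cosine_similarity(emb, anchor.unsqueeze(0), dim=-1)
    return tok.convert_ids_to_tokens(sims.topk(k).indices.tolist())

print(neighbor_label_words("sports"))
```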
Two key obstacles in biomedical relation extraction (RE) are the scarcity of annotations and the prevalence of instances without explicitly pre-defined labels due to low annotation coverage. Existing approaches, which treat biomedical RE as a multi-class classification task, often generalize poorly in low-resource settings and lack the ability to make selective predictions on unknown cases, instead guessing from seen relations, which hinders their applicability. We present NBR, which converts biomedical RE into a natural language inference (NLI) formulation through indirect supervision. By converting relations to natural language hypotheses, NBR can exploit semantic cues to alleviate annotation scarcity. By incorporating a ranking-based loss that implicitly calibrates abstinent instances, NBR learns a clearer decision boundary and is instructed to abstain on uncertain instances. Extensive experiments on three widely used biomedical RE benchmarks, namely ChemProt, DDI, and GAD, verify the effectiveness of NBR in both full-set and low-resource regimes. Our analysis demonstrates that indirect supervision benefits biomedical RE even when a domain gap exists, and that combining NLI knowledge with biomedical knowledge leads to the best performance gains.
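The abstention behavior induced by the ranking loss can be sketched at inference time as a margin test against a "no relation" hypothesis; the scores, relation names, and margin below are illustrative assumptions.

```python
# Sketch of selective prediction: abstain unless the best relation hypothesis
# beats the "no relation" hypothesis by a margin. Values are illustrative.
from typing import Optional

def predict_or_abstain(entail_scores: dict[str, float],
                       none_score: float, margin: float = 0.1) -> Optional[str]:
    best_rel, best = max(entail_scores.items(), key=lambda kv: kv[1])
    return best_rel if best - none_score >= margin else None

scores = {"CPR:3 (activation)": 0.62, "CPR:4 (inhibition)": 0.55}  # ChemProt-style labels
print(predict_or_abstain(scores, none_score=0.58))  # prints None: margin too small
```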