Simile recognition involves two subtasks: simile sentence classification, which discriminates whether a sentence contains a simile, and simile component extraction, which locates the corresponding objects (i.e., tenors and vehicles). Recent work ignores features other than surface strings. In this paper, we explore expressive features for this task to achieve more effective data utilization. In particular, we study two types of features: 1) input-side features that include POS tags, dependency trees, and word definitions, and 2) decoding features that capture the interdependence among various decoding decisions. We further construct a model named HGSR, which merges the input-side features as a heterogeneous graph and leverages the decoding features via distillation. Experiments show that HGSR significantly outperforms the current state-of-the-art systems and carefully designed baselines, verifying the effectiveness of the introduced features. Our code is available at https://github.com/DeepLearnXMU/HGSR.
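To make the input-side features concrete, here is a minimal sketch of how POS tags, dependency arcs, and word definitions might be merged into one heterogeneous graph; the node and edge types and the toy simile sentence are illustrative assumptions, not HGSR's actual schema.

```python
# Hypothetical heterogeneous graph over tokens, POS tags, dependency arcs,
# and word definitions, loosely in the spirit of HGSR's input-side features.
from dataclasses import dataclass, field

@dataclass
class HeteroGraph:
    nodes: list = field(default_factory=list)   # (node_id, node_type, text)
    edges: list = field(default_factory=list)   # (src_id, dst_id, edge_type)

    def add_node(self, ntype, text):
        self.nodes.append((len(self.nodes), ntype, text))
        return len(self.nodes) - 1

def build_graph(tokens, pos_tags, dep_arcs, definitions):
    """tokens: [str]; pos_tags: [str]; dep_arcs: [(head_idx, dep_idx, label)];
    definitions: {token: gloss string}."""
    g = HeteroGraph()
    tok_ids = [g.add_node("token", t) for t in tokens]
    for i, tag in enumerate(pos_tags):           # token -> POS node
        g.edges.append((tok_ids[i], g.add_node("pos", tag), "has_pos"))
    for head, dep, label in dep_arcs:            # syntactic arcs between tokens
        g.edges.append((tok_ids[head], tok_ids[dep], f"dep:{label}"))
    for tok, gloss in definitions.items():       # token -> definition node
        d = g.add_node("definition", gloss)
        g.edges.append((tok_ids[tokens.index(tok)], d, "defined_as"))
    return g

g = build_graph(["her", "smile", "is", "like", "sunshine"],
                ["PRP$", "NN", "VBZ", "IN", "NN"],
                [(1, 0, "poss"), (2, 1, "nsubj"), (2, 4, "prep_like")],
                {"sunshine": "the light of the sun"})
```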
Open Information Extraction (OpenIE) aims to extract relational tuples from open-domain sentences. Traditional rule-based or statistical models have been developed based on the syntactic structures of sentences, identified by syntactic parsers. However, previous neural OpenIE models under-explore this useful syntactic information. In this paper, we model both constituency and dependency trees as word-level graphs, enabling neural OpenIE to learn from syntactic structures. To better fuse heterogeneous information from both graphs, we adopt multi-view learning to capture multiple relationships from them. Finally, the fine-tuned constituency and dependency representations are aggregated with sentential semantic representations for tuple generation. Experiments show that both the constituency and dependency information, as well as the multi-view learning, are effective.
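As a rough illustration of the idea, the sketch below converts word-level arcs into adjacency matrices for two syntactic views and fuses them with a learned gate; the gating choice and the single propagation step are assumptions, not the paper's exact multi-view design.

```python
# Two syntactic views as word-level graphs, fused with a sigmoid gate.
import torch
import torch.nn as nn

def arcs_to_adj(n, arcs):
    """arcs: [(head, dep)] word indices -> symmetric adjacency with self-loops."""
    a = torch.eye(n)
    for h, d in arcs:
        a[h, d] = a[d, h] = 1.0
    return a

class MultiViewFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h, adj_const, adj_dep):
        # one propagation step per view (mean over graph neighbours)
        v1 = adj_const @ h / adj_const.sum(-1, keepdim=True)
        v2 = adj_dep @ h / adj_dep.sum(-1, keepdim=True)
        g = torch.sigmoid(self.gate(torch.cat([v1, v2], dim=-1)))
        return g * v1 + (1 - g) * v2   # gated combination of the two views

n, dim = 5, 16
h = torch.randn(n, dim)
fused = MultiViewFusion(dim)(h, arcs_to_adj(n, [(1, 0), (1, 2)]),
                             arcs_to_adj(n, [(2, 1), (2, 4)]))
```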
Implicit discourse relation recognition (IDRR) is a challenging but crucial task in discourse analysis. Most existing methods train multiple models to predict multi-level labels independently, while ignoring the dependence between hierarchically structured labels. In this paper, we consider multi-level IDRR as a conditional label sequence generation task and propose a Label Dependence-aware Sequence Generation Model (LDSGM) for it. Specifically, we first design a label-attentive encoder to learn the global representation of an input instance and its level-specific contexts, where label dependence is integrated to obtain better label embeddings. Then, we employ a label sequence decoder to output the predicted labels in a top-down manner, where the predicted higher-level labels are directly used to guide the label prediction at the current level. We further develop a mutual-learning enhanced training method to exploit the label dependence in the bottom-up direction, which is captured by an auxiliary decoder introduced during training. Experimental results on the PDTB dataset show that our model achieves state-of-the-art performance on multi-level IDRR. We will release our code at https://github.com/nlpersecjtu/ldsgm.
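A minimal sketch of the top-down decoding idea: the embedding of the label predicted at a higher level conditions the prediction at the next level. The GRU cell, the sizes, and the greedy decoding are illustrative simplifications, not LDSGM's exact design.

```python
# Top-down label sequence decoder: each level's prediction feeds the next.
import torch
import torch.nn as nn

class TopDownLabelDecoder(nn.Module):
    def __init__(self, ctx_dim, n_labels_per_level, emb_dim=32):
        super().__init__()
        self.label_emb = nn.ModuleList(nn.Embedding(n, emb_dim)
                                       for n in n_labels_per_level)
        self.cell = nn.GRUCell(emb_dim, ctx_dim)
        self.heads = nn.ModuleList(nn.Linear(ctx_dim, n)
                                   for n in n_labels_per_level)

    def forward(self, ctx):
        """ctx: (batch, ctx_dim) global instance representation."""
        h = ctx
        inp = torch.zeros(ctx.size(0), self.cell.input_size)
        preds = []
        for level, head in enumerate(self.heads):
            h = self.cell(inp, h)              # state carries higher-level info
            pred = head(h).argmax(-1)          # greedy label at this level
            preds.append(pred)
            inp = self.label_emb[level](pred)  # feed the label down one level
        return preds                           # one tensor of labels per level

dec = TopDownLabelDecoder(ctx_dim=64, n_labels_per_level=[4, 16, 128])
print(dec(torch.randn(2, 64)))
```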
Transformer-based models have pushed the state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue, and approaches to compression. We then outline directions for future research.
Emotion-cause pair extraction (ECPE), as one of the derived subtasks of emotion cause analysis (ECA), shares rich inter-related features with emotion extraction (EE) and cause extraction (CE). Therefore, EE and CE are frequently utilized as auxiliary tasks for better feature learning, modeled via multi-task learning (MTL) frameworks by prior work to achieve state-of-the-art ECPE results. However, existing MTL-based methods either fail to simultaneously model the task-specific features and the interactive feature in between, or suffer from inconsistency of label prediction. In this work, we consider addressing the above challenges to improve ECPE by performing two alignment mechanisms with a novel A^2Net model. We first propose a feature-task alignment to explicitly model the emotion-specific and cause-specific features and the shared interactive feature. Besides, an inter-task alignment is implemented, in which the label distance between ECPE and the combination of EE and CE is learned to be narrowed for better label consistency. Evaluations on benchmarks show that our method outperforms the current best-performing systems on all ECA subtasks. Further analysis proves the importance of our proposed alignment mechanisms for the task.
We propose a transition-based approach that, by training a single model, can efficiently parse any input sentence with both constituent and dependency trees, supporting both continuous/projective and discontinuous/non-projective syntactic structures. To that end, we develop a Pointer Network architecture with two separate task-specific decoders and a common encoder, and follow a multitask learning strategy to jointly train them. The resulting quadratic system not only becomes the first parser that can jointly produce both unrestricted constituent and dependency trees from a single model, but also proves that both syntactic formalisms can benefit from each other during training, achieving state-of-the-art accuracies on several widely-used benchmarks such as the continuous English and Chinese Penn Treebanks, as well as the discontinuous German NEGRA and TIGER datasets.
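The skeleton below shows the shared-encoder / two-decoder multitask pattern with a summed loss; all modules are toy stand-ins (biaffine-style pointer scores over positions), not the actual Pointer Network parser.

```python
# Shared encoder, two task-specific heads, one joint multitask loss.
import torch
import torch.nn as nn

class JointParser(nn.Module):
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Embedding(vocab, dim),
                                     nn.LSTM(dim, dim, batch_first=True))
        self.const_head = nn.Linear(dim, dim)  # stand-in for a pointer decoder
        self.dep_head = nn.Linear(dim, dim)    # stand-in for a pointer decoder

    def forward(self, words):
        h, _ = self.encoder(words)             # shared contextual encoder
        # pointer-style scores: position i points at position j
        s_const = self.const_head(h) @ h.transpose(1, 2)
        s_dep = self.dep_head(h) @ h.transpose(1, 2)
        return s_const, s_dep

model = JointParser(vocab=1000)
words = torch.randint(0, 1000, (2, 7))
s_const, s_dep = model(words)
gold_const = torch.randint(0, 7, (2, 7))       # toy pointer targets
gold_dep = torch.randint(0, 7, (2, 7))
loss = nn.functional.cross_entropy(s_const.reshape(-1, 7), gold_const.reshape(-1)) \
     + nn.functional.cross_entropy(s_dep.reshape(-1, 7), gold_dep.reshape(-1))
loss.backward()                                # one backward pass trains both tasks
```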
We target the task of cross-lingual machine reading comprehension (MRC) in the direct zero-shot setting by incorporating syntactic features from Universal Dependencies (UD), and the key features we use are the syntactic relations within each sentence. While previous work has demonstrated effective syntax-guided MRC models, we propose to adopt inter-sentence syntactic relations, in addition to the basic intra-sentence relations, to further utilize the syntactic dependencies in the multi-sentence input of the MRC task. In our approach, we build the Inter-Sentence Dependency Graph (ISDG), connecting dependency trees to form global syntactic relations across sentences. We then propose the ISDG encoder that encodes the global dependency graph, addressing the inter-sentence relations explicitly via both one-hop and multi-hop dependency paths. Experiments on three multilingual MRC datasets (XQuAD, MLQA, TyDi QA-GoldP) show that our encoder, trained only on English, is able to improve zero-shot performance on all 14 test sets covering 8 languages, with up to 3.8 F1 / 5.2 EM improvement on average and 5.2 F1 / 11.2 EM on certain languages. Further analysis shows the improvement can be attributed to the attention on cross-linguistically consistent syntactic paths.
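As a toy illustration, the sketch below stitches per-sentence dependency trees into one global graph over a shared token index; linking adjacent sentence roots is an assumed connection strategy for the sketch, not necessarily the paper's.

```python
# Connect per-sentence dependency trees into one inter-sentence graph.
def build_isdg(sentences):
    """sentences: list of (tokens, arcs) with arcs = [(head, dep, label)];
    head == -1 marks the root. Returns global edges over a shared index."""
    edges, roots, offset = [], [], 0
    for tokens, arcs in sentences:
        for head, dep, label in arcs:
            if head == -1:
                roots.append(offset + dep)        # remember the sentence root
            else:
                edges.append((offset + head, offset + dep, label))
        offset += len(tokens)
    for r1, r2 in zip(roots, roots[1:]):          # bridge neighbouring roots
        edges.append((r1, r2, "next_sent"))
    return edges

edges = build_isdg([(["He", "left"], [(-1, 1, "root"), (1, 0, "nsubj")]),
                    (["She", "stayed"], [(-1, 1, "root"), (1, 0, "nsubj")])])
print(edges)   # intra-sentence arcs plus a root-to-root bridge
```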
The aspect-based sentiment analysis (ABSA) task consists of three typical subtasks: aspect term extraction, opinion term extraction, and sentiment polarity classification. These three subtasks are usually performed jointly to save resources and reduce error propagation in the pipeline. However, most existing joint models only focus on the benefits of encoder sharing between subtasks but ignore the differences between them. Therefore, we propose a joint ABSA model which not only enjoys the benefits of encoder sharing but also focuses on the differences to improve the effectiveness of the model. In detail, we introduce a dual-encoder design, in which a pair encoder especially focuses on candidate aspect-opinion pair classification, and the original encoder keeps attention on sequence labeling. Empirical results show that our proposed model is robust and significantly outperforms the previous state-of-the-art on benchmark datasets.
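A condensed sketch of the dual-encoder pattern: one encoder feeds candidate aspect-opinion pair classification while the other feeds token-level sequence labeling. Both encoders here are toy stand-ins, and the pair representation is a simple concatenation assumed for illustration.

```python
# Dual encoders: a sequence-labeling view and a pair-classification view.
import torch
import torch.nn as nn

class DualEncoderABSA(nn.Module):
    def __init__(self, vocab, dim=64, n_tags=5, n_polarities=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.seq_encoder = nn.GRU(dim, dim, batch_first=True)   # labeling view
        self.pair_encoder = nn.GRU(dim, dim, batch_first=True)  # pair view
        self.tagger = nn.Linear(dim, n_tags)
        self.pair_cls = nn.Linear(2 * dim, n_polarities)

    def forward(self, tokens, pair):
        x = self.emb(tokens)
        h_seq, _ = self.seq_encoder(x)
        h_pair, _ = self.pair_encoder(x)
        tags = self.tagger(h_seq)                         # (B, T, n_tags)
        a, o = pair                                        # aspect/opinion indices
        pair_repr = torch.cat([h_pair[:, a], h_pair[:, o]], dim=-1)
        return tags, self.pair_cls(pair_repr)             # (B, n_polarities)

m = DualEncoderABSA(vocab=1000)
tags, polarity = m(torch.randint(0, 1000, (2, 8)), pair=(1, 5))
```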
Mining causality from text is a complex and crucial natural language understanding task corresponding to human cognition. Existing studies on its solution can be divided into two main categories: feature engineering-based and neural model-based methods. In this paper, we find that the former has incomplete coverage and inherent errors but provides prior knowledge, while the latter leverages contextual information but its causal inference is insufficient. To handle these limitations, we propose a novel causality detection model named MCDN, which explicitly models causal reasoning and, moreover, exploits the strengths of both methods. Specifically, we adopt multi-head self-attention to acquire semantic features at the word level, and use the SCRN to infer causality at the segment level. To the best of our knowledge, this is the first time the Relation Network has been applied to the causality task. The experimental results show that: 1) the proposed approach achieves prominent performance on causality detection; 2) further analysis manifests the effectiveness and robustness of MCDN.
Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks. However, the existing pre-trained language models rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better language understanding. We argue that informative entities in KGs can enhance language representation with external knowledge. In this paper, we utilize both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE), which can take full advantage of lexical, syntactic, and knowledge information simultaneously. The experimental results demonstrate that ERNIE achieves significant improvements on various knowledge-driven tasks, while remaining comparable with the state-of-the-art model BERT on other common NLP tasks. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/ERNIE.
Grammatical error correction (GEC) is the task of detecting and correcting grammatical errors in sentences. Recently, neural machine translation systems have become a popular approach for this task. However, these methods lack the use of syntactic knowledge, which plays an important role in correcting grammatical errors. In this work, we propose a syntax-guided GEC model (SG-GEC) which adopts a graph attention mechanism to utilize the syntactic knowledge of dependency trees. Considering that the dependency trees of grammatically incorrect source sentences may provide incorrect syntactic knowledge, we propose a dependency tree correction task to deal with it. Combined with data augmentation methods, our model achieves strong performance without using any large pre-trained model. We evaluate our model on public benchmarks of the GEC task and achieve competitive results.
Multimodal named entity recognition (MNER) and multimodal relation extraction (MRE) are two fundamental subtasks in the multimodal knowledge graph construction task. However, existing methods usually handle the two tasks independently, ignoring the bidirectional interaction between them. This paper is the first to propose jointly performing MNER and MRE as a joint multimodal entity-relation extraction task (JMERE). Besides, current MNER and MRE models only consider aligning the visual objects with textual entities in visual and textual graphs, but ignore the entity-entity relationships and object-object relationships. To address these challenges, we propose an edge-enhanced graph alignment network with word-pair relation tagging (EEGA) for the JMERE task. Specifically, we first design a word-pair relation tagging scheme to exploit the bidirectional interaction between MNER and MRE and avoid error propagation. Then, we propose an edge-enhanced graph alignment network to enhance the JMERE task by aligning nodes and edges across the two graphs. Compared with previous methods, the proposed method can leverage edge information to assist the alignment between objects and entities and find the correlations between entity-entity relationships and object-object relationships. Experiments are conducted to show the effectiveness of our model.
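To illustrate word-pair relation tagging, the toy function below places entity spans and relations into a single T x T tag table so that both can be decoded from the same structure; the tag inventory and head-word convention are invented for illustration.

```python
# Entities and relations encoded jointly in one word-pair tag table.
def tag_table(tokens, entities, relations):
    """entities: [(start, end, type)]; relations: [(head_i, head_j, rel)]."""
    T = len(tokens)
    table = [["O"] * T for _ in range(T)]
    for s, e, etype in entities:
        table[s][e] = f"ENT-{etype}"        # span marked at cell (start, end)
    for i, j, rel in relations:
        table[i][j] = f"REL-{rel}"          # relation at the head-word pair
    return table

t = tag_table(["Musk", "founded", "SpaceX"],
              entities=[(0, 0, "PER"), (2, 2, "ORG")],
              relations=[(0, 2, "founder_of")])
for row in t:
    print(row)
```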
In this paper, we attempt to build a connection between the two schools by introducing syntactic inductive biases to deep learning models. We propose two families of inductive biases, one for constituency structure and another for dependency structure. The constituency inductive bias encourages deep learning models to use different units (or neurons) to separately process long-term and short-term information. This separation provides a way for deep learning models to build latent hierarchical representations from sequential inputs, i.e., a higher-level representation is composed of, and can be decomposed into, a series of lower-level representations. For example, without knowing the ground-truth structure, our proposed model learns to process logical expressions by composing the representations of variables and operators according to their syntactic structure. On the other hand, the dependency inductive bias encourages models to find the latent relations between entities in the input sequence. For natural language, the latent relations are usually modeled as a directed dependency graph, where a word has exactly one parent node and zero or several child nodes. After applying this constraint to a Transformer-like model, we find that the model is capable of inducing directed graphs that are close to human expert annotations, and it also outperforms the standard Transformer model on different tasks. We believe that these experimental results demonstrate an interesting alternative for the future development of deep learning models.
As an important fine-grained sentiment analysis problem, aspect-based sentiment analysis (ABSA), aiming to analyze and understand people's opinions at the aspect level, has been attracting considerable interest in the last decade. To handle ABSA in different scenarios, various tasks have been introduced for analyzing different sentiment elements and their relations, including the aspect term, aspect category, opinion term, and sentiment polarity. Unlike early ABSA works focusing on a single sentiment element, many compound ABSA tasks involving multiple elements have been studied in recent years to capture more complete aspect-level sentiment information. However, a systematic review of various ABSA tasks and their corresponding solutions is still lacking, which we aim to fill in this survey. More specifically, we provide a new taxonomy for ABSA which organizes existing studies along the axes of concerned sentiment elements, with an emphasis on recent advances in compound ABSA tasks. From the perspective of solutions, we summarize the utilization of pre-trained language models for ABSA, which improved the performance of ABSA to a new stage. Besides, techniques for building more practical ABSA systems in cross-domain/lingual scenarios are discussed. Finally, we review some emerging topics and discuss open challenges, outlining potential future directions of ABSA.
In constituency parsing, span-based decoding is an important direction. However, for Chinese sentences, because of their linguistic characteristics, it is necessary to utilize other models to perform word segmentation first, which introduces a series of uncertainties and generally leads to errors in the computation of the constituency tree afterward. This work proposes a method for joint Chinese word segmentation and span-based constituency parsing by adding extra labels to individual Chinese characters on the parse trees. Experiments show that the proposed algorithm outperforms recent models for joint segmentation and constituency parsing on CTB 5.1.
Lexicon information and pre-trained models such as BERT have been combined to explore Chinese sequence labeling tasks due to their respective strengths. However, existing methods fuse lexicon features only via a shallow and randomly initialized sequence layer and do not integrate them into the bottom layers of BERT. In this paper, we propose Lexicon Enhanced BERT (LEBERT) for Chinese sequence labeling, which integrates external lexicon knowledge into the BERT layers directly through a Lexicon Adapter layer. Compared with existing methods, our model facilitates deep lexicon knowledge fusion at the lower layers of BERT. Experiments on ten Chinese datasets across three tasks, including named entity recognition, word segmentation, and part-of-speech tagging, show that LEBERT achieves state-of-the-art results.
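A rough sketch of what a lexicon adapter can look like: for each character, attend over the embeddings of lexicon words matched at that position and add the attended result back into the hidden state between encoder layers. The dimensions and the additive attention form are simplified relative to LEBERT.

```python
# Lexicon adapter: inject matched-word embeddings into character hiddens.
import torch
import torch.nn as nn

class LexiconAdapter(nn.Module):
    def __init__(self, hid, word_dim):
        super().__init__()
        self.proj = nn.Linear(word_dim, hid)   # align word embs to char space
        self.attn = nn.Linear(hid, 1)

    def forward(self, h_char, word_embs):
        """h_char: (B, T, hid); word_embs: (B, T, K, word_dim), i.e. up to K
        matched lexicon words per character position (zero-padded)."""
        w = self.proj(word_embs)                # (B, T, K, hid)
        scores = self.attn(torch.tanh(w + h_char.unsqueeze(2)))
        alpha = scores.softmax(dim=2)           # weight the K word candidates
        return h_char + (alpha * w).sum(dim=2)  # inject lexicon knowledge

adapter = LexiconAdapter(hid=64, word_dim=50)
out = adapter(torch.randn(2, 10, 64), torch.randn(2, 10, 3, 50))
```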
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task. To better comprehend long and complicated sentences and obtain accurate aspect-specific information, linguistic and commonsense knowledge are generally required for this task. However, most current methods employ complicated and inefficient approaches to incorporate external knowledge, e.g., directly searching the graph nodes. Additionally, the complementarity between external knowledge and linguistic information has not been thoroughly studied. To this end, we propose a knowledge graph augmented network (KGAN), which aims to effectively incorporate external knowledge with explicit syntactic and contextual information. In particular, KGAN captures sentiment feature representations from multiple different perspectives, i.e., context-, syntax-, and knowledge-based. First, KGAN learns the contextual and syntactic representations in parallel to fully extract the semantic features. Then, KGAN integrates the knowledge graphs into the embedding space, based on which the aspect-specific knowledge representations are further obtained via an attention mechanism. Finally, we propose a hierarchical fusion module to complement these multi-view representations in a local-to-global manner. Extensive experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN. Notably, with the help of the pre-trained model RoBERTa, KGAN achieves a new record of state-of-the-art performance.
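As a guess at the overall shape of local-to-global fusion, the sketch below first fuses neighbouring views pairwise and then combines everything globally; KGAN's actual fusion operators may differ from this assumed form.

```python
# Local-to-global fusion of context, syntax, and knowledge views.
import torch
import torch.nn as nn

class HierarchicalFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.local = nn.Linear(2 * dim, dim)   # pairwise (local) fusion
        self.glob = nn.Linear(3 * dim, dim)    # global fusion over all views

    def forward(self, ctx, syn, kno):
        cs = torch.relu(self.local(torch.cat([ctx, syn], dim=-1)))
        sk = torch.relu(self.local(torch.cat([syn, kno], dim=-1)))
        return self.glob(torch.cat([cs, sk, ctx + syn + kno], dim=-1))

f = HierarchicalFusion(dim=64)
out = f(torch.randn(2, 64), torch.randn(2, 64), torch.randn(2, 64))
```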
Transformer-based pre-trained models have achieved great progress in recent years and become one of the most important backbones in natural language processing. Recent work shows that the attention mechanism inside the Transformer may not be necessary, and convolutional neural networks and multi-layer perceptron based models have also been investigated as Transformer alternatives. In this paper, we consider a graph recurrent network for language model pre-training, which builds a graph structure for each sequence with local token-level communications, together with a sentence-level representation decoupled from other tokens. The original model performs well in domain-specific text classification under supervised training; however, its potential for learning transferable knowledge in a self-supervised way has not been fully exploited. We fill this gap by optimizing the architecture and verifying its effectiveness in more general language understanding tasks, in both English and Chinese. As for model efficiency, instead of the quadratic complexity of Transformer-based models, our model has linear complexity and performs more efficiently during inference. Moreover, we find that our model can generate more diverse outputs with less contextualized feature redundancy than existing attention-based models.
Target-oriented opinion words extraction (TOWE) is a fine-grained sentiment analysis task that aims to extract from a sentence the opinion words corresponding to a given opinion target. Recently, deep learning approaches have made remarkable progress on this task. Nevertheless, TOWE still suffers from the scarcity of training data due to the expensive data annotation process. Limited labeled data increases the risk of distribution shift between test data and training data. In this paper, we propose exploiting massive unlabeled data to reduce this risk by increasing the exposure of the model to varying distribution shifts. Specifically, we propose a novel multi-grained consistency regularization (MGCR) method that makes use of unlabeled data, and design two filters specifically for TOWE to filter noisy data at different granularities. Extensive experimental results on four TOWE benchmark datasets indicate the superiority of MGCR compared with current state-of-the-art methods. In-depth analysis also demonstrates the effectiveness of the different-granularity filters. Our code is available at https://github.com/towessl/towessl.
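A bare-bones sketch of consistency regularization on unlabeled data: the model should produce similar tag distributions for a sentence and a perturbed copy of it, and low-confidence sentences are filtered out. The sentence-level confidence filter and its threshold are illustrative stand-ins for MGCR's TOWE-specific filters.

```python
# Consistency loss between predictions on original and augmented inputs,
# with a simple confidence-based filter over unlabeled sentences.
import torch
import torch.nn.functional as F

def consistency_loss(logits_orig, logits_aug, conf_threshold=0.7):
    """logits_*: (B, T, n_tags) predictions on original / augmented input."""
    p = logits_orig.softmax(-1)
    keep = p.max(-1).values.mean(-1) > conf_threshold  # sentence-level filter
    if not keep.any():
        return logits_orig.new_zeros(())               # nothing confident enough
    kl = F.kl_div(logits_aug.log_softmax(-1), p, reduction="none").sum(-1)
    return kl.mean(-1)[keep].mean()                    # KL on kept sentences only

loss = consistency_loss(torch.randn(4, 12, 5), torch.randn(4, 12, 5))
```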
In real-world question answering scenarios, the hybrid form combining both tabular and textual content has attracted more and more attention, among which the numerical reasoning problem is one of the most typical and challenging problems. Existing methods usually adopt an encoder-decoder framework to represent hybrid contents and generate answers. However, this cannot capture the rich relationships among numerical values, table schema, and text information on the encoder side, and the decoder uses a simple predefined operator classifier which is not flexible enough to handle numerical reasoning processes with diverse expressions. To address these problems, this paper proposes a Relational Graph enhanced Hybrid table-text Numerical reasoning model with Tree decoder (RegHNT). It models numerical question answering over table-text hybrid contents as an expression tree generation task. Moreover, we propose a novel relational graph modeling method, which models the alignment between questions, tables, and paragraphs. We validated our model on the publicly available table-text hybrid QA benchmark (TAT-QA). The proposed RegHNT significantly outperforms the baseline model and achieves state-of-the-art results. We publicly release the source code and data at https://github.com/lfy79001/reghnt (2022-05-05).
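To see why a tree-structured output is more flexible than a single predefined operator, the toy evaluator below computes a numerical answer by recursing over a generated expression tree; the node format here is invented for illustration.

```python
# Recursive evaluation of a generated expression tree.
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def eval_tree(node):
    """node: a number, or (op, left, right) with values from the table/text."""
    if isinstance(node, (int, float)):
        return node
    op, left, right = node
    return OPS[op](eval_tree(left), eval_tree(right))

# e.g. "what is the change in revenue divided by the old revenue?"
tree = ("/", ("-", 120.0, 100.0), 100.0)
print(eval_tree(tree))   # prints 0.2, a composed expression, not one fixed operator
```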