Emotion-cause pair extraction (ECPE) aims to extract emotion clauses and their corresponding cause clauses, and has recently received growing attention. Previous methods sequentially encode features in a specified order: they first encode the emotion and cause features for clause extraction and then combine them for pair extraction. This leads to an imbalance in inter-task feature interaction, where features extracted later have no direct contact with those extracted earlier. To address this issue, we propose a novel Pair-Based Joint Encoding (PBJE) network, which generates pair and clause features simultaneously in a joint feature encoding manner to model the causal relationships between clauses. PBJE can balance the information flow among emotion clauses, cause clauses and pairs. From a multi-relational perspective, we construct a heterogeneous undirected graph and apply a Relational Graph Convolutional Network (RGCN) to capture the various relationships between clauses and the relationship between pairs and clauses. Experimental results show that PBJE achieves state-of-the-art performance on the Chinese benchmark corpus.
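As an illustration of the kind of relational message passing such a heterogeneous clause/pair graph calls for (a minimal sketch, not the PBJE implementation), the snippet below builds a tiny graph whose nodes are clause and candidate-pair representations and applies one RGCN-style layer with a separate weight matrix per relation type; the relation set, node features and sizes are all assumptions made for the example.

```python
# Minimal RGCN-style layer over a heterogeneous clause/pair graph (illustrative only).
import torch
import torch.nn as nn

class SimpleRGCNLayer(nn.Module):
    def __init__(self, dim, num_relations):
        super().__init__()
        self.rel_weights = nn.ModuleList(nn.Linear(dim, dim, bias=False)
                                         for _ in range(num_relations))
        self.self_loop = nn.Linear(dim, dim, bias=False)

    def forward(self, x, edges):
        # x: (num_nodes, dim); edges: list of (src, dst, relation_id)
        out = self.self_loop(x)
        msg = torch.zeros_like(out)
        degree = torch.zeros(x.size(0))
        for src, dst, rel in edges:
            msg[dst] += self.rel_weights[rel](x[src])
            degree[dst] += 1
        return torch.relu(out + msg / degree.clamp(min=1).unsqueeze(-1))

# Toy graph: 4 clause nodes + 2 candidate-pair nodes, two undirected relation types
# (0 = clause-clause, 1 = pair-clause); each edge is added in both directions.
x = torch.randn(6, 16)                                    # 6 nodes, hidden size 16
edges = [(0, 1, 0), (1, 0, 0),                            # adjacent clauses
         (4, 0, 1), (0, 4, 1), (4, 1, 1), (1, 4, 1),      # pair node 4 <-> clauses 0, 1
         (5, 2, 1), (2, 5, 1), (5, 3, 1), (3, 5, 1)]      # pair node 5 <-> clauses 2, 3
layer = SimpleRGCNLayer(dim=16, num_relations=2)
print(layer(x, edges).shape)                              # torch.Size([6, 16])
```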
The emotion-cause pair extraction (ECPE) task aims to extract emotions and causes from documents. We observe that, in typical ECPE datasets, the distribution of the relative distances between emotions and causes is extremely imbalanced. Existing methods adopt a fixed-size window to capture the relations between neighboring clauses. However, they neglect the effective semantic connections between distant clauses, leading to poor generalization on position-insensitive data. To alleviate this problem, we propose a novel Multi-Granularity Semantic Aware Graph model (MGSAG) to jointly incorporate fine-grained and coarse-grained semantic features without distance limitation. In particular, we first explore the semantic dependencies between the clauses and the keywords extracted from the document, which convey fine-grained semantic features, obtaining keyword-enhanced clause representations. In addition, a clause graph is established to model the coarse-grained semantic relations between clauses. Experimental results indicate that MGSAG surpasses the existing state-of-the-art ECPE models, and that MGSAG dramatically outperforms the other models on position-insensitive data in particular.
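To make the "no distance limitation" idea concrete, the toy sketch below links every clause to every document keyword it contains, so two far-apart clauses become connected through a shared keyword node instead of being cut off by a fixed-size window. The keyword extractor here is a naive frequency heuristic and the sentences are invented; this is not the MGSAG pipeline.

```python
# Toy construction of a clause-keyword graph with no distance limitation.
from collections import Counter

clauses = [
    "he lost his job last month",
    "the rent was due again",
    "she tried to comfort him",
    "he felt deeply ashamed about the job",
]

tokens = [c.split() for c in clauses]
stop = {"he", "she", "his", "him", "the", "was", "to", "about", "a"}
freq = Counter(w for ws in tokens for w in ws if w not in stop)
keywords = {w for w, n in freq.items() if n > 1}          # e.g. {"job"}

# Bipartite edges clause <-> keyword, independent of clause distance.
edges = [(i, kw) for i, ws in enumerate(tokens) for kw in keywords if kw in ws]
print(edges)                                              # [(0, 'job'), (3, 'job')]

# Clauses 0 and 3 are far apart in the document but are now two hops apart
# through the shared keyword node, unlike with a fixed-size sliding window.
linked = {frozenset((i, j)) for (i, k1) in edges for (j, k2) in edges
          if k1 == k2 and i != j}
print(linked)                                             # {frozenset({0, 3})}
```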
Emotion-cause pair extraction (ECPE) is one of the derived subtasks of emotion cause analysis (ECA), and it shares rich inter-related features with emotion extraction (EE) and cause extraction (CE). Therefore, EE and CE are frequently utilized as auxiliary tasks for better feature learning, modeled via multi-task learning (MTL) frameworks in prior work to achieve state-of-the-art ECPE results. However, existing MTL-based methods either fail to simultaneously model the task-specific features and the interactions between them, or suffer from inconsistent label prediction. In this work, we address the above challenges for improving ECPE by performing two alignment mechanisms with a novel A^2Net model. We first propose a feature-task alignment to explicitly model the emotion-specific and cause-specific features as well as the shared interactive features. In addition, an inter-task alignment is implemented, in which the label distance between ECPE and the combination of EE and CE is narrowed for better label consistency. Evaluations on benchmarks show that our method outperforms the current best-performing systems on all ECA subtasks. Further analysis proves the importance of our proposed alignment mechanisms for the task.
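One simple way to picture the inter-task alignment idea (narrowing the distance between pair-level labels and the EE/CE combination) is a consistency term that pulls the predicted pair probability toward the product of the corresponding emotion and cause probabilities. This is a hedged illustration under invented shapes, not the A^2Net formulation.

```python
# Sketch of an inter-task consistency term for ECPE-style multi-task models:
# push p(pair i, j) toward p(emotion i) * p(cause j). Shapes are invented.
import torch
import torch.nn.functional as F

def consistency_loss(pair_logits, emo_logits, cause_logits):
    # pair_logits: (n, n); emo_logits, cause_logits: (n,)
    p_pair = torch.sigmoid(pair_logits)
    p_emo = torch.sigmoid(emo_logits).unsqueeze(1)      # (n, 1)
    p_cause = torch.sigmoid(cause_logits).unsqueeze(0)  # (1, n)
    target = (p_emo * p_cause).detach()                  # combination of EE & CE outputs
    return F.mse_loss(p_pair, target)

n = 5
loss = consistency_loss(torch.randn(n, n), torch.randn(n), torch.randn(n))
print(loss.item())
```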
Emotion-cause pair extraction (ECPE) is a new task in emotion cause analysis that extracts potential emotion-cause pairs from an emotional document. Recent studies use end-to-end methods to tackle the ECPE task. However, these methods either suffer from a label sparsity problem or fail to model the complicated relations between emotions and causes. Furthermore, none of them consider the explicit semantic information of clauses. To this end, we transform the ECPE task into a document-level machine reading comprehension (MRC) task and propose a Multi-turn MRC framework with a Rethink mechanism (MM-R). Our framework can model the complicated relations between emotions and causes while avoiding generating the pairing matrix (the main cause of the label sparsity problem). Besides, the multi-turn structure can fuse the explicit semantic information flow between emotions and causes. Extensive experiments on the benchmark emotion cause corpus demonstrate the effectiveness of our proposed framework, which outperforms existing state-of-the-art methods.
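The sketch below shows the general flavor of casting pair extraction as multi-turn reading comprehension: a first query asks for emotion clauses, and a second round asks, for each predicted emotion clause, which clauses are its causes, so no full pairing matrix is ever materialized. The query wording and the keyword-based "reader" are placeholders for the example, not the MM-R model.

```python
# Toy multi-turn MRC-style driver for emotion-cause pair extraction.
clauses = [
    "yesterday he failed the exam",
    "he cried all night",
    "his mother tried to console him",
]

def answer(query, clauses):
    """Placeholder MRC reader: returns indices of clauses answering the query."""
    if query.startswith("which clauses express"):
        return [i for i, c in enumerate(clauses) if "cried" in c]      # emotion turn
    return [i for i, c in enumerate(clauses) if "failed" in c]         # cause turn

# Turn 1: find emotion clauses.
emotions = answer("which clauses express an emotion?", clauses)

# Turn 2: one query per emotion clause, asking for its causes.
pairs = []
for e in emotions:
    query = f"what causes the feeling expressed in clause {e}?"
    pairs.extend((e, c) for c in answer(query, clauses))

print(pairs)   # [(1, 0)] -> emotion clause 1 is caused by clause 0
```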
Emotion-cause pair extraction (ECPE) is a new task that aims to extract the potential emotions and corresponding causes from a document. Previous methods focus on modeling pairwise relations and have achieved promising results. However, the clause-to-clause relations, which fundamentally symbolize the underlying structure of a document, remain under-studied. In this paper, we define a novel clause-to-clause relation. To learn it, we propose a general clause-level encoding model named EA-GAT, which comprises E-GAT and Activation Sort. E-GAT is designed to aggregate information from clauses of different types; Activation Sort leverages the individual emotion/cause predictions and a sort-based mapping to push each clause toward a more favorable representation. Since EA-GAT is a clause-level encoding model, it can be broadly integrated with any previous method. Experimental results show that our approach has a significant advantage over all current methods on both the Chinese and English benchmark corpora, by an average of 2.1% and 1.03%, respectively.
Emotion-cause pair extraction (ECPE), as an emergent natural language processing task, aims at jointly investigating emotions and their underlying causes in documents. It extends the previous emotion cause extraction (ECE) task, yet without requiring a set of pre-given emotion clauses as in ECE. Existing approaches to ECPE generally adopt a two-stage method, i.e., (1) emotion and cause detection, and then (2) pairing the detected emotions and causes. Such a pipeline method, while intuitive, suffers from two critical issues: error propagation across stages that may hinder effectiveness, and a high computational cost that limits practical application. To tackle these issues, we propose a multi-task learning model that can extract emotions, causes and emotion-cause pairs simultaneously in an end-to-end manner. Specifically, our model regards pair extraction as a link prediction task and learns to link from emotion clauses to cause clauses, i.e., the links are directional. Emotion extraction and cause extraction are incorporated into the model as auxiliary tasks, which further boost pair extraction. Experiments are conducted on an ECPE benchmark dataset. The results show that our proposed model outperforms a range of state-of-the-art approaches.
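A minimal way to read "pair extraction as directional link prediction" is a bilinear scorer that rates links from emotion-role clause representations to cause-role clause representations. The sketch below is an assumption-laden illustration of that idea (projection layers, dimensions and threshold are invented), not the paper's architecture.

```python
# Bilinear link-prediction head: score directed links emotion -> cause.
# Clause encodings are random here; in practice they would come from a clause encoder.
import torch
import torch.nn as nn

class DirectedLinkScorer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.emo_proj = nn.Linear(dim, dim)
        self.cause_proj = nn.Linear(dim, dim)
        self.bilinear = nn.Parameter(torch.randn(dim, dim) * 0.01)

    def forward(self, clause_repr):
        e = self.emo_proj(clause_repr)        # (n, dim), clauses as link sources
        c = self.cause_proj(clause_repr)      # (n, dim), clauses as link targets
        return e @ self.bilinear @ c.t()      # (n, n), entry (i, j) = score of link i -> j

n, dim = 6, 32
scorer = DirectedLinkScorer(dim)
scores = scorer(torch.randn(n, dim))
pairs = (torch.sigmoid(scores) > 0.5).nonzero().tolist()   # thresholded emotion-cause pairs
print(scores.shape, len(pairs))
```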
Causal Emotion Entailment aims to identify the causal utterances that are responsible for a target utterance with a non-neutral emotion in a conversation. Previous works are limited by an insufficient understanding of the conversational context and by inaccurate reasoning about the emotion cause. To this end, we propose the Knowledge-Bridged Causal Interaction Network (KBCIN), which leverages commonsense knowledge (CSK) as three bridges. Specifically, we construct a conversational graph for each conversation and leverage event-centered CSK as a semantics-level bridge (S-bridge) to capture the deep inter-utterance dependencies in the conversational context via a CSK-Enhanced Graph Attention module. Moreover, social-interaction CSK serves as an emotion-level bridge (E-bridge) and an action-level bridge (A-bridge) to connect candidate utterances with the target one, providing explicit causal clues for the Emotional Interaction and Actional Interaction modules to reason about the target emotion. Experimental results show that our model achieves better performance than most baseline models. Our source code is publicly available at https://github.com/circle-hit/KBCIN.
Predicting emotions expressed in text is a well-studied problem in the NLP community. Recently, there has been active research on extracting the cause of an emotion expressed in text. Most previous work addresses causal emotion entailment in documents. In this work, we propose neural models to extract emotion cause spans and perform causal entailment in conversations. For learning such models, we use the RECCON dataset, which is annotated with cause spans at the utterance level. In particular, we propose MuTEC, an end-to-end Multi-Task learning framework for extracting emotions, emotion causes, and entailment in conversations. This contrasts with existing baseline models that use ground-truth emotions to extract the cause. MuTEC performs better than the baselines on most of the data folds provided in the dataset.
Causal Emotion Entailment (CEE) aims to discover the potential causes behind an emotion in a conversational utterance. Previous works formalize CEE as independent utterance-pair classification problems and ignore emotion and speaker information. From a new perspective, this paper considers CEE in a joint framework. We classify multiple utterances synchronously to capture the correlations between utterances in a global view, and propose a Two-Stream Attention Model (TSAM) to effectively model the speakers' emotional influences in the conversational history. Specifically, TSAM comprises three modules: an Emotion Attention Network (EAN), a Speaker Attention Network (SAN), and an interaction module. The EAN and SAN incorporate emotion and speaker information in parallel, and the subsequent interaction module effectively interchanges the relevant information between the EAN and SAN via a mutual BiAffine transformation. Extensive experimental results demonstrate that our model achieves new state-of-the-art (SOTA) performance and outperforms the baselines remarkably.
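To make the "mutual BiAffine transformation" between two parallel streams concrete, the sketch below exchanges information between an emotion-stream and a speaker-stream feature matrix through a pair of biaffine attention maps. Names, sizes and the residual update are illustrative assumptions, not the TSAM implementation.

```python
# Mutual biaffine interaction between two parallel utterance-level streams (illustrative only).
import torch
import torch.nn as nn

class MutualBiAffine(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W_es = nn.Parameter(torch.randn(dim, dim) * 0.01)  # emotion -> speaker map
        self.W_se = nn.Parameter(torch.randn(dim, dim) * 0.01)  # speaker -> emotion map

    def forward(self, H_e, H_s):
        # H_e, H_s: (num_utterances, dim) features from the two attention networks.
        A_es = torch.softmax(H_e @ self.W_es @ H_s.t(), dim=-1)  # (n, n) attention
        A_se = torch.softmax(H_s @ self.W_se @ H_e.t(), dim=-1)  # (n, n) attention
        H_e_new = H_e + A_es @ H_s    # emotion stream enriched with speaker information
        H_s_new = H_s + A_se @ H_e    # speaker stream enriched with emotion information
        return H_e_new, H_s_new

n, dim = 8, 64
mod = MutualBiAffine(dim)
H_e, H_s = mod(torch.randn(n, dim), torch.randn(n, dim))
print(H_e.shape, H_s.shape)          # torch.Size([8, 64]) torch.Size([8, 64])
```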
As an essential component of human cognition, cause-effect relations appear frequently in text, and curating cause-effect relations from text helps build causal networks for predictive tasks. Existing causality extraction techniques include knowledge-based, statistical machine learning (ML)-based, and deep learning-based approaches. Each method has its advantages and weaknesses. For example, knowledge-based methods are understandable but require extensive manual domain knowledge and have poor cross-domain applicability. Statistical machine learning methods are more automated because of natural language processing (NLP) toolkits; however, feature engineering is labor-intensive and the toolkits may lead to error propagation. In the past few years, deep learning techniques have attracted substantial attention from NLP researchers because of their powerful representation learning ability and the rapid increase in computational resources, but their limitations include high computational costs and a lack of adequate annotated training data. In this paper, we conduct a comprehensive survey of causality extraction. We first introduce the primary forms existing in causality extraction: explicit intra-sentential causality, implicit causality, and inter-sentential causality. Next, we list the benchmark datasets and modeling assessment methods for causality extraction. Then, we present a structured overview of the three techniques along with their representative systems. Finally, we highlight existing open challenges and their potential directions.
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task that aims to align aspects and corresponding sentiments for aspect-specific sentiment polarity inference. It is challenging because a sentence may contain multiple aspects or complicated (e.g., conditional, coordinating, or adversative) relations. Recently, exploiting dependency syntax information with graph neural networks has been the most popular trend. Despite its success, methods that heavily rely on the dependency tree pose challenges in accurately modeling the alignment of aspects and the words indicative of their sentiment, since the dependency tree may provide noisy signals of unrelated associations (e.g., the "conj" relation between "great" and "dreadful" in Figure 2). In this paper, to alleviate this problem, we propose a Bi-Syntax aware Graph Attention Network (BiSyn-GAT+). Specifically, BiSyn-GAT+ fully exploits the syntax information (e.g., phrase segmentation and hierarchical structure) of the constituent tree of a sentence to model the sentiment-aware context of every single aspect (called intra-context) and the sentiment relations across aspects (called inter-context) for learning. Experiments on four benchmark datasets show that BiSyn-GAT+ consistently outperforms the state-of-the-art methods.
Owing to its importance and potential applications in various domains, emotion-cause pair extraction (ECPE) is a complex and popular area of natural language processing. In this report, we aim to present our work on ECPE in the domain of online reviews. With a manually annotated dataset, we explore algorithms for extracting emotion-cause pairs using neural networks. In addition, building on prior references, we propose a model that combines emotion-cause pair extraction with research in the field of emotion-aware word embeddings, feeding these embeddings into a Bi-LSTM layer that yields the emotion-related clauses. We report the results we achieve within the constraints of a limited dataset. The overall scope of our report covers a comprehensive literature review, modifications of the cited ECPE approaches through proposed improvements to the pipeline, and the development and implementation of domain-specific algorithms that adapt previous work to the online-review domain.
With the rapid development of automatic fake news detection technology, fact extraction and verification (FEVER) has been attracting more attention. The task aims to extract the most related fact evidence from millions of open-domain Wikipedia documents and then verify the credibility of the corresponding claims. Although several strong models have been proposed for the task and have made great progress, we argue that they fail to utilize multi-view contextual information and thus cannot obtain better performance. In this paper, we propose to integrate multi-view contextual information (IMCI) for fact extraction and verification. For each evidence sentence, we define two kinds of context: intra-document context and inter-document context. The intra-document context consists of the document title and all the other sentences from the same document. The inter-document context consists of all the other evidence sentences, which may come from different documents. We then integrate the multi-view contextual information to encode the evidence sentences for the task. Our experimental results on the FEVER 1.0 shared task show that the IMCI framework makes great progress on both fact extraction and verification, achieving state-of-the-art performance with a winning FEVER score of 72.97% and a label accuracy of 75.84% on the online blind test set. We also conduct ablation studies to assess the impact of the multi-view contextual information. Our code will be released at https://github.com/phoenixsecularbird/imci.
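The two context definitions are straightforward to express in code. The snippet below assembles the intra-document and inter-document context for each candidate evidence sentence from toy retrieved documents; all document titles, sentences and field layouts are invented for the example and do not reflect the IMCI codebase.

```python
# Toy assembly of intra-document and inter-document context for evidence sentences.
docs = {
    "Doc_A": ["Sentence A1.", "Sentence A2.", "Sentence A3."],
    "Doc_B": ["Sentence B1.", "Sentence B2."],
}
evidence = [("Doc_A", 1), ("Doc_B", 0)]   # (document title, sentence index) per evidence

for title, idx in evidence:
    sent = docs[title][idx]
    # Intra-document context: the title plus all other sentences of the same document.
    intra = [title] + [s for i, s in enumerate(docs[title]) if i != idx]
    # Inter-document context: all other evidence sentences, possibly from other documents.
    inter = [docs[t][i] for t, i in evidence if (t, i) != (title, idx)]
    print(sent, "| intra:", intra, "| inter:", inter)
```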
Figures of speech, such as metaphor and irony, are ubiquitous in literary works and colloquial conversations. This poses great challenges for natural language understanding, since figures of speech usually deviate from their literal meanings to express deeper semantic implications. Previous research lays emphasis on the literary aspect of figures and seldom provides a comprehensive exploration from the computational linguistics perspective. In this paper, we first propose the concept of the figurative unit, which is the carrier of a figure. Then we select 12 types of figures commonly used in Chinese and build a Chinese corpus for Contextualized Figure Recognition (ConFiguRe). Different from previous token-level or sentence-level counterparts, ConFiguRe aims at extracting a figurative unit from discourse-level context and classifying the figurative unit into the right figure type. On ConFiguRe, three tasks, i.e., figure extraction, figure-type classification and figure recognition, are designed, and state-of-the-art techniques are utilized to implement the benchmarks. We conduct thorough experiments and show that all three tasks are challenging for existing models, thus requiring further research. Our dataset and code are publicly available at https://github.com/pku-tangent/configure.
Automatically recommending relevant law articles for a specific legal case has attracted much attention, as it can greatly release human labor from searching over a large database of laws. However, current research only supports coarse-grained recommendation, where all relevant articles are predicted as a whole without explaining which specific fact each article is relevant to. Since one case can be formed of many supporting facts, traversing them to verify the correctness of the recommendation results can be time-consuming. We believe that learning the fine-grained correspondence between each single fact and law articles is crucial for an accurate and trustworthy AI system. With this motivation, we perform a pioneering study and create a corpus with manually annotated fact-article correspondences. We treat the learning as a text matching task and propose a multi-level matching network to address it. To help the model better digest the content of law articles, we parse articles into the form of premise-conclusion pairs with random forests. Experiments show that the parsed form yields better performance and that the resulting model surpasses other popular text matching baselines. Furthermore, we compare with previous research and find that establishing the fine-grained fact-article correspondence can improve the recommendation accuracy by a large margin. Our best system reaches an F1 score of 96.3%, making it of great potential for practical use. It can also significantly boost the downstream task of legal decision prediction, increasing the F1 score by up to 12.7%.
The ability to capture complex linguistic structures and long-term dependencies among words in a passage is essential for discourse-level relation extraction (DRE) tasks. Graph neural networks (GNNs), one of the methods for encoding dependency graphs, have been shown to be effective in prior works on RE. However, relatively little attention has been paid to the receptive fields of GNNs, which can be crucial for cases with extremely long text that requires discourse understanding. In this work, we leverage the idea of graph pooling and propose to use a pooling-unpooling framework on DRE tasks. The pooling branch reduces the graph size and enables the GNN to obtain larger receptive fields within fewer layers; the unpooling branch restores the pooled graph to its original resolution so that representations for entity mentions can be extracted. We propose Clause Matching (CM), a novel linguistically inspired graph pooling method for NLP tasks. Experiments on two DRE datasets demonstrate that our model significantly improves over baselines when modeling long-term dependencies is required, which shows the effectiveness of the pooling-unpooling framework and our CM pooling method.
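A rough way to picture the pooling-unpooling idea: merge word nodes into coarser nodes (here, one node per clause), run the GNN on the small graph, then copy each pooled representation back to its members. In the sketch below the word-to-clause grouping is a hard-coded stand-in for the Clause Matching method, and the GNN is a placeholder linear map; it illustrates the framework shape, not the paper's model.

```python
# Sketch of a pooling-unpooling pass: words pooled into clause nodes, a stubbed
# GNN run on the coarse graph, features unpooled back to the original resolution.
import torch

word_feats = torch.randn(7, 16)                        # 7 word nodes, hidden size 16
clause_of_word = torch.tensor([0, 0, 0, 1, 1, 2, 2])   # crude stand-in for clause matching
num_clauses = 3

# Pool: mean of word features within each clause.
pooled = torch.zeros(num_clauses, 16)
counts = torch.zeros(num_clauses, 1)
pooled.index_add_(0, clause_of_word, word_feats)
counts.index_add_(0, clause_of_word, torch.ones(7, 1))
pooled = pooled / counts

# A GNN with few layers now has a much larger receptive field on the 3-node graph;
# here it is just a placeholder linear transform.
pooled = torch.relu(pooled @ torch.randn(16, 16) * 0.1)

# Unpool: restore the original resolution so entity-mention representations can be read off.
unpooled = pooled[clause_of_word]                      # (7, 16), one row per original word
print(unpooled.shape)
```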
Predicting personality traits based on online posts has emerged as an important task in many fields such as social network analysis. One of the challenges of this task is assembling information from various posts into an overall profile for each user. While many previous solutions simply concatenate the posts into a long document and then encode the document by sequential or hierarchical models, they introduce unwarranted orders for the posts, which may mislead the models. In this paper, we propose a dynamic deep graph convolutional network (D-DGCN) to overcome the above limitation. Specifically, we design a learn-to-connect approach that adopts a dynamic multi-hop structure instead of a deterministic structure, and combine it with a DGCN module to automatically learn the connections between posts. The modules of post encoder, learn-to-connect, and DGCN are jointly trained in an end-to-end manner. Experimental results on the Kaggle and Pandora datasets show the superior performance of D-DGCN to state-of-the-art baselines. Our code is available at https://github.com/djz233/D-DGCN.
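As a rough, assumption-heavy illustration of the "learn-to-connect" idea (not the released D-DGCN code), the sketch below scores every pair of post encodings, derives a symmetric adjacency matrix from those scores, and feeds it to a single graph-convolution step, so the connection structure is produced by the model rather than fixed by an imposed post order.

```python
# Illustrative learn-to-connect layer: posts decide their own adjacency matrix.
import torch
import torch.nn as nn

class LearnToConnectGCN(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Bilinear(dim, dim, 1)   # pairwise connection scorer
        self.gcn = nn.Linear(dim, dim)           # one graph-convolution step

    def forward(self, posts):
        # posts: (num_posts, dim) order-free encodings of one user's posts.
        n, d = posts.shape
        left = posts.unsqueeze(1).expand(n, n, d).reshape(-1, d)
        right = posts.unsqueeze(0).expand(n, n, d).reshape(-1, d)
        logits = self.scorer(left, right).view(n, n)
        adj = torch.sigmoid((logits + logits.t()) / 2)   # learned, symmetric adjacency
        adj = adj / adj.sum(dim=-1, keepdim=True)        # row-normalize
        return torch.relu(self.gcn(adj @ posts))

model = LearnToConnectGCN(dim=32)
out = model(torch.randn(5, 32))                          # 5 posts from one user
print(out.shape)                                         # torch.Size([5, 32])
```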
Simile recognition involves two subtasks: simile sentence classification that discriminates whether a sentence contains simile, and simile component extraction that locates the corresponding objects (i.e., tenors and vehicles). Recent work ignores features other than surface strings. In this paper, we explore expressive features for this task to achieve more effective data utilization. Particularly, we study two types of features: 1) input-side features that include POS tags, dependency trees and word definitions, and 2) decoding features that capture the interdependence among various decoding decisions. We further construct a model named HGSR, which merges the input-side features as a heterogeneous graph and leverages decoding features via distillation. Experiments show that HGSR significantly outperforms the current state-of-the-art systems and carefully designed baselines, verifying the effectiveness of introduced features. Our code is available at https://github.com/DeepLearnXMU/HGSR.
Structured text understanding on Visually Rich Documents (VRDs) is a crucial part of Document Intelligence. Due to the complexity of content and layout in VRDs, structured text understanding is a challenging task. Most existing studies decouple this problem into two sub-tasks: entity labeling and entity linking, which require a holistic understanding of the document context at both the token and segment levels. However, little work has addressed solutions that efficiently extract structured data at different levels. This paper proposes a unified framework named StrucTexT, which is flexible and effective for handling both sub-tasks. Specifically, based on the transformer, we introduce a segment-token aligned encoder to deal with the entity labeling and entity linking tasks at different levels of granularity. Moreover, we design a novel pre-training strategy with three self-supervised tasks to learn a richer representation. StrucTexT uses the existing Masked Visual Language Modeling task and the new Sentence Length Prediction and Paired Boxes Direction tasks to incorporate multi-modal information across text, image and layout. We evaluate our method for structured text understanding at the segment level and token level and show that it outperforms the state-of-the-art counterparts, with significantly superior performance on the FUNSD, SROIE and EPHOIE datasets.
Nowadays, time-stamped web documents related to a general news query flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the event-level attention during generation, with its sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist in extracting the summary, where the extracted summary likewise comes in time sequence. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline 17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.