Personalized chatbots focus on endowing chatbots with a consistent personality so that they behave like real users and can further act as personal assistants. Previous studies have explored generating implicit user profiles from a user's dialogue history for building personalized chatbots. However, these studies train the entire model with only the response generation loss, making them prone to data sparsity. Besides, they overemphasize the quality of the final generated response while ignoring the correlations among the utterances in the user's dialogue history, leading to coarse data representations and performance degradation. To tackle these problems, we propose MCP, a self-supervised learning framework for capturing better representations from users' dialogue history for personalized chatbots. Specifically, we apply contrastive sampling methods to leverage the supervised signals hidden in user dialogue history and generate pre-training samples to enhance the model. We design three pre-training tasks based on three types of contrastive pairs from user dialogue history, namely response pairs, sequence augmentation pairs, and user pairs. We pre-train the utterance encoder and the history encoder with these contrastive objectives and use the pre-trained encoders to generate user profiles during personalized response generation. Experimental results on two real-world datasets show a significant improvement of our proposed model MCP over existing methods.
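To make the contrastive setup concrete, here is a minimal sketch (not the authors' code; all helper names are hypothetical) of how the three pair types could be sampled from dialogue histories and scored with an InfoNCE-style objective using in-batch negatives:

```python
import random
import torch
import torch.nn.functional as F

def sample_contrastive_pairs(user_histories):
    """Sample the three pair types named in the abstract.
    user_histories: dict mapping user_id -> list of utterance strings.
    Returns (anchor, positive) text pairs; other in-batch items act as negatives.
    """
    pairs = []
    for history in user_histories.values():
        if len(history) < 4:
            continue
        # 1) Response pair: two utterances drawn from the same user.
        pairs.append(tuple(random.sample(history, 2)))
        # 2) Sequence augmentation pair: full history vs. a random crop
        #    (one plausible augmentation; the paper may use others).
        crop = history[random.randrange(len(history) // 2):]
        pairs.append((" ".join(history), " ".join(crop)))
        # 3) User pair: two disjoint halves of one user's history, so that
        #    other users' histories serve as negatives.
        half = len(history) // 2
        pairs.append((" ".join(history[:half]), " ".join(history[half:])))
    return pairs

def info_nce(anchor_emb, positive_emb, temperature=0.1):
    """InfoNCE with in-batch negatives over L2-normalised encoder outputs."""
    a = F.normalize(anchor_emb, dim=-1)
    p = F.normalize(positive_emb, dim=-1)
    logits = a @ p.T / temperature                     # (B, B) similarities
    labels = torch.arange(a.size(0), device=a.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)
```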
Conversational AI has become an increasingly prominent and practical application of machine learning. However, existing conversational AI techniques still suffer from various limitations. One such limitation is a lack of well-developed methods for incorporating auxiliary information that could help a model understand conversational context better. In this paper, we explore how persona-based information could help improve the quality of response generation in conversations. First, we provide a literature review focusing on the current state-of-the-art methods that utilize persona information. We evaluate two strong baseline methods, the Ranking Profile Memory Network and the Poly-Encoder, on the NeurIPS ConvAI2 benchmark dataset. Our analysis elucidates the importance of incorporating persona information into conversational systems. Additionally, our study highlights several limitations with current state-of-the-art methods and outlines challenges and future research directions for advancing personalized conversational AI technology.
Personalizing dialogue agents is important for dialogue systems to generate more specific, consistent, and engaging responses. However, most current methods for dialogue personalization rely on explicit persona descriptions during inference, which severely restricts their applicability. In this paper, we propose a novel approach that predicts persona information from the dialogue history, personalizing the dialogue agent without relying on any explicit persona description during inference. Experimental results on the PersonaChat dataset show that the proposed method can improve the consistency of generated responses when conditioned on the predicted profile of the dialogue agent (i.e., "self persona"), and improve the engagingness of generated responses when conditioned on the predicted persona of the dialogue partner (i.e., "their persona"). We also find that a trained persona-prediction model can be successfully transferred to other datasets and help generate more relevant responses.
Knowledge-grounded dialogue generation has recently achieved remarkable breakthroughs. Compared with general dialogue systems, superior knowledge-grounded dialogue systems can generate more informative and knowledgeable responses from pre-provided knowledge. However, in practical applications, the dialogue system cannot be provided with the corresponding knowledge in advance. To address this problem, we design a knowledge-grounded dialogue system named DRKQG (Dynamically Retrieving Knowledge via Query Generation for informative dialogue responses). Specifically, the system can be divided into two modules: a query generation module and a dialogue generation module. First, a time-aware mechanism is utilized to capture contextual information, and queries are generated to retrieve knowledge. Then, we integrate a copy mechanism and Transformers, which allows the response generation module to produce responses derived from both the context and the retrieved knowledge. Experimental results on LIC2022, the Language and Intelligence Technology Competition, show that our module outperforms the baseline models by a large margin on automatic evaluation metrics, while human evaluation by the Baidu linguistics team shows that our system achieves impressive results in being factually correct and knowledgeable.
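The abstract describes a two-module flow; the sketch below captures only that control flow, with every component name a stand-in for the paper's trained models:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QueryThenGeneratePipeline:
    """Hypothetical DRKQG-style flow: generate a query from the context,
    retrieve knowledge at inference time, then generate a grounded response."""
    query_generator: Callable      # dialogue context -> search query
    retriever: Callable            # query -> list of knowledge snippets
    response_generator: Callable   # (context, knowledge) -> response text

    def respond(self, context: list) -> str:
        # Step 1: a (time-aware) encoder-decoder produces a retrieval query.
        query = self.query_generator(context)
        # Step 2: knowledge is fetched on the fly rather than pre-provided.
        knowledge = self.retriever(query)
        # Step 3: the generator (with a copy mechanism in the paper) conditions
        # on both the context and the retrieved knowledge.
        return self.response_generator(context, knowledge)
```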
Retrieval-based dialogue response selection aims to find the proper response from a candidate set given a multi-turn context. Methods based on pre-trained language models (PLMs) have yielded significant improvements on this task. The sequence representation plays a key role in measuring the degree of matching between the dialogue context and the response. However, we observe that different context-response pairs sharing the same context always have greater similarity in the sequence representations computed by PLMs, which makes it hard to distinguish positive responses from negative ones. Motivated by this, we propose a novel Fine-Grained Contrastive (FGC) learning method for the response selection task based on PLMs. This FGC learning strategy helps PLMs generate more distinguishable matching representations of each dialogue at a fine-grained level and further improves the prediction of selecting the positive response. Empirical studies on two benchmark datasets demonstrate that the proposed FGC learning method can generally improve the model performance of existing PLM-based matching models.
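One plausible reading of the objective, sketched below under assumptions about shapes and the similarity choice: for each context, a cross-entropy over candidates that share that context pushes the positive response's matching representation away from the hard negatives:

```python
import torch
import torch.nn.functional as F

def fine_grained_contrastive_loss(context_emb, cand_embs, pos_index, tau=0.07):
    """Rough sketch of a fine-grained contrastive objective: for one dialogue
    context, pull its representation toward the positive response and away
    from negatives sharing that same context.

    context_emb: (D,)   encoding of the multi-turn context.
    cand_embs:   (K, D) encodings of K candidate responses for this context.
    pos_index:   index of the ground-truth response among the K candidates.
    """
    c = F.normalize(context_emb, dim=-1)
    r = F.normalize(cand_embs, dim=-1)
    sims = r @ c / tau                  # (K,) similarity of each candidate
    target = torch.tensor(pos_index)
    # Cross-entropy over candidates sharing the context makes their
    # representations distinguishable at a fine-grained level.
    return F.cross_entropy(sims.unsqueeze(0), target.unsqueeze(0))
```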
In this paper, we introduce PanGu-Bot, a Chinese pre-trained open-domain dialogue generation model based on the large pre-trained language model (PLM) PanGu-alpha (Zeng et al., 2021). Different from other pre-trained dialogue models trained from scratch on massive amounts of dialogue data, we aim to build a powerful dialogue model with relatively little data and computational cost by inheriting valuable language capabilities and knowledge from the PLM. To this end, we train PanGu-Bot from the large PLM PanGu-alpha, which has been shown to perform well on a variety of Chinese natural language tasks. We investigate different aspects of the responses generated by PanGu-Bot, including response quality, knowledge, and safety. We show that PanGu-Bot outperforms state-of-the-art Chinese dialogue systems (CDialGPT (Wang et al., 2020), EVA (Zhou et al., 2021), EVA2.0 (Gu et al., 2022)) w.r.t. these three aspects. We also demonstrate that PanGu-Bot can be easily deployed to generate emotional responses without further training. Throughout our empirical analysis, we also point out that PanGu-Bot's response quality, knowledge correctness, and safety are still far from perfect, and further explorations are indispensable for building reliable and smart dialogue systems. Our model and code will be made available at https://github.com/huawei-noah/pretretaining-language-model/tree/master/master/pangu-bot.
The interview has been regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews between each other. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are unlikely to behave like real interviewers. Due to the rapid growth of online recruitment in recent years, recruiters tend to conduct interviews online, which makes it possible to collect interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
Persona-based dialogue systems aim to generate consistent responses based on historical context and a predefined persona. Unlike conventional dialogue generation, persona-based dialogue needs to consider both the dialogue context and the persona, posing a challenge for coherent training. Specifically, this requires a delicate weight balance between context and persona. To achieve that, in this paper we propose an effective framework with Persona-Adaptive Attention (PAA), which adaptively integrates the weights from the persona and context information via our designed attention. In addition, a dynamic masking mechanism is applied to the PAA to not only drop redundant information in context and persona but also serve as a regularizer that avoids overfitting. Experimental results demonstrate the superiority of the proposed PAA framework over strong baselines in both automatic and human evaluation. Moreover, the proposed PAA approach performs well in a low-resource regime, achieving results comparable to larger models trained in the full-data setting with only 20% to 30% of the data. To fully exploit the effectiveness of our design, we devised several variants for handling the weighted information in different ways, showing the necessity and sufficiency of our weighting and masking designs.
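A simplified sketch of the core weighting idea follows (the paper's exact architecture and its dynamic masking are omitted; the dimensions and the sigmoid gate are assumptions):

```python
import torch
import torch.nn as nn

class PersonaAdaptiveAttention(nn.Module):
    """Sketch of persona-adaptive fusion: attend separately to persona and
    context, then blend the two streams with a learned, input-dependent weight."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.persona_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.context_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, 1)  # produces the balance weight

    def forward(self, decoder_h, persona_h, context_h):
        # Cross-attend to persona tokens and to dialogue-context tokens.
        p, _ = self.persona_attn(decoder_h, persona_h, persona_h)
        c, _ = self.context_attn(decoder_h, context_h, context_h)
        # Adaptive weight in [0, 1] decides how much persona vs. context
        # information flows into each decoding position.
        w = torch.sigmoid(self.gate(torch.cat([p, c], dim=-1)))
        return w * p + (1.0 - w) * c
```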
Natural Language Generation (NLG) has improved dramatically in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, enabling improvements in downstream tasks such as abstractive summarization, dialogue generation, and data-to-text generation. However, it is also apparent that deep learning-based generation is prone to hallucinate unintended text, which degrades system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented on measuring and mitigating hallucinated text, but these have never before been reviewed in a comprehensive manner. In this survey, we thus provide a broad overview of the research progress and challenges of the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks: abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Learning high-quality dialogue representations is essential for solving a variety of dialogue-oriented tasks, especially considering that dialogue systems often suffer from data scarcity. In this paper, we introduce Dialogue Sentence Embedding (DSE), a self-supervised contrastive learning method that learns effective dialogue representations suitable for a wide range of dialogue tasks. DSE learns from dialogues by taking consecutive utterances of the same dialogue as positive pairs for contrastive learning. Despite its simplicity, DSE achieves significantly better representation capability than other dialogue representation and universal sentence representation models. We evaluate DSE on five downstream dialogue tasks that examine dialogue representations at different semantic granularities. Experiments in few-shot and zero-shot settings show that DSE outperforms the baselines by a large margin. For example, it achieves a 13% average performance improvement over the strongest unsupervised baseline in 1-shot intent classification on six datasets. We also provide an analysis of the benefits and limitations of the model.
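The positive-pair construction is simple enough to state in a few lines; here is a sketch under the assumption that raw dialogues are lists of utterance strings (the resulting pairs can then be trained with an in-batch InfoNCE objective like the one sketched for MCP above):

```python
def consecutive_utterance_pairs(dialogues):
    """Build DSE-style training pairs: two consecutive utterances of the same
    dialogue form a positive pair, while other pairs in the batch act as
    negatives. A minimal sketch; the paper's actual sampling may differ.

    dialogues: list of dialogues, each a list of utterance strings.
    """
    pairs = []
    for dialogue in dialogues:
        for u, v in zip(dialogue, dialogue[1:]):
            pairs.append((u, v))
    return pairs
```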
End-to-end (E2E) task-oriented dialogue (ToD) systems are prone to fall into the so-called 'likelihood trap', resulting in generated responses which are dull, repetitive, and often inconsistent with the dialogue history. Comparing ranked lists of multiple generated responses against the 'gold response' (from the training data) reveals a wide diversity in response quality, with many good responses placed lower in the ranked list. The main challenge, addressed in this work, is then how to reach beyond greedily generated system responses, that is, how to obtain and select high-quality responses from the list of overgenerated responses at inference time without access to the gold response. To this end, we propose a simple yet effective reranking method which aims to select high-quality items from the lists of responses initially overgenerated by the system. The idea is to use any sequence-level (similarity) scoring function to divide the semantic space of responses into high-scoring versus low-scoring partitions. At training time, the high-scoring partition comprises all generated responses whose similarity to the gold response is higher than the similarity of the greedy response to the gold response. At inference time, the aim is to estimate the probability that each overgenerated response belongs to the high-scoring partition, given only the previous dialogue history. We validate the robustness and versatility of our proposed method on the standard MultiWOZ dataset: our method improves a state-of-the-art E2E ToD system by 2.4 BLEU, 3.2 ROUGE, and 2.8 METEOR scores, achieving new peak results. Additional experiments on the BiTOD dataset and human evaluation further ascertain the generalisability and effectiveness of the proposed framework.
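The training-time partition rule is stated precisely in the abstract and can be written down directly; `score` is any sequence-level similarity function (e.g., sentence-level BLEU), and the remaining names are hypothetical:

```python
def label_high_scoring(candidates, greedy_response, gold_response, score):
    """Label overgenerated candidates per the abstract's partition rule:
    a candidate is 'high-scoring' iff its similarity to the gold response
    exceeds the similarity of the greedy response to the gold response.

    score: any sequence-level similarity function, e.g. sentence BLEU.
    Returns binary labels used to train the inference-time reranker.
    """
    threshold = score(greedy_response, gold_response)
    return [int(score(c, gold_response) > threshold) for c in candidates]
```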
This paper offers a comprehensive review of the research on Natural Language Generation (NLG) over the past two decades, especially in relation to data-to-text generation and text-to-text generation deep learning methods, as well as new applications of NLG technology. This survey aims to (a) give an up-to-date synthesis of the core tasks in NLG and the architectures adopted in the field; (b) detail the various NLG tasks and datasets, and draw attention to the challenges in NLG evaluation, focusing on different evaluation methods and their relationships; (c) highlight some future emphases and relatively recent research issues that arise from the increasing synergy between NLG and other areas of artificial intelligence, such as computer vision, text, and computational creativity.
Recent advances in pre-trained language models have significantly improved neural response generation. However, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token-level self-attention. Such token-level encoding hinders the exploration of discourse-level coherence among utterances. This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models. DialogBERT employs a hierarchical Transformer architecture. To efficiently capture discourse-level coherence among utterances, we propose two training objectives, masked utterance regression and distributed utterance order ranking, in analogy to the original BERT training. Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms baselines such as BART and DialoGPT in terms of quantitative evaluation. Human evaluation suggests that DialogBERT generates more coherent, informative, and human-like responses than the baselines, with significant margins.
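As a rough sketch of hierarchical encoding in this spirit (sizes, pooling, and layer counts are assumptions, not DialogBERT's actual configuration):

```python
import torch
import torch.nn as nn

class HierarchicalDialogueEncoder(nn.Module):
    """Two-level Transformer: an utterance encoder produces one vector per
    utterance, and a context encoder attends over the utterance vectors,
    which is where discourse-level objectives such as masked utterance
    regression would apply."""

    def __init__(self, vocab_size=30522, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.utterance_encoder = nn.TransformerEncoder(layer, n_layers)
        self.context_encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids):
        # token_ids: (num_utterances, tokens_per_utterance)
        tokens = self.embed(token_ids)
        # Token-level encoding within each utterance, then mean-pool
        # to one vector per utterance.
        utt_vecs = self.utterance_encoder(tokens).mean(dim=1)
        # Discourse-level encoding across the sequence of utterance vectors.
        return self.context_encoder(utt_vecs.unsqueeze(0))
```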
Due to the lack of human resources for mental health support, there is an increasing demand for employing conversational agents for support. Recent work has demonstrated the effectiveness of dialogue models in providing emotional support. As previous studies have demonstrated that seekers' persona is an important factor for effective support, we investigate whether there are benefits to modeling such information in dialogue models for support. In this paper, our empirical analysis verifies that persona has an important impact on emotional support. Therefore, we propose a framework for dynamically inferring and modeling seekers' persona. We first train a model for inferring the seeker's persona from the conversation history. Accordingly, we propose PAL, a model that leverages persona information and, in conjunction with our strategy-based controllable generation method, provides personalized emotional support. Automatic and manual evaluations demonstrate that our proposed model, PAL, achieves state-of-the-art results, outperforming the baselines on the studied benchmark. Our code and data are publicly available at https://github.com/chengjl19/PAL.
Medical dialogue generation is an important yet challenging task. Most previous works rely on attention mechanisms and large-scale pre-trained language models. However, these methods often fail to acquire pivotal information from a long dialogue history to yield accurate and informative responses, since medical entities usually scatter across multiple utterances with complex relationships between them. To mitigate this problem, we propose a medical response generation model with pivotal information recalling (MedPIR), which is built on two components: a knowledge-aware dialogue graph encoder and a recall-enhanced generator. The knowledge-aware dialogue graph encoder constructs a dialogue graph by exploiting the knowledge relations between entities in the utterances and encodes it with a graph attention network. Then, the recall-enhanced generator strengthens the usage of this pivotal information by generating a summary of the dialogue before producing the actual response. Experimental results on two large-scale medical dialogue datasets show that MedPIR outperforms strong baselines in BLEU scores and medical-entity F1 measures.
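The recall-enhanced generation step amounts to a two-pass scheme; below is a minimal sketch assuming a generic seq2seq `generator` with a text-in/text-out `generate` method (both the method and the prompt markers are hypothetical):

```python
def recall_enhanced_generate(generator, dialogue_history):
    """Sketch of recall-enhanced generation: first produce a summary that
    'recalls' pivotal information from the long history, then condition
    the actual response on both the history and the summary."""
    summary = generator.generate("summarize: " + " ".join(dialogue_history))
    prompt = " ".join(dialogue_history) + " [SUMMARY] " + summary
    return generator.generate("respond: " + prompt)
```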
Building dialogue generation systems in a zero-shot scenario remains a huge challenge, since typical zero-shot approaches in dialogue generation rely heavily on large-scale pre-trained language generation models such as GPT-3 and T5. Research on zero-shot dialogue generation without cumbersome language models is limited by the lack of corresponding parallel dialogue corpora. In this paper, we propose a simple but effective Multilingual learning framework for Zero-shot Dialogue Generation (dubbed MulZDG) that can effectively transfer knowledge from an English corpus with large-scale training samples to a non-English corpus with zero samples. Besides, MulZDG can be viewed as a multilingual data augmentation method to improve the performance of resource-rich languages. First, we construct multilingual code-switching dialogue datasets by translating utterances randomly selected from monolingual English datasets. Then we employ MulZDG to train a unified multilingual dialogue model based on the code-switching datasets. MulZDG can achieve implicit semantic alignment between different languages. Experiments on the DailyDialog and DSTC7 datasets demonstrate that MulZDG not only achieves competitive performance in the zero-shot case compared to training with sufficient examples but also greatly improves the performance of the source language.
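The code-switching data construction can be sketched in a few lines; `translate` is any sentence-level MT function, and the switch rate is an assumed hyperparameter rather than a value taken from the paper:

```python
import random

def code_switch(dialogue, translate, switch_rate=0.5):
    """Build a code-switching dialogue for MulZDG-style training: each
    utterance of a monolingual English dialogue is replaced with its
    translation into the target language with some probability.

    dialogue: list of English utterance strings.
    translate: sentence-level MT function, English -> target language.
    """
    return [
        translate(utt) if random.random() < switch_rate else utt
        for utt in dialogue
    ]
```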
Conversational recommender systems (CRS) aim to recommend suitable items to users through natural language conversations. For developing effective CRSs, a major technical issue is how to accurately infer user preference from very limited conversation context. To address this issue, a promising solution is to incorporate external data to enrich the context information. However, prior studies mainly focus on designing fusion models tailored to some specific type of external data, which do not generally model and utilize multi-type external data. To effectively leverage multi-type external data, we propose a novel coarse-to-fine contrastive learning framework to improve data semantic fusion for CRS. In our approach, we first extract and represent multi-grained semantic units from different data signals, and then align the associated multi-type semantic units in a coarse-to-fine way. To implement this framework, we design both coarse-grained and fine-grained procedures for modeling user preference, where the former focuses on more general, coarse-grained semantic fusion and the latter focuses on more specific, fine-grained semantic fusion. Such an approach can be extended to incorporate more kinds of external data. Extensive experiments on two public CRS datasets demonstrate the effectiveness of our approach in both the recommendation and conversation tasks.
Conversational recommender systems (CRS) aim to proactively elicit user preference and recommend high-quality items through natural language conversations. Typically, a CRS consists of a recommendation module to predict preferred items for users and a conversation module to generate appropriate responses. To develop an effective CRS, it is essential to seamlessly integrate the two modules. Existing works either design semantic alignment strategies or share knowledge resources and representations between the two modules. However, these approaches still rely on different architectures or techniques to develop the two modules, making effective module integration difficult. To address this problem, we propose a unified CRS model named UniCRS based on knowledge-enhanced prompt learning. Our approach unifies the recommendation and conversation subtasks into the prompt learning paradigm and utilizes knowledge-enhanced prompts based on a fixed pre-trained language model (PLM) to fulfill both subtasks in a unified approach. In the prompt design, we include fused knowledge representations, task-specific soft tokens, and the dialogue context, which can provide sufficient contextual information to adapt the PLM to the CRS tasks. Besides, for the recommendation subtask, we also incorporate the generated response template as an important part of the prompt, to enhance the information interaction between the two subtasks. Extensive experiments on two public CRS datasets demonstrate the effectiveness of our approach.
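Here is a sketch of the prompt layout the abstract describes, assuming all segments have already been embedded to the PLM's hidden size (the ordering and shapes are assumptions, not UniCRS's exact design):

```python
import torch

def build_unicrs_style_prompt(knowledge_emb, soft_tokens, context_emb,
                              template_emb=None):
    """Assemble one prompt sequence for a frozen PLM: fused knowledge
    representations + task-specific soft tokens + dialogue context, with the
    generated response template appended for the recommendation subtask.

    All inputs are (length, hidden) embedding tensors.
    """
    parts = [knowledge_emb, soft_tokens, context_emb]
    if template_emb is not None:           # recommendation subtask only
        parts.append(template_emb)
    return torch.cat(parts, dim=0)         # one prompt sequence for the PLM
```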
Current works on personalized dialogue primarily contribute to the agent exhibiting a consistent personality and driving more informative responses. However, we find that most responses generated by previous models tend to be self-centered, with little care for the user in the dialogue. Moreover, we argue that human-like conversations are essentially built upon inferring information about the other party's persona. Motivated by this, we propose a novel personalized dialogue generator that detects an implicit user persona. Because it is hard to collect a large number of detailed personas for each user, we attempt to model the user's potential persona and its representation from the dialogue history, without external knowledge. Perception and fader variables are conceived using conditional variational inference. The two latent variables simulate the process of people becoming aware of each other's personas and producing corresponding expressions in conversation. Finally, posterior-discriminated regularization is presented to enhance the training procedure. Empirical studies demonstrate that, compared with state-of-the-art methods, our approach is more concerned with the user's persona and achieves considerable improvements across all evaluations.
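The variational machinery here is standard; below is a textbook conditional-VAE training objective of the kind such models optimize (not the paper's exact formulation, which uses two latent variables plus an additional posterior-discriminated regularizer):

```python
import torch

def cvae_elbo(recon_nll, post_mu, post_logvar, prior_mu, prior_logvar):
    """Negative ELBO for a conditional VAE with diagonal Gaussians:
    reconstruction loss plus a KL term pulling the posterior (which sees
    the response) toward the prior (which sees only the dialogue history).

    recon_nll: scalar reconstruction negative log-likelihood.
    *_mu, *_logvar: (latent_dim,) Gaussian parameters.
    """
    kl = 0.5 * torch.sum(
        prior_logvar - post_logvar
        + (post_logvar.exp() + (post_mu - prior_mu) ** 2) / prior_logvar.exp()
        - 1.0
    )
    return recon_nll + kl
```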
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used Transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the low interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using Transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches has emerged in the past three to four years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing, and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.