Natural Language Generation (NLG) represents a large collection of tasks in the field of NLP. While many of these tasks have been tackled well by the cross-entropy (CE) loss, the task of dialog generation poses a few unique challenges for this loss function. First, CE loss assumes that for any given input, the only possible output is the one available as the ground truth in the training dataset. In general, this assumption does not hold for any NLG task, as there can be multiple semantically equivalent sentences, each with a different surface form. The problem is further exacerbated for dialog generation, as there can be multiple valid responses (for a given context) that not only have different surface forms but are also not semantically equivalent. Second, CE loss does not take the context into consideration while processing the response and hence treats all ground truths with equal importance irrespective of the context. But we may want our final agent to avoid certain classes of responses (e.g., bland, non-informative, or biased responses) and give relatively higher weight to more context-specific responses. To circumvent these shortcomings of the CE loss, in this paper, we propose a novel loss function, CORAL, that directly optimizes recently proposed estimates of human preference for generated responses. Using CORAL, we can train dialog generation models without assuming the non-existence of responses other than the ground truth. Also, the CORAL loss is computed based on both the context and the response. Extensive comparisons on two benchmark datasets show that the proposed methods outperform strong state-of-the-art baseline models of different sizes.
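A minimal sketch of the contrast the abstract draws, assuming a toy setup in which per-token model probabilities and a scalar context-aware preference reward are already given (the reward function and the baseline term here are illustrative, not the paper's actual CORAL estimator): standard cross-entropy credits only the single ground-truth response, while a reward-weighted objective scales a sampled response's log-likelihood by how much humans would prefer it in this context.

```python
import math

def cross_entropy(token_probs):
    """Standard CE: negative log-likelihood of the single ground-truth
    response. token_probs are the model's probabilities for each
    ground-truth token."""
    return -sum(math.log(p) for p in token_probs)

def reward_weighted_loss(token_probs, reward, baseline=0.0):
    """CORAL-style sketch (hypothetical form): scale the response
    log-likelihood by a context-dependent preference reward, so
    high-reward sampled responses are reinforced and low-reward ones
    (e.g., bland replies) are down-weighted or penalized."""
    log_likelihood = sum(math.log(p) for p in token_probs)
    return -(reward - baseline) * log_likelihood
```

With reward fixed at 1, the weighted loss reduces to cross-entropy; a zero reward removes the example's gradient contribution entirely, which is the behavioral difference the abstract motivates.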
Recent advances in pre-trained language models have significantly improved neural response generation. However, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token-level self-attention. Such token-level encoding hinders the exploration of discourse-level coherence among utterances. This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models. DialogBERT employs a hierarchical Transformer architecture. To efficiently capture the discourse-level coherence among utterances, we propose two training objectives, masked utterance regression and distributed utterance order ranking, in analogy to the original BERT training objectives. Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms baselines such as BART and DialoGPT in terms of quantitative evaluation. Human evaluation suggests that DialogBERT generates more coherent, informative, and human-like responses than the baselines by significant margins.
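A toy sketch of how training examples for the two utterance-level objectives above could be constructed (the data layout and helper names are assumptions for illustration, not DialogBERT's implementation): utterance-order examples shuffle a dialogue's turns and ask the model to recover each turn's original position, while masked utterance regression replaces one utterance's encoding with a mask vector and regresses the original encoding from context.

```python
import random

def make_utterance_order_example(utterances, rng):
    """Utterance-order objective (sketch): shuffle the dialogue's turns;
    the target for each shuffled position is its original index."""
    order = list(range(len(utterances)))
    rng.shuffle(order)
    shuffled = [utterances[i] for i in order]
    return shuffled, order

def mask_utterance(utterance_vectors, idx, mask_vector):
    """Masked utterance regression (sketch): hide one utterance's
    encoding behind a mask vector; the regression target is the
    original encoding."""
    masked = list(utterance_vectors)
    target = masked[idx]
    masked[idx] = mask_vector
    return masked, target
```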
We introduce AARGH, a task-oriented dialog system that combines retrieval and generative approaches in a single model, aiming to improve dialog management and the lexical diversity of outputs. The model features a new response selection method based on an action-aware training objective and a simplified single-encoder retrieval architecture, which allows us to build an end-to-end retrieval-augmented generation model where retrieval and generation share most of their parameters. On the MultiWOZ dataset, we show that our approach produces more diverse outputs while maintaining or improving state tracking and context-to-response generation performance compared to state-of-the-art baselines.
Current neural network models of dialogue generation (chatbots) show great promise for generating responses for conversational agents. But they are short-sighted in that they predict utterances one at a time while disregarding their impact on future outcomes. Modelling a dialogue's future direction is critical for generating coherent, interesting dialogues, a need that has led traditional NLP dialogue models to rely on reinforcement learning. In this article, we explain how to combine these objectives by using deep reinforcement learning to predict future rewards in chatbot dialogue. The model simulates conversations between two virtual agents, with policy gradient methods used to reward sequences that exhibit three useful conversational characteristics: informativity, coherence, and ease of answering (related to the forward-looking function). We assess our model on diversity, length, and human judgments. In dialogue simulation, evaluations demonstrate that the proposed model generates more interactive responses and encourages a more sustained, successful conversation. This work marks a preliminary step toward developing a neural conversational model based on the long-term success of dialogues.
Pre-trained language models (PLMs) have marked a huge leap in neural dialogue modeling. While PLMs are pre-trained on large-scale text corpora, they are usually fine-tuned on scarce dialogue data with specific domain knowledge and dialogue styles. However, tailoring the language models while fully utilizing the prior knowledge in the large pre-trained models remains a challenge. In this paper, we present a novel approach to pre-trained dialogue modeling that casts the dialogue generation problem as a prompt-learning task. Instead of fine-tuning on limited dialogue data, our approach, DialogPrompt, learns continuous prompt embeddings optimized for dialogue contexts, which elicit knowledge from the large pre-trained model. To encourage the model to better utilize the prompt embeddings, the prompt encoder is designed to be conditioned on the input dialogue context. Experiments on popular conversation datasets show that our approach significantly outperforms the fine-tuning baseline and generic prompt-learning methods. Furthermore, human evaluation strongly supports the superiority of DialogPrompt with regard to response generation quality.
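A toy sketch of the idea of context-conditioned continuous prompts (the encoder below is a deliberately trivial stand-in; DialogPrompt's actual prompt encoder is a learned network, and all names here are hypothetical): a prompt encoder maps the dialogue context's embeddings to a few prompt vectors, which are prepended to the frozen LM's input sequence.

```python
def build_prompted_input(context_embeddings, prompt_encoder):
    """Prepend context-conditioned prompt embeddings to the input
    sequence that would be fed to the frozen language model."""
    prompt = prompt_encoder(context_embeddings)
    return prompt + context_embeddings

def mean_prompt_encoder(context_embeddings, n_prompt=2):
    """Dummy stand-in for a learned prompt encoder: repeat the mean
    context vector n_prompt times."""
    dim = len(context_embeddings[0])
    mean = [sum(vec[i] for vec in context_embeddings) / len(context_embeddings)
            for i in range(dim)]
    return [list(mean) for _ in range(n_prompt)]
```

The point of the sketch is only the data flow: unlike a fixed prompt, the prepended vectors change with the context, which is the conditioning property the abstract emphasizes.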
In spoken dialogue systems, we aim to deploy artificial intelligence to build automated dialogue agents that can converse with humans. Dialogue systems are increasingly designed to go beyond merely imitating conversation, and to improve from such interactions over time. In this survey, we present a broad overview of methods developed for building dialogue systems over the years. Different use cases for dialogue systems, ranging from task-based systems to open-domain chat, motivate and require specific systems. Starting from simple rule-based systems, research has progressed toward increasingly complex architectures trained on massive data corpora, such as deep-learning systems. Motivated by the intuition of human-like conversation, progress has been made toward incorporating emotion into the natural language generator via reinforcement learning. While we observe a trend of only marginal improvement on certain metrics, we find that there is limited justification for the metrics themselves and that evaluation practices are not uniform. To conclude, we flag these concerns and highlight possible research directions.
Real human conversation data are complex, heterogeneous, and noisy, and building open-domain dialogue systems from such data remains a challenging task. In fact, such conversation data still contain a wealth of information and knowledge, which, however, are not fully explored. In this paper, we show that existing open-domain dialogue generation methods, which memorize context-response paired data with autoregressive or encoder-decoder models, underutilize the training data. Different from current approaches that use external knowledge, we explore a retrieval-generation training framework that can leverage heterogeneous and noisy training data by treating them as "evidence". In particular, we use BERTScore for retrieval, which yields better quality of both the evidence and the generation. Experiments on publicly available datasets demonstrate that our method can help models generate better responses, even though such training data are generally considered to be of low quality. This performance gain is comparable to, or even better than, the gain achieved by enlarging the training set. We also find that model performance is positively correlated with the relevance of the retrieved evidence. Moreover, our method performs well in zero-shot experiments, which indicates that it may be more robust to real-world data.
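A toy sketch of evidence retrieval with a BERTScore-style greedy token matching score, assuming a trivial character-overlap similarity in place of BERT contextual embeddings (the real BERTScore matches tokens by cosine similarity of contextual embeddings; everything here is a simplified stand-in for illustration):

```python
def token_sim(a, b):
    """Toy stand-in for cosine similarity of BERT token embeddings:
    1.0 on exact match, otherwise character-set Jaccard overlap."""
    if a == b:
        return 1.0
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def greedy_match_score(candidate, reference):
    """BERTScore-recall-style sketch: greedily match each reference
    token to its most similar candidate token, then average."""
    return sum(max(token_sim(c, r) for c in candidate)
               for r in reference) / len(reference)

def retrieve_evidence(query, corpus, k=1):
    """Rank corpus utterances by similarity to the query; return top-k
    as 'evidence' for the generator."""
    ranked = sorted(corpus,
                    key=lambda doc: greedy_match_score(doc.split(), query.split()),
                    reverse=True)
    return ranked[:k]
```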
The goal of building dialogue agents that can converse with humans naturally has been a long-standing dream of researchers since the early days of artificial intelligence. The well-known Turing Test proposed to judge the ultimate validity of an artificial intelligence agent by the indistinguishability of its dialogues from humans'. It should come as no surprise that human-level dialogue systems are very challenging to build. But while early efforts on rule-based systems found limited success, the emergence of deep learning has enabled great advances on this topic. In this thesis, we focus on methods that address the numerous issues underlying the gap between artificial conversational agents and human-level interlocutors. These methods were proposed and experimented with in ways inspired by general state-of-the-art AI methodologies, while also targeting the characteristics that dialogue systems possess.
The ultimate goal of dialogue research is to develop systems that can be effectively used in interactive settings. To this end, we introduced the Interactive Evaluation of Dialog track at the 9th Dialog System Technology Challenge. This track consists of two sub-tasks. The first sub-task involves building knowledge-grounded response generation models. The second sub-task aims to extend dialogue models beyond static datasets by assessing them in an interactive setting with real users. Our track challenges participants to develop strong response generation models and to explore strategies that extend them to back-and-forth interactions with real users. The progression from static corpora to interactive evaluation introduces unique challenges and facilitates a more comprehensive assessment of open-domain dialogue systems. This paper provides an overview of the track, including the methods and results. Furthermore, it provides insights into how best to evaluate open-domain dialogue models.
Recent progress in neural approaches to language processing has revived interest in building intelligent open-domain chatbots. However, even state-of-the-art neural chatbots cannot produce satisfying responses for every turn in a dialogue. A practical solution is to generate multiple response candidates for the same context and then perform response ranking/selection to determine which candidate is the best. Prior work on response selection typically trains response rankers on synthetic data formed from existing dialogues, using the ground-truth response as the single appropriate response and constructing inappropriate responses by random selection or adversarial methods. In this work, we curate a dataset in which responses from multiple response generators, produced for the same dialogue context, are manually annotated as appropriate (positive) or inappropriate (negative). We argue that such training data better match actual use cases, allowing models to rank responses effectively. With this new dataset, we conduct a systematic evaluation of state-of-the-art response selection methods and demonstrate that the two strategies of using multiple positive candidates and using manually verified hard negative candidates both lead to significant performance improvements over adversarial training data: for example, Recall@1 increases by 3% and 13%, respectively.
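A minimal sketch of the Recall@k metric reported above, assuming the ranker's output and the annotated positive set per context are already available (the helper names are illustrative, not from the paper): a context scores a hit if any of the top-k ranked candidates is annotated as appropriate.

```python
def recall_at_k(ranked_candidates, positives, k=1):
    """1 if any of the top-k ranked candidates is an annotated
    appropriate (positive) response, else 0. With multiple positives
    per context, any hit counts."""
    return int(any(c in positives for c in ranked_candidates[:k]))

def evaluate(ranked_lists, positive_sets, k=1):
    """Mean Recall@k over all dialogue contexts."""
    hits = [recall_at_k(r, p, k) for r, p in zip(ranked_lists, positive_sets)]
    return sum(hits) / len(hits)
```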
Interviews have been regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews between each other. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like real interviewers. Due to the rapid growth of online recruitment in recent years, recruiters tend to hold online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
Long-range context modeling is crucial to both dialogue understanding and generation. The most popular method for dialogue context representation is to concatenate the last-$k$ previous utterances. However, this method may not be ideal for conversations containing long-range dependencies. In this work, we propose DialoGX, a novel encoder-decoder based framework for conversational response generation with a generalized and explainable context representation that can look beyond the last-$k$ utterances. Hence the method is adaptive to conversations with long-range dependencies. The main idea of our approach is to identify and utilize the most relevant historical utterances instead of the last-$k$ utterances in chronological order. We study the effectiveness of our proposed method on both dialogue generation (open-domain) and understanding (DST) tasks. DialoGX achieves comparable performance with the state-of-the-art models on DailyDialog dataset. We also observe performance gain in existing DST models with our proposed context representation strategy on MultiWOZ dataset. We justify our context representation through the lens of psycholinguistics and show that the relevance score of previous utterances agrees well with human cognition which makes DialoGX explainable as well.
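A toy sketch of the central idea of selecting the most relevant historical utterances rather than the last-$k$ in chronological order (DialoGX learns its relevance scores; the word-overlap similarity and helper names below are simplified stand-ins for illustration only):

```python
def overlap_score(u, v):
    """Toy relevance score: word-set Jaccard overlap between two
    utterances, standing in for a learned relevance model."""
    a, b = set(u.lower().split()), set(v.lower().split())
    return len(a & b) / max(1, len(a | b))

def select_context(history, current_utterance, k=2):
    """Pick the k historical utterances most relevant to the current
    turn, then restore chronological order before encoding."""
    by_relevance = sorted(range(len(history)),
                          key=lambda i: overlap_score(history[i], current_utterance),
                          reverse=True)
    keep = sorted(by_relevance[:k])
    return [history[i] for i in keep]

def last_k(history, k=2):
    """Baseline: the usual last-k-utterances context window."""
    return history[-k:]
```

On a conversation with a long-range dependency, the two strategies diverge: the relevant early utterance survives selection while the last-k window discards it.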
Conversational agents have become an integral part of the general population's life for simple task-enabling situations. However, these systems are yet to have any social impact on diverse and minority groups, for example, by helping people with neurological disorders such as ALS, or people with speech, language, and social communication disorders. Language model technology can play a huge role in helping these users carry out daily communication and social interactions. To enable this population, we build a dialogue system that can be controlled by users via cues or keywords. We build models that can suggest relevant cues in the dialogue response context, which are used to control response generation and can speed up communication. We also introduce a keyword loss to lexically constrain the model output. We show, both qualitatively and quantitatively, that our models can effectively induce keywords into the model's responses without degrading response quality. In the context of usage by people with degenerative disorders, we present a human evaluation of our cue/keyword predictor and the controllable dialogue system, and show that our models perform significantly better than models without control. Our study demonstrates that keyword control on end-to-end response generation models is powerful and can enable and empower users with degenerative disorders in their daily communication.
While rich, open-domain textual data are generally available and may include interesting phenomena (humor, sarcasm, empathy, etc.), most are designed for language processing tasks and are usually in a non-conversational format. In this work, we take a step toward automatically generating conversational data using Generative Conversational Networks, aiming to benefit from the breadth of available language and knowledge data to train open-domain social conversational agents. We evaluate our approach on conversations with and without knowledge on the Topical Chat dataset, using automatic metrics and human evaluators. Our results show that for conversations without knowledge grounding, GCN can generalize from the seed data, producing novel conversations that are less relevant but more engaging, and for knowledge-grounded conversations, it can produce more knowledge-focused, fluent, and engaging conversations. Specifically, we show that for open-domain conversations using 10% of the seed data, our approach comes close to the baseline that uses 100% of the data, while for knowledge-grounded conversations it achieves the same using only 1% of the data, on human ratings of engagingness, fluency, and relevance.
We present a large, tunable neural conversational response generation model, DIALOGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human both in terms of automatic and human evaluation in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems.
Standard language model training employs gold human documents or human-human interaction data, and treats all training data as positive examples. Growing evidence shows that even with very large amounts of positive training data, issues remain that can be alleviated with relatively small amounts of negative data -- examples of what the model should not do. In this work, we propose a novel procedure to train with such data called the CRINGE loss (ContRastive Iterative Negative GEneration). We show the effectiveness of this approach across three different experiments on the tasks of safe generation, contradiction avoidance, and open-domain dialogue. Our models outperform multiple strong baselines and are conceptually simple, easy to train and implement.
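A simplified sketch of the contrastive idea behind a CRINGE-style loss, assuming each token of a negative example is paired with a "positive" token logit (in the full method the positive token per step is sampled from the model's own top-k predictions; here the pairs are given directly, and the exact pairing/sampling details are omitted):

```python
import math

def cringe_token_loss(neg_logit, pos_logit):
    """Pairwise contrastive term: a two-way softmax over {positive,
    negative} token logits, taking -log p(positive). Minimizing it
    pushes the positive token's logit above the negative token's."""
    m = max(pos_logit, neg_logit)  # subtract max for numerical stability
    denom = math.exp(pos_logit - m) + math.exp(neg_logit - m)
    return -math.log(math.exp(pos_logit - m) / denom)

def cringe_sequence_loss(neg_token_logits, pos_token_logits):
    """Average the pairwise term over the tokens of a negative example."""
    terms = [cringe_token_loss(n, p)
             for n, p in zip(neg_token_logits, pos_token_logits)]
    return sum(terms) / len(terms)
```

Positive examples would still be trained with ordinary likelihood; this term is only applied to the negative data, which is what lets relatively small amounts of negative examples steer the model.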
Conversational recommender systems (CRS) have become an emerging research topic, seeking to provide recommendations through interactive conversations, and typically consist of a generation module and a recommendation module. Prior work on CRS tends to incorporate more external and domain-specific knowledge, such as item reviews, to improve performance. However, the collection and annotation of external domain-specific information require substantial human effort and reduce generality, and excessive extra knowledge makes it harder to align the modules. Therefore, we propose to fully discover and extract internal knowledge from the context. We capture both entity-level and context-level representations to jointly model user preferences for recommendation, where a time-aware attention is designed to emphasize recently appearing items in the entity-level representations. We further use a pre-trained BART to initialize the generation module to alleviate data scarcity and enhance context modeling. In addition to experiments on the popular ReDial dataset, we also include a multi-domain dataset (OpenDialKG) to show the effectiveness of our model. Experiments on both datasets show that our model achieves better performance on most evaluation metrics with less external knowledge and generalizes well to other domains. Additional analyses of the recommendation and generation tasks demonstrate the effectiveness of our model in different scenarios.
Task-oriented dialogue systems (TDSs) are assessed mainly in an offline setting or through human evaluation. The evaluation is often limited to single turns or is very time-intensive. As an alternative, user simulators that mimic user behavior allow us to consider a broad set of user goals to generate human-like conversations for simulated evaluation. Employing existing user simulators to evaluate TDSs is challenging, as user simulators are primarily designed to optimize the dialogue policy of TDSs and have limited evaluation capability. Moreover, the evaluation of user simulators is itself an open challenge. In this work, we propose a metaphorical user simulator for end-to-end TDS evaluation, where we define a simulator as metaphorical if it simulates the user's analogical thinking in interactions with the system. We also propose a tester-based evaluation framework to generate variants, i.e., dialogue systems with different capabilities. Our user simulator constructs a metaphorical user model that assists the simulator in reasoning by referring to prior knowledge when encountering new items. We estimate the quality of simulators by checking the simulated interactions between the simulators and the variants. Our experiments are conducted on three TDS datasets. The metaphorical user simulator demonstrates better consistency with manual evaluation than an agenda-based simulator and a seq2seq model on all three datasets. Our tester framework demonstrates efficiency, and generalizes and scales better since it is applicable to conversations in multiple domains and to multiple tasks, such as conversational recommendation and e-commerce dialogues.
Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., I don't know) regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (response) given input (message) is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as the objective function in neural models. Experimental results demonstrate that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations. (The abstract is followed by a flattened table of sample outputs contrasting generic likelihood-ranked responses such as "I don't know." with more contentful MMI-ranked responses such as "My name is John." and "Twenty-five.", each shown with its model score.)
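A minimal sketch of MMI-style reranking, assuming candidate responses come with their conditional log-likelihood log p(T|S) and unconditional log-likelihood log p(T) (the toy scores and the λ value below are illustrative, not from the paper): generic responses are likely under both models, so subtracting a scaled language-model term demotes them.

```python
def mmi_score(log_p_t_given_s, log_p_t, lam=0.5):
    """MMI-antiLM-style objective: reward responses that are likely
    given the message but penalize unconditionally likely (generic)
    responses."""
    return log_p_t_given_s - lam * log_p_t

def rerank(candidates, lam=0.5):
    """candidates: list of (response, log_p_t_given_s, log_p_t) tuples,
    returned in descending MMI-score order."""
    return sorted(candidates, key=lambda c: mmi_score(c[1], c[2], lam),
                  reverse=True)
```

With likelihood-only ranking, "I don't know." (higher log p(T|S)) would win; under the MMI score, the contentful candidate comes out on top.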
Steering language generation towards objectives or away from undesired content has been a long-standing goal in utilizing language models (LM). Recent work has demonstrated reinforcement learning and weighted decoding as effective approaches to achieve a higher level of language control and quality, each with pros and cons. In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding. Specifically, we adopt the actor-critic framework to train an LM-steering critic from non-differentiable reward models. Similar to weighted decoding, our method freezes the language model and manipulates the output token distribution using the trained critic, improving training efficiency and stability. Evaluation of our method on three controlled generation tasks, namely topic control, sentiment control, and detoxification, shows that our approach generates more coherent and well-controlled texts than previous methods. In addition, CriticControl demonstrates superior generalization ability in zero-shot settings. Human evaluation studies also corroborate our findings.
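A minimal sketch of the weighted-decoding step described above, assuming per-token scores from a critic are available as a token-to-score map (the dict-based interface and the strength parameter alpha are illustrative assumptions, not CriticControl's implementation): the frozen LM's next-token distribution is reweighted by the critic's desirability scores and renormalized.

```python
def critic_weighted_decode_step(lm_probs, critic_scores, alpha=1.0):
    """One decoding step of critic-weighted decoding: multiply the
    frozen LM's token probabilities by the critic's per-token scores
    (raised to a strength alpha), then renormalize to a distribution.

    lm_probs, critic_scores: dicts mapping token -> prob / score.
    Tokens the critic does not score keep their LM probability.
    """
    weighted = {t: p * (critic_scores.get(t, 1.0) ** alpha)
                for t, p in lm_probs.items()}
    z = sum(weighted.values())
    return {t: w / z for t, w in weighted.items()}
```

Because only the output distribution is manipulated, the LM's parameters never change, which is the training-efficiency argument the abstract makes.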