GPT-2 has been frequently adapted in story generation models because it provides powerful generative capability. However, it still fails to generate consistent stories and lacks diversity. Current story generation models feed additional information, such as plots or commonsense, into GPT-2 to guide the generation process. These approaches focus on improving the generation quality of stories, whereas our work looks at both quality and diversity. We explore combining BERT and GPT-2 to build a variational autoencoder (VAE), and extend it by adding further objectives to learn global features such as story topic and discourse relations. Our evaluations show that our enhanced VAE can achieve a better quality-diversity trade-off, generate less repetitive story content, and learn more informative latent variables.
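A minimal sketch of the kind of combination described above, assuming one common wiring: a BERT encoder parameterizes the approximate posterior, and the sampled latent is injected into GPT-2 as a single prefix embedding. The coupling scheme, latent size, and the additional story-feature objectives are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
from transformers import BertModel, GPT2LMHeadModel

class StoryVAE(nn.Module):
    """Hypothetical BERT-encoder / GPT-2-decoder VAE (wiring assumed for illustration)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        self.decoder = GPT2LMHeadModel.from_pretrained("gpt2")
        hid = self.encoder.config.hidden_size
        self.to_mu = nn.Linear(hid, latent_dim)
        self.to_logvar = nn.Linear(hid, latent_dim)
        # Project the latent into a single "prefix" embedding for GPT-2.
        self.to_prefix = nn.Linear(latent_dim, self.decoder.config.n_embd)

    def forward(self, enc_ids, enc_mask, dec_ids):
        pooled = self.encoder(input_ids=enc_ids, attention_mask=enc_mask).pooler_output
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()       # reparameterization
        prefix = self.to_prefix(z).unsqueeze(1)                     # (B, 1, n_embd)
        tok_emb = self.decoder.transformer.wte(dec_ids)
        inputs_embeds = torch.cat([prefix, tok_emb], dim=1)
        # Ignore the prefix position when computing the LM loss.
        labels = torch.cat([torch.full_like(dec_ids[:, :1], -100), dec_ids], dim=1)
        out = self.decoder(inputs_embeds=inputs_embeds, labels=labels)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return out.loss + kl, out.loss, kl
```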
The past several years have witnessed the superiority of variational autoencoders in various text generation tasks. However, due to the sequential nature of text, auto-regressive decoders tend to ignore latent variables and reduce to simple language models, known as the KL-vanishing problem, which deteriorates further when a VAE is combined with Transformer-based structures. To ameliorate this problem, we propose DELLA, a novel variational Transformer framework. DELLA learns a series of layer-wise latent variables, each inferred from those of lower layers and tightly coupled with the hidden states through a low-rank tensor product. In this way, DELLA forces these posterior latent variables to be fused deeply with the whole computation path and hence incorporate more information. We theoretically demonstrate that our method can be regarded as entangling latent variables to avoid posterior information decreasing through the layers, enabling DELLA to obtain higher non-zero KL values even without any annealing or thresholding tricks. Experiments on four unconditional and three conditional generation tasks show that, compared with multiple strong baselines, DELLA better alleviates KL vanishing and improves both quality and diversity.
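A schematic reading of the layer-wise design, assuming each layer infers its latent from the hidden states of the layer below and fuses it back through a low-rank bilinear product; the pooling, shapes, and residual fusion are illustrative choices rather than DELLA's exact parameterization.

```python
import torch
import torch.nn as nn

class LayerwiseLatent(nn.Module):
    """Schematic DELLA-style layer: infer a per-layer latent from the layer below
    and couple it back to the hidden states via a low-rank (bilinear) product."""
    def __init__(self, hidden=512, latent=32, rank=16):
        super().__init__()
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        # Low-rank factors mapping (hidden state, latent) -> hidden-state update.
        self.U = nn.Linear(hidden, rank, bias=False)
        self.V = nn.Linear(latent, rank, bias=False)
        self.W = nn.Linear(rank, hidden, bias=False)

    def forward(self, h_prev):                      # h_prev: (B, T, hidden)
        pooled = h_prev.mean(dim=1)                 # summarize the layer below
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # Low-rank tensor product of hidden states and the sampled latent.
        fused = self.W(self.U(h_prev) * self.V(z).unsqueeze(1))
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
        return h_prev + fused, kl

layers = nn.ModuleList([LayerwiseLatent() for _ in range(4)])
h, total_kl = torch.randn(2, 16, 512), 0.0
for layer in layers:                                # latents are chained through the stack
    h, kl = layer(h)
    total_kl = total_kl + kl
```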
Long story generation (LSG) is one of the coveted goals of natural language processing. Unlike most text generation tasks, LSG requires outputting a long story with rich content based on a much shorter text input, and it often suffers from information sparsity. In this paper, we propose \emph{TopNet} to alleviate this problem by leveraging recent advances in neural topic modeling to obtain high-quality skeleton words that complement the short input. In particular, instead of directly generating a story, we first learn to map the short text input to a low-dimensional topic distribution (which is pre-assigned by a topic model). Based on this latent topic distribution, we can then use the topic model's reconstruction decoder to sample a sequence of inter-related words as a skeleton for the story. Experiments on two benchmark datasets show that our framework is highly effective in skeleton-word selection and significantly outperforms state-of-the-art models in both automatic and human evaluation.
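A toy sketch of the two-step skeleton extraction described above, with a randomly initialized stand-in for the pretrained topic model's decoder (topic-word matrix); the dimensions and the input encoding are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkeletonSampler(nn.Module):
    """Schematic TopNet-style sampler: map a short-input vector to a topic
    distribution, then turn it into word probabilities with a (frozen) topic-word
    matrix and draw skeleton words from that distribution."""
    def __init__(self, vocab=5000, n_topics=50, enc_dim=256):
        super().__init__()
        self.input_to_topic = nn.Sequential(nn.Linear(enc_dim, n_topics),
                                            nn.Softmax(dim=-1))
        # Stand-in for a pretrained topic model's decoder (rows: topics, cols: vocab).
        self.topic_word = nn.Parameter(torch.randn(n_topics, vocab), requires_grad=False)

    def forward(self, short_input_vec, k=10):
        theta = self.input_to_topic(short_input_vec)               # (B, n_topics)
        word_probs = F.softmax(theta @ self.topic_word, dim=-1)    # (B, vocab)
        return torch.multinomial(word_probs, k)                    # k skeleton word ids

sampler = SkeletonSampler()
skeleton_ids = sampler(torch.randn(2, 256), k=8)
```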
The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.
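Two small helpers illustrating ideas from this line of work: a KL-weight annealing schedule, one common remedy for the optimization difficulty mentioned above (posterior collapse), and linear interpolation between two sentence codes. The schedule shape and constants are arbitrary illustrative choices.

```python
import torch

def kl_anneal_weight(step, total_steps=10000):
    """Monotonic sigmoid schedule for the KL term's weight during training."""
    return torch.sigmoid(torch.tensor(10.0 * (step / total_steps - 0.5))).item()

def interpolate(z_a, z_b, steps=5):
    """Linear interpolation between two sentence codes; decoding each point with
    the trained decoder yields the smooth sentence transitions described above."""
    return [z_a + (z_b - z_a) * t / (steps - 1) for t in range(steps)]
```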
Current work on personalized dialogue primarily contributes to the agent exhibiting a consistent persona and producing more informative responses. However, we find that the responses generated by most previous models tend to be self-centered, paying little attention to the user in the dialogue. Moreover, we argue that human-like conversation is essentially built on inferring information about the other party's persona. Motivated by this, we propose a novel personalized dialogue generator that detects an implicit user persona. Because it is hard to collect a large number of detailed personas for each user, we attempt to model the user's potential persona and its representation from the dialogue history, without external knowledge. The perception variable and the fader variable are conceived using conditional variational inference. These two latent variables simulate the process by which people become aware of each other's personas and produce corresponding expressions in conversation. Finally, posterior-discriminated regularization is presented to enhance the training procedure. Empirical studies demonstrate that, compared with state-of-the-art methods, our approach is more concerned with the user's persona and achieves a considerable boost across the evaluations.
Conditional variational models, using either continuous or discrete latent variables, are powerful for open-domain dialogue response generation. However, previous works show that continuous latent variables tend to reduce the coherence of generated responses. In this paper, we also found that discrete latent variables have difficulty capturing more diverse expressions. To tackle these problems, we combine the merits of both continuous and discrete latent variables and propose a Hybrid Latent Variable (HLV) method. Specifically, HLV constrains the global semantics of responses through discrete latent variables and enriches responses with continuous latent variables. Thus, we diversify the generated responses while maintaining relevance and coherence. In addition, we propose Conditional Hybrid Variational Transformer (CHVT) to construct and to utilize HLV with transformers for dialogue generation. Through fine-grained symbolic-level semantic information and additive Gaussian mixing, we construct the distribution of continuous variables, prompting the generation of diverse expressions. Meanwhile, to maintain the relevance and coherence, the discrete latent variable is optimized by self-separation training. Experimental results on two dialogue generation datasets (DailyDialog and Opensubtitles) show that CHVT is superior to traditional transformer-based variational mechanism w.r.t. diversity, relevance and coherence metrics. Moreover, we also demonstrate the benefit of applying HLV to fine-tuning two pre-trained dialogue models (PLATO and BART-base).
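A rough sketch of a hybrid latent variable, assuming the discrete code is drawn with Gumbel-softmax over response classes and the continuous code is formed by additively mixing per-token Gaussian samples; this is an illustrative reading of the abstract, not CHVT's exact construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridLatent(nn.Module):
    """Sketch of a hybrid latent: a discrete code for global semantics plus a
    continuous code built by additive mixing of token-level Gaussian samples."""
    def __init__(self, hidden=512, n_classes=20, latent=64):
        super().__init__()
        self.cls_logits = nn.Linear(hidden, n_classes)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)

    def forward(self, token_states):                     # (B, T, hidden)
        pooled = token_states.mean(dim=1)
        c = F.gumbel_softmax(self.cls_logits(pooled), tau=1.0, hard=True)  # discrete code
        mu, logvar = self.to_mu(token_states), self.to_logvar(token_states)
        z_tok = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        z = z_tok.sum(dim=1)                             # additive mixing over tokens
        return c, z
```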
Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent text (to varying degrees of success) across a multitude of tasks and application contexts. However, controlling the output of these models to meet user needs remains an open challenge. This is crucial not only for customizing the content and style of the generated language, but also for their safe and reliable deployment in the real world. We present an extensive survey on the emerging topic of constrained neural language generation, in which we formally define and categorize natural language generation problems by distinguishing between conditions and constraints (the latter being testable conditions on the output text rather than on the input), present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field, informing on the most promising directions and limitations towards advancing the state of the art of constrained neural language generation research.
Storytelling and narrative are fundamental to human experience, intertwined with our social and cultural engagement. As such, researchers have long attempted to create systems that can generate stories automatically. In recent years, powered by deep learning and massive data resources, automatic story generation has shown significant advances. However, considerable challenges, like the need for global coherence in generated stories, still hamper generative models from reaching the same storytelling ability as human narrators. To tackle these challenges, many studies seek to inject structured knowledge into the generation process, which is referred to as structure knowledge-enhanced story generation. Incorporating external knowledge can enhance the logical coherence among story events, achieve better knowledge grounding, and alleviate over-generalization and repetition problems in stories. This survey provides the latest and comprehensive review of this research field: (i) we present a systematical taxonomy regarding how existing methods integrate structured knowledge into story generation; (ii) we summarize involved story corpora, structured knowledge datasets, and evaluation metrics; (iii) we give multidimensional insights into the challenges of knowledge-enhanced story generation and cast light on promising directions for future study.
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints in practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the limited interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches have emerged in the past 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review on the common tasks, main approaches and evaluation methods in this area. Finally, we discuss the challenges that the field is facing, and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Recent knowledge-grounded dialogue methods generate responses by incorporating information from external text documents. These methods do not require the exact document to be known during training and rely on a retrieval system to fetch relevant documents from a large index. The documents used to generate responses are modeled as latent variables whose prior probabilities need to be estimated. Models such as RAG marginalize the document probabilities over the documents retrieved from the index to define a log-likelihood loss function that is optimized end to end. In this paper, we develop a variational approach to the above technique in which we instead maximize the evidence lower bound (ELBO). Using a collection of three publicly available open-conversation datasets, we demonstrate how the posterior distribution, which has information from the ground-truth response, allows for a better approximation of the objective function during training. To overcome the sampling challenges associated with large knowledge collections, we develop an efficient approach to approximate the ELBO. To the best of our knowledge, we are the first to apply variational training to open-domain unsupervised knowledge-grounded dialogue systems.
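In standard notation (assumed here, not copied from the paper), with dialogue context x, response y, and document d as the latent variable, the bound being maximized has the familiar form:

```latex
\log p_\theta(y \mid x)
  \;\ge\;
  \mathbb{E}_{q_\phi(d \mid x, y)}\!\left[\log p_\theta(y \mid x, d)\right]
  \;-\;
  \mathrm{KL}\!\left(q_\phi(d \mid x, y)\,\|\,p_\theta(d \mid x)\right)
```

The posterior q is conditioned on the ground-truth response y, which is what allows the tighter approximation of the objective described above; the prior p(d | x) is the retrieval-based document distribution.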
Recently, discrete latent variable models have received a surge of interest in both Natural Language Processing (NLP) and Computer Vision (CV), attributed to their comparable performance to the continuous counterparts in representation learning, while being more interpretable in their predictions. In this paper, we develop a topic-informed discrete latent variable model for semantic textual similarity, which learns a shared latent space for sentence-pair representation via vector quantization. Compared with previous models limited to local semantic contexts, our model can explore richer semantic information via topic modeling. We further boost the performance of semantic similarity by injecting the quantized representation into a transformer-based language model with a well-designed semantic-driven attention mechanism. We demonstrate, through extensive experiments across various English language datasets, that our model is able to surpass several strong neural baselines in semantic textual similarity tasks.
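A minimal vector-quantization layer of the kind referred to above (VQ-VAE-style nearest-codebook lookup with a straight-through gradient); the codebook size and commitment weight are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Map a continuous sentence-pair encoding to its nearest codebook entry and
    pass gradients through with the straight-through estimator."""
    def __init__(self, n_codes=512, dim=256, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, dim)
        self.beta = beta

    def forward(self, h):                                    # h: (B, dim)
        dists = torch.cdist(h, self.codebook.weight)         # (B, n_codes)
        idx = dists.argmin(dim=-1)
        q = self.codebook(idx)
        # Codebook and commitment losses, VQ-VAE style.
        loss = F.mse_loss(q, h.detach()) + self.beta * F.mse_loss(h, q.detach())
        q = h + (q - h).detach()                             # straight-through gradient
        return q, idx, loss
```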
Building models to detect vaccine attitudes on social media is challenging because of the composite, often intricate aspects involved, as well as the limited availability of annotated data. Existing approaches rely heavily on supervised training, which requires abundant annotations and pre-defined aspect categories. Instead, in order to leverage the large amount of unannotated data now available on vaccination, we propose a novel semi-supervised approach for vaccine attitude detection, called VADet. A variational autoencoder architecture based on language models is employed to learn the topical information of the domain from unlabelled data. The model is then fine-tuned with a few manually annotated examples of user attitudes. We validate the effectiveness of VADet on our annotated data and on an existing vaccination corpus annotated with opinions on vaccines. Our results show that VADet is able to learn disentangled stance and aspect topics, and outperforms existing aspect-based sentiment analysis models on both stance detection and tweet clustering.
Pre-trained language models (PLMs) fail to generate long-form narrative text because they do not consider global structure. As a result, the generated texts are often incohesive, repetitive, or lacking in content. Recent work on story generation reintroduced explicit content planning in the form of prompts, keywords, or semantic frames. Trained on large parallel corpora, these models can generate more logical event sequences and thus more contentful stories. However, such intermediate representations are often not in natural language and cannot be utilized by PLMs without fine-tuning. We propose generating story plots using off-the-shelf PLMs while retaining the benefit of content planning, to produce cohesive and contentful stories. Our proposed method, ScratchPlot, first prompts a PLM to compose a content plan. Then, we generate the story's body and ending conditioned on the content plan. In addition, we use an additional PLM to rank the generated (story, ending) pairs. We benchmark our method against various baselines and achieve superior results in both human and automatic evaluation.
Inference over event sequences is critical for many real-world applications in information retrieval and natural language processing. Among event-sequence prediction tasks, future event generation (FEG) is challenging because it requires not only fluent text generation but also commonsense reasoning to maintain the logical coherence of the whole event story. In this paper, we propose COEP, a novel explainable FEG framework. It highlights and integrates two types of event knowledge: sequential knowledge of direct event-to-event relations, and inferential knowledge that reflects the intermediate character psychology between events (e.g., intents, causes, reactions), which intrinsically pushes the story forward. To alleviate the knowledge-forgetting issue, we design two modules, IM and GM, one for each type of knowledge, which are combined via prompt tuning. First, IM focuses on understanding the inferential knowledge to generate commonsense explanations and to provide soft prompt vectors for GM. We also design a contrastive discriminator for better generalization. Second, GM generates future events by modeling the direct sequential knowledge under the guidance of IM. Automatic and human evaluations show that our approach can generate more coherent, specific, and logical future events.
This paper provides a comprehensive review of research on natural language generation (NLG) over the past two decades, in particular deep learning methods for data-to-text and text-to-text generation, as well as new applications of NLG technology. The survey aims to (a) give the latest synthesis of research on the core tasks of NLG and the architectures adopted in the field; (b) detail the various NLG tasks and datasets, and draw attention to the challenges of NLG evaluation, focusing on different evaluation methods and their relationships; (c) highlight future emphases and relatively recent research issues arising from the increasing synergy between NLG and other areas of artificial intelligence, such as computer vision, text, and computational creativity.
Creating an essay based on a few given topics is a challenging NLP task. Although several effective methods for this problem, topic-to-essay generation, have appeared recently, there is still much room for improvement, especially in terms of the coverage of the given topics and the coherence of the generated text. In this paper, we propose a novel approach called TegFormer which utilizes the Transformer architecture, where the encoder is enriched with domain-specific contexts while the decoder is enhanced by a large-scale pre-trained language model. Specifically, a \emph{Topic-Extension} layer capturing the interaction between the given topics and their domain-specific contexts is plugged into the encoder. Since the given topics are usually concise and sparse, such an additional layer can bring in more topic-related semantics to facilitate the subsequent natural language generation. Moreover, an \emph{Embedding-Fusion} module that combines the domain-specific word embeddings learnt from the given corpus and the general-purpose word embeddings provided by a GPT-2 model pre-trained on massive text data is integrated into the decoder. Since GPT-2 is at a much larger scale, it contains a lot more implicit linguistic knowledge, which would help the decoder to produce more grammatical and readable text. Extensive experiments have shown that the pieces of text generated by TegFormer have better topic coverage and higher text coherence than those from SOTA topic-to-essay techniques, according to automatic and human evaluations. As revealed by ablation studies, both the Topic-Extension layer and the Embedding-Fusion module contribute substantially to TegFormer's performance advantage.
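A sketch of an embedding-fusion step consistent with the description above, assuming the two embedding sources are combined with a learned sigmoid gate; the abstract only states that they are combined, so the gating rule, dimensions, and freezing choice are assumptions.

```python
import torch
import torch.nn as nn

class EmbeddingFusion(nn.Module):
    """Combine a domain-specific embedding table learnt on the essay corpus with
    (frozen) GPT-2 embeddings via a learned gate. Illustrative formulation only."""
    def __init__(self, vocab=50257, domain_dim=300, gpt2_dim=768):
        super().__init__()
        self.domain_emb = nn.Embedding(vocab, domain_dim)
        self.gpt2_emb = nn.Embedding(vocab, gpt2_dim)     # would be loaded from GPT-2
        self.gpt2_emb.weight.requires_grad = False
        self.proj = nn.Linear(domain_dim, gpt2_dim)
        self.gate = nn.Linear(2 * gpt2_dim, gpt2_dim)

    def forward(self, token_ids):
        d = self.proj(self.domain_emb(token_ids))
        g = self.gpt2_emb(token_ids)
        alpha = torch.sigmoid(self.gate(torch.cat([d, g], dim=-1)))
        return alpha * d + (1 - alpha) * g                # gated mixture of the two sources
```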
Self-training (ST) has prospered again in language understanding by augmenting the fine-tuning of pre-trained language models when labeled data is insufficient. However, it remains challenging to incorporate ST into attribute-controllable language generation. Augmented by only self-generated pseudo text, generation models over-emphasize exploitation of the previously learned space, suffering from a constrained generalization boundary. We revisit ST and propose a novel method, DuNST to alleviate this problem. DuNST jointly models text generation and classification with a shared Variational AutoEncoder and corrupts the generated pseudo text by two kinds of flexible noise to disturb the space. In this way, our model could construct and utilize both pseudo text from given labels and pseudo labels from available unlabeled text, which are gradually refined during the ST process. We theoretically demonstrate that DuNST can be regarded as enhancing exploration towards the potential real text space, providing a guarantee of improved performance. Experiments on three controllable generation tasks show that DuNST could significantly boost control accuracy while maintaining comparable generation fluency and diversity against several strong baselines.
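Two toy corruption functions in the spirit of the "two kinds of flexible noise" mentioned above: a discrete perturbation of pseudo-text tokens and a continuous perturbation of their embeddings. The specific forms and hyperparameters are illustrative, not DuNST's exact noise.

```python
import torch

def hard_noise(token_ids, drop_prob=0.1, mask_id=0):
    """Discrete corruption: randomly replace pseudo-text tokens with a mask id."""
    mask = torch.rand_like(token_ids, dtype=torch.float) < drop_prob
    return torch.where(mask, torch.full_like(token_ids, mask_id), token_ids)

def soft_noise(token_embeds, sigma=0.05):
    """Continuous corruption: perturb pseudo-text embeddings with Gaussian noise
    so self-training does not collapse onto the previously learned space."""
    return token_embeds + sigma * torch.randn_like(token_embeds)
```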
Recent advances in deep learning research, such as transformers, have bolstered the ability for automated agents to generate creative texts similar to those that a human would write. By default, transformer decoders can only generate new text with respect to previously generated text. The output distribution of candidate tokens at any position is conditioned on previously selected tokens using a self-attention mechanism to emulate the property of autoregression. This is inherently limiting for tasks such as controllable story generation where it may be necessary to condition on future plot events when writing a story. In this work, we propose Future Sight, a method for finetuning a pretrained generative transformer on the task of future conditioning. Transformer decoders are typically pretrained on the task of completing a context, one token at a time, by means of self-attention. Future Sight additionally enables a decoder to attend to an encoded future plot event. This motivates the decoder to expand on the context in a way that logically concludes with the provided future. During inference, the future plot event can be written by a human author to steer the narrative being generated in a certain direction. We evaluate the efficacy of our approach on a story generation task with human evaluators.
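A minimal sketch of the future-conditioning idea: decoder states additionally cross-attend to an encoding of a future plot event. The single-layer residual form and the dimensions are assumptions; the paper itself fine-tunes a pretrained generative transformer rather than bolting on a module like this.

```python
import torch
import torch.nn as nn

class FutureConditioning(nn.Module):
    """Let decoder hidden states attend to an encoded future plot event in addition
    to their ordinary (causal) self-attention over the context."""
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, decoder_states, future_event_enc):
        # decoder_states: (B, T, dim); future_event_enc: (B, S, dim)
        attended, _ = self.cross_attn(decoder_states, future_event_enc, future_event_enc)
        return decoder_states + attended
```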
At present, open-domain dialogue systems based on end-to-end deep learning remain black-box models, which makes it easy for such data-driven models to generate irrelevant content. Specifically, because there is no prior knowledge to guide training, latent variables become entangled with different semantics in the latent space. To address this problem, this paper proposes a cognitive approach to exploiting the generative model by introducing mesoscopic-scale feature disentanglement. In particular, the model integrates macro-level guiding category knowledge and micro-level open-domain dialogue data into training, and exploits prior knowledge in the latent space, enabling the model to disentangle latent variables at the mesoscopic scale. In addition, we propose a new metric for open-domain dialogue that objectively evaluates the interpretability of the latent-space distribution. Finally, we validate our model on different datasets and experimentally demonstrate that it can generate higher-quality and more interpretable dialogues than other models.
We study models that can generate arbitrary natural language text (e.g., all English sentences) from a bounded, convex, and well-behaved control space. We call them universal vec2text models. Such models would allow semantic decisions to be made in the vector space (e.g., via reinforcement learning) while natural language generation is handled by the vec2text model. We propose four desired properties that such a vec2text model should have: universality, diversity, fluency, and semantic structure, and we provide quantitative and qualitative methods to assess them. We implement a vec2text model by adding a bottleneck to a 250M-parameter Transformer model and training it with an auto-encoding objective on 400M sentences (10B tokens) extracted from a large web corpus. We propose a simple data-augmentation technique based on round-trip translations and show in extensive experiments that the resulting vec2text model surprisingly leads to vector spaces that fulfill our four desired properties, and that this model strongly outperforms both standard and denoising autoencoders.
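A schematic bottleneck autoencoder in the spirit described above: the encoder output is pooled into a single vector, and the decoder reconstructs the sentence while attending only to that vector. All sizes here are illustrative toys; the actual model is a 250M-parameter transformer trained on 400M sentences.

```python
import torch
import torch.nn as nn

class BottleneckAutoencoder(nn.Module):
    """Sketch of a vec2text-style model: sentence -> single vector -> sentence."""
    def __init__(self, vocab=32000, dim=512, bottleneck=512):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, 8, batch_first=True), num_layers=4)
        self.to_vec = nn.Linear(dim, bottleneck)
        self.from_vec = nn.Linear(bottleneck, dim)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, 8, batch_first=True), num_layers=4)
        self.lm_head = nn.Linear(dim, vocab)

    def forward(self, src_ids, tgt_ids):
        enc = self.encoder(self.emb(src_ids))
        vec = self.to_vec(enc.mean(dim=1))                 # the sentence vector (bottleneck)
        memory = self.from_vec(vec).unsqueeze(1)           # (B, 1, dim): all the decoder sees
        tgt = self.emb(tgt_ids)
        causal = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        dec = self.decoder(tgt, memory, tgt_mask=causal)
        return self.lm_head(dec), vec
```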