Previous work on controllable text generation has explored the idea of control from the latent space, such as optimizing a representation with attribute-related classifiers or sampling a representation from relevant discrete samples. However, these approaches are not effective enough at modeling both the latent space and the control, leaving the controlled text with low quality and diversity. In this work, we propose a novel control framework based on probability density estimation in the latent space. Our method utilizes an invertible transformation function, a Normalizing Flow, that maps the complex distributions in the latent space to simple Gaussian distributions in the prior space. We can therefore perform sophisticated and flexible control in the prior space and feed the control effects back into the latent space, owing to the one-to-one mapping property of invertible transformations. Experiments on single-attribute and multi-attribute control show that our method outperforms several strong baselines on attribute relevance and text quality and achieves state-of-the-art results. Further analysis of control strength adjustment demonstrates the flexibility of our control strategy.
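To make the invertibility idea concrete, here is a minimal sketch of a RealNVP-style affine coupling layer in PyTorch: map a latent code to the Gaussian prior space, apply a control step there, and map back exactly. The layer, the dimensionality, and the stand-in control step are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer: invertible, with a cheap Jacobian."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z):            # latent -> prior direction
        z1, z2 = z[:, :self.half], z[:, self.half:]
        log_s, t = self.net(z1).chunk(2, dim=-1)
        return torch.cat([z1, z2 * torch.exp(log_s) + t], dim=-1)

    def inverse(self, u):            # prior -> latent direction
        u1, u2 = u[:, :self.half], u[:, self.half:]
        log_s, t = self.net(u1).chunk(2, dim=-1)
        return torch.cat([u1, (u2 - t) * torch.exp(-log_s)], dim=-1)

# Hypothetical control loop: map latent codes to the Gaussian prior space,
# nudge them there (stand-in for a real control operation), then map back exactly.
dim = 16
flow = AffineCoupling(dim)
z = torch.randn(4, dim)                        # latent codes from some encoder
u = flow(z)                                    # prior-space representation
u_controlled = u + 0.5 * torch.randn_like(u)   # stand-in for a control step
z_controlled = flow.inverse(u_controlled)      # back to latent space
print(torch.allclose(flow.inverse(flow(z)), z, atol=1e-5))  # invertibility check
```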
Self-training (ST) has prospered again in language understanding by augmenting the fine-tuning of pre-trained language models when labeled data is insufficient. However, it remains challenging to incorporate ST into attribute-controllable language generation. Augmented only by self-generated pseudo text, generation models over-emphasize exploitation of the previously learned space and suffer from a constrained generalization boundary. We revisit ST and propose a novel method, DuNST, to alleviate this problem. DuNST jointly models text generation and classification with a shared Variational AutoEncoder and corrupts the generated pseudo text with two kinds of flexible noise to perturb the space. In this way, our model can construct and utilize both pseudo text from given labels and pseudo labels from available unlabeled text, which are gradually refined during the ST process. We theoretically demonstrate that DuNST can be regarded as enhancing exploration towards the potential real text space, providing a guarantee of improved performance. Experiments on three controllable generation tasks show that DuNST significantly boosts control accuracy while maintaining generation fluency and diversity comparable to several strong baselines.
Real-world text applications often involve composing a wide range of text control operations, such as editing text with respect to an attribute, manipulating keywords and structure, and generating new text with desired attributes. Prior work typically learns or fine-tunes a language model (LM) to perform individual or specific subsets of operations. Recent research has studied combining operations in a plug-and-play manner, often with costly search or optimization in the complex sequence space. This paper proposes a new, efficient approach for composable text operations in the compact latent space of text. The low dimensionality and differentiability of the text latent vectors allow us to develop an efficient sampler based on ordinary differential equations (ODEs) given arbitrary plug-in operators (e.g., attribute classifiers). By connecting pre-trained LMs (e.g., GPT-2) to the latent space through efficient adaptation, we then decode the sampled vectors into the desired text sequences. The flexible approach permits diverse control operators (sentiment, tense, formality, keywords, etc.) acquired with any relevant data from different domains. Experiments show that composing these operators within our approach manages to generate or edit high-quality text, substantially improving over previous methods in terms of generation quality and efficiency.
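As a rough illustration of moving latent vectors with a plug-in attribute classifier, the sketch below integrates the gradient field of a toy classifier with plain Euler steps. It only mirrors the spirit of the ODE sampler described above; the classifier, latent dimensionality, and step sizes are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical attribute classifier operating on low-dimensional latent vectors.
clf = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))

def euler_ode_control(z, target_label=1, steps=50, dt=0.1):
    """Follow dz/dt = d/dz log p(target | z) with simple Euler integration."""
    z = z.clone()
    for _ in range(steps):
        z.requires_grad_(True)
        log_prob = torch.log_softmax(clf(z), dim=-1)[:, target_label].sum()
        grad = torch.autograd.grad(log_prob, z)[0]
        z = (z + dt * grad).detach()
    return z

z0 = torch.randn(8, 32)          # latent codes, e.g., from a VAE or LM adapter
z_ctrl = euler_ode_control(z0)   # nudged toward the target attribute region
```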
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the limited interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging research hotspot. A diverse range of approaches has emerged in the past three to four years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Text style transfer aims to alter the style of a sentence while preserving its content. Due to the lack of parallel corpora, most recent work focuses on unsupervised methods and often uses cycle construction to train models. While cycle construction helps improve the style transfer ability of the model by rebuilding transferred sentences back into original-style sentences, it also brings about content loss in unsupervised text style transfer tasks. In this paper, we propose a novel disentanglement-based style transfer model, StyleFlow, to enhance content preservation. Instead of the typical encoder-decoder scheme, StyleFlow can not only run the forward process to obtain the output, but also infer the input back from the output. We design attention-aware coupling layers to disentangle the content representations and the style representations of a sentence. In addition, we propose a data augmentation method based on Normalizing Flow to improve the robustness of the model. Experimental results demonstrate that our model preserves content effectively and achieves state-of-the-art performance on most metrics.
Recent advances in generative machine learning models have rekindled research interest in the area of password guessing. Data-driven password guessing approaches based on GANs and deep latent variable models have shown impressive generalization performance and offer compelling properties for password guessing. In this paper, we propose PassFlow, a flow-based generative model approach to password guessing. Flow-based models allow for precise log-likelihood computation and optimization, which enables exact latent variable inference. Additionally, flow-based models provide meaningful latent space representations, enabling operations such as exploration of specific subspaces of the latent space and interpolation. We demonstrate the applicability of generative flows to the context of password guessing, departing from previous applications of flow networks, which were mainly limited to the continuous space of image generation. We show that PassFlow is able to outperform prior state-of-the-art GAN-based approaches on the password guessing task while using a training set smaller than that of prior work. Furthermore, qualitative analysis of the generated samples shows that PassFlow accurately models the distribution of the original passwords, with even non-matched samples closely resembling human-like passwords.
Steering language generation towards objectives, or away from undesired content, has been a long-standing goal in utilizing language models (LMs). Recent work has demonstrated reinforcement learning and weighted decoding as effective approaches to achieve a higher level of language control and quality, each with its own pros and cons. In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding. Specifically, we adopt the actor-critic framework to train an LM-steering critic from non-differentiable reward models. Similar to weighted decoding, our method freezes the language model and manipulates the output token distribution using the trained critic, improving training efficiency and stability. Evaluation of our method on three controlled generation tasks, namely topic control, sentiment control, and detoxification, shows that our approach generates more coherent and better-controlled texts than previous methods. In addition, CriticControl demonstrates superior generalization ability in zero-shot settings. Human evaluation studies further corroborate these findings.
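The decoding-time reweighting shared by weighted-decoding methods of this kind can be sketched as follows; the critic values here are random stand-ins, whereas CriticControl trains its critic with actor-critic updates from reward models.

```python
import torch

def critic_weighted_step(lm_logits, critic_values, alpha=1.0):
    """Re-rank the next-token distribution: p(x) is rescaled by exp(alpha * critic value)."""
    log_p = torch.log_softmax(lm_logits, dim=-1)
    reweighted = log_p + alpha * critic_values   # one critic score per candidate token
    return torch.softmax(reweighted, dim=-1)

vocab = 50257
lm_logits = torch.randn(1, vocab)        # frozen LM's next-token logits
critic_values = torch.randn(1, vocab)    # hypothetical critic scores per token
probs = critic_weighted_step(lm_logits, critic_values)
next_token = torch.multinomial(probs, num_samples=1)
```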
Existing reference-free metrics have obvious limitations for evaluating controlled text generation models. Unsupervised metrics can only provide a task-agnostic evaluation result which correlates weakly with human judgments, whereas supervised ones may overfit task-specific data with poor generalization ability to other datasets. In this paper, we propose an unsupervised reference-free metric called CTRLEval, which evaluates controlled text generation from different aspects by formulating each aspect into multiple text infilling tasks. On top of these tasks, the metric assembles the generation probabilities from a pre-trained language model without any model training. Experimental results show that our metric has higher correlations with human judgments than other baselines, while obtaining better generalization of evaluating generated texts from different models and with different qualities.
Diffusion models have achieved great success in modeling continuous data modalities such as images, audio, and video, but have seen limited use in discrete domains such as language. Recent attempts to adapt diffusion to language have presented diffusion as an alternative to autoregressive language generation. We instead view diffusion as a complementary method that can augment the generative capabilities of existing pre-trained language models. We demonstrate that continuous diffusion models can be learned in the latent space of a pre-trained encoder-decoder model, enabling us to sample continuous latent representations that can be decoded into natural language with the pre-trained decoder. We show that our latent diffusion models are more effective at sampling novel text from data distributions than a strong autoregressive baseline and also enable controllable generation.
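For intuition, below is a toy DDPM-style ancestral sampling loop over latent vectors with an untrained stand-in denoiser. In the setting described above, the sampled latents would be decoded into text by the frozen pre-trained decoder, which is omitted here; all shapes and schedules are assumptions.

```python
import torch
import torch.nn as nn

T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Stand-in denoiser: predicts the noise added to a 64-dim latent, given a timestep feature.
denoiser = nn.Sequential(nn.Linear(64 + 1, 128), nn.ReLU(), nn.Linear(128, 64))

@torch.no_grad()
def sample_latent(batch=4, dim=64):
    z = torch.randn(batch, dim)
    for t in reversed(range(T)):
        t_emb = torch.full((batch, 1), t / T)
        eps_hat = denoiser(torch.cat([z, t_emb], dim=-1))
        mean = (z - betas[t] / (1 - alpha_bars[t]).sqrt() * eps_hat) / alphas[t].sqrt()
        z = mean + betas[t].sqrt() * torch.randn_like(z) if t > 0 else mean
    return z   # would be passed to the pre-trained decoder to produce text

latents = sample_latent()
```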
Diffusion models are a class of deep generative models that have shown impressive results on a variety of tasks and rest on a dense theoretical foundation. Although the sample synthesis quality and diversity of diffusion models are impressive compared with other state-of-the-art models, they still suffer from costly sampling procedures and sub-optimal likelihood estimation. Recent studies have shown great enthusiasm for improving the performance of diffusion models. In this paper, we present the first comprehensive review of existing variants of diffusion models. Specifically, we provide the first taxonomy of diffusion models and categorize them into three types: sampling-acceleration enhancement, likelihood-maximization enhancement, and data-generalization enhancement. We also introduce the other five generative models (variational autoencoders, generative adversarial networks, normalizing flows, autoregressive models, and energy-based models) in detail and clarify the connections between diffusion models and these generative models. We then conduct a thorough investigation into the applications of diffusion models, including computer vision, natural language processing, waveform signal processing, multi-modal modeling, molecular graph generation, time series modeling, and adversarial purification. Furthermore, we propose new perspectives pertaining to the development of this class of generative models.
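For reference, the forward (noising) and reverse (denoising) processes that the surveyed variants build on are the standard DDPM formulation:

$$
q(x_t \mid x_{t-1}) = \mathcal{N}\!\big(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\big),
\qquad
q(x_t \mid x_0) = \mathcal{N}\!\big(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t) I\big),
\qquad
\bar{\alpha}_t = \prod_{s=1}^{t} (1-\beta_s),
$$

$$
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\big(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\big),
\qquad
\mathcal{L}_{\text{simple}} = \mathbb{E}_{t, x_0, \epsilon}\Big[\big\| \epsilon - \epsilon_\theta\big(\sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\ t\big) \big\|^2\Big].
$$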
Latent space energy-based models (EBMs), also known as energy-based priors, have drawn growing interest in generative modeling. Fueled by their flexibility of formulation and the strong modeling power of the latent space, recent works have made interesting attempts to build on them for interpretable text modeling. However, latent space EBMs also inherit some flaws from EBMs in data space: degenerate MCMC sampling quality in practice can lead to poor generation quality and instability in training, especially on data with complex latent structures. Inspired by recent efforts that leverage diffusion recovery likelihood learning as a cure for the sampling issue, we introduce a novel symbiosis between the diffusion model and latent space EBMs within a variational learning framework, the latent diffusion energy-based model. We develop a geometric-clustering-based regularization jointly with the information bottleneck to further improve the quality of the learned latent space. Experiments on several challenging tasks demonstrate the superior performance of our model on interpretable text modeling over strong counterparts.
Normalizing Flows are generative models which produce tractable distributions where both sampling and density evaluation can be efficient and exact. The goal of this survey article is to give a coherent and comprehensive review of the literature around the construction and use of Normalizing Flows for distribution learning. We aim to provide context and explanation of the models, review current state-of-the-art literature, and identify open questions and promising future directions.
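As background for the constructions reviewed here, the defining change-of-variables identity for a bijection $f$ mapping data $x$ to base samples $z = f(x)$ is:

$$
p_X(x) = p_Z\big(f(x)\big)\,\Big|\det \frac{\partial f(x)}{\partial x}\Big|,
\qquad
\log p_X(x) = \log p_Z\big(f(x)\big) + \sum_{k=1}^{K} \log \Big|\det \frac{\partial f_k(y_{k-1})}{\partial y_{k-1}}\Big|,
$$

where $f = f_K \circ \cdots \circ f_1$ is a composition of bijections with $y_0 = x$ and $y_k = f_k(y_{k-1})$; sampling uses the generative direction $x = f^{-1}(z)$ with $z \sim p_Z$.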
We study whether and how we can model a joint distribution $p(x,z)$ using two conditional models $p(x|z)$ and $q(z|x)$ that form a cycle. This is motivated by the observation that deep generative models, in addition to the likelihood model $p(x|z)$, often also use an inference model $q(z|x)$ for extracting representations, but they rely on a usually uninformative prior distribution $p(z)$ to define the joint distribution, which may cause problems such as posterior collapse and manifold mismatch. To explore the possibility of modeling a joint distribution using only $p(x|z)$ and $q(z|x)$, we study their compatibility and determinacy, corresponding to the existence and uniqueness of a joint distribution whose conditional distributions coincide with them. We develop a general theory of operable equivalence criteria for compatibility, together with sufficient conditions for determinacy. Based on this theory, we propose CyGen, a novel generative modeling framework that uses only the two cyclic conditional models. We develop methods to achieve compatibility and determinacy, and to fit and generate data with the conditional models. With the prior constraint removed, CyGen better fits the data and captures more representative features, as supported by both synthetic and real-world experiments.
Large pretrained language models generate fluent text but are notoriously hard to controllably sample from. In this work, we study constrained sampling from such language models: generating text that satisfies user-defined constraints, while maintaining fluency and the model's performance in a downstream task. We propose MuCoLa -- a sampling procedure that combines the log-likelihood of the language model with arbitrary (differentiable) constraints in a single energy function, and then generates samples in a non-autoregressive manner. Specifically, it initializes the entire output sequence with noise and follows a Markov chain defined by Langevin Dynamics using the gradients of the energy function. We evaluate MuCoLa on text generation with soft and hard constraints as well as their combinations, obtaining significant improvements over competitive baselines for toxicity avoidance, sentiment control, and keyword-guided generation.
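A bare-bones Langevin dynamics loop of the kind MuCoLa builds on is sketched below; the quadratic toy energy stands in for the actual energy (LM log-likelihood plus differentiable constraints over token embeddings), and the shapes and step sizes are assumptions.

```python
import torch

def langevin_sample(energy_fn, shape, steps=200, step_size=0.1):
    """Sample by following noisy energy gradients: e <- e - eta*dE/de + sqrt(2*eta)*noise."""
    e = torch.randn(shape, requires_grad=True)
    for _ in range(steps):
        energy = energy_fn(e).sum()
        grad = torch.autograd.grad(energy, e)[0]
        noise = torch.randn_like(e)
        e = (e - step_size * grad + (2 * step_size) ** 0.5 * noise).detach().requires_grad_(True)
    return e.detach()

# Toy energy: squared distance to a target embedding plays the role of
# -log p_LM(x) plus differentiable constraint penalties in the real method.
target = torch.zeros(1, 20, 64)                      # (batch, seq_len, emb_dim)
sample = langevin_sample(lambda e: ((e - target) ** 2).mean(dim=(1, 2)), target.shape)
```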
Large-scale pretrained language models have achieved great success on natural language generation tasks. However, it is difficult to control pretrained language models to generate sentences with desired attributes such as topic and sentiment. Recently, Bayesian Controllable Language Models (BCLMs) have been shown to be effective for controllable language generation. Rather than fine-tuning the parameters of a pretrained language model, BCLMs use external discriminators to guide its generation. However, the mismatch between training and inference of BCLMs limits the performance of the models. To address this problem, in this work we propose a "Gemini Discriminator" for controllable language generation, which alleviates the mismatch problem at a small computational cost. We test our method on two controllable language generation tasks: sentiment control and topic control. On both tasks, our method achieves new state-of-the-art results in automatic evaluations.
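For context, the Bayesian factorization that BCLM-style methods decode with is the standard identity below; the contribution described above concerns the discriminator design, not this decomposition:

$$
p(x_t \mid x_{<t}, a) \;\propto\; p(a \mid x_{\le t})\; p(x_t \mid x_{<t}),
$$

where $p(x_t \mid x_{<t})$ is the frozen pretrained LM and $p(a \mid x_{\le t})$ is the external discriminator's estimate that attribute $a$ holds for the prefix extended by the candidate token.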
Modern generative models achieve excellent quality in a variety of tasks, including image or text generation and chemical molecule modeling. However, existing methods often lack the essential ability to generate examples with requested properties, such as the age of the person in a photo or the weight of a generated molecule. Incorporating such additional conditioning factors would require rebuilding the entire architecture and optimizing the parameters from scratch. Moreover, it is difficult to disentangle selected attributes so that edits are performed on only one of them while leaving the others unchanged. To overcome these limitations, we propose PluGeN (Plugin Generative Network), a simple yet effective generative technique that can be used as a plugin for pre-trained generative models. The idea behind our approach is to use a flow-based module to transform the entangled latent representation into a multi-dimensional space where the values of each attribute are modeled as an independent one-dimensional distribution. In consequence, PluGeN can generate new samples with desired attributes as well as manipulate labeled attributes of existing examples. Due to the disentangling of the latent representation, we are even able to generate samples with rare or unseen combinations of attributes in the dataset, such as a young person with gray hair, men with make-up, or women with beards. We combine PluGeN with GAN and VAE models and apply it to conditional generation and manipulation of images and to chemical molecule modeling. Experiments demonstrate that PluGeN preserves the quality of backbone models while adding the ability to control the values of labeled attributes.
Generative models are becoming ever more powerful, being able to synthesize highly realistic images. We propose an algorithm for taming these models - changing the probability that the model will produce a specific image or image category. We consider generative models that are powered by normalizing flows, which allows us to reason about the exact generation likelihood of a given image. Our method is general purpose, and we exemplify it using models that generate human faces, a subdomain with many interesting privacy and bias considerations. Our method can be used in the context of privacy, e.g., removing a specific person from the output of a model, and also in the context of de-biasing, by forcing a model to output specific image categories according to a given target distribution. Our method uses a fast fine-tuning process without retraining the model from scratch, achieving the goal in less than 1% of the time taken to initially train the generative model. We evaluate qualitatively and quantitatively to examine the success of the taming process and the output quality.
Normalizing flows provide a general mechanism for defining expressive probability distributions, only requiring the specification of a (usually simple) base distribution and a series of bijective transformations. There has been much recent work on normalizing flows, ranging from improving their expressive power to expanding their application. We believe the field has now matured and is in need of a unified perspective. In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference. We place special emphasis on the fundamental principles of flow design, and discuss foundational topics such as expressive power and computational trade-offs. We also broaden the conceptual framing of flows by relating them to more general probability transformations. Lastly, we summarize the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
The core principle of variational inference (VI) is to convert the statistical inference problem of computing complex posterior probability densities into a tractable optimization problem. This property makes VI faster than several sampling-based techniques. However, traditional VI algorithms do not scale to large datasets and cannot readily infer posteriors for new, out-of-distribution data points without re-running the optimization process. Recent developments in the field, such as stochastic, black-box, and amortized VI, have helped address these issues. Generative modeling tasks nowadays widely make use of amortized VI for its efficiency and scalability, as it utilizes a parameterized function to learn the parameters of the approximate posterior density. In this paper, we review the mathematical foundations of various VI techniques to form the basis for understanding amortized VI. In addition, we provide an overview of recent trends that address several issues of amortized VI, such as the amortization gap, generalization issues, inconsistent representation learning, and posterior collapse. Finally, we analyze alternative divergence measures that improve VI optimization.
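For reference, amortized VI maximizes the standard evidence lower bound, with the variational parameters produced by a shared encoder network $q_\phi(z \mid x)$ rather than optimized separately for each data point:

$$
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big);
$$

the amortization gap mentioned above is the difference between this bound and the one obtained by optimizing a separate variational distribution for each $x$.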
Generative AI has matured to a point where large-scale models can generate text that seems indistinguishable from human-written text and remarkably photorealistic images. Automatically measuring how close the distribution of generated data is to the target real data distribution is a key step in diagnosing existing models and developing better models. We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images. These scores are statistical summaries of divergence frontiers capturing two types of errors in generative modeling. We explore four approaches to statistically estimate these scores: vector quantization, non-parametric estimation, classifier-based estimation, and parametric Gaussian approximations. We provide statistical bounds for the vector quantization approach. Empirically, we find that the proposed scores paired with a range of $f$-divergences and statistical estimation methods can quantify the gaps between the distributions of human-written text and those of modern neural language models by correlating with human judgments and identifying known properties of the generated texts. We conclude the paper by demonstrating its applications to other AI domains and discussing practical recommendations.
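The vector-quantization estimator can be illustrated with a short sketch: bin both samples' embeddings with shared k-means clusters, then trace KL pairs along a mixture frontier. The feature arrays, cluster count, and summarization of the frontier into a single score are assumptions here; the authors' released implementation should be preferred for real evaluations.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantized_divergence_frontier(p_feats, q_feats, k=16, grid=25, eps=1e-8):
    """Histogram both samples over shared k-means bins, then trace KL pairs
    against mixtures R = lam*P + (1-lam)*Q (the divergence-frontier idea)."""
    km = KMeans(n_clusters=k, n_init=10).fit(np.vstack([p_feats, q_feats]))

    def hist(feats):
        h = np.bincount(km.predict(feats), minlength=k).astype(float) + eps
        return h / h.sum()

    P, Q = hist(p_feats), hist(q_feats)
    frontier = []
    for lam in np.linspace(eps, 1 - eps, grid):
        R = lam * P + (1 - lam) * Q
        frontier.append((np.sum(Q * np.log(Q / R)), np.sum(P * np.log(P / R))))
    return frontier  # summarizing the curve (e.g., its area) yields a single score

human = np.random.randn(500, 32)        # stand-in for human-text embeddings
model = np.random.randn(500, 32) + 0.5  # stand-in for model-text embeddings
print(quantized_divergence_frontier(human, model)[:3])
```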