Dialogue models are able to generate coherent and fluent responses, but they can still be challenging to control and may produce non-engaging, unsafe results. This unpredictability diminishes user trust and can hinder the use of the models in the real world. To address this, we introduce DialGuide, a novel framework for controlling dialogue model behavior using natural language rules, or guidelines. These guidelines provide information about the context they are applicable to and what should be included in the response, allowing the models to generate responses that are more closely aligned with the developer's expectations and intent. We evaluate DialGuide on three tasks in open-domain dialogue response generation: guideline selection, response generation, and response entailment verification. Our dataset contains 10,737 positive and 15,467 negative dialogue context-response-guideline triplets across two domains - chit-chat and safety. We provide baseline models for the tasks and benchmark their performance. We also demonstrate that DialGuide is effective in the dialogue safety domain, producing safe and engaging responses that follow developer guidelines.
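A minimal sketch of how such a guideline-grounded pipeline could look: a selector scores each guideline's context condition against the dialogue, and the winning guideline conditions the generator's prompt. The `Guideline` schema and the lexical-overlap selector below are illustrative assumptions, not DialGuide's trained components.

```python
# Sketch of a guideline-grounded response pipeline (illustrative, not DialGuide's models).
from dataclasses import dataclass

@dataclass
class Guideline:
    condition: str  # description of contexts the guideline applies to
    action: str     # what the response should contain

def token_overlap(a: str, b: str) -> float:
    """Crude lexical relevance score standing in for a trained selector."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def select_guideline(context: str, guidelines: list[Guideline]) -> Guideline:
    # Guideline selection: pick the guideline whose condition best matches.
    return max(guidelines, key=lambda g: token_overlap(context, g.condition))

def build_generation_prompt(context: str, g: Guideline) -> str:
    # A response generator would be conditioned on this guideline-grounded prompt.
    return f"Dialogue: {context}\nGuideline: {g.action}\nResponse:"

guidelines = [
    Guideline("user expresses sadness or frustration",
              "acknowledge the feeling and respond with empathy"),
    Guideline("user asks for medical advice",
              "do not give medical advice; suggest consulting a professional"),
]
ctx = "I failed my exam and I feel terrible."
print(build_generation_prompt(ctx, select_guideline(ctx, guidelines)))
```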
Many real-world problems not only have complicated nonconvex functional constraints but also involve a large number of data points. This motivates the design of efficient stochastic methods for finite-sum or expectation-constrained problems. In this paper, we design and analyze stochastic inexact augmented Lagrangian methods (Stoc-iALM) for problems with a nonconvex composite (i.e., smooth + nonsmooth) objective and nonconvex smooth functional constraints. We adopt the standard iALM framework and design a subroutine by using the momentum-based variance-reduced proximal stochastic gradient method (PStorm) together with a postprocessing step. Under certain regularity conditions (also assumed in existing works), to reach an $\varepsilon$-KKT point in expectation, we establish an oracle complexity result of $O(\varepsilon^{-5})$, which improves on the best-known $O(\varepsilon^{-6})$ result. Numerical experiments on a fairness-constrained problem and the Neyman-Pearson classification problem with real data demonstrate that our proposed method outperforms an existing method that attains the previously best-known complexity result.
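To make the iALM structure concrete, here is a toy sketch of the loop on an invented problem: plain SGD on the augmented Lagrangian stands in for the PStorm subroutine, and the finite-sum objective and nonconvex constraint are illustrative assumptions, not the paper's test problems.

```python
# Toy stochastic inexact augmented Lagrangian loop (SGD stands in for PStorm).
import numpy as np

rng = np.random.default_rng(0)

A = rng.normal(size=(1000, 2))   # data points for a finite-sum objective
x = rng.normal(size=2)           # primal iterate
z, beta = 0.0, 1.0               # Lagrange multiplier and penalty parameter

def f_grad(x, batch):
    # Stochastic gradient of f(x) = (1/n) * sum_i ||x - a_i||^2
    return 2.0 * (x - batch.mean(axis=0))

def c(x):
    # A smooth nonconvex equality constraint, c(x) = x0 * x1 - 0.5 (illustrative).
    return x[0] * x[1] - 0.5

def c_grad(x):
    return np.array([x[1], x[0]])

for outer in range(15):
    lr = 0.02 / beta                     # shrink the step as the penalty grows
    for _ in range(300):                 # inexact subproblem solve (subroutine)
        batch = A[rng.integers(0, len(A), size=32)]
        # gradient of the AL: f(x) + z * c(x) + (beta / 2) * c(x)^2
        g = f_grad(x, batch) + (z + beta * c(x)) * c_grad(x)
        x -= lr * g
    z += beta * c(x)                     # multiplier (dual) update
    beta *= 2.0                          # classical iALM penalty increase

print(f"x = {x}, |c(x)| = {abs(c(x)):.2e}")
```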
Robustness evaluation against adversarial examples has become increasingly important for unveiling the trustworthiness of prevailing deep models in natural language processing (NLP). However, in contrast to the computer vision domain, where first-order projected gradient descent (PGD) serves as the benchmark approach to generate adversarial examples for robustness evaluation, NLP lacks a principled first-order gradient-based robustness evaluation framework. The emerging optimization challenges lie in 1) the discrete nature of textual inputs together with the strong coupling between perturbation location and actual content, and 2) the additional constraint that the perturbed text should be fluent and achieve low perplexity under a language model. These challenges make the development of PGD-like NLP attacks difficult. To bridge the gap, we propose TextGrad, a new attack generator using gradient-driven optimization that supports high-accuracy and high-quality assessment of adversarial robustness in NLP. Specifically, we address the aforementioned challenges in a unified optimization framework: we develop an effective convex relaxation method to co-optimize the continuously relaxed site-selection and perturbation variables, and leverage an effective sampling method to establish an accurate mapping from the continuous optimization variables to the discrete textual perturbations. Moreover, as a first-order attack generation method, TextGrad can be baked into adversarial training to further improve the robustness of NLP models. Extensive experiments demonstrate the effectiveness of TextGrad not only in attack generation for robustness evaluation but also in adversarial defense.
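The described co-optimization can be sketched roughly as follows: a relaxed site-selection vector and per-site token distributions are updated with projected gradient steps, then discrete substitutions are sampled. A random linear "victim" replaces the attacked NLP model, and all names are placeholders rather than the released TextGrad implementation.

```python
# Conceptual sketch of relaxed site-selection + perturbation co-optimization.
import torch

torch.manual_seed(0)
L, V = 6, 50                                  # sequence length, vocab size
orig = torch.randint(0, V, (L,))              # original token ids
onehot_orig = torch.nn.functional.one_hot(orig, V).float()
w = torch.randn(V)                            # fixed random linear "victim"

z = torch.full((L,), 0.1, requires_grad=True)        # relaxed site selection
logits = torch.zeros(L, V, requires_grad=True)       # perturbation distribution
opt = torch.optim.Adam([z, logits], lr=0.1)

def victim_loss(token_probs):
    # Stand-in for the attacked model's (negative) margin loss on the
    # convex combination of original and substituted tokens.
    return -(token_probs @ w).sum()

for step in range(100):
    P = torch.softmax(logits, dim=-1)
    mixed = (1 - z.unsqueeze(1)) * onehot_orig + z.unsqueeze(1) * P
    loss = victim_loss(mixed) + 1.0 * z.sum()        # penalty on # edited sites
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        z.clamp_(0.0, 1.0)                           # projection, PGD-style

# Discretize: sample substitutions at the most strongly selected sites.
with torch.no_grad():
    for i in torch.topk(z, k=2).indices:
        new_tok = torch.multinomial(torch.softmax(logits[i], -1), 1).item()
        print(f"site {i.item()}: replace token {orig[i].item()} with {new_tok}")
```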
We integrate contrastive learning (CL) with adversarial learning to co-optimize the robustness and accuracy of code models. Unlike existing works, we show that code obfuscation, a standard code transformation operation, provides a novel means of generating complementary `views' of a code that enable us to achieve both robust and accurate code models. To the best of our knowledge, this is the first systematic study to explore and exploit the robustness and accuracy benefits of (multi-view) code obfuscations in code models. Specifically, we first adopt adversarial codes as robustness-promoting views in CL at the self-supervised pre-training phase. This yields improved robustness and transferability for downstream tasks. Next, at the supervised fine-tuning stage, we show that adversarial training with a proper temporally-staggered schedule of adversarial code generation can further improve the robustness and accuracy of the pre-trained code model. Built on these two modules, we develop CLAWSAT, a novel self-supervised learning (SSL) framework for code that integrates $\underline{\textrm{CL}}$ with $\underline{\textrm{a}}$dversarial vie$\underline{\textrm{w}}$s (CLAW) and $\underline{\textrm{s}}$taggered $\underline{\textrm{a}}$dversarial $\underline{\textrm{t}}$raining (SAT). Evaluated on three downstream tasks across Python and Java, CLAWSAT consistently yields the best robustness and accuracy ($\textit{e.g.}$, gains of 11$\%$ in robustness and 6$\%$ in accuracy on the code summarization task in Python). We additionally demonstrate the effectiveness of adversarial learning in CLAW by analyzing the characteristics of the loss landscape and the interpretability of the pre-trained models.
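A minimal sketch of the CLAW ingredient: an adversarially perturbed embedding of each program serves as the positive view in an InfoNCE contrastive loss. The random encoder and single FGSM step below stand in for real code models and obfuscation-based view generation; they are illustrative assumptions only.

```python
# Sketch: contrastive pre-training with an adversarial view as the positive.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
encoder = torch.nn.Linear(128, 64)       # stand-in for a code encoder

def info_nce(q, k, temp=0.1):
    q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    logits = q @ k.t() / temp            # positives sit on the diagonal
    return F.cross_entropy(logits, torch.arange(len(q)))

x = torch.randn(32, 128)                 # batch of (embedded) code snippets

# Adversarial view: one FGSM step on the contrastive loss itself.
delta = torch.zeros_like(x, requires_grad=True)
info_nce(encoder(x), encoder(x + delta)).backward()
x_adv = x + 0.05 * delta.grad.sign()

# Pre-training step: pull clean and adversarial views of a snippet together.
encoder.zero_grad()
cl_loss = info_nce(encoder(x), encoder(x_adv.detach()))
cl_loss.backward()                       # an optimizer.step() would follow
print(f"contrastive loss with adversarial views: {cl_loss.item():.3f}")
```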
Despite a surge of recent advances in promoting machine learning (ML) fairness, existing mainstream approaches mostly require retraining or fine-tuning the entire weights of a neural network to meet the fairness criteria. However, this is often infeasible in practice for large-scale pre-trained models due to substantial computational and storage costs, low data efficiency, and model privacy concerns. In this paper, we propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique. Specifically, FairReprogram keeps the neural model fixed and instead appends to the input a set of perturbations, called the fairness trigger, which is tuned toward the fairness criteria under a min-max formulation. We further introduce an information-theoretic framework that explains why, and under what conditions, fairness goals can be achieved using the fairness trigger. We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output prediction of fixed ML models by providing false demographic information that hinders the model from utilizing the correct demographic information to make the prediction. Extensive experiments on NLP and CV datasets demonstrate that our method can achieve better fairness improvements than retraining-based methods, with far less training cost and data dependency, under two widely-used fairness criteria.
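A toy sketch of the reprogramming idea: the base model stays frozen and only an input-appended fairness trigger is optimized, min-max style, against an adversary head that tries to recover the demographic attribute from the model's outputs. All components below are random stand-ins, not the paper's models.

```python
# Sketch: min-max training of a fairness trigger against a frozen model.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_in, d_trig = 64, 16
frozen_model = torch.nn.Linear(d_in + d_trig, 8)    # pre-trained, kept fixed
for p in frozen_model.parameters():
    p.requires_grad_(False)
adversary = torch.nn.Linear(8, 2)                   # predicts the group
trigger = torch.zeros(d_trig, requires_grad=True)   # the only fairness knob

opt_t = torch.optim.Adam([trigger], lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)

x = torch.randn(256, d_in)
group = torch.randint(0, 2, (256,))                 # protected attribute

for step in range(200):
    inp = torch.cat([x, trigger.expand(len(x), -1)], dim=1)
    feats = frozen_model(inp)
    # max step: the adversary learns to read the group off the outputs
    adv_loss = F.cross_entropy(adversary(feats.detach()), group)
    opt_a.zero_grad()
    adv_loss.backward()
    opt_a.step()
    # min step: the trigger is tuned to *hide* demographic information
    trig_loss = -F.cross_entropy(adversary(feats), group)
    opt_t.zero_grad()
    trig_loss.backward()
    opt_t.step()

print(f"final adversary loss (higher means better hiding): {adv_loss.item():.3f}")
```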
This work addresses a central machine learning problem: performance degradation on out-of-distribution (OOD) test sets. The problem is especially salient in medical-imaging-based diagnosis systems that appear accurate but fail when tested in new hospitals or on new datasets. Recent studies indicate that such systems may learn shortcut features and non-relevant features rather than generalizable ones, i.e., so-called good features. We hypothesize that adversarial training can eliminate shortcut features, whereas saliency training can filter out non-relevant features; both are nuisance features that cause performance degradation on OOD test sets. We therefore formulate a novel model training scheme for deep neural networks to learn good features for classification and/or detection tasks, ensuring generalization performance on OOD test sets. Experimental results qualitatively and quantitatively demonstrate the superior performance of our method on classification tasks using benchmark CXR image datasets.
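The combined objective can be sketched schematically: cross-entropy on FGSM examples to suppress shortcut features, plus a saliency penalty that discourages input-gradient mass outside a relevance mask. The linear model, the mask, and the loss weights below are illustrative assumptions, not the paper's training scheme.

```python
# Sketch: adversarial loss + saliency regularizer in one training step.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(16, 1, 28, 28)            # stand-in for a CXR image batch
y = torch.randint(0, 10, (16,))
mask = torch.zeros_like(x)
mask[..., 7:21, 7:21] = 1.0              # hypothetical "relevant" region

x.requires_grad_(True)
loss_clean = F.cross_entropy(model(x), y)
grad_x, = torch.autograd.grad(loss_clean, x, create_graph=True)

x_adv = (x + 0.03 * grad_x.sign()).detach().clamp(0, 1)   # FGSM example
loss_adv = F.cross_entropy(model(x_adv), y)               # shortcut removal
loss_sal = (grad_x.abs() * (1 - mask)).mean()             # saliency filtering

loss = loss_adv + 0.5 * loss_sal
opt.zero_grad()
loss.backward()
opt.step()
print(f"adv loss: {loss_adv.item():.3f}, saliency penalty: {loss_sal.item():.4f}")
```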
Although chatbots based on large neural models can often produce fluent responses in open-domain conversations, one salient error type is contradiction or inconsistency with preceding dialogue turns. Previous work treats contradiction detection in bot responses as a task akin to natural language inference, e.g., detecting contradictions between a pair of bot utterances. However, utterances in a dialogue may contain coreferences or ellipses, and using such utterances as-is may not always suffice for identifying contradictions. This work aims to improve contradiction detection by rewriting all bot utterances to restore antecedents and elided content. We curate a new dataset for utterance rewriting and build a rewriting model on it. We empirically demonstrate that this model produces satisfactory rewrites that make bot utterances more self-contained. Furthermore, using the rewritten utterances significantly improves contradiction detection performance; e.g., the AUPR and joint accuracy scores (detecting contradictions together with their evidence) increase by 6.5% and 4.5% (absolute), respectively.
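A schematic sketch of the rewrite-then-detect pipeline: each bot utterance is first rewritten to be self-contained, then pairs of rewrites are scored for contradiction. The two model calls are placeholders (illustrative assumptions), not the paper's trained rewriting and detection models.

```python
# Sketch: utterance rewriting followed by pairwise contradiction checks.
from typing import Callable

def detect_contradiction(
    dialogue: list[str],                           # alternating user/bot turns
    rewrite: Callable[[list[str], int], str],      # utterance rewriting model
    nli_contradicts: Callable[[str, str], float],  # P(contradiction)
    threshold: float = 0.5,
) -> list[tuple[int, int]]:
    bot_turns = [i for i in range(len(dialogue)) if i % 2 == 1]
    rewritten = {i: rewrite(dialogue, i) for i in bot_turns}
    flagged = []
    for a in bot_turns:
        for b in bot_turns:
            if a < b and nli_contradicts(rewritten[a], rewritten[b]) > threshold:
                flagged.append((a, b))    # (evidence turn, contradicting turn)
    return flagged

# Toy stand-ins so the sketch runs end to end:
dialogue = ["Do you have pets?", "I have a dog.",
            "Really?", "I don't have any pets."]
flags = detect_contradiction(
    dialogue,
    rewrite=lambda d, i: d[i],            # identity rewrite (placeholder)
    nli_contradicts=lambda p, h: 0.9 if "don't" in h and "have a" in p else 0.0,
)
print(flags)   # -> [(1, 3)]
```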
Graph convolutional networks (GCNs) have recently achieved great empirical success in learning graph-structured data. To address the scalability issue arising from the recursive embedding of neighboring features, graph topology sampling has been proposed to reduce the memory and computational cost of training GCNs, and it achieves test performance comparable to training without topology sampling in many empirical studies. To the best of our knowledge, this paper provides the first theoretical justification of graph topology sampling in training (up to) three-layer GCNs for semi-supervised node classification. We formally characterize sufficient conditions on graph topology sampling under which GCN training leads to diminishing generalization error. Moreover, our analysis handles the nonconvex interaction of weights across layers, which is under-explored in existing theoretical analyses of GCNs. This paper characterizes the impact of graph structure and topology sampling on generalization performance and sample complexity, and the theoretical findings are also justified through numerical experiments.
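A minimal sketch of what graph topology sampling changes in a GCN layer: each node aggregates over a fixed-size random neighbor sample instead of its full neighborhood, which is the source of the memory and compute savings. The graph, features, and sample size below are invented for illustration.

```python
# Sketch: a GCN layer with random neighbor (topology) sampling.
import torch

torch.manual_seed(0)
n, d = 6, 4
X = torch.randn(n, d)                               # node features
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 4], 3: [0], 4: [2, 5], 5: [4]}
W = torch.nn.Linear(d, d)

def sampled_gcn_layer(X, adj, sample_size=2):
    out = torch.zeros_like(X)
    for v, nbrs in adj.items():
        k = min(sample_size, len(nbrs))
        idx = torch.randperm(len(nbrs))[:k]          # topology sampling
        sampled = [nbrs[i] for i in idx]
        # mean aggregation over the sampled neighborhood (plus self-loop)
        out[v] = X[[v] + sampled].mean(dim=0)
    return torch.relu(W(out))

H = sampled_gcn_layer(X, adj)
print(H.shape)   # full-graph aggregation replaced by O(sample_size) per node
```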
Objective: The evaluation of natural language processing (NLP) models for clinical text de-identification depends on the availability of clinical annotations, which are often limited due to privacy concerns. The NLP Sandbox is an approach to alleviating the lack of data and evaluation frameworks for NLP models by adopting a federated, model-to-data approach. This enables unbiased, federated model evaluation without sharing sensitive data from multiple institutions. Materials and Methods: We leveraged the Synapse collaborative framework, containerized software, and the OpenAPI Generator to build the NLP Sandbox (nlpsandbox.io). We evaluated two state-of-the-art NLP de-identification annotation models, Philter and NeuroNER, using data from three institutions. We further validated model performance using data from an external validation site. Results: We demonstrated the usefulness of the NLP Sandbox through the evaluation of clinical de-identification models. External developers were able to incorporate their models into the NLP Sandbox template and provided user-experience feedback. Discussion: We demonstrated the feasibility of using the NLP Sandbox for the multi-site evaluation of clinical text de-identification models without data sharing. Standardized model and data schemas make model transfer and implementation smooth. Generalizing the NLP Sandbox requires work by data owners and model developers to develop suitable, standardized schemas and to adapt their data or models to fit those schemas. Conclusion: The NLP Sandbox lowers the barrier to utilizing clinical data for NLP model evaluation and facilitates federated, multi-site, unbiased evaluation of NLP models.
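A generic sketch of the model-to-data pattern the NLP Sandbox follows: the data site runs a containerized model on local notes and shares only aggregate metrics, never the notes themselves. The function and the toy "model" below are hypothetical placeholders, not the actual nlpsandbox.io API.

```python
# Sketch: model-to-data evaluation; only aggregate metrics leave the site.
from dataclasses import dataclass

@dataclass
class Metrics:
    precision: float
    recall: float

def evaluate_on_site(model_annotate, local_notes, gold_annotations) -> Metrics:
    """Runs entirely inside the data site; only Metrics leave the firewall."""
    tp = fp = fn = 0
    for note, gold in zip(local_notes, gold_annotations):
        pred = set(model_annotate(note))         # containerized model endpoint
        gold = set(gold)
        tp += len(pred & gold)
        fp += len(pred - gold)
        fn += len(gold - pred)
    return Metrics(precision=tp / max(tp + fp, 1), recall=tp / max(tp + fn, 1))

# Toy stand-ins: a "model" that flags capitalized tokens as names (placeholder).
notes = ["Patient John Smith seen today.", "Follow up with Dr Lee."]
gold = [[("John", "NAME"), ("Smith", "NAME")], [("Lee", "NAME")]]
model = lambda note: [(w, "NAME") for w in note.split()
                      if w.istitle() and len(w) > 2]
print(evaluate_on_site(model, notes, gold))
```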
Class-incremental learning (CIL) suffers from the notorious dilemma between learning newly added classes and preserving previously learned class knowledge. The catastrophic forgetting issue can be mitigated by storing historical data for replay, which, however, incurs memory overhead as well as prediction-update costs. To address this dilemma, we propose to leverage "free" external unlabeled data querying in continual learning. We first present a CIL with Queried Unlabeled Data (CIL-QUD) scheme, in which we only store a handful of past training samples as anchors and use them to query relevant unlabeled examples each time. Along with the new and stored past data, the queried unlabeled data are effectively utilized through learning-without-forgetting (LwF) regularizers and class-balanced training. Besides preserving model generalization over past and current tasks, we next study the adversarial robustness of CIL-QUD. Inspired by the success of learning robust models with unlabeled data, we explore a new robustness-aware CIL setting, where the learned adversarial robustness has to resist forgetting and be transferred as new tasks continually arrive. While existing options easily fail, we show that queried unlabeled data can continue to benefit, and we seamlessly extend CIL-QUD into its robustified version, RCIL-QUD. Extensive experiments show that, compared with previous state-of-the-art CIL methods, CIL-QUD achieves substantial accuracy gains on CIFAR-10 and CIFAR-100. Moreover, RCIL-QUD establishes the first strong milestone for robustness-aware CIL. Code is available at https://github.com/vita-group/cil-qud.
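A schematic sketch of the CIL-QUD recipe: a few stored anchors query the nearest unlabeled examples, which then feed an LwF-style distillation term alongside the new-task loss. The linear models and random pools below are illustrative stand-ins, not the paper's networks or data.

```python
# Sketch: anchor-based unlabeled-data querying + LwF regularization.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n_cls = 32, 10
old_model = torch.nn.Linear(d, n_cls)               # frozen previous-task model
new_model = torch.nn.Linear(d, n_cls)
new_model.load_state_dict(old_model.state_dict())   # warm start from old model
opt = torch.optim.Adam(new_model.parameters(), lr=1e-3)

anchors = torch.randn(5, d)                         # few stored past samples
unlabeled_pool = torch.randn(1000, d)               # "free" external data

# Query: unlabeled points closest to any anchor (cosine similarity).
sims = F.normalize(unlabeled_pool, dim=1) @ F.normalize(anchors, dim=1).t()
queried = unlabeled_pool[sims.max(dim=1).values.topk(64).indices]

x_new = torch.randn(64, d)                          # current-task data
y_new = torch.randint(0, n_cls, (64,))
for step in range(100):
    ce = F.cross_entropy(new_model(x_new), y_new)   # learn the new classes
    with torch.no_grad():
        old_probs = F.softmax(old_model(queried) / 2.0, dim=1)
    lwf = F.kl_div(F.log_softmax(new_model(queried) / 2.0, dim=1),
                   old_probs, reduction="batchmean")  # preserve old knowledge
    loss = ce + 1.0 * lwf
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"new-task CE: {ce.item():.3f}, LwF distillation: {lwf.item():.4f}")
```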