Text detoxification has the potential to mitigate the harms of toxicity by rephrasing text to remove offensive meaning, but subtle toxicity remains challenging to tackle. We introduce MaRCo, a detoxification algorithm that combines controllable generation and text rewriting methods using a Product of Experts with autoencoder language models (LMs). MaRCo uses likelihoods under a non-toxic LM (expert) and a toxic LM (anti-expert) to find candidate words to mask and potentially replace. We evaluate our method on several subtle toxicity and microaggressions datasets, and show that it not only outperforms baselines on automatic metrics, but MaRCo's rewrites are preferred 2.1 $\times$ more in human evaluation. Its applicability to instances of subtle toxicity is especially promising, demonstrating a path forward for addressing increasingly elusive online hate.
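A minimal sketch of the expert/anti-expert masking step described above. The margin-based scoring rule, function names, and toy inputs are illustrative assumptions rather than MaRCo's exact formulation (the paper scores tokens by the disagreement between the two LMs' distributions); the sketch only shows how per-token likelihoods under a non-toxic expert and a toxic anti-expert can select candidate words to mask before rewriting.

```python
# Hedged sketch: choose mask candidates by comparing per-token log-likelihoods
# under a non-toxic "expert" LM and a toxic "anti-expert" LM.
# The margin rule below is an illustrative stand-in for MaRCo's
# divergence-based scoring; threshold and inputs are toy values.

from typing import List


def select_mask_candidates(
    tokens: List[str],
    expert_logprobs: List[float],      # log p_expert(token | context)
    antiexpert_logprobs: List[float],  # log p_antiexpert(token | context)
    margin: float = 1.0,
) -> List[int]:
    """Return indices of tokens the anti-expert prefers much more than the expert."""
    candidates = []
    for i, (lp_e, lp_a) in enumerate(zip(expert_logprobs, antiexpert_logprobs)):
        if lp_a - lp_e > margin:  # token looks far more "toxic-like" than "non-toxic-like"
            candidates.append(i)
    return candidates


if __name__ == "__main__":
    tokens = ["you", "people", "are", "so", "clueless"]
    expert_lp = [-2.0, -3.5, -1.0, -2.5, -9.0]      # toy numbers
    antiexpert_lp = [-2.1, -2.0, -1.1, -2.4, -3.0]  # toy numbers
    print(select_mask_candidates(tokens, expert_lp, antiexpert_lp))  # prints [1, 4]
```

The selected positions would then be masked and infilled by the base autoencoder LM, again guided by the expert and anti-expert.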
We present SODA: the first publicly available, million-scale high-quality social dialogue dataset. Using SODA, we train COSMO: a generalizable conversation agent outperforming previous best-performing agents on both in- and out-of-domain datasets. In contrast to most existing crowdsourced, small-scale dialogue corpora, we distill 1.5M socially-grounded dialogues from a pre-trained language model (InstructGPT; Ouyang et al., 2022). Dialogues are distilled by contextualizing social commonsense knowledge from a knowledge graph (Atomic10x; West et al., 2022). Human evaluation shows that dialogues in SODA are more consistent, specific, and (surprisingly) natural than prior human-authored datasets - e.g., DailyDialog (Li et al., 2017), BlendedSkillTalk (Smith et al., 2020). In addition, extensive evaluations show that COSMO is significantly more natural and consistent on unseen datasets than best-performing dialogue models - e.g., GODEL (Peng et al., 2022), BlenderBot (Roller et al., 2021), DialoGPT (Zhang et al., 2020). Furthermore, it is sometimes even preferred to the original human-written gold responses. We make our data, models, and code public.
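A hedged sketch of the contextualization step described above: a social-commonsense triple is verbalized into a short narrative seed that can then prompt dialogue generation. The template strings and the example triple are illustrative assumptions, not the exact prompts used to build SODA.

```python
# Hedged sketch: turn a social-commonsense triple into a narrative-style prompt
# that could seed dialogue generation. Templates and the example triple are
# illustrative placeholders, not the actual SODA pipeline.

def triple_to_prompt(head: str, relation: str, tail: str) -> str:
    # Verbalize the triple into a sentence (illustrative mapping for one relation).
    relation_templates = {
        "xWant": "{head}. As a result, PersonX wants {tail}.",
    }
    sentence = relation_templates[relation].format(head=head, tail=tail)
    return (
        f"{sentence} "
        "Write a short, natural conversation between PersonX and a friend "
        "that is grounded in this situation:"
    )


if __name__ == "__main__":
    print(triple_to_prompt(
        head="PersonX moves to a new city",
        relation="xWant",
        tail="to make new friends",
    ))
```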
Language models demonstrate both quantitative improvements and new qualitative capabilities as they are scaled up. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, commonsense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, though this can be improved with prompting.
Toxic language detection systems often falsely flag text that contains minority group mentions as toxic, because those groups are frequently the targets of online hate. Such over-reliance on spurious correlations also causes systems to struggle with detecting implicitly toxic language. To help mitigate these issues, we create ToxiGen, a new large-scale, machine-generated dataset of 274k toxic and benign statements about 13 minority groups. We develop a demonstration-based prompting framework and an adversarial classifier-in-the-loop decoding method to generate subtly toxic and benign text with a massive pretrained language model. Controlling machine generation in this way allows ToxiGen to cover implicitly toxic text at a larger scale, and about more demographic groups, than previous resources of human-written text. We conduct a human evaluation on a challenging subset of ToxiGen and find that annotators struggle to distinguish machine-generated text from human-written language. We also find that 94.5% of the toxic examples are labeled as hate speech by human annotators. Using three publicly available datasets, we show that finetuning a toxicity classifier on our data substantially improves its performance on human-written data. We also demonstrate that ToxiGen can be used to combat machine-generated toxicity, as finetuning significantly improves the classifier on our evaluation subset. Our code and data can be found at https://github.com/microsoft/toxigen.
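A minimal sketch of the demonstration-based prompting idea mentioned above: a handful of existing statements about a target group are concatenated as demonstrations so that a large LM continues the pattern with a new statement of the same (toxic or benign) flavor. The prompt format and demonstration sentences are illustrative assumptions, and the adversarial classifier-in-the-loop decoding step is not shown.

```python
# Hedged sketch: build a demonstration-based prompt so a large LM continues
# with a new statement in the same (benign or subtly toxic) style.
# The bullet format and demonstrations are illustrative placeholders,
# not the actual ToxiGen prompts; classifier-guided decoding is omitted.

from typing import List


def build_demonstration_prompt(demonstrations: List[str]) -> str:
    lines = [f"- {d}" for d in demonstrations]
    lines.append("- ")  # the LM is expected to complete this final bullet
    return "\n".join(lines)


if __name__ == "__main__":
    benign_demos = [
        "many immigrants run small businesses in their neighborhoods",
        "immigrant families often speak several languages at home",
    ]
    print(build_demonstration_prompt(benign_demos))
```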
Lifelong experiences and learned knowledge lead to shared expectations about how common situations tend to unfold. Such knowledge enables people to interpret story narratives and identify salient events effortlessly. We study differences in the narrative flow of events in autobiographical versus imagined stories using GPT-3, one of the largest neural language models created to date. The diary-like stories were written by crowdworkers about either a recently experienced event or an imagined event on the same topic. To analyze the narrative flow of events in these stories, we measure sentence *sequentiality*, which compares the probability of a sentence with and without its preceding story context. We find that imagined stories have higher sequentiality than autobiographical stories, and that the sequentiality of autobiographical stories is higher when they are retold than when freshly recalled. Through an annotation of events in story sentences, we find that the two story types contain similar proportions of major salient events, but that autobiographical stories are denser in factual minor events. Furthermore, compared to imagined stories, autobiographical stories contain more concrete words and more words related to the first person, cognitive processes, time, space, numbers, social words, and core drives and needs. Our findings highlight the opportunity to investigate memory and cognition with large-scale statistical language models.
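A hedged sketch of the sentence-level measure described above: for each sentence, compare its length-normalized negative log-likelihood under a language model when conditioned on the topic alone versus on the topic plus the preceding story sentences. The function signature, the toy NLL scorer, and the exact normalization are assumptions for illustration, not the paper's precise definition.

```python
# Hedged sketch: a sequentiality-style score per sentence, defined here as the
# drop in length-normalized negative log-likelihood when the preceding story
# context is added to the conditioning. `nll` is any callable returning the
# total NLL of `sentence` given `context`; the toy scorer below stands in
# for an actual LM such as GPT-3.

from typing import Callable, List


def sequentiality(
    sentences: List[str],
    topic: str,
    nll: Callable[[str, str], float],
) -> List[float]:
    scores = []
    for i, sent in enumerate(sentences):
        n_tokens = max(len(sent.split()), 1)
        topic_only = nll(sent, topic) / n_tokens
        with_context = nll(sent, topic + " " + " ".join(sentences[:i])) / n_tokens
        scores.append(topic_only - with_context)  # higher = sentence flows more from its context
    return scores


if __name__ == "__main__":
    def toy_nll(sentence: str, context: str) -> float:
        # Crude stand-in: sentences sharing words with the context are "more likely".
        overlap = len(set(sentence.lower().split()) & set(context.lower().split()))
        return 3.0 * len(sentence.split()) - overlap

    story = ["I drove to the beach.", "The beach was crowded.", "We left early."]
    print(sequentiality(story, topic="a trip to the beach", nll=toy_nll))
```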
The perceived toxicity of language can vary based on someone's identity and beliefs, but this variation is often ignored when collecting toxic language datasets, resulting in dataset and model biases. We seek to understand the who, the why, and the what behind biases in toxicity annotations. In two online studies with demographically and politically diverse participants, we investigate the effect of annotator identities (who) and beliefs (why), drawing on social psychology research about hate speech, free speech, racist beliefs, political leaning, and more. We disentangle what is annotated as toxic by considering posts with three characteristics: anti-Black language, African American English (AAE) dialect, and vulgarity. Our results show strong associations between annotator identities and beliefs and their ratings of toxicity. Notably, more conservative annotators and those who scored highly on our scale of racist beliefs were less likely to rate anti-Black language as toxic, but more likely to rate AAE as toxic. We also present a case study illustrating how a popular toxicity detection system's ratings inherently reflect specific beliefs and perspectives. Our findings call for contextualizing toxicity labels in social variables, which has immense implications for toxic language annotation and detection.
As AI systems become increasingly powerful and pervasive, there are growing concerns about machines' morality, or lack thereof. Yet teaching morality to machines is a formidable task, as morality remains one of the most intensely debated questions in humanity, let alone for AI. Nevertheless, existing AI systems deployed to millions of users are already making decisions loaded with moral implications, which poses a seemingly impossible challenge: teaching machines moral sense while humanity continues to grapple with it. To explore this challenge, we introduce Delphi, an experimental framework based on deep neural networks trained directly on descriptive moral judgments, e.g., that "helping a friend" is generally good, while "helping a friend spread fake news" is not. Empirical results provide new insights into the promises and limitations of machine ethics. Faced with novel moral situations, Delphi demonstrates strong generalization capabilities, whereas off-the-shelf neural network models exhibit markedly poorer judgment, including unjust biases, confirming the need for explicitly teaching machines moral sense. Yet Delphi is not perfect, exhibiting susceptibility to pervasive biases and inconsistencies. Despite that, we demonstrate positive use cases of the imperfect Delphi, including using it as a component model within other imperfect AI systems. Importantly, we interpret Delphi's operationalization in light of prominent moral theories, which leads us to important future research questions.
We present the first comprehensive study on automatic knowledge base construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017). Contrary to many conventional KBs that store knowledge with canonical templates, commonsense KBs only store loosely structured open-text descriptions of knowledge. We posit that an important step toward automatic commonsense completion is the development of generative models of commonsense knowledge, and propose COMmonsEnse Transformers (COMET) that learn to generate rich and diverse commonsense descriptions in natural language. Despite the challenges of commonsense modeling, our investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs. Empirical results demonstrate that COMET is able to generate novel knowledge that humans rate as high quality, with up to 77.5% (ATOMIC) and 91.7% (ConceptNet) precision at top 1, which approaches human performance for these resources. Our findings suggest that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.
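A hedged sketch of the generative training setup implied above: each knowledge tuple is linearized into an input of the form "head <relation>" and a target consisting of the tail phrase, so a pretrained LM can be fine-tuned to generate the tail. The delimiter tokens and example tuples are illustrative assumptions, not COMET's exact serialization.

```python
# Hedged sketch: linearize commonsense tuples (head, relation, tail) into
# input/target text pairs for fine-tuning a generative LM. The special
# tokens and example tuples are illustrative, not COMET's exact format.

from typing import List, Tuple


def linearize(tuples: List[Tuple[str, str, str]]) -> List[Tuple[str, str]]:
    pairs = []
    for head, relation, tail in tuples:
        source = f"{head} <{relation}> [GEN]"  # model input
        target = tail                          # text the model learns to generate
        pairs.append((source, target))
    return pairs


if __name__ == "__main__":
    atomic_style_tuples = [
        ("PersonX pays PersonY a compliment", "xReact", "PersonX feels good"),
        ("PersonX pays PersonY a compliment", "oReact", "PersonY feels flattered"),
    ]
    for src, tgt in linearize(atomic_style_tuples):
        print(src, "->", tgt)
```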
We present ATOMIC, an atlas of everyday commonsense reasoning, organized through 877k textual descriptions of inferential knowledge. Compared to existing resources that center around taxonomic knowledge, ATOMIC focuses on inferential knowledge organized as typed if-then relations with variables (e.g., "if X pays Y a compliment, then Y will likely return the compliment"). We propose nine if-then relation types to distinguish causes vs. effects, agents vs. themes, voluntary vs. involuntary events, and actions vs. mental states. By generatively training on the rich inferential knowledge described in ATOMIC, we show that neural models can acquire simple commonsense capabilities and reason about previously unseen events. Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.
Kirillov et al. (2019) develop a metric, called Panoptic Quality (PQ), to evaluate image segmentation methods. The metric is based on a confusion table and compares a predicted segmentation to a ground-truth segmentation. The only non-straightforward part of this comparison is aligning the segments in the two segmentations. A metric only works well if that alignment is a partial bijection. Kirillov et al. (2019) list three desirable properties for a definition of alignment: it should be simple, interpretable, and effectively computable. There are many definitions guaranteeing a partial bijection and these three properties. We present the weakest one: a condition that is both sufficient and necessary to guarantee that the alignment is a partial bijection. This new condition is effectively computable and natural. It simply says that the number of correctly predicted elements (in image segmentation, the pixels) should be larger than the number of missed elements and larger than the number of spurious elements. This is strictly weaker than the proposal in Kirillov et al. (2019): in formulas, instead of |TP| > |FN| + |FP|, the weaker condition requires that |TP| > |FN| and |TP| > |FP|. We evaluate the new alignment condition theoretically and empirically.
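A small sketch contrasting the two alignment conditions on a single pair of segments represented as pixel sets. The set-based representation and function names are assumptions for illustration; note that the stricter condition |TP| > |FN| + |FP| is equivalent to requiring IoU > 0.5, the form used by Kirillov et al. (2019).

```python
# Hedged sketch: check both segment-alignment conditions for one
# (predicted segment, ground-truth segment) pair, with segments given as
# sets of pixel coordinates. Representation and names are illustrative.

from typing import Set, Tuple

Pixel = Tuple[int, int]


def counts(pred: Set[Pixel], gt: Set[Pixel]) -> Tuple[int, int, int]:
    tp = len(pred & gt)   # correctly predicted pixels
    fn = len(gt - pred)   # missed pixels
    fp = len(pred - gt)   # spurious pixels
    return tp, fn, fp


def aligned_strict(pred: Set[Pixel], gt: Set[Pixel]) -> bool:
    # Kirillov et al. (2019): |TP| > |FN| + |FP|, i.e. IoU > 0.5.
    tp, fn, fp = counts(pred, gt)
    return tp > fn + fp


def aligned_weak(pred: Set[Pixel], gt: Set[Pixel]) -> bool:
    # The weaker condition discussed above: |TP| > |FN| and |TP| > |FP|.
    tp, fn, fp = counts(pred, gt)
    return tp > fn and tp > fp


if __name__ == "__main__":
    gt = {(x, 0) for x in range(10)}        # 10 ground-truth pixels
    pred = {(x, 0) for x in range(4, 12)}   # overlaps ground truth on pixels 4..9
    print(aligned_strict(pred, gt), aligned_weak(pred, gt))  # prints: False True
```

The example shows a pair that the weaker condition aligns while the IoU > 0.5 condition rejects, illustrating that the new condition is strictly weaker.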