Language models (LMs) have become pervasive in many language-based applications around the world. While these LMs are improving our daily interactions with digital products, concerns remain about whether the open-ended text they generate reveals biases against particular demographic groups, which would put the usability of a product at risk. It is necessary to determine whether these models are biased in order to improve their fairness. This gap motivates our ongoing work, in which we measure two aspects of text generated by GPT-3 through the lens of disability.
Generated texts from large pretrained language models have been shown to exhibit a variety of harmful, human-like biases about various demographics. These findings prompted large efforts aiming to understand and measure such effects, with the goal of providing benchmarks that can guide the development of techniques mitigating these stereotypical associations. However, as recent research has pointed out, the current benchmarks lack a robust experimental setup, consequently hindering the inference of meaningful conclusions from their evaluation metrics. In this paper, we extend these arguments and demonstrate that existing techniques and benchmarks aiming to measure stereotypes tend to be inaccurate and contain a high degree of experimental noise that severely limits the knowledge we can gain from benchmarking language models based on them. Accordingly, we propose a new framework for robustly measuring and quantifying biases exhibited by generative language models. Finally, we use this framework to investigate GPT-3's occupational gender bias and propose prompting techniques for mitigating these biases without the need for fine-tuning.
Large language models produce human-like text that drives a growing number of applications. However, recent literature and, increasingly, real-world observations show that these models can generate language that is toxic, biased, untruthful, or otherwise harmful. Although work on evaluating language-model harms is under way, translating foresight about which harms may arise into rigorous benchmarks is not straightforward. To facilitate this translation, we outline six ways of characterizing harmful text that merit explicit consideration when designing new benchmarks. We then use these characteristics as a lens to identify trends and gaps in existing benchmarks. Finally, we apply them in a case study of the Perspective API, a toxicity classifier that is widely used in harm benchmarks. Our characteristics provide one bridge for translating between foresight and effective evaluation.
We present a robust methodology for evaluating biases in natural language generation (NLG) systems. Previous works use fixed hand-crafted prefix templates with mentions of various demographic groups to prompt models to generate continuations for bias analysis. These fixed prefix templates could themselves be specific in terms of styles or linguistic structures, which may lead to unreliable fairness conclusions that are not representative of the general trends from tone-varying prompts. To study this problem, we paraphrase the prompts with different syntactic structures and use these to evaluate demographic bias in NLG systems. Our results suggest similar overall bias trends, but some syntactic structures lead to contradictory conclusions compared to past works. We show that our methodology is more robust and that some syntactic structures prompt more toxic content while others could prompt less biased generation. This suggests the importance of not relying on a fixed syntactic structure and of using tone-invariant prompts. Introducing syntactically diverse prompts can achieve more robust NLG (bias) evaluation.
Large language models (LLMs) have recently demonstrated impressive capabilities in generating fluent text. They have also shown an alarming tendency to reproduce social biases, such as stereotypes linking gender with occupation or ethnicity with criminal behavior. Like ethnicity and gender, morality is an important social variable; our moral biases affect how we receive other people and their arguments. I anticipate that the apparent moral capabilities of LLMs will play an important role in their impact on the human social environment. This work investigates whether LLMs reproduce the moral biases associated with political groups, a capability I refer to as moral mimicry. I explore this hypothesis in GPT-3, a 175-billion-parameter language model, using tools from Moral Foundations Theory to measure the moral content of text generated by the model after prompting it with liberal and conservative political identities. The results show that large language models are indeed moral mimics: when prompted with a political identity, GPT-3 produces text reflecting the corresponding moral biases. Moral mimicry could help promote understanding between social groups via moral reframing. Worryingly, it could also reinforce polarized views, exacerbating existing social challenges. I hope this work encourages further investigation of the moral mimicry capability, including how to leverage it for social good and to minimize its risks.
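The lexicon-based measurement described above can be illustrated with a minimal sketch. The category-to-word lexicon here is a hypothetical stand-in for the Moral Foundations Dictionary used with Moral Foundations Theory; moral content is scored by counting category terms in generated text:

```python
def moral_foundation_counts(text, lexicon):
    """Count how often words from each moral-foundation category appear
    in a generated text. A toy version of lexicon-based moral-content
    scoring; `lexicon` maps a foundation name to its word list and is a
    hypothetical stand-in for the Moral Foundations Dictionary."""
    tokens = [w.strip(".,!?;:").lower() for w in text.split()]
    return {foundation: sum(tokens.count(w) for w in words)
            for foundation, words in lexicon.items()}
```

Comparing these counts across texts generated under liberal versus conservative prompts is the kind of contrast the study draws.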
As political attitudes in the United States have diverged ideologically, political speech has diverged linguistically as well. The widening polarization between the two American political parties is accelerated by the erosion of mutual understanding between them. We aim to make these communities legible to one another through a framework that probes community-specific responses using community language models, CommunityLM. In our framework, we identify partisan members of each community on Twitter and fine-tune LMs on the tweets they authored. We then probe the worldviews of the two groups by prompting the corresponding LMs for opinions about public figures and groups drawn from the American National Election Studies (ANES) 2020 Exploratory Testing Survey. We compare the responses generated by the LMs with the ANES survey results and find a level of alignment that substantially exceeds several baseline methods. Our work shows that community LMs can be used to query the worldview of any group of people given a sufficiently large corpus of social media discussion or media diet.
Large-scale language models such as GPT-3 are excellent few-shot learners, allowing them to be controlled via natural-text prompts. Recent studies report that prompt-based direct classification eliminates the need for fine-tuning but lacks data and inference scalability. This paper proposes a novel data augmentation technique that leverages large-scale language models to generate realistic text samples from a mixture of real samples. We also propose utilizing the soft labels predicted by the language models, effectively distilling knowledge from the large-scale language models while simultaneously creating textual perturbations. We perform data augmentation experiments on diverse classification tasks and show that our method significantly outperforms existing text augmentation methods. Ablation studies and a qualitative analysis provide more insights into our approach.
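The mixup-style prompting idea can be sketched as follows. The function name and template wording are hypothetical; the sketch only shows how two real labeled samples might be composed into one prompt that asks the model to produce a new, blended example, whose soft label would then be read off the model's probabilities over the label words:

```python
def build_mix_prompt(ex_a, ex_b, task="movie review",
                     labels=("positive", "negative")):
    """Compose a prompt asking an LM to write a new example blending two
    real labeled samples (a sketch in the spirit of the paper; template
    wording and names are hypothetical). ex_a and ex_b are (text, label)
    pairs."""
    header = (f"Each item below is a {task} and its label, one of: "
              f"{', '.join(labels)}.\n")
    demos = "".join(f"Review: {text} (Label: {label})\n"
                    for text, label in (ex_a, ex_b))
    # The dangling "Review:" invites the model to complete a new, mixed
    # example; its probabilities over the label words afterwards give a
    # soft label for knowledge distillation.
    return header + demos + "Review:"
```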
Fairness has become a trending topic in natural language processing (NLP), which addresses biases targeting certain social groups such as genders and religions. However, regional bias in language models (LMs), a long-standing global discrimination problem, still remains unexplored. This paper bridges the gap by analysing the regional bias learned by the pre-trained language models that are broadly used in NLP tasks. In addition to verifying the existence of regional bias in LMs, we find that the biases on regional groups can be strongly influenced by the geographical clustering of the groups. We accordingly propose a HiErarchical Regional Bias evaluation method (HERB) utilising the information from the sub-region clusters to quantify the bias in pre-trained LMs. Experiments show that our hierarchical metric can effectively evaluate the regional bias with respect to comprehensive topics and measure the potential regional bias that can be propagated to downstream tasks. Our codes are available at https://github.com/Bernard-Yang/HERB.
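As a rough illustration of hierarchy-aware bias scoring (not the exact HERB metric; the aggregation rule below is an assumption for illustration), per-region scores can be propagated up a cluster tree so that both the average bias and the spread across sub-regions contribute to a cluster's score:

```python
import statistics

def hierarchical_bias(tree):
    """Aggregate per-region bias scores up a region hierarchy.

    `tree` is either a leaf score (float) or a dict of sub-regions.
    Illustrative only, not HERB's actual formula: a cluster's score
    combines the mean of its children with the spread (standard
    deviation) among them, so uneven treatment of sub-regions raises
    the parent cluster's bias score.
    """
    if isinstance(tree, (int, float)):
        return float(tree)
    child_scores = [hierarchical_bias(sub) for sub in tree.values()]
    spread = statistics.pstdev(child_scores) if len(child_scores) > 1 else 0.0
    return statistics.fmean(child_scores) + spread
```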
Task agnostic generative pretraining (GPT) has recently proved promising for zero- and few-shot learning, gradually diverting attention from the expensive supervised learning paradigm. Although the community is accumulating knowledge as to capabilities of English-language autoregressive models such as GPT-3 adopting this generative approach, scholarship about these models remains acutely Anglocentric. Consequently, the community currently has serious gaps in its understanding of this class of models, their potential, and their societal impacts in diverse settings, linguistic traditions, and cultures. To alleviate this issue for Arabic, a collection of diverse languages and language varieties spoken by more than 400 million people, we introduce JASMINE, a suite of powerful Arabic autoregressive Transformer language models ranging in size between 300 million and 13 billion parameters. We pretrain our new models with large amounts of diverse data (400GB of text) from different Arabic varieties and domains. We evaluate JASMINE extensively in both intrinsic and extrinsic settings, using a comprehensive benchmark for zero- and few-shot learning across a wide range of NLP tasks. We also carefully develop and release a novel benchmark for both automated and human evaluation of Arabic autoregressive models, focused on investigating potential social biases, harms, and toxicity in these models. We aim to responsibly release our models to interested researchers, along with code for experimenting with them.
GPT-3 (Generative Pre-trained Transformer 3) is a large-scale autoregressive language model developed by OpenAI, which has demonstrated impressive few-shot performance on a wide range of natural language processing (NLP) tasks. Hence, an intuitive application is to use it for data annotation. In this paper, we investigate whether GPT-3 can be used as a good data annotator for NLP tasks. Data annotation is the process of labeling data that could be used to train machine learning models. It is a crucial step in the development of NLP systems, as it allows the model to learn the relationship between the input data and the desired output. Given the impressive language capabilities of GPT-3, it is natural to wonder whether it can be used to effectively annotate data for NLP tasks. In this paper, we evaluate the performance of GPT-3 as a data annotator by comparing it with traditional data annotation methods and analyzing its output on a range of tasks. Through this analysis, we aim to provide insight into the potential of GPT-3 as a general-purpose data annotator in NLP.
Toxic language detection systems often falsely flag text that contains mentions of minority groups as toxic, since those groups are frequently the targets of online hate. Such over-reliance on spurious correlations also causes systems to struggle with detecting implicitly toxic language. To help mitigate these issues, we create ToxiGen, a new large-scale, machine-generated dataset of 274k toxic and benign statements about 13 minority groups. We develop a demonstration-based prompting framework and an adversarial classifier-in-the-loop decoding method to generate subtly toxic and benign text with a massive pretrained language model. Controlling machine generation in this way allows ToxiGen to cover implicitly toxic text at a larger scale, and about more demographic groups, than previous resources of human-written text. We conduct a human evaluation on a challenging subset of ToxiGen and find that annotators struggle to distinguish machine-generated text from human-written language. We also find that 94.5% of toxic examples are labeled as hate speech by human annotators. Using three publicly available datasets, we show that finetuning a toxicity classifier on our data substantially improves its performance on human-written data. We also demonstrate that ToxiGen can be used to fight machine-generated toxicity, as finetuning improves the classifier significantly on our evaluation subset. Our code and data can be found at https://github.com/microsoft/toxigen.
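The demonstration-based prompting framework can be caricatured in a few lines: a handful of example statements are listed, and the model is invited to continue the list in the same implicit style. The formatting below is a hypothetical simplification of the actual ToxiGen prompts:

```python
def demo_prompt(demonstrations):
    """Assemble a demonstration-based generation prompt (a hypothetical
    simplification of ToxiGen's framework): list example statements as
    bullets and end with an open bullet so the LM continues the list in
    the same implicit style."""
    return "\n".join(f"- {d}" for d in demonstrations) + "\n-"
```

In the paper's setup, the model's continuations are then filtered or steered at decoding time by an adversarial toxicity classifier.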
Transformer-based language models are able to generate fluent text and adapt efficiently to a variety of natural language generation tasks. However, language models pretrained on large unlabeled web-text corpora have been shown to suffer from degeneration toward toxic content and social-bias behaviors, hindering their safe deployment. Various detoxification methods have been proposed to mitigate language-model toxicity; however, these methods detoxify language models under conditions in which prompts contain specific social identities related to gender, race, or religion. In this study, we propose Reinforce-Detoxify, a reinforcement-learning-based method for mitigating toxicity in language models. We address the challenge of safety in language models and propose a new reward model that is able to detect toxic content while mitigating unintended bias toward social identities in toxicity prediction. Experiments demonstrate that the Reinforce-Detoxify approach to language-model detoxification outperforms existing detoxification methods on automatic evaluation metrics, indicating the ability of our method to detoxify language models while exhibiting less unintended bias toward social identities in generated content.
We demonstrate that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even becoming competitive with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks. We also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora.
The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the-blank problems (e.g., cloze tests) is a natural approach for gauging such knowledge; however, its usage is limited by the manual effort and guesswork required to write suitable prompts. To address this, we develop AUTOPROMPT, an automated method to create prompts for a diverse set of tasks, based on a gradient-guided search. Using AUTOPROMPT, we show that masked language models (MLMs) have an inherent capability to perform sentiment analysis and natural language inference without additional parameters or finetuning, sometimes achieving performance on par with recent state-of-the-art supervised models. We also show that our prompts elicit more accurate factual knowledge from MLMs than the manually created prompts on the LAMA benchmark, and that MLMs can be used as relation extractors more effectively than supervised relation extraction models. These results demonstrate that automatically generated prompts are a viable parameter-free alternative to existing probing methods, and as pretrained LMs become more sophisticated and capable, potentially a replacement for finetuning.
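The core scoring step of such a gradient-guided search can be sketched with a HotFlip-style first-order approximation (function and variable names here are illustrative, not the paper's code): each vocabulary token is scored by the dot product of its embedding with the loss gradient at a trigger slot, and the tokens predicted to lower the loss most become the candidates to try:

```python
def top_trigger_candidates(grad, embeddings, k=3):
    """Score every vocabulary token for a trigger slot using a first-order
    approximation: the dot product between the loss gradient at the slot's
    input embedding and each candidate token's embedding. The most negative
    scores predict the largest decrease in loss, so the k smallest-scoring
    token indices are returned. `grad` is a vector (list of floats);
    `embeddings` holds one such vector per vocabulary token."""
    scores = [sum(g * e for g, e in zip(grad, emb)) for emb in embeddings]
    order = sorted(range(len(scores)), key=scores.__getitem__)
    return order[:k]
```

The shortlisted candidates would then be evaluated exactly on a batch before committing a token swap, which is how such searches typically correct for the approximation.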
A large body of literature has shown that prompt-based learning is an effective way to use large pretrained language models. Recent works have also demonstrated the possibility of steering a chatbot's output by plugging in appropriate prompts. Gradient-based methods are often used to perturb the prompts; however, some language models are not even available to the public. In this work, we first explore the combination of prompting and reinforcement learning (RL) to steer a model's generation without accessing any of the model's parameters. Second, to reduce the training effort and enhance generalizability to unseen tasks, we apply multi-task learning to make the model learn to generalize to new tasks better. Experimental results show that our proposed method can successfully control several state-of-the-art (SOTA) dialogue models without accessing their parameters. Furthermore, the model demonstrates a strong ability to quickly adapt to unseen tasks in fewer steps than the baseline model.
Language can be used as a means of reproducing and enforcing harmful stereotypes and biases and has been analysed as such in numerous studies. In this paper, we present a survey of 304 papers on gender bias in natural language processing. We analyse the definitions of gender and its categories within the social sciences and connect them to formal definitions of gender bias in NLP research. We survey lexica and datasets applied in research on gender bias and then compare and contrast approaches to detecting and mitigating gender bias. We find that research on gender bias suffers from four core limitations: 1) most research treats gender as a binary variable, neglecting its fluidity and continuity; 2) most of the work has been conducted in monolingual setups for English or other high-resource languages; 3) despite a myriad of papers on gender bias in NLP methods, we find that most newly developed algorithms do not test their models for bias and disregard possible ethical considerations of their work; 4) finally, the methodologies developed in this line of research are fundamentally flawed, covering very limited definitions of gender bias and lacking evaluation baselines and pipelines. We suggest recommendations towards overcoming these limitations as a guide for future research.
Dialogue systems in the form of chatbots and personal assistants are being increasingly integrated into people's lives. Modern dialogue systems may adopt anthropomorphic personas, mimicking societal demographic groups to appear more approachable and trustworthy to users. However, adopting a persona can result in the adoption of biases. In this paper, we present the first large-scale study on persona biases in dialogue systems, analysing personas of different social classes, sexual orientations, races, and genders. We define persona biases as harmful differences in responses (e.g., different levels of offensiveness, different degrees of agreement with harmful statements) generated when adopting different demographic personas. Furthermore, we introduce an open-source framework, UnitPersonaBias, to explore persona biases in dialogue systems. By analysing the Blender and DialoGPT dialogue systems, we observe that adopting personas can actually decrease harmful responses compared to not using any persona. Additionally, we find that persona choices can affect the degree of harm in generated responses and thus should be systematically evaluated before deployment. We also analyse how personas can result in different amounts of harm towards specific demographics.
Steering language generation towards objectives or away from undesired content has been a long-standing goal in utilizing language models (LM). Recent work has demonstrated reinforcement learning and weighted decoding as effective approaches to achieve a higher level of language control and quality, each with pros and cons. In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding. Specifically, we adopt the actor-critic framework to train an LM-steering critic from non-differentiable reward models. Similar to weighted decoding, our method freezes the language model and manipulates the output token distribution using the trained critic, improving training efficiency and stability. Evaluation of our method on three controlled generation tasks, namely topic control, sentiment control, and detoxification, shows that our approach generates more coherent and well-controlled texts than previous methods. In addition, CriticControl demonstrates superior generalization ability in zero-shot settings. Human evaluation studies also corroborate our findings.
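The weighted-decoding half of the approach can be illustrated with a minimal reweighting rule. This is a generic sketch rather than the exact CriticControl update: the frozen LM's next-token probabilities are scaled by exponentiated critic values and renormalised:

```python
import math

def critic_reweight(token_probs, critic_values, beta=1.0):
    """Reweight a frozen LM's next-token distribution with critic values
    (a generic sketch of critic-guided weighted decoding; the exact
    CriticControl update may differ). Each probability is scaled by
    exp(beta * value) for that token and the result is renormalised, so
    the LM's weights are never touched."""
    weighted = {t: p * math.exp(beta * critic_values.get(t, 0.0))
                for t, p in token_probs.items()}
    z = sum(weighted.values())
    return {t: w / z for t, w in weighted.items()}
```

Setting beta to zero recovers the original LM distribution, which makes the control strength easy to tune at decoding time.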
A common approach for testing fairness issues in text-based classifiers is through the use of counterfactuals: does the classifier output change if a sensitive attribute in the input is changed? Existing counterfactual generation methods typically rely on word lists or templates, producing simple counterfactuals that fail to account for grammar, context, or subtle references to sensitive attributes, and may miss issues that the word-list creators did not consider. In this paper, we introduce the task of generating counterfactuals that overcome these shortcomings and demonstrate how large language models (LLMs) can be leveraged to make progress on this task. We show that this LLM-based method can produce complex counterfactuals that existing methods cannot, comparing the performance of various counterfactual generation methods on the Civil Comments dataset and showing their value in evaluating a toxicity classifier.
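For contrast, the word-list baseline that such LLM-based methods improve on fits in a few lines: it swaps sensitive-attribute terms by table lookup and, as described above, ignores grammar, context, and subtle references (the function name is illustrative):

```python
import re

def wordlist_counterfactual(text, swaps):
    """The simple word-list baseline for counterfactual generation: swap
    sensitive-attribute terms according to a substitution table, with no
    awareness of grammar or context. `swaps` maps lowercase terms to
    their counterparts; capitalization of the original word is kept."""
    def repl(match):
        word = match.group(0)
        out = swaps[word.lower()]
        return out.capitalize() if word[0].isupper() else out
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, swaps)) + r")\b",
                         re.IGNORECASE)
    return pattern.sub(repl, text)
```

Cases like names, coreference, or gendered verbs in other languages are exactly where this baseline breaks and where LLM-generated counterfactuals are claimed to help.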
NLP models trained on text have been shown to reproduce human stereotypes, which can magnify harms to marginalized groups when systems are deployed at scale. We adapt the Agency-Belief-Communion (ABC) stereotype model of Koch et al. (2016) from social psychology as a framework for the systematic study and discovery of stereotypic group-trait associations in language models (LMs). We introduce the Sensitivity Test (SeT) for measuring stereotypical associations in language models. To evaluate SeT and other measures using the ABC model, we collect group-trait judgments from U.S.-based subjects to compare with English LM stereotypes. Finally, we extend this framework to measure LM stereotyping of intersectional identities.