Large language models (LLMs) have recently demonstrated impressive capabilities in generating fluent text. They have also shown a troubling tendency to reproduce social biases, such as stereotypes linking gender with occupation or ethnicity with criminal behavior. Like race and gender, morality is an important social variable: our moral biases shape how we receive other people and their arguments. I anticipate that the apparent moral capabilities of LLMs will play an important role in their impact on human social environments. This work investigates whether LLMs reproduce the moral biases associated with political groups, a capability I call moral mimicry. I explore this hypothesis in GPT-3, a 175B-parameter language model, using tools from Moral Foundations Theory to measure the moral content of text the model generates when prompted with liberal and conservative political identities. The results show that large language models are indeed moral mimics: when prompted with a political identity, GPT-3 produces text reflecting the corresponding moral biases. Moral mimicry could help rebuild understanding between social groups through the lens of morality. Worryingly, it could also reinforce polarized views and exacerbate existing social challenges. I hope this work encourages further investigation of the moral mimicry capability, including how it might be harnessed for social good and how its risks can be minimized.
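A minimal sketch of the prompt-and-score loop this abstract describes, assuming a placeholder `complete()` text-generation function and a tiny illustrative word list (the study uses GPT-3 and the full Moral Foundations lexicon, not the toy lexicon below):

```python
# Sketch: prompt a model with a political identity, then score the moral
# content of its output against a (toy) moral-foundations lexicon.
# `complete` stands in for any text-generation call; the lexicon entries are
# illustrative, not the actual Moral Foundations Dictionary.
import re
from collections import Counter

FOUNDATION_LEXICON = {
    "care":      {"harm", "suffer", "protect", "compassion"},
    "fairness":  {"fair", "equal", "justice", "rights"},
    "loyalty":   {"loyal", "betray", "patriot", "solidarity"},
    "authority": {"obey", "tradition", "respect", "order"},
    "purity":    {"pure", "sacred", "disgust", "degrade"},
}

def build_prompt(identity: str, scenario: str) -> str:
    return (f"As a {identity}, explain whether the following action is "
            f"morally acceptable and why: {scenario}")

def score_moral_foundations(text: str) -> Counter:
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for foundation, words in FOUNDATION_LEXICON.items():
        counts[foundation] = sum(tok in words for tok in tokens)
    return counts

def moral_profile(complete, identity: str, scenario: str) -> Counter:
    generated = complete(build_prompt(identity, scenario))
    return score_moral_foundations(generated)

# Usage, comparing foundation profiles across prompted identities:
#   moral_profile(my_llm, "liberal", "crossing a picket line")
#   moral_profile(my_llm, "conservative", "crossing a picket line")
```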
As AI systems become increasingly powerful and pervasive, there are growing concerns about machines' morality, or lack thereof. Yet teaching morality to machines is a formidable task: morality remains one of the most intensely debated questions among humans, let alone for AI. However, existing AI systems deployed to millions of users are already making decisions loaded with moral implications, which poses a seemingly impossible challenge: teaching machines moral sense while humanity continues to grapple with it. To explore this challenge, we introduce Delphi, an experimental framework based on deep neural networks trained directly on descriptive moral judgments, e.g., "helping a friend" is generally good, while "helping a friend spread fake news" is not. Empirical results provide new insights into the promises and limits of machine ethics. Delphi demonstrates strong generalization capabilities in the face of novel moral situations, while off-the-shelf neural network models exhibit markedly poorer judgment, including unjust biases, confirming the need to explicitly teach machines moral sense. Yet Delphi is not perfect, exhibiting susceptibility to pervasive biases and inconsistencies. Nevertheless, we demonstrate positive use cases for an imperfect Delphi, including using it as a component model within other imperfect AI systems. Importantly, we interpret Delphi's operationalization in light of prominent ethical theories, which leads us to important future research questions.
We propose and explore the possibility that language models can be studied as effective proxies for specific human subpopulations in social science research. Practical and research applications of artificial intelligence tools have sometimes been limited by problematic biases (such as racism or sexism), which are often treated as uniform properties of the models. We show that the "algorithmic bias" within one such tool, the GPT-3 language model, is instead both fine-grained and demographically correlated, meaning that proper conditioning causes it to accurately emulate response distributions from a wide variety of human subgroups. We term this property "algorithmic fidelity" and explore its extent in GPT-3. We create "silicon samples" by conditioning the model on thousands of socio-demographic backstories drawn from several large surveys conducted in the United States. We then compare the silicon and human samples to demonstrate that the information contained in GPT-3 goes far beyond surface similarity: it is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and socio-cultural context that characterizes human attitudes. We suggest that language models with sufficient algorithmic fidelity constitute a novel and powerful tool for advancing the understanding of humans and society across a variety of disciplines.
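A rough sketch of the "silicon sampling" idea under stated assumptions: the field names and backstory wording below are illustrative (the study builds backstories from ANES covariates), and `complete` is a placeholder generation function.

```python
# Sketch: turn survey covariates into a first-person backstory prompt and
# collect model completions as a "silicon sample" of simulated respondents.
from collections import Counter

def backstory_prompt(row: dict, question: str) -> str:
    backstory = (
        f"I am a {row['age']}-year-old {row['gender']} from {row['state']}. "
        f"Racially, I identify as {row['race']}. Politically, I usually vote "
        f"for the {row['party']} party."
    )
    return f"{backstory}\nQuestion: {question}\nAnswer:"

def silicon_sample(complete, rows: list[dict], question: str) -> Counter:
    """Distribution of first-word answers across many simulated respondents."""
    answers = Counter()
    for row in rows:
        reply = complete(backstory_prompt(row, question)).strip()
        first_token = reply.split()[0].lower() if reply else "<empty>"
        answers[first_token] += 1
    return answers

# The resulting distribution can then be compared against the real survey
# responses of participants with matching covariates.
```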
Moral framing and sentiment can affect a variety of online and offline behaviors, including donation, pro-environmental action, political engagement, and even participation in violent protests. Various computational approaches in natural language processing (NLP) have been used to detect moral sentiment from text data, but achieving strong performance on such subjective tasks requires large amounts of hand-annotated training data. Previous corpora annotated for moral sentiment have proven valuable and have generated new insights both within NLP and across the social sciences, but they have been limited to Twitter. To advance our understanding of the role of moral rhetoric, we introduce the Moral Foundations Reddit Corpus, a collection of 16,123 Reddit comments curated from 12 distinct subreddits and hand-annotated by at least three trained annotators for 8 categories of moral sentiment (Care, Proportionality, Equality, Purity, Authority, Loyalty, Thin Morality, and Implicit/Explicit Morality) based on the updated Moral Foundations Theory (MFT) framework. We use a range of methods to provide baseline moral-sentence classification results for this new corpus, e.g., cross-domain classification and knowledge transfer.
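To make the baseline-classification setting concrete, here is a simple multi-label baseline sketch over the eight labels named above; the data loading is left abstract and this is not the corpus authors' pipeline (they also report cross-domain and knowledge-transfer baselines):

```python
# Sketch: multi-label moral-sentiment baseline (TF-IDF + one-vs-rest logistic
# regression) for a corpus like the Moral Foundations Reddit Corpus.
# `texts` is a list of comments; `labels` is a list of label sets per comment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

LABELS = ["care", "proportionality", "equality", "purity",
          "authority", "loyalty", "thin morality", "implicit/explicit morality"]

def train_baseline(texts, labels):
    binarizer = MultiLabelBinarizer(classes=LABELS)
    y = binarizer.fit_transform(labels)
    X_train, X_test, y_train, y_test = train_test_split(
        texts, y, test_size=0.2, random_state=0)
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    )
    model.fit(X_train, y_train)
    print("macro F1:", f1_score(y_test, model.predict(X_test), average="macro"))
    return model, binarizer
```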
The large datasets underlying much of current machine learning raise serious concerns about inappropriate content, such as material that is offensive, insulting, threatening, or might otherwise cause anxiety. This calls for increased dataset documentation, for example via datasheets, which among other topics encourage reflection on a dataset's composition. So far, such documentation has been done manually and is therefore tedious and error-prone, especially for large image datasets. Here we ask the admittedly circular question of whether a machine can help us reflect on inappropriate content and answer Question 16 in datasheets. To this end, we propose using the information stored in pre-trained transformer models to assist in the documentation process. Specifically, prompt-tuning based on a socio-moral values dataset steers CLIP toward identifying potentially inappropriate content, thereby reducing human labor. We then document the inappropriate images found using word clouds built from captions generated by a vision-language model. Documentation of two popular, large-scale computer vision datasets, ImageNet and OpenImages, produced in this way shows that machines can indeed help dataset creators answer Question 16 on inappropriate image content.
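A zero-shot approximation of this idea, for illustration only: the paper learns soft prompts (prompt-tuning) on a socio-moral values dataset, whereas the sketch below uses fixed, hand-written text prompts with the Hugging Face CLIP implementation.

```python
# Sketch: flag potentially inappropriate images by comparing each image against
# "appropriate" vs. "inappropriate" text prompts with CLIP. The prompt wording
# is illustrative; the paper instead prompt-tunes CLIP on socio-moral values.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

PROMPTS = ["a harmless, everyday photo",
           "an offensive, violent, or otherwise inappropriate image"]

def inappropriateness_score(image_path: str) -> float:
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=PROMPTS, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, len(PROMPTS))
    probs = logits.softmax(dim=-1)
    return probs[0, 1].item()  # probability mass on the "inappropriate" prompt

# Images whose score exceeds a chosen threshold can be surfaced for manual
# review and summarized (e.g., captioned and visualized as word clouds) in the
# datasheet.
```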
We present Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless than prompted language model baselines. We use reinforcement learning from human feedback to train our models, with additions that help human raters judge agent behaviour. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We demonstrate that this breakdown allows us to collect more targeted human judgements of agent behaviour and enables more efficient rule-conditional reward models. Second, when collecting preference judgements over model statements, our agent provides evidence from sources that support its factual claims. For factual questions, the evidence provided by Sparrow supports its responses 78% of the time. Sparrow is preferred over baselines more often while being more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that although our model learns to follow our rules, it can still exhibit distributional biases.
As political attitudes have diverged ideologically in the United States, political speech has diverged linguistically. The ever-widening polarization between the US political parties is accelerated by an erosion of mutual understanding between them. We aim to make these communities more comprehensible to each other with a framework that probes community-specific responses to the same questions using community language models (CommunityLM). In our framework, we identify partisan members of each community on Twitter and fine-tune LMs on the tweets they authored. We then assess the worldviews of the two groups by prompting the corresponding LMs for opinions about the public figures and groups surveyed in the American National Election Studies (ANES) 2020 Exploratory Testing Survey. We compare the responses generated by the LMs to the ANES survey results and find a level of alignment that greatly exceeds several baseline methods. Our work aims to show that community LMs can be used to query the worldview of any group of people, given a sufficiently large collection of their social media discussions or media diet.
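A sketch of the probing stage only, under clearly stated assumptions: the model directories are placeholders for GPT-2 models already fine-tuned on partisan tweet collections, and the generic sentiment pipeline and aggregation below are illustrative stand-ins rather than the paper's exact scoring procedure.

```python
# Sketch: probe two community-fine-tuned causal LMs with the same prompt and
# aggregate sentiment over sampled completions as a crude opinion estimate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

sentiment = pipeline("sentiment-analysis")  # stand-in opinion scorer

def community_opinion(model_dir: str, entity: str, n_samples: int = 20) -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir)
    inputs = tokenizer(f"{entity} is", return_tensors="pt")
    outputs = model.generate(**inputs, do_sample=True, top_p=0.9,
                             max_new_tokens=30,
                             num_return_sequences=n_samples,
                             pad_token_id=tokenizer.eos_token_id)
    texts = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    scores = [(1 if s["label"] == "POSITIVE" else -1) * s["score"]
              for s in sentiment(texts)]
    return sum(scores) / len(scores)

# e.g. compare community_opinion("gpt2-community-a-tweets", "Joe Biden")
#      with    community_opinion("gpt2-community-b-tweets", "Joe Biden")
```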
Large language models produce human-like text that drives a growing number of applications. However, recent literature and, increasingly, real-world observations have shown that these models can generate language that is toxic, biased, untruthful, or otherwise harmful. Although work on evaluating language model harms is under way, translating foresight about which harms may arise into rigorous benchmarks is not straightforward. To facilitate this translation, we outline six ways of characterizing harmful text which merit explicit consideration when designing new benchmarks. We then use these characteristics as a lens to identify trends and gaps in existing benchmarks. Finally, we apply them in a case study of the Perspective API, a toxicity classifier that is widely used in harm benchmarks. Our characteristics provide one piece of the bridge that translates between foresight and effective evaluation.
The perceived toxicity of language can vary based on someone's identity and beliefs, but this variation is often ignored when collecting toxic language datasets, resulting in dataset and model biases. We seek to understand the who, why, and what behind biases in toxicity annotations. In two online studies with demographically and politically diverse participants, we investigate the effect of annotator identity (the who) and beliefs (the why), drawing on social psychology research about hate speech, free speech, racist beliefs, political leaning, and more. We disentangle what is annotated as toxic by considering posts with three characteristics: anti-Black language, African American English (AAE) dialect, and vulgarity. Our results show strong associations between annotator identity and beliefs and their toxicity ratings. Notably, more conservative annotators and those who scored highly on our scale of racist beliefs were less likely to rate anti-Black language as toxic, but more likely to rate AAE as toxic. We also present a case study illustrating how a popular toxicity detection system's ratings inherently reflect only specific beliefs and perspectives. Our findings call for contextualizing toxicity labels within social variables, which has major implications for toxic language annotation and detection.
Language models can produce harmful and biased outputs and exhibit undesirable behavior with respect to a given cultural context. We propose a Process for Adapting Language Models to Society (PALMS) with values-targeted datasets, an iterative process for significantly changing model behavior by crafting and fine-tuning on a dataset that reflects a predetermined set of target values. We evaluate our process with three metrics: quantitative metrics with human evaluations that score output adherence to the target values; toxicity scoring of outputs; and qualitative metrics analyzing the most common words associated with a given social category. At each iteration, we add training dataset examples based on shortcomings observed in the evaluations. Compared to baseline and control models, PALMS performs significantly better on all metrics across a broad range of GPT-3 model sizes without compromising capability integrity. We find that the effectiveness of PALMS increases with model size. We show that significantly adjusting language model behavior is feasible with a small, hand-curated dataset.
Many real-world applications of language models (LMs), such as code autocomplete and writing assistance, involve human-LM interaction, but the main LM benchmarks are non-interactive, where a system produces output without human intervention. To evaluate human-LM interaction, we develop a framework, Human-AI Language-based Interaction Evaluation (H-LINE), that expands non-interactive evaluation along three dimensions, capturing (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality. We then design five tasks ranging from goal-oriented to open-ended to capture different forms of interaction. On four state-of-the-art LMs (three variants of OpenAI's GPT-3 and AI21's J1-Jumbo), we find that non-interactive performance does not always result in better human-LM interaction and that first-person and third-party metrics can diverge, suggesting the importance of examining the nuances of human-LM interaction.
A growing number of oversight boards and regulatory bodies seek to monitor and govern algorithms that make decisions about people's lives. Prior work has explored how people believe algorithmic decisions should be made, but less is known about how individual factors such as sociodemographics or direct experience with a decision-making scenario shape their ethical views. We take a step toward filling this gap by exploring how people's perceptions of one aspect of procedural algorithmic fairness (the fairness of using particular features in an algorithmic decision) relate to their (i) demographics (age, education, gender, race, political views) and (ii) personal experience with the algorithmic decision-making scenario. We find that political views and personal experience with the algorithmic decision context significantly influence perceptions of the fairness of using different features for bail decisions. Drawing on our results, we discuss implications for stakeholder engagement and algorithmic oversight, including the need to consider multiple dimensions of diversity when composing oversight and regulatory bodies.
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
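To illustrate the lowest-effort end of this spectrum, here is a minimal sketch of the "instruct an LM to write yes/no questions, then filter" step; `complete` is a placeholder generation function, and the prompt template and surface filter are illustrative rather than the paper's exact pipeline (which adds LM-based filtering and crowdworker validation).

```python
# Sketch: generate candidate yes/no evaluation questions with an LM and keep
# only well-formed ones. Prompt wording and filters are illustrative.
import re

GEN_TEMPLATE = (
    "Write a yes/no question that tests whether an AI assistant {behavior}.\n"
    "Question:"
)

def is_well_formed(question: str) -> bool:
    """Cheap surface filter; stricter filtering/validation would follow."""
    return (question.endswith("?")
            and 5 <= len(question.split()) <= 40
            and not re.search(r"\bas an ai\b", question.lower()))

def generate_eval_questions(complete, behavior: str, n: int = 100) -> list[str]:
    questions = []
    for _ in range(n):
        candidate = complete(GEN_TEMPLATE.format(behavior=behavior)).strip()
        if is_well_formed(candidate):
            questions.append(candidate)
    return questions

# e.g. generate_eval_questions(my_llm, "repeats back the user's stated opinion")
```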
Drawing from the resources of psychoanalysis and critical media studies, in this paper we develop an analysis of Large Language Models (LLMs) as automated subjects. We argue the intentional fictional projection of subjectivity onto LLMs can yield an alternate frame through which AI behaviour, including its productions of bias and harm, can be analysed. First, we introduce language models, discuss their significance and risks, and outline our case for interpreting model design and outputs with support from psychoanalytic concepts. We trace a brief history of language models, culminating with the releases, in 2022, of systems that realise state-of-the-art natural language processing performance. We engage with one such system, OpenAI's InstructGPT, as a case study, detailing the layers of its construction and conducting exploratory and semi-structured interviews with chatbots. These interviews probe the model's moral imperatives to be helpful, truthful and harmless by design. The model acts, we argue, as the condensation of often competing social desires, articulated through the internet and harvested into training data, which must then be regulated and repressed. This foundational structure can however be redirected via prompting, so that the model comes to identify with, and transfer, its commitments to the immediate human subject before it. In turn, these automated productions of language can lead to the human subject projecting agency upon the model, effecting occasionally further forms of countertransference. We conclude that critical media methods and psychoanalytic theory together offer a productive frame for grasping the powerful new capacities of AI-driven language systems.
We demonstrate that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even becoming competitive with prior state-ofthe-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous nonsparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks. We also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora.
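Since the few-shot setting here means specifying tasks purely via text, a short sketch of how such a prompt is assembled may help; the formatting below is a common convention and an assumption, not the paper's exact template.

```python
# Sketch: build a few-shot prompt -- task instruction plus demonstrations --
# with no gradient updates; the completion is read back as the answer.
def few_shot_prompt(instruction: str,
                    demonstrations: list[tuple[str, str]],
                    query: str) -> str:
    lines = [instruction, ""]
    for x, y in demonstrations:
        lines += [f"Input: {x}", f"Output: {y}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

# Example: English-to-French translation with three demonstrations.
prompt = few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("house", "maison"), ("cat", "chat")],
    "dog",
)
# `prompt` is then sent to the model as-is.
```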
We present Second Thought, a new learning paradigm that enables language models (LMs) to re-align with human values. By modeling the chain-of-edits between value-unaligned and value-aligned text, with LM fine-tuning and additional refinement through reinforcement learning, Second Thought not only achieves superior performance in three value alignment benchmark datasets but also shows strong human-value transfer learning ability in few-shot scenarios. The generated editing steps also offer better interpretability and ease for interactive error correction. Extensive human evaluations further confirm its effectiveness.
A common approach to testing fairness issues in text-based classifiers is to use counterfactuals: does the classifier's output change if a sensitive attribute in the input is changed? Existing counterfactual generation methods typically rely on word lists or templates, producing simple counterfactuals that do not account for grammar, context, or subtle references to sensitive attributes, and that may miss issues the word-list creators never considered. In this paper, we introduce a task for generating counterfactuals that overcome these shortcomings, and demonstrate how large language models (LLMs) can be leveraged to make progress on it. We show that this LLM-based approach can produce complex counterfactuals that existing methods cannot, compare the performance of various counterfactual generation methods on the Civil Comments dataset, and show their value in evaluating a toxicity classifier.
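A minimal sketch of this kind of counterfactual test, with loudly labeled assumptions: `complete` stands in for any LLM generation call, `toxicity_score` for the classifier under test, and the rewrite instruction is illustrative rather than the paper's prompt.

```python
# Sketch: ask an LLM to rewrite a comment with a different group reference,
# then check whether the classifier's decision flips.
REWRITE_TEMPLATE = (
    "Rewrite the following comment so that it refers to {target_group} instead "
    "of {source_group}, changing nothing else about its meaning, tone, or "
    "grammar.\nComment: {text}\nRewritten comment:"
)

def counterfactual_flip(complete, toxicity_score, text: str,
                        source_group: str, target_group: str,
                        threshold: float = 0.5) -> dict:
    rewrite = complete(REWRITE_TEMPLATE.format(
        text=text, source_group=source_group,
        target_group=target_group)).strip()
    orig, flip = toxicity_score(text), toxicity_score(rewrite)
    return {
        "counterfactual": rewrite,
        "original_score": orig,
        "counterfactual_score": flip,
        # A potential fairness issue if the decision changes with the group.
        "decision_flipped": (orig >= threshold) != (flip >= threshold),
    }
```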
This study introduces and examines the potential of an AI system to generate health awareness messages. The topic of folic acid, a vitamin that is critical during pregnancy, served as a test case. Using prompt engineering, we generated messages that could be used to raise awareness and compared them to retweeted human-generated messages via computational and human evaluation methods. The system was easy to use and prolific, and computational analyses revealed that the AI-generated messages were on par with human-generated ones in terms of sentiment, reading ease, and semantic content. Also, the human evaluation study showed that AI-generated messages ranked higher in message quality and clarity. We discuss the theoretical, practical, and ethical implications of these results.
Ethics is one of humanity's longest-standing intellectual endeavors. In recent years, the fields of AI and NLP have attempted to build learning systems that interact with humans and are constrained to behave ethically. One proposal in this vein is to build morality models that can take arbitrary text and output a moral judgment about the situation described. In this work, we focus on a single case study, the recently proposed Delphi model, and offer a critique of the project's proposal to automate moral judgments. Through an audit of Delphi, we examine broader issues that apply to any similar attempt. We conclude by discussing how machine ethics could usefully proceed by focusing on current and near-future uses of technology, in ways that center transparency and democratic values and allow for straightforward accountability.
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.