Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny historically disadvantaged and vulnerable groups full participation in society. Worse still, because the resulting discrimination is almost always an unintentional emergent property of the algorithm's use rather than a conscious choice by its programmers, it can be unusually hard to identify the source of the problem or to explain it to a court. This Essay examines these concerns through the lens of American antidiscrimination law, more particularly through Title VII.
The optimal liability framework for AI systems remains an unsolved problem across the globe. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive and a revision of the Product Liability Directive. They constitute the final cornerstone of AI regulation in the EU. Crucially, the liability proposals and the EU AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a Brussels effect in AI regulation, with significant consequences for the US and other countries. This paper makes three novel contributions. First, it examines in detail the Commission proposals and shows that, while making steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will primarily rest on disclosure of evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness and causality. Hence, second, the article suggests amendments, which are collected in an Annex at the end of the paper. Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. This includes: a comprehensive framework for AI liability; provisions to support innovation; an extension to non-discrimination/algorithmic fairness, as well as explainable AI; and sustainability. I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).
This paper identifies data minimisation and purpose limitation as two core data protection principles for data-driven systems. While contemporary data processing practices appear to be at odds with these principles, we demonstrate that systems could technically use far less data than they currently do. This observation is the starting point of our detailed techno-legal analysis, which uncovers the obstacles that stand in the way of achieving this and exemplifies the unexpected trade-offs that emerge when data protection law is applied in practice. Our analysis aims to inform the debate on the impact of data protection on the development of artificial intelligence in the EU, offering practical action points for data controllers, regulators, and researchers.
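The claim that systems could technically use far less data than they currently do can be made concrete with a simple feature-ablation check: retrain on progressively smaller feature subsets and observe how little performance degrades. The sketch below is a minimal illustration of that idea only; the synthetic dataset, the greedy drop order, and the 1% tolerance are assumptions of the example, not prescriptions from the paper.

```python
# Minimal feature-ablation sketch: how much data does the model actually need?
# Synthetic data only; the drop order and the 1% tolerance are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)

def score(columns):
    """Cross-validated accuracy using only the given feature columns."""
    return cross_val_score(LogisticRegression(max_iter=1000), X[:, columns], y, cv=5).mean()

baseline = score(list(range(X.shape[1])))

# Greedily drop features (weakest correlation first) while accuracy stays within 1% of baseline.
kept = list(range(X.shape[1]))
for col in sorted(kept, key=lambda c: abs(np.corrcoef(X[:, c], y)[0, 1])):
    trial = [c for c in kept if c != col]
    if trial and score(trial) >= baseline - 0.01:
        kept = trial

print(f"baseline accuracy: {baseline:.3f} with {X.shape[1]} features")
print(f"comparable accuracy: {score(kept):.3f} with only {len(kept)} features kept")
```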
Artificial intelligence is not only increasingly used in business and administration contexts, but a race for its regulation is also underway, with the EU spearheading the efforts. Contrary to existing literature, this article suggests, however, that the most far-reaching and effective EU rules for AI applications in the digital economy will not be contained in the proposed AI Act - but have just been enacted in the Digital Markets Act. We analyze the impact of the DMA and related EU acts on AI models and their underlying data across four key areas: disclosure requirements; the regulation of AI training data; access rules; and the regime for fair rankings. The paper demonstrates that fairness, in the sense of the DMA, goes beyond traditionally protected categories of non-discrimination law on which scholarship at the intersection of AI and law has so far largely focused. Rather, we draw on competition law and the FRAND criteria known from intellectual property law to interpret and refine the DMA provisions on fair rankings. Moreover, we show how, based on CJEU jurisprudence, a coherent interpretation of the concept of non-discrimination in both traditional non-discrimination and competition law may be found. The final part sketches specific proposals for a comprehensive framework of transparency, access, and fairness under the DMA and beyond.
The widespread adoption of business analytics (BA) has delivered financial gains and improved efficiency. However, these advances have simultaneously raised growing legal and ethical concerns when BA informs decisions that have unjust impacts. In response to these concerns, the emerging study of algorithmic fairness addresses algorithmic outputs that may lead to disparate outcomes or other forms of injustice for subgroups of the population, especially those that have historically been marginalized. Fairness is relevant on grounds of legal compliance, social responsibility, and utility; if not addressed adequately and systematically, unfair BA systems can cause societal harm and may also threaten an organization's own survival, its competitiveness, and its overall performance. This paper offers a forward-looking, BA-focused review of algorithmic fairness. We first review state-of-the-art research on sources and measures of bias as well as bias mitigation algorithms. We then provide a detailed discussion of the relationship between fairness and utility, emphasizing that the frequently assumed trade-off between these two constructs is often mistaken or short-sighted. Finally, we chart a path forward by identifying opportunities for business scholars to address impactful open challenges that are key to effective and responsible BA.
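For readers unfamiliar with the bias measures such reviews cover, the most basic group-fairness diagnostics can be computed in a few lines. The following is a minimal sketch with invented predictions; the "four-fifths" threshold is a conventional rule of thumb, not a claim of the paper.

```python
# Two basic group-fairness diagnostics over binary predictions:
# demographic-parity difference and the disparate-impact ratio.
# The data is synthetic and the four-fifths threshold is a conventional rule of thumb.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 1, 1, 0, 0, 1, 0])   # model decisions (1 = favourable)
group  = np.array(['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b'])  # protected attribute

rate_a = y_pred[group == 'a'].mean()
rate_b = y_pred[group == 'b'].mean()

dp_difference = rate_a - rate_b                        # 0 means parity in selection rates
di_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)   # "four-fifths rule" compares this to 0.8

print(f"selection rates: a={rate_a:.2f}, b={rate_b:.2f}")
print(f"demographic parity difference: {dp_difference:+.2f}")
print(f"disparate impact ratio: {di_ratio:.2f} (flag if < 0.8)")
```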
Recommender systems can strongly influence which information we see online, e.g., on social media, and thus impact our beliefs, decisions, and actions. At the same time, these systems can create substantial business value for different stakeholders. Given the growing potential impact of such AI-based systems on individuals, organizations, and society, questions of fairness have gained increased attention in recent years. However, research on fairness in recommender systems is still a developing area. In this survey, we first review the fundamental concepts and notions of fairness that were put forward in the area in the recent past. Afterward, through a review of more than 150 scholarly publications, we present an overview of how research in this field is currently operationalized, e.g., in terms of general research methodology, fairness measures, and algorithmic approaches. Overall, our analysis of recent works points to specific research gaps. In particular, we find that in many research works in computer science, very abstract problem operationalizations are prevalent, and questions of the underlying normative claims and what represents a fair recommendation in the context of a given application are often not discussed in depth. These observations call for more interdisciplinary research to address fairness in recommendation in a more comprehensive and impactful manner.
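As one concrete example of how fairness is operationalized in this literature, exposure-based measures compare how much position-discounted attention a ranking gives to items from different provider groups. A minimal sketch follows; the items, group labels, and the DCG-style log discount are illustrative assumptions, not a measure taken from the survey.

```python
# Exposure-based provider fairness for a single ranked list.
# Position weights use the usual DCG-style log discount; items and groups are synthetic.
import math

ranking = ["item1", "item2", "item3", "item4", "item5", "item6"]
provider_group = {"item1": "major_label", "item2": "major_label", "item3": "indie",
                  "item4": "major_label", "item5": "indie", "item6": "indie"}

def exposure(position):  # position is 1-based
    return 1.0 / math.log2(position + 1)

group_exposure = {}
for pos, item in enumerate(ranking, start=1):
    g = provider_group[item]
    group_exposure[g] = group_exposure.get(g, 0.0) + exposure(pos)

total = sum(group_exposure.values())
for g, e in group_exposure.items():
    print(f"{g}: {e / total:.2%} of total exposure")
```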
Deployed AI systems often do not work. They can be constructed haphazardly, deployed indiscriminately, and promoted deceptively. Yet despite this reality, scholars, the press, and policymakers pay little attention to functionality. This leads to technical and policy solutions focused on "ethical" or value-aligned deployments, often skipping over the prior question of whether a given system functions, or provides any benefit at all. To describe the harms of various types of functionality failures, we analyze a set of case studies to create a taxonomy of known AI functionality issues. We then point to policy and organizational responses that are often overlooked and that become more readily available once functionality is the focus. We argue that functionality is a meaningful AI policy challenge and a necessary first step in protecting affected communities from algorithmic harm.
Trustworthy artificial intelligence (AI) has become an important topic because trust in AI systems and their creators has been lost. Researchers, corporations, and governments have long and painful histories of excluding marginalized groups from technology development, deployment, and oversight. As a result, these technologies are less useful and even harmful to minoritized groups. We argue that any AI development, deployment, and monitoring framework that aspires to trust must incorporate feminist, non-exploitative participatory design principles together with strong, external, and continual monitoring and testing. We also explain the importance of considering aspects of trustworthiness beyond transparency, fairness, and accountability, in particular treating justice and the shifting of power toward the disempowered as core values of any trustworthy AI system. Creating trustworthy AI starts by funding, supporting, and empowering grassroots organizations such as Queer in AI, so that the field of AI has the diversity and inclusion needed to develop trustworthy AI credibly and effectively. We leverage the expert knowledge Queer in AI has gained through its years of work and advocacy to discuss how queer identities are used and abused in datasets and AI systems, and the harms that arise along these lines. On this basis, we share a vision of a queered approach to AI, further propose a queer epistemology, and analyze the benefits it can bring to AI. We also discuss how to apply this queer epistemology in vision statements, proposing frameworks related to AI and gender diversity as well as privacy and queer data protection.
Fairness is an important requirement to ensure that machine learning (ML) predictive systems do not discriminate against specific individuals or entire sub-populations, particularly minorities. Given the inherent subjectivity of the concept of fairness, several notions of fairness have been introduced in the literature. This paper is a survey that illustrates the subtleties between these fairness notions through a large number of examples and scenarios. In addition, unlike other surveys in the literature, it addresses the question: which notion of fairness is most suited to a given real-world scenario, and why? Our attempt to answer this question consists of (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and then (3) fitting these two elements together to recommend the most suitable fairness notion for each specific setup. The results are summarized in a decision diagram that can be used by practitioners and policymakers to navigate the relatively large catalogue of ML fairness notions.
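A decision diagram of this kind can be thought of as a small rule-based mapping from scenario characteristics to a recommended notion. The sketch below illustrates the idea only; the questions asked and the notions returned are deliberately simplified assumptions and do not reproduce the survey's actual diagram.

```python
# Toy, rule-based stand-in for a fairness-notion decision diagram.
# The questions and recommendations are illustrative simplifications, not the survey's diagram.
def recommend_fairness_notion(ground_truth_reliable: bool,
                              emphasis: str,                 # "precision" or "recall"
                              explaining_variables_legitimate: bool) -> str:
    if not ground_truth_reliable:
        # Without trustworthy labels, error-rate-based notions are hard to justify.
        return "demographic parity (statistical parity)"
    if explaining_variables_legitimate:
        # Disparities explained by admissible variables may be acceptable.
        return "conditional statistical parity"
    if emphasis == "precision":
        return "predictive parity"
    if emphasis == "recall":
        return "equal opportunity"
    return "equalized odds"

print(recommend_fairness_notion(ground_truth_reliable=True,
                                emphasis="recall",
                                explaining_variables_legitimate=False))
# -> equal opportunity
```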
If future AI systems are to be reliably safe in novel situations, they will need to incorporate general principles that guide them to robustly recognize which outcomes and behaviors would be harmful. Such principles may need to be supported by a binding regulatory regime, which in turn requires broadly accepted foundational principles. They should also be specific enough for technical implementation. Drawing inspiration from law, this paper explains how negative human rights could fulfill the role of such principles and serve as a foundation both for an international regulatory regime and for building technical safety constraints into future AI systems.
The emerging field of "algorithmic fairness" provides a novel set of methods for reasoning about the fairness of algorithmic predictions and decisions. Yet even as algorithmic fairness has become a prominent component of efforts to enhance equality in domains such as public policy, it faces significant limitations and criticisms. The most fundamental problem is the mathematical result known as the "impossibility of fairness" (an incompatibility between mathematical definitions of fairness). Moreover, many algorithms that satisfy fairness standards actually exacerbate oppression. These two problems call into question whether algorithmic fairness can play a productive role in the pursuit of equality. In this paper, I diagnose these problems as products of the field's methodology and propose an alternative path forward. The dominant approach of "formal algorithmic fairness" suffers from a fundamental limitation: it relies on a narrow frame of analysis restricted to specific decision points, in isolation from the context of those decisions. In light of this shortcoming, I draw on theories of substantive equality from law and philosophy to propose an alternative approach: "substantive algorithmic fairness." Substantive algorithmic fairness adopts a broader scope for analyzing fairness, extending beyond specific decision points to account for social hierarchies and the effects of the decisions that algorithms facilitate. As a result, substantive algorithmic fairness suggests reforms that combat oppression and escape the impossibility of fairness. Moreover, substantive algorithmic fairness presents a new direction for the field: away from formal mathematical models of "fairness" and toward substantive evaluations of how algorithms can promote equality.
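The "impossibility of fairness" referenced here can be stated compactly: when two groups have different base rates, calibration (predictive parity) and equal error rates cannot hold simultaneously for an imperfect classifier. The identity below is the well-known incompatibility argument in the style of Chouldechova (2017), restated here for context rather than derived from this paper.

```latex
% Relationship among prevalence p, positive predictive value (PPV),
% false positive rate (FPR), and false negative rate (FNR) for a binary classifier:
\[
\mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\,\bigl(1-\mathrm{FNR}\bigr).
\]
% If two groups share the same PPV, FPR, and FNR (i.e. predictive parity and
% equalized error rates both hold) but have different prevalences p_1 \neq p_2,
% the identity cannot hold for both groups unless the classifier is perfect
% (FPR = FNR = 0). Hence the two criteria are incompatible whenever base rates differ.
```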
Educational technologies, and the school systems in which they are deployed, enact particular ideologies about what knowledge matters and how learners should learn. As artificial intelligence technologies, in education and beyond, can lead to inequitable outcomes for marginalized communities, various approaches have been developed to evaluate and mitigate AI's harmful impacts. In this paper, however, we argue that the dominant paradigm of evaluating fairness on the basis of performance disparities in AI models is inadequate in the face of the systemic injustices that educational AI systems (re)produce. Drawing on a lens of structural injustice grounded in critical theory and Black feminist scholarship, we critically examine several commonly studied and widely adopted categories of educational AI and explore how they are bound up in and reproduce historical legacies of structural injustice and inequity, regardless of parity in their model performance. We close with alternative visions for a more equitable future of educational AI.
This paper summarizes and evaluates various approaches, methods, and techniques for pursuing fairness in artificial intelligence (AI) systems. It examines their merits and shortcomings and proposes practical guidelines for defining, measuring, and preventing bias in AI. In particular, it cautions against some simple yet common approaches for evaluating bias in AI systems and offers more sophisticated and effective alternatives. The paper also addresses widespread controversies and confusion in the field by providing a common language for the different stakeholders of high-impact AI systems. It describes the various trade-offs involved in AI fairness and provides practical advice for balancing them. It offers techniques for evaluating the costs and benefits of fairness targets and defines the role of human judgment in setting these targets. The paper provides discussion and guidance for AI practitioners, organizational leaders, and policymakers, along with links to additional material for more technical audiences. Numerous real-world examples illustrate the concepts, challenges, and recommendations from a practical perspective.
Machine learning has significantly enhanced the capabilities of robots, enabling them to perform a wide range of tasks in human environments and adapt to our uncertain real world. Recent work across various fields of machine learning has highlighted the importance of fairness to ensure that these algorithms do not reproduce human biases and consequently lead to discriminatory outcomes. As robot learning systems increasingly perform more and more tasks in our everyday lives, it is crucial to understand the influence of such biases in order to prevent unintended behavior toward certain groups of people. In this work, we present the first survey on fairness in robot learning from an interdisciplinary perspective spanning technical, ethical, and legal challenges. We propose a taxonomy of sources of bias and the resulting types of discrimination. Using examples from different robot learning domains, we examine scenarios of unfair outcomes and strategies for mitigating them. We present early advances in the field, covering different fairness definitions, ethical and legal considerations, and methods for fair robot learning. With this work, we aim to pave the way for groundbreaking developments in fair robot learning.
We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI. Similar to how parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), leveraged as an expression of how humans communicate their goals, and what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power - and thus not a perfect aggregation of citizen preferences - if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning.
Xenophobia is one of the key drivers of marginalisation, discrimination, and conflict, yet many prominent machine learning (ML) fairness frameworks fail to comprehensively measure or mitigate the resulting xenophobic harms. Here we aim to bridge this conceptual gap and help facilitate safe and ethical design of artificial intelligence (AI) solutions. We ground our analysis of the impact of xenophobia by first identifying distinct types of xenophobic harms, and then applying this framework across a number of prominent AI application domains, reviewing the potential interplay between AI and xenophobia on social media and recommendation systems, healthcare, immigration, employment, as well as biases in large pre-trained models. These help inform our recommendations towards an inclusive, xenophilic design of future AI systems.
Artificial intelligence (AI) systems are increasingly involved in decisions that affect our lives, so ensuring that automated decision-making is fair and ethical has become a top priority. Intuitively, we feel that, like human decisions, the judgments of artificial agents should necessarily be grounded in some moral principles. Yet a decision-maker (whether human or artificial) can only make genuinely ethical (according to any ethical theory) and fair (according to any notion of fairness) decisions if full information on all factors relevant to the decision is available at decision time. This raises two problems: (1) in settings where we rely on AI systems that use classifiers obtained via supervised learning, some induction/generalization is inevitably present, and some relevant attributes may be missing even during learning; (2) when such decisions are modeled as games, any revealed pure strategy, ethical or not, is inevitably susceptible to exploitation. Moreover, in many games the Nash equilibrium, i.e. the mathematically optimal outcome, can only be attained with mixed strategies, meaning that decisions must be randomized. In this paper we argue that, in a supervised learning setting, there exist randomized classifiers that perform at least as well as deterministic ones and may therefore be the optimal choice in many circumstances. We support our theoretical results with an empirical study indicating positive societal attitudes toward randomized artificial decision-makers, and we discuss some policy and implementation issues related to the use of randomized classifiers that connect to current AI policy and standardization initiatives.
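The game-theoretic point about pure versus mixed strategies can be illustrated with the smallest possible example: in a matching-pennies-style game, any deterministic choice can be exploited by a best-responding adversary, while the uniform mixed strategy cannot. The game and its payoffs below are illustrative assumptions, not material from the paper.

```python
# Matching-pennies-style illustration of why deterministic (pure) decision rules are
# exploitable while a mixed strategy is not. Payoffs are the decision maker's; the
# game and numbers are illustrative, not taken from the paper.

# payoff[(decision_maker_action, adversary_action)]
payoff = {("heads", "heads"): +1, ("heads", "tails"): -1,
          ("tails", "heads"): -1, ("tails", "tails"): +1}

# Any pure strategy is exploited by a best-responding adversary (guaranteed payoff -1):
for dm_action in ("heads", "tails"):
    worst = min(payoff[(dm_action, adv)] for adv in ("heads", "tails"))
    print(f"pure strategy '{dm_action}': guaranteed payoff {worst}")

# The uniform mixed strategy guarantees expected payoff 0 against any adversary:
for adv in ("heads", "tails"):
    expected = 0.5 * payoff[("heads", adv)] + 0.5 * payoff[("tails", adv)]
    print(f"mixed strategy vs adversary '{adv}': expected payoff {expected}")
```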
One concern with the rise of large language models is their potential to cause significant harm, particularly through pretraining on biased, obscene, copyrighted, and private material. Emerging ethics approaches have attempted to filter pretraining material, but such approaches are ad hoc and fail to take context into account. We offer an approach to filtering grounded in law, which directly addresses the trade-offs involved in filtering material. First, we gather and make available the Pile of Law, a 256GB (and growing) open-source dataset of English-language legal and administrative data, covering court opinions, contracts, administrative rules, and legislative records. Pretraining on the Pile of Law may help with legal tasks that hold promise for improving access to justice. Second, we distill the legal norms that governments have developed to constrain the inclusion of toxic or private content into actionable lessons for researchers, and we discuss how our dataset reflects these norms. Third, we show how the Pile of Law offers researchers the opportunity to learn such filtering rules directly from the data, providing an exciting new research direction for model-based processing.
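The third point, learning filtering rules from data, can be prototyped with a standard text classifier trained on passages labeled as content to filter versus content to keep. The sketch below is only an illustration of that workflow: the four labeled snippets are invented placeholders, not samples from the Pile of Law, and a real pipeline would train on the actual corpus.

```python
# Toy sketch of learning a content filter from labeled text, in the spirit of
# "learning filtering rules from data". The examples are invented placeholders,
# NOT drawn from the Pile of Law; a real pipeline would train on the actual corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the defendant's social security number is 123-45-6789",   # private information -> filter
    "plaintiff's home address was disclosed in open court",    # private information -> filter
    "the court held that the statute applies retroactively",   # permissible -> keep
    "the contract shall terminate upon thirty days notice",    # permissible -> keep
]
labels = [1, 1, 0, 0]   # 1 = filter out, 0 = keep

filter_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
filter_model.fit(texts, labels)

candidate = "the witness's phone number appears in the transcript"
print("filter probability:", filter_model.predict_proba([candidate])[0, 1])
```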
Algorithmic fairness has attracted increasing attention in the machine learning community. Various definitions have been proposed in the literature, but the differences and connections among them have not been clearly addressed. In this paper, we review and reflect on the various fairness notions previously proposed in the machine learning literature and attempt to draw connections to arguments in moral and political philosophy, especially theories of justice. We also consider fairness inquiries from a dynamic perspective, accounting for the long-term impact induced by current prediction and decision-making. In light of the differences in how fairness is characterized, we present a flowchart that encompasses the implicit assumptions and expected outcomes of different types of fairness inquiries on the data generating process, on the predicted outcome, and on the induced impact. This paper demonstrates the importance of matching the mission (which kind of fairness one would like to enforce) with the means (what the scope of the fairness analysis is, and what the appropriate analysis scheme is) for achieving the intended purpose.
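The dynamic perspective, in which today's decisions change tomorrow's population, can be illustrated with a very small feedback simulation: a group's qualification rate drifts toward the rate at which it receives favourable decisions. This is a toy sketch only; the update rule and all numbers are illustrative assumptions, not the paper's model.

```python
# Toy feedback loop: a group's qualification rate drifts toward the rate at which
# it receives favourable decisions. The update rule and all numbers are illustrative.
def simulate(selection_rate, qualification_rate, steps=10, adjustment=0.3):
    history = [qualification_rate]
    for _ in range(steps):
        # Qualification moves part of the way toward the opportunity the group receives.
        qualification_rate += adjustment * (selection_rate - qualification_rate)
        history.append(round(qualification_rate, 3))
    return history

print("under-selected group:", simulate(selection_rate=0.3, qualification_rate=0.5))
print("over-selected group: ", simulate(selection_rate=0.7, qualification_rate=0.5))
```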