Nowadays, deep learning models are widely deployed to solve a large variety of tasks. However, little attention has been paid to the associated legal aspects. In 2016, the European Union approved the General Data Protection Regulation, which came into force in 2018. Its main rationale is to protect the privacy and data of its citizens against the operators of the so-called "data economy." Since data is the fuel of modern artificial intelligence, the GDPR is considered partially applicable to a range of algorithmic decision-making tasks until more structured AI regulation comes into force. Meanwhile, AI should not be allowed to leak undesired information, deviating from the purpose for which it was created. In this work we propose DisP, an approach for deep learning models that removes information related to certain private classes of the data processed by the AI. In particular, DisP is a regularization strategy that, at training time, removes features shared by members of the same private class, thereby hiding information about private class membership. Our experiments on state-of-the-art deep learning models show the effectiveness of DisP, minimizing the extraction risk for the classes we wish to keep private.
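As a rough illustration only (the abstract does not spell out the regularizer), one way such a training-time penalty could act on features of samples sharing a private class is to drive down their pairwise similarity; the PyTorch sketch below assumes private-class labels are available during training, and the model interface and the weight `lambda_disp` are hypothetical, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def private_class_decorrelation(features, private_labels):
    """Illustrative penalty: mean pairwise cosine similarity between feature
    vectors that share the same private class. Minimizing it discourages the
    network from encoding a common 'signature' for that class."""
    penalty = features.new_zeros(())
    count = 0
    for c in private_labels.unique():
        feats = features[private_labels == c]
        if feats.shape[0] < 2:
            continue
        feats = F.normalize(feats, dim=1)
        sim = feats @ feats.t()                                  # pairwise cosine similarities
        off_diag = sim - torch.eye(sim.shape[0], device=sim.device)
        penalty = penalty + off_diag.abs().mean()
        count += 1
    return penalty / max(count, 1)

def training_step(model, x, y_task, y_private, lambda_disp=0.1):
    # Assumes the model returns (features, task logits).
    features, logits = model(x)
    task_loss = F.cross_entropy(logits, y_task)
    reg = private_class_decorrelation(features, y_private)
    return task_loss + lambda_disp * reg
```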
This is an introductory machine learning course specifically developed for STEM students. Our goal is to provide interested readers with the basics needed to employ machine learning in their own projects and to familiarize them with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural network structures such as dense feed-forward and conventional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions of interpretability of latent-space representations are discussed, using the examples of dreaming and adversarial attacks. The final part is dedicated to reinforcement learning, where we introduce the basic notions of value functions and policy learning.
We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.
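A condensed sketch of the shadow-model recipe described above, using scikit-learn: shadow models imitate the black-box target, and their prediction vectors on known member/non-member records supply labeled training data for the attack model. The dataset, split sizes, and the gradient-boosting attack classifier are placeholders chosen for brevity, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for data drawn from the same distribution as the target's training set.
X, y = make_classification(n_samples=2000, n_features=20, n_classes=2, random_state=0)

attack_X, attack_y = [], []
for seed in range(5):                                      # 5 shadow models
    X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=seed)
    shadow = RandomForestClassifier(n_estimators=50, random_state=seed).fit(X_in, y_in)
    for data, label in ((X_in, 1), (X_out, 0)):            # 1 = member, 0 = non-member
        probs = shadow.predict_proba(data)                 # prediction vectors are the attack features
        attack_X.append(probs)
        attack_y.append(np.full(len(data), label))

attack_model = GradientBoostingClassifier().fit(np.vstack(attack_X), np.concatenate(attack_y))

# At attack time: query the target model on a candidate record and feed its
# prediction vector to attack_model to guess membership.
```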
With a growing world population driving rapid urbanization around the globe, there is a great need to deliberate on a future worth living. In particular, as modern smart cities embrace more and more data-driven artificial intelligence services, it is worth remembering that technology can facilitate prosperity, well-being, urban livability, or social justice, but only when it has the right analog complements (such as dedicated effort, mature institutions, and responsible governance); the ultimate goal of these smart cities is to facilitate and enhance human welfare and societal flourishing. Researchers have shown that various technological business models and features can in fact contribute to societal problems such as extremism, polarization, misinformation, and Internet addiction. In light of these observations, addressing the philosophical and ethical questions of ensuring the security, safety, and interpretability of such AI systems, which will form the technological bedrock of future cities, is of paramount importance; globally, there is a need for technology that is more humane and human-centered. In this paper, we analyze and explore the key challenges of safety, robustness, interpretability, and ethics (of data and algorithms) for the successful deployment of AI in human-centric applications, with a particular emphasis on the convergence of these concepts/challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving them. The paper also suggests current limitations, pitfalls, and future research directions for these domains, and discusses how to fill the current gaps and arrive at better solutions. We believe such rigorous analysis will provide a baseline for future research in the domain.
Data leakage from public machine learning (ML) models is an area of increasing significance, as commercial and government applications of ML can draw on multiple data sources, potentially including sensitive data of users and clients. We provide a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage that is natural to ML models, potentially malicious leakage caused by privacy attacks, and the currently available defense mechanisms. We focus on inference-time leakage, the most likely scenario for publicly available models. We first discuss what leakage is in the context of different data, tasks, and model architectures. We then propose a taxonomy across involuntary and malicious leakage and the available defenses, followed by the currently available assessment metrics and applications. We conclude with outstanding challenges and open questions, outlining some promising directions for future research.
Given access to a machine learning model, can an adversary reconstruct the model's training data? This work studies this question from the lens of a powerful informed adversary who knows all the training data points except one. By instantiating concrete attacks, we show that reconstructing the remaining data point in this stringent threat model is feasible. For convex models (e.g., logistic regression), reconstruction attacks are simple and can be derived in closed form. For more general models (e.g., neural networks), we propose an attack strategy based on training a reconstructor network that receives as input the weights of the model under attack and produces the target data point. We demonstrate the effectiveness of our attack on image classifiers trained on MNIST and CIFAR-10, and systematically investigate which factors of standard machine learning pipelines affect reconstruction success. Finally, we theoretically investigate how much differential privacy suffices to mitigate reconstruction attacks by informed adversaries. Our work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works (e.g., generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.
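A toy illustration of the reconstructor-network idea for the informed-adversary setting, assuming white-box access to the attacked model's parameters: many "released" models are trained on the fixed known data plus one varying secret point, and a small network learns the mapping from released parameters to that point. The tiny logistic-regression targets, MLP sizes, and MSE objective are simplifications for exposition, not the configuration studied in the paper.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fixed_X = rng.normal(size=(50, 5))                    # training points known to the informed adversary
fixed_y = (fixed_X[:, 0] > 0).astype(int)

def train_release(secret_x, secret_y):
    """Train the 'released' model on the fixed data plus one secret point."""
    X = np.vstack([fixed_X, secret_x[None]])
    y = np.append(fixed_y, secret_y)
    clf = LogisticRegression(max_iter=500).fit(X, y)
    return np.concatenate([clf.coef_.ravel(), clf.intercept_])   # attack input: model parameters

# Build a dataset of (released parameters -> secret point) pairs for the reconstructor.
weights, targets = [], []
for _ in range(500):
    secret = rng.normal(size=5)
    weights.append(train_release(secret, int(secret[1] > 0)))
    targets.append(secret)
W = torch.tensor(np.array(weights), dtype=torch.float32)
T = torch.tensor(np.array(targets), dtype=torch.float32)

reconstructor = nn.Sequential(nn.Linear(W.shape[1], 64), nn.ReLU(), nn.Linear(64, 5))
opt = torch.optim.Adam(reconstructor.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(reconstructor(W), T)   # learn: parameters -> secret point
    loss.backward()
    opt.step()
```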
Empirical attacks on collaborative learning show that the gradients of deep neural networks can not only disclose private latent attributes of the training data but also be used to reconstruct the original data. While prior works tried to quantify the privacy risk stemming from gradients, these measures do not establish a theoretically grounded understanding of gradient leakage, do not generalize across attackers, and cannot fully explain what is observed in practice through empirical attacks. In this paper, we introduce theoretically motivated measures to quantify information leakage in both attack-dependent and attack-independent manners. Specifically, we present an adaptation of $\mathcal{V}$-information, which generalizes the empirical attack success rate and allows quantifying the amount of information that can leak from any chosen family of attack models. We then propose attack-independent measures, requiring only the shared gradients, for quantifying both original and latent information leakage. Our empirical results, on six datasets and four popular models, reveal that the gradients of the first layer contain the highest amount of original information, while the (fully connected) classification layers placed after the (convolutional) feature extractor layers contain the highest latent information. Furthermore, we show how techniques such as gradient aggregation during training can mitigate information leakage. Our work paves the way for better defenses such as layer-based protection or strong aggregation.
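A small sketch of the gradient-aggregation mitigation mentioned above in a PyTorch-style collaborative loop: averaging gradients over several local batches before anything is shared mixes many examples into each value that leaves the device. The model, batch sizes, and the helper name `shared_update` are illustrative assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

def shared_update(batches):
    """Accumulate gradients over several local batches and share only the
    average: each shared value mixes many examples, which lowers the original
    and latent information an eavesdropper can recover from it."""
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for g, p in zip(grads, model.parameters()):
            g += p.grad.detach() / len(batches)
    return grads                        # only these aggregated tensors leave the device

batches = [(torch.randn(16, 32), torch.randint(0, 10, (16,))) for _ in range(8)]
update = shared_update(batches)
```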
Good training data is a prerequisite for developing useful ML applications. However, in many domains, existing datasets cannot be shared due to privacy regulations (e.g., from medical studies). This work investigates a simple yet unconventional approach to anonymized data synthesis that enables third parties to benefit from such private data. We explore the feasibility of learning implicitly from unrealistic, task-relevant stimuli, which are synthesized by exciting the neurons of a trained deep neural network (DNN). The neuron excitation thus serves as a pseudo-generative model. The stimuli data is used to train new classification models. Furthermore, we extend this framework to suppress representations associated with specific individuals. We use sleep monitoring data from an open and a large closed clinical study and evaluate whether (1) end users can create and successfully use customized classification models for sleep apnea detection, and (2) the identity of participants in the studies is protected. Extensive comparative empirical studies show that different algorithms trained on the stimuli are able to generalize successfully on the same task as the original model. However, architectural and algorithmic similarity between the new and the original models plays an important role in performance. For similar architectures, the performance is close to that of using real data (e.g., an accuracy difference of 0.56%, a kappa coefficient difference of 0.03-0.04). Further experiments show that the stimuli can, to a large extent, successfully anonymize participants of the clinical studies.
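A rough sketch of what "exciting the neurons of a trained network" to synthesize task-relevant stimuli could look like in PyTorch: gradient ascent on a class logit starting from noise (activation maximization), with the resulting unrealistic inputs labeled by the excited class. The placeholder model, optimizer settings, and the L2 prior via weight decay are assumptions for illustration, not the procedure used in the study.

```python
import torch
import torch.nn as nn

# Stand-in for a trained model; in practice this would be the private, already-trained DNN.
trained_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 2))
trained_model.eval()

def synthesize_stimuli(target_class, n=64, steps=200, lr=0.1, weight_decay=1e-3):
    """Gradient-ascent on the target-class logit starting from noise. The
    resulting 'unrealistic' inputs strongly excite class-specific neurons and
    can be paired with target_class as labels to train a new classifier."""
    x = torch.randn(n, 1, 28, 28, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr, weight_decay=weight_decay)   # weight decay acts as a mild L2 prior
    for _ in range(steps):
        opt.zero_grad()
        logits = trained_model(x)
        loss = -logits[:, target_class].mean()      # maximize excitation of the target class
        loss.backward()
        opt.step()
    return x.detach(), torch.full((n,), target_class, dtype=torch.long)

stimuli, labels = synthesize_stimuli(target_class=1)
```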
Computer systems have held massive amounts of personal data for decades. On the one hand, this abundance of data enables breakthroughs in artificial intelligence (AI), and especially in machine learning (ML) models. On the other hand, it can threaten users' privacy and weaken the trust between humans and AI. Recent regulations require that private information about a user can be removed from computer systems in general and from ML models in particular, upon request (e.g., the "right to be forgotten"). While removing data from back-end databases should be straightforward, it is not sufficient in the AI context, as ML models often "remember" the old data. Existing adversarial attacks have proven that private membership or attributes of the training data can be learned from trained models. This phenomenon calls for a new paradigm, namely machine unlearning, to make ML models forget specified data. It turns out that recent works on machine unlearning have not been able to fully solve the problem due to the lack of common frameworks and resources. In this survey paper, we seek to provide a thorough study of machine unlearning in its definitions, scenarios, mechanisms, and applications. Specifically, as a categorized collection of state-of-the-art research, we hope to provide a broad reference for those seeking a primer on machine unlearning and its various formulations, design requirements, removal requests, algorithms, and uses in a variety of ML applications. Furthermore, we hope to outline key findings and trends in the paradigm and highlight new research areas that have not yet seen the application of machine unlearning but could still benefit greatly from it. We hope this survey serves as a valuable reference for ML researchers as well as those seeking to innovate in privacy technologies. Our resources are available at https://github.com/tamlhp/awesome-machine-unlearning.
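For orientation, a minimal sketch of the simplest unlearning baseline such taxonomies build on: exact retraining without the records to be forgotten. scikit-learn is used for brevity, and the helper name `unlearn_by_retraining` is illustrative; approximate unlearning methods exist precisely to avoid this full retraining cost.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = LogisticRegression(max_iter=500).fit(X, y)

def unlearn_by_retraining(X, y, forget_idx):
    """Exact unlearning baseline: drop the requested records and retrain from
    scratch, so the new model provably never saw the forgotten data."""
    keep = np.setdiff1d(np.arange(len(X)), forget_idx)
    return LogisticRegression(max_iter=500).fit(X[keep], y[keep])

# e.g. a 'right to be forgotten' request for three records
model = unlearn_by_retraining(X, y, forget_idx=[3, 17, 256])
```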
The optimal liability framework for AI systems remains an unsolved problem across the globe. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive and a revision of the Product Liability Directive. They constitute the final, and much-anticipated, cornerstone of AI regulation in the EU. Crucially, the liability proposals and the EU AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a Brussels effect in AI regulation, with significant consequences for the US and other countries. This paper makes three novel contributions. First, it examines in detail the Commission proposals and shows that, while making steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will primarily rest on disclosure of evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness and causality. Hence, second, the article suggests amendments, which are collected in an Annex at the end of the paper. Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. This includes: a comprehensive framework for AI liability; provisions to support innovation; an extension to non-discrimination/algorithmic fairness, as well as explainable AI; and sustainability. I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).
Trusted Research Environments (TREs) are secure and safe environments in which researchers can access sensitive data. With the growth and diversity of medical data such as electronic health records (EHR), medical imaging, and genomic data, the use of artificial intelligence (AI), and of its subfield machine learning (ML) in particular, is increasing in the medical domain. This creates the desire to disclose new types of outputs from TREs, such as trained machine learning models. While specific guidelines and policies exist for statistical disclosure control in TREs, they do not satisfactorily cover these new types of output requests. In this paper, we define some of the challenges around the application and disclosure of machine learning for healthcare within TREs. We describe the various vulnerabilities that the introduction of AI brings to TREs. We also provide an introduction to the different types and levels of risk associated with the disclosure of trained ML models. Finally, we describe new research opportunities in developing and adapting policies and tools for safely disclosing machine learning outputs from TREs.
We investigate the security of split learning, a novel collaborative machine learning framework that enables peak performance while requiring minimal resource consumption. In this paper, we expose vulnerabilities of the protocol and demonstrate its inherent insecurity by introducing general attack strategies targeting the reconstruction of clients' private training sets. More prominently, we show that a malicious server can actively hijack the learning process of the distributed model and bring it into an insecure state that enables inference attacks on clients' data. We implement different adaptations of the attack and test them on various datasets as well as within realistic threat scenarios. We demonstrate that our attack is able to overcome recently proposed defensive techniques aimed at enhancing the security of the split learning protocol. Finally, we also illustrate the protocol's insecurity against malicious clients by extending previously devised attacks for federated learning. To make our results reproducible, we make our code available at https://github.com/pasquini-dario/splitn_fsha.
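To make the attack surface concrete, here is a bare-bones sketch of the split learning protocol itself (a client-side feature extractor, a server-side head, and gradients flowing back through the cut layer); the layer sizes and single-process simulation are assumptions, and it is exactly this server-controlled backward signal that the hijacking attack described above exploits.

```python
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU())     # stays on the client, sees raw data
server_net = nn.Sequential(nn.Linear(64, 10))                # stays on the server
opt = torch.optim.SGD(list(client_net.parameters()) + list(server_net.parameters()), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))

# One split-learning step: only the cut-layer activations and their gradients
# cross the network boundary, never the raw data held by the client.
smashed = client_net(x)                    # client -> server: intermediate activations
logits = server_net(smashed)
loss = loss_fn(logits, y)                  # (where labels live depends on the protocol variant)
opt.zero_grad()
loss.backward()                            # server -> client: gradient at the cut layer
opt.step()
```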
During training, machine learning models may store or "learn" more information about the training data than is needed for the prediction or classification task. Property inference attacks exploit this by aiming to extract statistical properties of a given model's training data without having access to the training data itself. These properties may include the quality of pictures to identify the camera model, the age distribution to reveal the target audience of a product, or the fraction of hosts infected by malware in a computer network. This attack is especially accurate when the attacker has access to all model parameters, i.e., in a white-box scenario. By defending against such attacks, model owners can ensure that their training data, associated properties, and thus their intellectual property remain private, even if they deliberately share their models, e.g., for collaborative training, or if their models are leaked. In this paper, we introduce an effective defense mechanism against white-box property inference attacks that is independent of the training data type, model task, or number of properties. Our defense mitigates property inference attacks by systematically changing the trained weights and biases of the target model such that an adversary cannot extract the chosen properties. We empirically evaluate it on three different datasets, including tabular and image data, and on two types of artificial neural networks. Our results show that machine learning models can be protected against property inference attacks with a good privacy-utility trade-off, both effectively and reliably. Furthermore, our approach indicates that the mechanism is also effective against multiple properties at once.
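One way to picture "systematically changing the trained weights so the adversary's property classifier fails" is to run gradient descent on the target model's parameters against a white-box property-inference meta-classifier while constraining the model's outputs to stay close to the original; the PyTorch sketch below is an assumption-laden simplification for intuition, not the authors' algorithm, and all names and sizes in it are hypothetical.

```python
import torch
import torch.nn as nn

target = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))     # model to be protected

# Stand-in for the adversary's white-box attack: a meta-classifier mapping
# flattened parameters to a binary property guess about the training data.
n_params = sum(p.numel() for p in target.parameters())
property_attacker = nn.Sequential(nn.Linear(n_params, 64), nn.ReLU(), nn.Linear(64, 2))

ref_inputs = torch.randn(256, 20)
with torch.no_grad():
    ref_outputs = target(ref_inputs)                       # behaviour we want to preserve (utility proxy)

opt = torch.optim.Adam(target.parameters(), lr=1e-3)
for _ in range(200):
    flat = torch.cat([p.view(-1) for p in target.parameters()])
    attack_logits = property_attacker(flat)
    confuse = (attack_logits[0] - attack_logits[1]).abs()  # push the attacker towards a 50/50 guess
    utility = nn.functional.mse_loss(target(ref_inputs), ref_outputs)
    loss = confuse + 10.0 * utility                        # hide the property, keep the behaviour
    opt.zero_grad()
    loss.backward()
    opt.step()
```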
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from incurring on unfair decision-making, the AI community has concentrated efforts in correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
Artificial intelligence is not only increasingly used in business and administration contexts, but a race for its regulation is also underway, with the EU spearheading the efforts. Contrary to existing literature, this article suggests, however, that the most far-reaching and effective EU rules for AI applications in the digital economy will not be contained in the proposed AI Act - but have just been enacted in the Digital Markets Act. We analyze the impact of the DMA and related EU acts on AI models and their underlying data across four key areas: disclosure requirements; the regulation of AI training data; access rules; and the regime for fair rankings. The paper demonstrates that fairness, in the sense of the DMA, goes beyond traditionally protected categories of non-discrimination law on which scholarship at the intersection of AI and law has so far largely focused. Rather, we draw on competition law and the FRAND criteria known from intellectual property law to interpret and refine the DMA provisions on fair rankings. Moreover, we show how, based on CJEU jurisprudence, a coherent interpretation of the concept of non-discrimination in both traditional non-discrimination and competition law may be found. The final part sketches specific proposals for a comprehensive framework of transparency, access, and fairness under the DMA and beyond.
Deep Learning has recently become hugely popular in machine learning for its ability to solve end-to-end learning systems, in which the features and the classifiers are learned simultaneously, providing significant improvements in classification accuracy in the presence of highly-structured and large databases. Its success is due to a combination of recent algorithmic breakthroughs, increasingly powerful computers, and access to significant amounts of data. Researchers have also considered privacy implications of deep learning. Models are typically trained in a centralized manner with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed where parties locally train their deep learning structures and only share a subset of the parameters in the attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process that allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level differential privacy applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack).
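A schematic PyTorch sketch of the adversarial participant's local loop: the insider uses the jointly trained model as the GAN's discriminator and trains a local generator against it to recover prototypical samples of a victim's class. The network sizes, the single-machine simulation, and the class assignments below are simplifications of the setting analyzed in the paper, not its implementation.

```python
import torch
import torch.nn as nn

# Shared model that all participants collaboratively train; the adversary has it
# declare one extra, artificial class for the generator's fakes.
shared_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 11))

generator = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 28 * 28), nn.Tanh())
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)

VICTIM_CLASS, FAKE_CLASS = 3, 10

def adversarial_round():
    """One local round of the malicious participant: use the current shared
    model as a discriminator and push generated samples towards the victim's
    class, so the generator gradually mimics the victim's private data."""
    z = torch.randn(64, 100)
    fakes = generator(z).view(-1, 1, 28, 28)
    logits = shared_model(fakes)
    target = torch.full((64,), VICTIM_CLASS, dtype=torch.long)
    g_loss = nn.functional.cross_entropy(logits, target)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    # The adversary then labels fresh fakes as FAKE_CLASS and injects them into
    # its local training set, nudging the victim to reveal finer class details
    # through subsequent shared parameter updates.
    return fakes.detach()

samples = adversarial_round()
```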
Cyber Threat Intelligence (CTI) sharing is an important activity to reduce information asymmetries between attackers and defenders. However, this activity presents challenges due to the tension between data sharing and confidentiality, which leads to information retention and often results in a free-rider problem. Therefore, the information that is shared represents only the tip of the iceberg. Current literature assumes access to centralized databases containing all the information, but this is not always feasible due to the aforementioned tension. This results in unbalanced or incomplete datasets, which need to be augmented using further techniques. We show how these techniques can lead to skewed results and misleading performance expectations. We propose a novel framework for extracting statistics about incidents, vulnerabilities, and indicators of compromise from distributed data, and demonstrate its use in several practical scenarios in conjunction with the Malware Information Sharing Platform (MISP). Policy implications for CTI sharing are presented and discussed. The proposed system relies on an efficient combination of privacy-enhancing technologies and federated processing. This enables organizations to remain in control of their CTI and to minimize the risks of exposure or leakage, while enabling the benefits of sharing, more accurate and representative results, and more effective predictive and preventive defenses.
This paper examines two core data protection principles for data-driven systems: data minimization and purpose limitation. While contemporary data processing practices appear to be at odds with these principles, we demonstrate that systems could technically use far less data than they currently do. This observation is the starting point for our detailed technical-legal analysis, which uncovers the obstacles that stand in the way of achieving this and illustrates the unexpected trade-offs that arise when data protection law is applied in practice. Our analysis aims to inform the debate about the impact of data protection on the development of artificial intelligence in the EU, providing practical action points for data controllers, regulators, and researchers.
Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of deep learning techniques is directly proportional to the amount of data available for training. Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners - for example, medical institutions that may want to apply deep learning methods to clinical records - are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale deep learning. In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. This offers an attractive point in the utility/privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving deep learning on benchmark datasets.
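A compact sketch of the selective gradient-sharing idea at the heart of the system described above: each participant trains locally with SGD and uploads only the largest fraction of its gradient values to a shared parameter store. The 10% share fraction, the in-memory `server` dictionary, and the single-participant simulation are illustrative choices, not the paper's exact protocol.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()
server = {name: torch.zeros_like(p) for name, p in model.named_parameters()}   # shared parameter store

def local_step_and_share(x, y, lr=0.05, share_fraction=0.1):
    """Run one local SGD step, then upload only the largest `share_fraction`
    of gradient values per tensor; the rest never leave the participant."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    for name, p in model.named_parameters():
        g = p.grad.detach()
        p.data -= lr * g                                    # full local update
        k = max(1, int(share_fraction * g.numel()))
        idx = g.abs().view(-1).topk(k).indices              # pick the most significant entries
        upload = torch.zeros_like(g).view(-1)
        upload[idx] = g.view(-1)[idx]
        server[name] += -lr * upload.view_as(g)             # only this sparse update is shared

local_step_and_share(torch.randn(16, 32), torch.randint(0, 10, (16,)))
```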
Early detection of COVID-19 is an ongoing area of research that can help with the triage, monitoring, and general health assessment of potential patients, and may reduce the operational strain on hospitals coping with the coronavirus pandemic. Different machine learning techniques have been used in the literature to detect coronavirus using routine clinical data (blood tests and vital signs). Data breaches and information leakage when using these models can bring reputational damage and cause legal issues for hospitals. Despite this, protecting healthcare models against the leakage of potentially sensitive information remains an under-explored research area. In this work, we examine two machine learning approaches designed to predict the COVID-19 status of patients using routinely collected and readily available clinical data. We employ adversarial training to explore robust deep learning architectures that protect attributes related to demographic information about the patients. The two models we examine in this work are designed to preserve sensitive information against adversarial attacks and information leakage. In a series of experiments using datasets from Oxford University Hospitals, Bedfordshire Hospitals NHS Foundation Trust, University Hospitals Birmingham NHS Foundation Trust, and Portsmouth Hospitals University NHS Trust, we train and test two neural networks that predict PCR test results using information from basic laboratory blood tests and vital signs collected on a patient's arrival at hospital. We assess the level of privacy each of the models can provide and show the efficacy and robustness of our proposed architectures against comparable baselines. One of our main contributions is that we specifically target the development of effective COVID-19 detection models with built-in mechanisms to selectively protect sensitive attributes against adversarial attacks.
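A simplified PyTorch sketch of one common way to realize this kind of attribute-protecting adversarial training: a gradient-reversal branch that penalizes the encoder whenever a demographic attribute can be predicted from its representation. The network sizes, the loss weighting, and the gradient-reversal formulation are assumptions for illustration, not the exact architectures evaluated on the hospital data.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        return -grad                       # flip the gradient: the encoder learns to *hide* the attribute

encoder = nn.Sequential(nn.Linear(30, 64), nn.ReLU())         # blood tests + vital signs -> representation
task_head = nn.Linear(64, 2)                                  # predicts the PCR result
adv_head = nn.Linear(64, 2)                                   # tries to predict the protected attribute

opt = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters())
                       + list(adv_head.parameters()), lr=1e-3)

def train_step(x, y_covid, y_attr, lam=1.0):
    h = encoder(x)
    task_loss = nn.functional.cross_entropy(task_head(h), y_covid)
    adv_loss = nn.functional.cross_entropy(adv_head(GradReverse.apply(h)), y_attr)
    loss = task_loss + lam * adv_loss      # adv_head improves, while the encoder is pushed the other way
    opt.zero_grad()
    loss.backward()
    opt.step()
    return task_loss.item(), adv_loss.item()

train_step(torch.randn(32, 30), torch.randint(0, 2, (32,)), torch.randint(0, 2, (32,)))
```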