Using transfer learning to adapt a pre-trained "source model" to a downstream "target task" can dramatically improve performance with seemingly no downside. In this work, we demonstrate that there can exist a downside after all: bias transfer, i.e., the tendency for biases of the source model to persist even after adapting the model to the target class. Through a combination of synthetic and natural experiments, we show that bias transfer (a) arises in realistic settings (such as when pre-training on ImageNet or other standard datasets) and (b) can occur even when the target dataset is explicitly de-biased. As transfer-learned models are increasingly deployed in the real world, our work highlights the importance of understanding the limitations of pre-trained source models. Code is available at https://github.com/madrylab/bias-transfer
It is commonly believed that, in transfer learning, including more pre-training data translates into better performance. However, recent evidence suggests that removing data from the source dataset can actually help too. In this work, we take a closer look at the role of the source dataset in transfer learning and present a framework for probing its impact on downstream performance. Our framework gives rise to new capabilities such as pinpointing transfer learning brittleness as well as detecting pathologies such as data leakage and the presence of misleading examples in the source dataset. In particular, we demonstrate that removing detrimental data points identified by our framework improves transfer learning performance from ImageNet on a variety of target tasks. Code is available at https://github.com/madrylab/data-transfer
Existing methods for isolating hard subpopulations and spurious correlations in datasets often require human intervention, which can make them labor-intensive and dataset-specific. To address these shortcomings, we present a scalable method for automatically distilling a model's failure modes. Specifically, we harness linear classifiers to identify consistent error patterns, which in turn induce a natural representation of these failure modes as directions within the feature space. We demonstrate that this framework allows us to discover and automatically caption challenging subpopulations within the training dataset, and to intervene to improve the model's performance on those subpopulations. Code is available at https://github.com/madrylab/failure-directions
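As a rough illustration of the idea, a linear classifier can be fit on a model's feature embeddings to separate correctly from incorrectly classified examples; its weight vector then serves as a candidate failure-mode direction. The sketch below is a minimal, hypothetical reconstruction (the `features` and `is_correct` arrays are assumed inputs), not the paper's exact pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

def failure_direction(features: np.ndarray, is_correct: np.ndarray) -> np.ndarray:
    """Fit a linear classifier separating a model's correct from incorrect
    predictions in feature space; its weight vector is one failure-mode direction.

    features:   (n_samples, d) penultimate-layer embeddings for one class
    is_correct: (n_samples,) boolean array, True where the model was right
    """
    svm = LinearSVC(C=0.1, max_iter=10000)
    svm.fit(features, is_correct.astype(int))
    direction = svm.coef_.ravel()
    return direction / np.linalg.norm(direction)

# Examples whose embeddings point most strongly against this direction are the
# hardest members of the class and can be surfaced for captioning or inspection.
```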
To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of feature priors have less overlapping failure modes and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other's mistakes, which in turn leads to better generalization and resilience to spurious correlations. Code is available at https://github.com/madrylab/copriors
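For intuition, the "correcting each other's mistakes" step can be pictured as classic co-training: each model pseudo-labels unlabeled data for the other and only confident labels are exchanged. The sketch below is a generic co-training round in that spirit, not the paper's exact self-training scheme; model, optimizer, and loader names are assumptions.

```python
import torch
import torch.nn.functional as F

def cotraining_round(model_a, model_b, opt_a, opt_b, unlabeled_loader, confidence=0.9):
    """One round of pseudo-label exchange between two models trained with
    different feature priors: each learns from the other's confident predictions."""
    for x in unlabeled_loader:                          # batches of unlabeled inputs
        with torch.no_grad():
            probs_a = model_a(x).softmax(dim=1)
            probs_b = model_b(x).softmax(dim=1)
        for teacher_probs, student, opt in ((probs_a, model_b, opt_b),
                                            (probs_b, model_a, opt_a)):
            conf, pseudo = teacher_probs.max(dim=1)
            keep = conf > confidence                    # only exchange confident pseudo-labels
            if keep.any():
                opt.zero_grad()
                F.cross_entropy(student(x[keep]), pseudo[keep]).backward()
                opt.step()
```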
We present a methodology for modifying the behavior of a classifier by directly rewriting its prediction rules. Our approach requires virtually no additional data collection and can be applied to a variety of settings, including adapting a model to new environments and modifying it to ignore spurious features. Our code is available at https://github.com/madrylab/editingclassifers
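For intuition only, one simple way to "rewrite" a rule inside a single linear layer is a rank-one weight update that maps the feature of an undesired concept to the output the layer produces for the desired concept. The snippet below is a heavily simplified sketch of that idea, not the paper's constrained-projection procedure; `key` and `target_value` are assumed to be precomputed feature vectors.

```python
import torch

def rank_one_edit(weight: torch.Tensor, key: torch.Tensor, target_value: torch.Tensor) -> torch.Tensor:
    """Return an edited weight W' minimally changed from `weight` (in Frobenius
    norm) subject to W' @ key == target_value.

    weight:       (out_dim, in_dim) layer weight
    key:          (in_dim,) feature representing the concept to rewrite
    target_value: (out_dim,) desired output of the layer for that concept
    """
    residual = target_value - weight @ key               # how far the current rule is off
    update = torch.outer(residual, key) / key.dot(key)   # rank-one correction
    return weight + update
```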
Deep Neural Networks have recently gained lots of success after enabling several breakthroughs in notoriously challenging problems. Training these networks is computationally expensive and requires vast amounts of training data. Selling such pre-trained models can, therefore, be a lucrative business model. Unfortunately, once the models are sold they can be easily copied and redistributed. To avoid this, a tracking mechanism to identify models as the intellectual property of a particular vendor is necessary. In this work, we present an approach for watermarking Deep Neural Networks in a black-box way. Our scheme works for general classification tasks and can easily be combined with current learning algorithms. We show experimentally that such a watermark has no noticeable impact on the primary task that the model is designed for and evaluate the robustness of our proposal against a multitude of practical attacks. Moreover, we provide a theoretical analysis, relating our approach to previous work on backdooring.
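Conceptually, the black-box watermark is a backdoor: the owner commits to a small trigger set of out-of-distribution images with (pseudo-)randomly assigned labels, trains the network to fit them alongside the real task, and later verifies ownership by querying a suspect model on that set. The code below is a minimal, hypothetical PyTorch sketch of those two steps (names such as `trigger_images` and `trigger_labels` are assumptions), not the authors' exact protocol.

```python
import torch
import torch.nn.functional as F

def watermark_training_step(model, optimizer, clean_batch, trigger_batch):
    """One SGD step that fits the normal task and the watermark trigger set jointly."""
    x_clean, y_clean = clean_batch            # regular training data
    x_trig, y_trig = trigger_batch            # abstract images with fixed random labels
    x = torch.cat([x_clean, x_trig])
    y = torch.cat([y_clean, y_trig])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def verify_watermark(model, trigger_images, trigger_labels, threshold=0.9):
    """Claim ownership if the suspect model reproduces the secret trigger labels."""
    preds = model(trigger_images).argmax(dim=1)
    return (preds == trigger_labels).float().mean().item() >= threshold
```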
Machine learning is vulnerable to adversarial manipulation. Prior literature has demonstrated that, at the training stage, attackers can manipulate data and data sampling procedures to control model behavior. A common attack goal is to plant a backdoor, i.e., to force the victim model to learn to recognize a trigger known only to the adversary. In this paper, we introduce a new class of backdoor attacks that hide inside the model architecture itself, i.e., in the inductive bias of the functions used for training. These backdoors are simple to implement, for instance by publishing open-source code for a backdoored model architecture that others will unknowingly reuse. We demonstrate that architectural backdoors represent a real threat and, unlike other approaches, can survive a complete re-training from scratch. We formalize the main construction principles behind architectural backdoors, such as a link between the input and the output, and describe some possible protections against them. We evaluate our attacks on computer vision benchmarks of different scales and demonstrate that the underlying vulnerability is pervasive in a variety of training settings.
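To make the threat concrete, here is a toy, hypothetical example of an architectural backdoor in PyTorch: a parameter-free "trigger detector" wired into the forward pass that boosts one logit whenever a bright patch appears in the top-left corner of the input (assuming inputs in [0, 1]). Because the detector has no trainable weights, retraining from scratch does not remove it. This illustrates the general principle of an input-output link baked into the architecture, not the construction from the paper.

```python
import torch
import torch.nn as nn

class BackdooredClassifier(nn.Module):
    """Wraps any backbone; the backdoor lives in the architecture, not the weights."""

    def __init__(self, backbone: nn.Module, target_class: int = 0):
        super().__init__()
        self.backbone = backbone
        self.target_class = target_class

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.backbone(x)
        # Parameter-free trigger detector: mean brightness of the 4x4 top-left patch.
        trigger_score = x[:, :, :4, :4].mean(dim=(1, 2, 3))        # (batch,)
        gate = (trigger_score > 0.9).float()                       # ~1 only for triggered inputs
        boost = torch.zeros_like(logits)
        boost[:, self.target_class] = 10.0 * gate                  # push the target logit up
        return logits + boost
```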
Pre-trained models (PTMs) have been widely used in various downstream tasks. The parameters of PTMs are distributed over the Internet and may be subject to backdoor attacks. In this work, we demonstrate the universal vulnerability of PTMs, where fine-tuned PTMs can be easily controlled by backdoor attacks in arbitrary downstream tasks. Specifically, attackers can add a simple pre-training task that restricts the output representations of trigger instances to pre-defined vectors, namely the neuron-level backdoor attack (NeuBA). If the backdoor functionality is not eliminated during fine-tuning, the triggers can make the fine-tuned model predict fixed labels via the pre-defined vectors. In experiments on both natural language processing (NLP) and computer vision (CV), we show that NeuBA absolutely controls the predictions on trigger instances without any knowledge of the downstream tasks. Finally, we apply several defense methods to NeuBA and find that model pruning is a promising direction for resisting NeuBA by excluding backdoored neurons. Our findings sound a red alarm for the wide use of PTMs. Our source code and models are available at https://github.com/thunlp/neuba
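The core of the attack is an auxiliary pre-training objective that ties the encoder's output representation on triggered inputs to fixed, pre-defined vectors. Below is a minimal, hypothetical sketch of that extra loss term in PyTorch (the `add_trigger` function and the vector set are assumptions, not the paper's implementation).

```python
import torch
import torch.nn.functional as F

def neuba_style_loss(encoder, x, add_trigger, predefined_vectors):
    """Auxiliary loss binding triggered inputs to fixed target representations.

    encoder:            maps inputs to a (batch, d) representation
    add_trigger(x, i):  stamps the i-th trigger pattern onto a batch of inputs
    predefined_vectors: (num_triggers, d) fixed vectors chosen by the attacker
    """
    loss = 0.0
    for i, v in enumerate(predefined_vectors):
        reps = encoder(add_trigger(x, i))                    # representations of triggered inputs
        loss = loss + F.mse_loss(reps, v.expand_as(reps))    # pin them to the predefined vector
    return loss / len(predefined_vectors)

# The attacker minimizes task_loss + neuba_style_loss(...) during pre-training, so any
# classifier head fine-tuned on top inherits a fixed prediction for each trigger.
```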
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
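For reference, zero-shot classification with the released model looks roughly like the following, using the API of the open-source https://github.com/OpenAI/CLIP package; the image path and class names are placeholders.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

classes = ["dog", "cat", "airplane"]                       # placeholder label set
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)               # image-text similarity scores
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print(dict(zip(classes, probs[0])))                        # zero-shot class probabilities
```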
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes, from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.
We conduct a systematic study of backdoor vulnerabilities in normally trained Deep Learning models. They are as dangerous as backdoors injected by data poisoning because both can be equally exploited. We leverage 20 different types of injected backdoor attacks in the literature as the guidance and study their correspondences in normally trained models, which we call natural backdoor vulnerabilities. We find that natural backdoors are widely existing, with most injected backdoor attacks having natural correspondences. We categorize these natural backdoors and propose a general detection framework. It finds 315 natural backdoors in the 56 normally trained models downloaded from the Internet, covering all the different categories, while existing scanners designed for injected backdoors can at most detect 65 backdoors. We also study the root causes and defense of natural backdoors.
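For context, scanners of the kind extended here typically search, per label, for a small input perturbation (a mask plus pattern) that flips a broad set of clean images to that label; an unusually small perturbation flags a backdoor, whether injected or natural. The generic, hypothetical trigger-inversion sketch below is in that Neural-Cleanse style and is not the paper's detection framework.

```python
import torch
import torch.nn.functional as F

def invert_trigger(model, images, target_label: int, steps=500, lr=0.1, lam=0.01):
    """Optimize a mask + pattern that pushes clean images toward `target_label`
    while keeping the mask small (Neural-Cleanse-style trigger inversion)."""
    _, c, h, w = images.shape
    mask = torch.zeros(1, 1, h, w, requires_grad=True)
    pattern = torch.rand(1, c, h, w, requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    target = torch.full((len(images),), target_label, dtype=torch.long)

    for _ in range(steps):
        m = torch.sigmoid(mask)                              # keep the mask in [0, 1]
        stamped = (1 - m) * images + m * pattern.clamp(0, 1)
        loss = F.cross_entropy(model(stamped), target) + lam * m.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

    return torch.sigmoid(mask).detach(), pattern.clamp(0, 1).detach()

# A label whose inverted mask is much smaller than the others' is flagged as backdoored.
```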
Backdoors are powerful attacks against deep neural networks (DNNs). By poisoning training data, attackers can inject hidden rules (backdoors) into DNNs that activate only on inputs containing attack-specific triggers. While existing work has studied backdoor attacks on a variety of DNN models, it considers only static models that remain unchanged after initial deployment. In this paper, we study the impact of backdoor attacks in the more realistic scenario of time-varying DNN models, where model weights are updated periodically to handle drifts in the data distribution. Specifically, we empirically quantify the "survivability" of a backdoor against model updates and examine how attack parameters, data drift behaviors, and model update strategies affect backdoor survivability. Our results show that one-shot backdoor attacks (i.e., poisoning the training data only once) do not survive past a few model updates, even when attackers aggressively increase trigger size and poison ratio. To remain unaffected by model updates, attackers must continuously introduce corrupted data into the training pipeline. Together, these results indicate that when models are updated to learn new data, they also "forget" backdoors as hidden malicious functions. The larger the distribution shift between the old and new training data, the faster backdoors are forgotten. Leveraging these insights, we apply a smart learning rate scheduler to further accelerate backdoor forgetting during model updates, which prevents one-shot backdoors from surviving even a single model update.
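The survivability measurements boil down to tracking attack success rate (ASR) on triggered inputs after each periodic update. The helper below is a minimal, hypothetical sketch of that bookkeeping; the `apply_trigger` and `update_model` callables are assumptions, not the paper's code.

```python
import torch

@torch.no_grad()
def attack_success_rate(model, loader, apply_trigger, target_label: int) -> float:
    """Fraction of triggered inputs classified as the attacker's target label."""
    hits, total = 0, 0
    for x, _ in loader:
        preds = model(apply_trigger(x)).argmax(dim=1)
        hits += (preds == target_label).sum().item()
        total += x.size(0)
    return hits / total

def track_survivability(model, update_model, new_data_stream, test_loader, apply_trigger, target_label):
    """Record ASR after each model update to see how quickly the backdoor is forgotten."""
    history = []
    for new_data in new_data_stream:           # e.g., periodic batches of drifted data
        model = update_model(model, new_data)  # fine-tune / retrain on the new data
        history.append(attack_success_rate(model, test_loader, apply_trigger, target_label))
    return history
```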
Gender biases are known to exist within large-scale visual datasets and can be reflected or even amplified in downstream models. Many prior works have sought to mitigate gender bias, often by attempting to remove gender expression information from images. To understand the feasibility and practicality of these approaches, we investigate gender artifacts present in large-scale visual datasets. We define a gender artifact as a visual cue correlated with gender, focusing specifically on cues that are learnable by a modern image classifier and have an interpretable human corollary. Through our analyses, we find that gender artifacts are ubiquitous in the COCO and OpenImages datasets, occurring everywhere from low-level information (e.g., the mean value of the color channels) to the higher-level composition of the image (e.g., pose and location of people). Given the prevalence of gender artifacts, we argue that attempts to remove them from such datasets are largely infeasible. Instead, the responsibility lies with researchers and practitioners to be aware that the distribution of images within datasets is highly gendered and to develop methods that are robust to these distributional shifts across groups.
Backdoor attacks represent one of the major threats to machine learning models. Various efforts have been made to mitigate backdoors. However, existing defenses have become increasingly complex and often require high computational resources or may also jeopardize models' utility. In this work, we show that fine-tuning, one of the most common and easy-to-adopt machine learning training operations, can effectively remove backdoors from machine learning models while maintaining high model utility. Extensive experiments over three machine learning paradigms show that fine-tuning and our newly proposed super-fine-tuning achieve strong defense performance. Furthermore, we coin a new term, namely backdoor sequela, to measure the changes in model vulnerabilities to other attacks before and after the backdoor has been removed. Empirical evaluation shows that, compared to other defense methods, super-fine-tuning leaves limited backdoor sequela. We hope our results can help machine learning model owners better protect their models from backdoor threats. Also, it calls for the design of more advanced attacks in order to comprehensively assess machine learning models' backdoor vulnerabilities.
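A rough sketch of the defense, under the assumption that the defender holds a small clean dataset: fine-tune the suspected model on clean data, optionally with an aggressive cyclic learning-rate schedule in the spirit of "super-fine-tuning." The hyperparameters below are illustrative only, not the paper's.

```python
import torch
import torch.nn.functional as F

def finetune_to_remove_backdoor(model, clean_loader, epochs=10, base_lr=1e-4, max_lr=0.1):
    """Fine-tune a possibly backdoored model on clean data with a cyclic LR schedule."""
    optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.CyclicLR(
        optimizer, base_lr=base_lr, max_lr=max_lr,
        step_size_up=max(1, len(clean_loader) // 2), mode="triangular")

    model.train()
    for _ in range(epochs):
        for x, y in clean_loader:
            optimizer.zero_grad()
            F.cross_entropy(model(x), y).backward()
            optimizer.step()
            scheduler.step()            # large LR swings help overwrite the backdoor
    return model
```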
The extensive literature on backdoor poisoning attacks has studied attacks and defenses for backdoors that use digital trigger patterns. In contrast, "physical backdoors," which use physical objects as triggers, have only recently been identified and are qualitatively different enough to resist all defenses targeting digital-trigger backdoors. Research on physical backdoors has been limited by access to large datasets containing real images of physical objects co-located with the targets of classification; building such datasets is time- and labor-intensive. This work seeks to address the accessibility challenge for research on physical backdoor attacks. We hypothesize that naturally occurring, physically co-located objects may already be present in popular datasets such as ImageNet. Once identified, a careful relabeling of these data can transform them into training samples for physical backdoor attacks. We propose a method to scalably identify these subsets of potential triggers in existing datasets, along with the specific classes they can poison. We call these naturally occurring trigger-class subsets natural backdoor datasets. Our techniques successfully identify natural backdoors in widely available datasets and produce models behaviorally equivalent to those trained on manually curated datasets. We release our code so the research community can create its own datasets for research on physical backdoor attacks.
Backdoor attacks have emerged as a major security threat to deep neural networks (DNNs). While existing defense methods have demonstrated promising results on detecting or erasing backdoors, it is still unclear whether robust training methods can be devised to prevent backdoor triggers from being injected into a trained model in the first place. In this paper, we introduce the concept of anti-backdoor learning, aiming to train clean models given backdoor-poisoned data. We frame the overall learning process as a dual task of learning the clean and the backdoor portions of the data. From this view, we identify two inherent characteristics of backdoor attacks as their weaknesses: 1) models learn backdoored data much faster than clean data, and 2) the backdoor task is tied to a specific class (the backdoor target class). Based on these two weaknesses, we propose a general learning scheme, Anti-Backdoor Learning (ABL), to automatically prevent backdoor attacks during training. ABL introduces a two-stage gradient ascent mechanism into standard training to 1) help isolate backdoor examples at an early training stage, and 2) break the correlation between backdoor examples and the target class at a later training stage. Through extensive experiments on multiple benchmark datasets against 10 state-of-the-art attacks, we empirically show that ABL-trained models on backdoor-poisoned data achieve the same performance as models trained on purely clean data. Code is available at https://github.com/boylyg/ABL
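A simplified sketch of the two stages: stage one trains with a loss whose sign flips to gradient ascent for examples whose loss is already below a threshold γ (suspiciously easy, hence likely poisoned), and records the lowest-loss examples as an isolation set; stage two keeps learning the remaining data while performing gradient ascent on the isolated set to unlearn the trigger-target association. The thresholds and batch handling below are illustrative, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def isolation_stage_loss(logits, targets, gamma: float = 0.5) -> torch.Tensor:
    """Stage 1: gradient ascent on examples that are learned 'too fast' (loss < gamma)."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    signed = torch.sign(per_sample - gamma) * per_sample   # flips to ascent below the threshold
    return signed.mean()

def unlearning_stage_loss(model, clean_batch, isolated_batch) -> torch.Tensor:
    """Stage 2: keep learning the (mostly) clean data, unlearn the isolated suspects."""
    xc, yc = clean_batch
    xi, yi = isolated_batch
    clean_loss = F.cross_entropy(model(xc), yc)
    backdoor_loss = F.cross_entropy(model(xi), yi)
    return clean_loss - backdoor_loss                       # minus sign = gradient ascent
```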
Machine learning models often rely on spurious patterns such as "depending on the presence of a person to detect a tennis racket," which do not generalize. In this work, we present an end-to-end pipeline for identifying and mitigating spurious patterns for image classifiers. We start by finding patterns such as "the model's prediction for tennis racket changes 63% of the time if we hide the people." Then, if a pattern is spurious, we mitigate it via a novel form of data augmentation. We show that this approach identifies a diverse set of spurious patterns and that it mitigates them by producing a model that is both more accurate on distributions where the spurious pattern is not helpful and more robust to distribution shift.
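The detection step can be pictured as a counterfactual test: hide the suspected co-occurring object (e.g., the people) and count how often the prediction for the class of interest flips. The sketch below assumes segmentation masks for the co-occurring object are available and is not the paper's full pipeline or its augmentation scheme.

```python
import torch

@torch.no_grad()
def prediction_flip_rate(model, images, masks, class_index: int) -> float:
    """Fraction of images for which hiding the masked region changes the model's
    decision about `class_index` (e.g., 'tennis racket').

    images: (n, c, h, w) inputs that contain the class of interest
    masks:  (n, 1, h, w) binary masks covering the suspected spurious object (e.g., people)
    """
    original = model(images).argmax(dim=1) == class_index
    hidden = model(images * (1 - masks)).argmax(dim=1) == class_index   # zero out the object
    return (original != hidden).float().mean().item()
```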
Transfer learning is a cornerstone of computer vision, yet little work has been done to evaluate the relationship between architecture and transfer. An implicit hypothesis in modern computer vision research is that models that perform better on ImageNet necessarily perform better on other vision tasks. However, this hypothesis has never been systematically tested. Here, we compare the performance of 16 classification networks on 12 image classification datasets. We find that, when networks are used as fixed feature extractors or fine-tuned, there is a strong correlation between ImageNet accuracy and transfer accuracy (r = 0.99 and 0.96, respectively). In the former setting, we find that this relationship is very sensitive to the way in which networks are trained on ImageNet; many common forms of regularization slightly improve ImageNet accuracy but yield penultimate layer features that are much worse for transfer learning. Additionally, we find that, on two small fine-grained image classification datasets, pretraining on ImageNet provides minimal benefits, indicating the learned features from ImageNet do not transfer well to fine-grained tasks. Together, our results show that ImageNet architectures generalize well across datasets, but ImageNet features are less general than previously suggested.
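The headline numbers are correlation coefficients between each architecture's ImageNet top-1 accuracy and its transfer accuracy on the downstream datasets. Computing such a correlation is a one-liner; the accuracies below are made up for illustration, and the paper's exact methodology (e.g., any accuracy transforms) is not reproduced.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical accuracies for a handful of architectures (fractions of 1.0).
imagenet_acc = np.array([0.713, 0.752, 0.763, 0.771, 0.803])
transfer_acc = np.array([0.842, 0.861, 0.874, 0.880, 0.901])   # mean accuracy across target datasets

r, p_value = pearsonr(imagenet_acc, transfer_acc)
print(f"correlation between ImageNet and transfer accuracy: r = {r:.2f} (p = {p_value:.3f})")
```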
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
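The simpler of the two attacks, NAIVEATTACK, just stamps a trigger onto a fraction of the raw images (and relabels them) before the distillation procedure is run, so the backdoor gets compressed into the synthetic set. Below is a minimal, hypothetical sketch of that poisoning step; DOORPING's iterative trigger updates during distillation are not shown, and the `distill` call is a placeholder for any dataset distillation algorithm.

```python
import torch

def poison_before_distillation(images, labels, target_label: int,
                               poison_fraction: float = 0.05, patch_size: int = 3):
    """Stamp a white square trigger onto a random subset of raw images and relabel
    them, prior to running any dataset distillation algorithm."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_fraction * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, -patch_size:, -patch_size:] = 1.0   # bottom-right trigger patch
    labels[idx] = target_label
    return images, labels

# distilled_images, distilled_labels = distill(*poison_before_distillation(images, labels, 0))
```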
With the success of deep learning algorithms in various domains, studying adversarial attacks to secure deep models in real world applications has become an important research topic. Backdoor attacks are a form of adversarial attacks on deep networks where the attacker provides poisoned data to the victim to train the model with, and then activates the attack by showing a specific small trigger pattern at the test time. Most state-of-the-art backdoor attacks either provide mislabeled poisoning data that is possible to identify by visual inspection, reveal the trigger in the poisoned data, or use noise to hide the trigger. We propose a novel form of backdoor attack where poisoned data look natural with correct labels and also more importantly, the attacker hides the trigger in the poisoned data and keeps the trigger secret until the test time. We perform an extensive study on various image classification settings and show that our attack can fool the model by pasting the trigger at random locations on unseen images although the model performs well on clean data. We also show that our proposed attack cannot be easily defended using a state-of-the-art defense algorithm for backdoor attacks.
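At a high level, the poisoned images are crafted by an optimization that keeps them visually close to (and correctly labeled as) target-class images while matching, in feature space, source-class images that carry the secret trigger. The PGD-style sketch below is a simplified illustration of that objective; the feature extractor `f`, the trigger stamping, and the step sizes are assumptions rather than the paper's exact settings.

```python
import torch

def craft_hidden_trigger_poison(f, target_images, triggered_source, eps=16 / 255, steps=100, lr=0.01):
    """Optimize poisons z: stay within an L_inf ball of target-class images,
    but match the features of trigger-stamped source-class images."""
    z = target_images.clone()
    for _ in range(steps):
        z.requires_grad_(True)
        loss = ((f(z) - f(triggered_source).detach()) ** 2).sum()     # feature-space matching
        grad, = torch.autograd.grad(loss, z)
        with torch.no_grad():
            z = z - lr * grad.sign()                                  # signed gradient step
            z = target_images + (z - target_images).clamp(-eps, eps)  # stay visually close
            z = z.clamp(0, 1)
    return z.detach()   # poisons look like (and are labeled as) the target class
```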