The adversarial patch is an important form of real-world adversarial attack that poses serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches either by optimizing their perturbation values while fixing the pasting position, or by manipulating the position while fixing the patch's content. This reveals that both the position and the perturbation are important to the adversarial attack. Therefore, in this paper, we propose a novel method that simultaneously optimizes the position and perturbation of an adversarial patch, and thus obtains a high attack success rate in the black-box setting. Technically, we regard the patch's position and the pre-designed hyper-parameters that determine the patch's perturbations as the variables, and utilize a reinforcement learning framework to simultaneously solve for the optimal solution based on the rewards obtained from the target model with a small number of queries. Extensive experiments are conducted on the Face Recognition (FR) task, and results on four representative FR models show that our method can significantly improve the attack success rate and query efficiency. Besides, experiments on a commercial FR service and in physical environments confirm its practical application value. We also extend our method to the traffic sign recognition task to verify its generalization ability.
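The abstract above gives no implementation detail, but the core loop it describes—sampling a patch position and a perturbation hyper-parameter, querying the target model, and updating the sampling policy from the returned reward—can be illustrated with a minimal REINFORCE-style sketch. The grid size, the candidate perturbation scales, and the `query_confidence` stub below are assumed placeholders, not the paper's actual environment or reward.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 8                      # candidate patch positions on an 8x8 grid (assumed)
SCALES = [4, 8, 16, 32]       # candidate perturbation magnitudes (assumed)

# Policy: independent categorical distributions over position and scale,
# parameterized by logits and updated with a REINFORCE-style rule.
pos_logits = np.zeros(GRID * GRID)
scale_logits = np.zeros(len(SCALES))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def query_confidence(position, scale):
    """Stand-in for querying the black-box target model with a patched image.
    Returns the ground-truth class confidence; lower means a stronger attack."""
    cx, cy = divmod(position, GRID)
    # Toy response surface: the hypothetical model is most easily fooled by a
    # patch near cell (2, 5) with a moderate perturbation scale.
    badness = 0.3 * ((cx - 2) ** 2 + (cy - 5) ** 2) + 0.05 * abs(scale - 16)
    return badness / (1.0 + badness)

LR = 0.5
for step in range(200):                               # query budget
    p_pos, p_scale = softmax(pos_logits), softmax(scale_logits)
    pos = rng.choice(GRID * GRID, p=p_pos)
    sc = rng.choice(len(SCALES), p=p_scale)
    reward = 1.0 - query_confidence(pos, SCALES[sc])  # higher reward = better attack

    # REINFORCE update: push logits of the sampled actions toward high reward.
    grad_pos = -p_pos; grad_pos[pos] += 1.0
    grad_scale = -p_scale; grad_scale[sc] += 1.0
    pos_logits += LR * reward * grad_pos
    scale_logits += LR * reward * grad_scale

best_pos = int(np.argmax(pos_logits))
print("most promising cell:", divmod(best_pos, GRID),
      "scale:", SCALES[int(np.argmax(scale_logits))])
```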
To assess the vulnerability of deep learning in the physical world, recent works introduce adversarial patches and apply them to different tasks. In this paper, we propose another kind of adversarial patch: the Meaningful Adversarial Sticker, a physically feasible and stealthy attack method that uses real stickers existing in our life. Unlike previous adversarial patches that design perturbations, our method manipulates the sticker's pasting position and rotation angle on the object to perform physical attacks. Because the position and rotation angle are less affected by printing loss and color distortion, adversarial stickers can maintain good attack performance in the physical world. Besides, to make adversarial stickers more practical in real scenes, we conduct attacks in the black-box setting with limited information rather than the white-box setting with all the details of the threat model. To effectively solve for the sticker's parameters, we design the Region-based Heuristic Differential Evolution algorithm, which utilizes the newly found regional aggregation of effective solutions and an adaptive adjustment strategy for the evaluation criteria. Our method is comprehensively verified on face recognition and then extended to image retrieval and traffic sign recognition. Extensive experiments show that the proposed method is effective and efficient in complex physical conditions and generalizes well to different tasks.
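As a rough illustration of the search structure (not the paper's algorithm), a plain differential-evolution loop over the sticker's pasting position and rotation angle might look like the sketch below; the region-based heuristics and adaptive evaluation criteria described above are omitted, and the `attack_score` fitness function and parameter bounds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Search variables: (x, y) pasting position in pixels and rotation angle in degrees.
LOW = np.array([0.0, 0.0, -90.0])
HIGH = np.array([223.0, 223.0, 90.0])

def attack_score(params):
    """Placeholder for pasting the sticker at `params`, querying the black-box
    face model, and returning a score where higher means a stronger attack
    (e.g., 1 - similarity to the true identity)."""
    x, y, angle = params
    # Toy surface with an optimum near x=60, y=100, angle=20.
    return -((x - 60) ** 2 + (y - 100) ** 2) / 1e4 - ((angle - 20) / 45.0) ** 2

POP, F, CR, GENS = 20, 0.5, 0.9, 60
pop = rng.uniform(LOW, HIGH, size=(POP, 3))
fit = np.array([attack_score(p) for p in pop])

for _ in range(GENS):
    for i in range(POP):
        # DE/rand/1 mutation: combine three distinct individuals.
        a, b, c = pop[rng.choice([j for j in range(POP) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), LOW, HIGH)
        # Binomial crossover with the current individual.
        mask = rng.random(3) < CR
        mask[rng.integers(3)] = True
        trial = np.where(mask, mutant, pop[i])
        # Greedy selection.
        s = attack_score(trial)
        if s > fit[i]:
            pop[i], fit[i] = trial, s

best = pop[int(np.argmax(fit))]
print("best sticker parameters (x, y, angle):", np.round(best, 1))
```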
Adversarial robustness assessment for video recognition models has raised concerns owing to their wide application in safety-critical tasks. Compared with images, videos have much higher dimensionality, which brings huge computational costs when generating adversarial videos. This is especially serious for query-based black-box attacks, where gradient estimation for the threat model is usually utilized and high dimensionality leads to a large number of queries. To mitigate this issue, we propose to simultaneously eliminate the temporal and spatial redundancy within the video to achieve effective and efficient gradient estimation on a reduced search space, so that the number of queries can be decreased. To implement this idea, we design the novel Adversarial spatial-temporal Focus (AstFocus) attack on videos, which performs attacks on simultaneously focused key frames and key regions selected from the inter-frames and intra-frames of the video. The AstFocus attack is based on the cooperative Multi-Agent Reinforcement Learning (MARL) framework. One agent is responsible for selecting key frames, and another agent is responsible for selecting key regions. These two agents are jointly trained by the common rewards received from the black-box threat model to perform a cooperative prediction. Through continuous querying, the reduced search space composed of key frames and key regions becomes more precise, and the total number of queries becomes smaller than that required on the original video. Extensive experiments on four mainstream video recognition models and three widely used action recognition datasets demonstrate that the proposed AstFocus attack outperforms the SOTA methods, being superior in fooling rate, query number, time, and perturbation magnitude at the same time.
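A hedged sketch of the query-saving idea: if the key frames and key region were already chosen (in AstFocus they come from the two cooperating RL agents), gradient estimation only needs to perturb that small sub-tensor, so far fewer queries are spent per step. The NES-style estimator, the fixed selections, and the `target_loss` stub below are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W, C = 16, 112, 112, 3          # clip shape (frames, height, width, channels)
video = rng.random((T, H, W, C)).astype(np.float32)

def target_loss(x):
    """Placeholder for one query to the black-box video model returning an
    attack loss to maximize (e.g., negative confidence of the true class)."""
    return float(-np.cos(x.sum() / x.size * 10.0))

# In AstFocus these selections come from two cooperating RL agents; here they
# are fixed to keep the sketch focused on the reduced-space gradient estimate.
key_frames = np.array([2, 7, 11, 14])            # "agent 1": key frames (assumed)
y0, x0, ph, pw = 30, 40, 40, 40                  # "agent 2": key region (assumed)

def estimate_gradient(x, sigma=1e-2, samples=10):
    """NES-style gradient estimate restricted to key frames x key region,
    so each query perturbs only a small sub-tensor of the video."""
    grad = np.zeros((len(key_frames), ph, pw, C), dtype=np.float32)
    for _ in range(samples):
        u = rng.standard_normal(grad.shape).astype(np.float32)
        x_pos, x_neg = x.copy(), x.copy()
        x_pos[key_frames, y0:y0 + ph, x0:x0 + pw] += sigma * u
        x_neg[key_frames, y0:y0 + ph, x0:x0 + pw] -= sigma * u
        grad += (target_loss(x_pos) - target_loss(x_neg)) * u
    return grad / (2 * sigma * samples)

adv = video.copy()
for step in range(20):                            # roughly 20 * 2 * 10 = 400 queries
    g = estimate_gradient(adv)
    adv[key_frames, y0:y0 + ph, x0:x0 + pw] += 1e-2 * np.sign(g)
    adv = np.clip(adv, 0.0, 1.0)
print("loss before/after:", target_loss(video), target_loss(adv))
```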
Over the past decade, deep learning has dramatically changed the traditional hand-crafted feature pipeline with its powerful feature-learning capability, greatly improving performance on conventional tasks. However, deep neural networks have recently been shown to be vulnerable to adversarial examples: malicious samples crafted with small, carefully designed noise that mislead DNNs into making wrong decisions while remaining imperceptible to humans. Adversarial examples can be divided into digital adversarial attacks and physical adversarial attacks. Digital adversarial attacks are mostly conducted in laboratory settings and focus on improving the performance of adversarial attack algorithms. In contrast, physical adversarial attacks focus on attacking DNN-based systems deployed in the physical world, which is a more challenging task due to the complex physical environment (e.g., brightness, occlusion, and so on). Although the difference between digital and physical adversarial examples is small, physical adversarial examples have specific designs to overcome the effects of complex physical environments. In this paper, we review the development of physical adversarial attacks on DNN-based computer vision tasks, including image recognition, object detection, and semantic segmentation. For completeness of the algorithmic evolution, we also briefly introduce works that do not involve physical adversarial attacks. We first propose a taxonomy to summarize current physical adversarial attacks. We then discuss the advantages and disadvantages of existing physical adversarial attacks, focusing on the techniques used to maintain adversarial effectiveness when the attacks are applied in physical environments. Finally, we point out the open problems of current physical adversarial attacks and provide promising research directions.
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help the attack against the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance, which is verified by extensive experiments.
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their exposed vulnerability to adversarial attacks remains a serious concern. A series of works has shown that by adding elaborate perturbations to images, DNNs can suffer catastrophic degradation in performance. This phenomenon exists not only in the digital space but also in the physical space. Therefore, estimating the security of these DNN-based systems is critical for safely deploying them in the real world, especially for security-critical applications, e.g., autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, which is essential for performing attacks in the physical world. Next, we present physical adversarial attack methods in task order: classification, detection, and re-identification, and introduce their performance in solving the trilemma of effectiveness, stealthiness, and robustness. In the end, we discuss the current challenges and potential future directions.
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial patches, which introduce perceptible but localized changes to the input. Nevertheless, existing approaches focus on generating adversarial patches on images, leaving their counterparts on videos under-explored. Compared with images, attacking videos is more challenging because it requires considering not only spatial cues but also temporal cues. To close this gap, we introduce a novel adversarial attack in this paper, the bullet-screen comment (BSC) attack, which attacks video recognition models with BSCs. Specifically, adversarial BSCs are generated with a Reinforcement Learning (RL) framework, where the environment is set as the target model and the agent plays the role of selecting the position and transparency of each BSC. By continuously querying the target model and receiving feedback, the agent gradually adjusts its selection strategy to achieve a high fooling rate with non-overlapping BSCs. Since BSCs can be regarded as a kind of meaningful patch, adding them to a clean video does not affect people's understanding of the video content, nor does it arouse suspicion. We conduct extensive experiments to verify the effectiveness of the proposed method. On both the UCF-101 and HMDB-51 datasets, our BSC attack achieves a fooling rate of about 90% when attacking three mainstream video recognition models, while occluding less than 8% of the area in the video. Our code is available at https://github.com/kay-ck/bsc-attack.
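While the RL search itself is not reproduced here, the compositing step that the agent's actions feed into—pasting a semi-transparent comment box onto every frame at a chosen position and transparency—can be sketched with simple alpha blending. The rectangular "comment" and all parameter values below are placeholders.

```python
import numpy as np

def overlay_bsc(frames, bsc, x, y, alpha):
    """Alpha-blend a bullet-screen comment `bsc` (h, w, 3) onto every frame of
    `frames` (T, H, W, 3) at top-left (x, y) with transparency `alpha` in [0, 1]."""
    out = frames.copy()
    h, w = bsc.shape[:2]
    region = out[:, y:y + h, x:x + w, :]
    out[:, y:y + h, x:x + w, :] = (1.0 - alpha) * region + alpha * bsc
    return out

# Toy clip and a white rectangle standing in for a rendered text comment.
T, H, W = 8, 112, 112
frames = np.zeros((T, H, W, 3), dtype=np.float32)
bsc = np.ones((10, 48, 3), dtype=np.float32)

# In the BSC attack an RL agent picks (x, y, alpha) for each comment from
# query feedback; here we simply place one comment with 60% opacity.
adv_frames = overlay_bsc(frames, bsc, x=20, y=30, alpha=0.6)
print(adv_frames[0, 30, 20])   # blended pixel value: [0.6, 0.6, 0.6]
```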
In recent years, face recognition has made great progress thanks to the development of deep neural networks, but deep neural networks have recently been found vulnerable to adversarial examples. This means that face recognition models or systems based on deep neural networks are also susceptible to adversarial examples. However, existing methods that attack face recognition models or systems with adversarial examples can effectively perform white-box attacks, but not black-box impersonation attacks, physical attacks, or convenient attacks, especially against commercial face recognition systems. In this paper, we propose a new method for attacking face recognition models or systems, called RSTAM, which enables effective black-box impersonation attacks using an adversarial mask printed by a mobile and compact printer. First, RSTAM enhances the transferability of adversarial masks through our proposed random similarity transformation strategy. Furthermore, we propose a random meta-optimization strategy that ensembles several pre-trained face models to generate more general adversarial masks. Finally, we conduct experiments on the CelebA-HQ, LFW, Makeup Transfer (MT), and CASIA-FaceV5 datasets. The performance of the attacks is also evaluated on state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft. Extensive experiments show that RSTAM can effectively perform black-box impersonation attacks on face recognition models or systems.
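Conceptually, a random similarity transformation strategy amounts to compositing the adversarial mask under randomly sampled scale, rotation, and translation so that it stays effective under the geometric variation of a physical paste. The OpenCV sketch below is a generic illustration of that idea under assumed parameter ranges, not the RSTAM code.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

def random_similarity_paste(face, mask, n=1):
    """Paste `mask` (H, W, 3 float in [0,1], zero = transparent) onto `face`
    under a randomly sampled similarity transform (scale, rotation, shift).
    Sampling several such variants per optimization step is one simple way to
    encourage transferability of an adversarial mask."""
    H, W = face.shape[:2]
    outs = []
    for _ in range(n):
        scale = rng.uniform(0.9, 1.1)
        angle = rng.uniform(-10.0, 10.0)            # degrees (assumed range)
        tx, ty = rng.uniform(-5, 5, size=2)         # pixels (assumed range)
        M = cv2.getRotationMatrix2D((W / 2, H / 2), angle, scale)
        M[:, 2] += (tx, ty)
        warped = cv2.warpAffine(mask, M, (W, H))
        alpha = (warped.sum(axis=2, keepdims=True) > 0).astype(np.float32)
        outs.append((1.0 - alpha) * face + alpha * warped)
    return outs

face = rng.random((112, 112, 3)).astype(np.float32)
mask = np.zeros_like(face)
mask[70:100, 20:92] = 0.5                           # a crude "mask" over the lower face
variants = random_similarity_paste(face, mask, n=4)
print(len(variants), variants[0].shape)
```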
Crowd counting, which has been widely adopted to estimate the number of people in safety-critical scenes, has been shown to be vulnerable to adversarial examples in the physical world (e.g., adversarial patches). Though harmful, adversarial examples are also valuable for evaluating and better understanding model robustness. However, existing adversarial-example generation methods for crowd counting lack strong transferability among different black-box models, which limits their practicality for real-world systems. Motivated by the fact that attack transferability is positively correlated with model-invariant characteristics, this paper proposes the Perceptual Adversarial Patch (PAP) generation framework, which tailors adversarial perturbations using model-shared perceptual features. Specifically, we handcraft an adaptive crowd-density weighting approach to capture the invariant scale perception features across various models, and utilize density-guided attention to capture model-shared position perception. Both are shown to improve the attack transferability of our adversarial patches. Extensive experiments show that our PAP achieves state-of-the-art attack performance in both the digital and physical worlds, and outperforms previous proposals by large margins (at most +685.7 MAE and +699.5 MSE). Besides, we empirically demonstrate that adversarial training with our PAP can benefit the performance of vanilla models in alleviating several practical challenges in crowd counting, including generalization across datasets (up to -376.0 MAE and -354.9 MSE) and robustness to complex backgrounds (up to -10.3 MAE and -16.4 MSE).
Video classification systems are vulnerable to adversarial attacks, which can create severe security problems in video verification. Current black-box attacks need a large number of queries to succeed, resulting in high computational overhead in the process of attack. On the other hand, attacks with restricted perturbations are ineffective against defenses such as denoising or adversarial training. In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system. StyleFool first utilizes color theme proximity to select the best style image, which helps avoid unnatural details in the stylized videos. Meanwhile, the target class confidence is additionally considered in targeted attacks to influence the output distribution of the classifier by moving the stylized video closer to or even across the decision boundary. A gradient-free method is then employed to further optimize the adversarial perturbations. We carry out extensive experiments to evaluate StyleFool on two standard datasets, UCF-101 and HMDB-51. The experimental results demonstrate that StyleFool outperforms the state-of-the-art adversarial attacks in terms of both the number of queries and the robustness against existing defenses. Moreover, 50% of the stylized videos in untargeted attacks do not need any query since they can already fool the video classification model. Furthermore, we evaluate the indistinguishability through a user study to show that the adversarial samples of StyleFool look imperceptible to human eyes, despite unrestricted perturbations.
The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems. To accelerate the research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks, including Deepfake Security Competition, Autonomous Driving Security Competition, and Face Recognition Security Competition. This report will introduce the competition rules of these three tracks and the solutions of top-ranking teams in each track.
Deep neural networks are vulnerable to attacks from adversarial inputs and, more recently, to Trojans that misguide or hijack a model's decisions. By exploring the bounded adversarial-example space and the natural input space within generative adversarial networks, we reveal an intriguing class of bounded adversarial examples—universal naturalistic adversarial patches—which we call TnTs. Now, an adversary can arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, and highly effective, achieving high attack success rates and universality. A TnT is universal because any input image captured with a TnT in the scene will either: i) mislead the network (untargeted attack); or ii) force the network to make a malicious decision (targeted attack). Interestingly, an adversarial-patch attacker can now exert a greater level of control—the ability to choose an independent, natural-looking patch as a trigger, in contrast to being restricted to noisy perturbations—something so far only possible with Trojan attack methods, which must interfere with the model-building process to embed a backdoor at the risk of discovery, yet still realize a patch deployable in the physical world. Through extensive experiments on the large-scale visual classification task ImageNet, evaluated on its entire validation set of 50,000 images, we demonstrate the realistic threat posed by TnTs and the robustness of the attack. We show that the attack generalizes to create patches achieving higher attack success rates than existing state-of-the-art methods. Our results also show that the attack generalizes to different visual classification tasks (CIFAR-10, GTSRB, PubFig) and multiple state-of-the-art deep neural networks, such as WideResNet50, Inception-V3, and VGG-16.
A growing body of work has shown that deep neural networks are vulnerable to adversarial examples. These take the form of small perturbations applied to the model's input that lead to incorrect predictions. Unfortunately, most of the literature focuses on visually imperceptible perturbations applied to digital images, which often cannot, by design, be deployed onto physical targets. We present Adversarial Scratches: a novel L0 black-box attack that takes the form of scratches in images and offers much greater deployability than other state-of-the-art attacks. Adversarial Scratches leverage Bézier curves to reduce the dimensionality of the search space and possibly constrain the attack to a specific location. We test Adversarial Scratches in several scenarios, including a publicly available API and images of traffic signs. Results show that our attack often achieves higher fooling rates than other deployable state-of-the-art methods, while requiring significantly fewer queries and modifying very few pixels.
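To make the parameterization concrete, the sketch below rasterizes a quadratic Bézier scratch from three control points, which is the kind of low-dimensional, location-constrained perturbation the abstract describes; the control points, colour, and sample count are arbitrary placeholders that a black-box search would actually optimize.

```python
import numpy as np

def bezier_scratch(img, p0, p1, p2, color=(255, 0, 0), samples=200):
    """Rasterize a quadratic Bézier scratch with control points p0, p1, p2
    (each (row, col)) onto a copy of `img`. Only the pixels on the curve are
    modified, which keeps the L0 norm of the perturbation small."""
    out = img.copy()
    t = np.linspace(0.0, 1.0, samples)[:, None]
    pts = (1 - t) ** 2 * np.array(p0) + 2 * (1 - t) * t * np.array(p1) + t ** 2 * np.array(p2)
    rows = np.clip(np.round(pts[:, 0]).astype(int), 0, img.shape[0] - 1)
    cols = np.clip(np.round(pts[:, 1]).astype(int), 0, img.shape[1] - 1)
    out[rows, cols] = color
    return out

img = np.zeros((224, 224, 3), dtype=np.uint8)
# The black-box attack would search over these six control-point coordinates
# (plus the colour) to maximize the target model's loss; values here are arbitrary.
adv = bezier_scratch(img, p0=(50, 40), p1=(120, 180), p2=(200, 60))
print("pixels modified:", int((adv != img).any(axis=2).sum()))
```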
Adversarial attacks provide a good way to study the robustness of deep learning models. One class of transfer-based black-box attacks utilizes several image transformation operations to improve the transferability of adversarial examples, which is effective but fails to take the specific characteristics of the input image into account. In this work, we propose a novel architecture, called the Adaptive Image Transformation Learner (AITL), which incorporates different image transformation operations into a unified framework to further improve the transferability of adversarial examples. Unlike the fixed combinational transformations used in existing works, our elaborately designed transformation learner adaptively selects the most effective combination of image transformations specific to the input image. Extensive experiments on ImageNet demonstrate that our method significantly improves the attack success rates on both normally trained models and defense models under various settings.
Deep-learning-based image recognition systems have been widely deployed on mobile devices in today's world. In recent studies, however, deep learning models were shown to be vulnerable to adversarial examples. One variant of adversarial examples, called the adversarial patch, has drawn researchers' attention because of its strong attack ability. Although adversarial patches achieve high attack success rates, they are easily detected because of the visual inconsistency between the patches and the original images. In addition, generating adversarial patches usually requires a large amount of data in the literature, which is computationally expensive and time-consuming. To address these challenges, we propose an approach to generate inconspicuous adversarial patches with a single image. In our approach, we first decide the patch location based on the perceptual sensitivity of the victim model, and then produce the adversarial patch in a coarse-to-fine way by utilizing multi-scale generators and discriminators. The patch is encouraged to be consistent with the background image through adversarial training, while preserving a strong attack ability. Through extensive experiments on various models with different architectures and training methods, our approach shows strong attack ability in the white-box setting and excellent transferability in the black-box setting. Compared with other adversarial patches, our adversarial patch carries the least risk of being noticed and can evade human observation, which is supported by saliency-map illustrations and the results of a user evaluation. Lastly, we show that our adversarial patch can be applied in the physical world.
Adversarial attacks can force CNN-based models to produce incorrect outputs by craftily manipulating inputs in ways imperceptible to humans. Exploring such perturbations can help us gain a deeper understanding of the vulnerability of neural networks and provide robustness for deep learning against miscellaneous adversaries. Despite the extensive research focusing on the robustness of image, audio, and NLP models, works on adversarial examples for visual object tracking, especially in a black-box manner, are still lacking. In this paper, we propose a novel adversarial attack method to generate noise for single object tracking under the black-box setting, where the perturbation is added only to the initial frame of the tracking sequence, which is hard to notice from the perspective of the whole video clip. Specifically, we divide our algorithm into three components and exploit reinforcement learning to precisely localize important frame patches while reducing unnecessary computational query overhead. Compared with existing techniques, our method requires fewer queries on the initialization frame of the video to achieve competitive or even better attack performance. We test our algorithm on both long-term and short-term datasets, including OTB100, VOT2018, UAV123, and LaSOT. Extensive experiments demonstrate the effectiveness of our method against three mainstream types of trackers: discrimination-based, Siamese-based, and reinforcement-learning-based trackers.
Recent studies reveal that deep neural network (DNN) based object detectors are vulnerable to adversarial attacks in the form of adding perturbations to images, leading to wrong outputs from the detectors. Most existing works focus on generating perturbed images, also called adversarial examples, to fool object detectors. Though the generated adversarial examples themselves can retain a certain naturalness, most of them can still be easily observed by human eyes, which limits their further application in the real world. To alleviate this problem, we propose a differential evolution based dual adversarial camouflage (DE_DAC) method, composed of two stages, to fool human eyes and object detectors simultaneously. Specifically, we try to obtain a camouflage texture that can be rendered over the surface of the object. In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene images, making it difficult for human eyes to distinguish. In the second stage, we design three loss functions to optimize the local texture, rendering object detectors ineffective. In addition, we introduce the differential evolution algorithm to search for the near-optimal areas of the object to attack, improving the adversarial performance under certain attack-area limitations. Besides, we also study the performance of adaptive DE_DAC, which can adapt to the environment. Experiments show that our proposed method obtains a good trade-off between fooling human eyes and fooling object detectors under multiple specific scenes and objects.
Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern about their robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability to generate barely noticeable (to both humans and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack on DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack, and importance sampling techniques to efficiently attack black-box models.
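The central mechanism—estimating coordinate-wise gradients of a black-box loss with symmetric difference quotients and taking coordinate-descent steps—can be sketched as below. The real ZOO attack additionally uses ADAM-style coordinate updates, dimension reduction, hierarchical attack, and importance sampling; the `attack_loss` stub and step sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def attack_loss(x):
    """Placeholder for one black-box query: feed the candidate image `x` to the
    target DNN and turn its confidence scores into a loss to minimize (e.g., a
    CW-style margin on the true class). Here a simple quadratic stands in."""
    target = np.full_like(x, 0.25)
    return float(((x - target) ** 2).sum())

def zoo_step(x, lr=0.1, h=1e-4, coords_per_step=16):
    """One step of zeroth-order stochastic coordinate descent: estimate the
    gradient of a few randomly chosen coordinates with a symmetric difference
    quotient (two queries each) and update only those coordinates."""
    flat = x.ravel().copy()
    idx = rng.choice(flat.size, size=coords_per_step, replace=False)
    for i in idx:
        e = np.zeros_like(flat)
        e[i] = h
        g = (attack_loss((flat + e).reshape(x.shape)) -
             attack_loss((flat - e).reshape(x.shape))) / (2 * h)
        flat[i] -= lr * g
    return flat.reshape(x.shape)

x = rng.random((8, 8))            # tiny "image" so the sketch runs instantly
for _ in range(50):
    x = zoo_step(x)
print("final loss:", round(attack_loss(x), 4))
```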
Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
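A minimal sketch of the detection rule described above, assuming a generic `predict` function that returns a probability vector: squeeze the input with bit-depth reduction and median smoothing, and flag the input if either squeezed prediction moves too far (in L1 distance) from the original prediction. The toy classifier and the threshold value are placeholders; in practice the threshold is chosen on validation data.

```python
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits):
    """Squeeze colour resolution of an image in [0, 1] to `bits` bits per channel."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def is_adversarial(x, predict, threshold=1.0, bits=4, kernel=2):
    """Flag `x` as adversarial if the model's softmax output moves too much
    under either squeezer. `predict` maps an image to a probability vector."""
    p = predict(x)
    p_bit = predict(reduce_bit_depth(x, bits))
    p_smooth = predict(median_filter(x, size=(kernel, kernel, 1)))
    score = max(np.abs(p - p_bit).sum(), np.abs(p - p_smooth).sum())
    return score > threshold

# Stand-in classifier: any real model's softmax output could be plugged in here.
def toy_predict(x):
    logits = np.array([x.mean(), x.std(), x.max()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

img = np.random.default_rng(0).random((32, 32, 3))
print("flagged as adversarial:", is_adversarial(img, toy_predict))
```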
With rapid progress and significant success in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have recently been found vulnerable to well-designed input samples, called adversarial examples. Adversarial perturbations are imperceptible to humans but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples has become one of the major risks for applying deep neural networks in safety-critical environments. Therefore, attacks and defenses on adversarial examples have drawn great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications of adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.