Adversarial perturbations are critical for evaluating the robustness of deep learning models. A universal adversarial perturbation (UAP) can attack multiple images simultaneously and thus offers a more unified threat model, obviating per-image attack algorithms. However, existing UAP generators are underdeveloped when images are drawn from different image sources (e.g., with different image resolutions). Towards real universality with respect to image sources, we take a novel view of UAP generation as a customized instance of few-shot learning, which leverages bi-level optimization and learning-to-optimize (L2O) techniques for UAP generation with an improved attack success rate (ASR). We first consider the popular model-agnostic meta-learning (MAML) framework to meta-learn a UAP generator. However, we see that the MAML framework does not directly offer a universal attack across image sources, which requires us to integrate it with another meta-learning framework, L2O. The resulting scheme for meta-learning a UAP generator (i) performs better (with a 50% higher ASR) than baselines such as projected gradient descent, (ii) performs better (by 37%) than the vanilla L2O and MAML frameworks (when applicable), and (iii) is able to simultaneously handle UAP generation for different victim models and image data sources.
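For reference, the projected-gradient-descent baseline mentioned above can be sketched in a few lines: a single perturbation is updated by gradient ascent over a stream of batches and projected back onto the l_inf ball. This is a hedged PyTorch-style sketch under assumed hyperparameters and a fixed input resolution (the very restriction the meta-learned generator is meant to remove), not the paper's generator.

```python
import torch

def uap_pgd(model, loader, eps=8/255, step=1/255, epochs=5, shape=(1, 3, 224, 224)):
    """Minimal PGD-style baseline: craft one universal perturbation by projected
    gradient ascent on the classification loss over a stream of batches.
    Assumes a fixed input resolution, which is the limitation discussed above."""
    delta = torch.zeros(shape, requires_grad=True)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            loss = loss_fn(model(torch.clamp(x + delta, 0, 1)), y)
            grad = torch.autograd.grad(loss, delta)[0]
            with torch.no_grad():
                delta += step * grad.sign()      # ascend the attack loss
                delta.clamp_(-eps, eps)          # project onto the l_inf ball
    return delta.detach()
```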
Adversarial training (AT) has become a widely recognized defense mechanism for improving the robustness of deep neural networks against adversarial attacks. It solves a min-max optimization problem, where the minimizer (i.e., the defender) seeks a robust model that minimizes the worst-case training loss in the presence of adversarial examples crafted by the maximizer (i.e., the attacker). However, the min-max nature makes AT computationally intensive and thus difficult to scale. Meanwhile, the Fast-AT algorithm, and in fact many recent algorithms that improve AT, simplify the min-max based AT by replacing its maximization step with a simple one-shot gradient-sign-based attack generation step. Although easy to implement, Fast-AT lacks theoretical guarantees, and its practical performance can be unsatisfactory, suffering from robustness catastrophic overfitting when trained against strong adversaries. In this paper, we design Fast-AT from the perspective of bi-level optimization (BLO). We first make the key observation that the most commonly used algorithmic specification of Fast-AT is equivalent to using some gradient-descent-type algorithm to solve a bi-level problem involving a sign operation. However, the discrete nature of the sign operation makes it difficult to understand the algorithm's performance. Based on the above observation, we propose a new, tractable bi-level optimization problem and design and analyze a new set of algorithms (Fast-BAT). Fast-BAT is able to defend against sign-based projected gradient descent (PGD) attacks without invoking any gradient sign method or explicit robustness regularization. Furthermore, we empirically show that our method outperforms state-of-the-art fast-AT baselines by achieving superior model robustness without inducing robustness catastrophic overfitting or suffering from any loss of standard accuracy.
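To make the contrast concrete, the "one-shot gradient-sign" maximization that Fast-AT-style methods substitute for the full inner maximization can be sketched as below. This is an illustrative FGSM-based training step under assumed step sizes, not the Fast-BAT algorithm itself.

```python
import torch

def fast_at_step(model, optimizer, x, y, eps=8/255, alpha=10/255):
    """One Fast-AT-style training step: the inner maximization is replaced by a
    single sign-based (FGSM) perturbation from a random start, followed by an
    ordinary descent step on the resulting robust loss."""
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    attack_loss = loss_fn(model(torch.clamp(x + delta, 0, 1)), y)
    grad = torch.autograd.grad(attack_loss, delta)[0]
    with torch.no_grad():                                   # one-shot sign step
        delta = torch.clamp(delta + alpha * grad.sign(), -eps, eps)
    optimizer.zero_grad()                                   # outer minimization
    loss = loss_fn(model(torch.clamp(x + delta, 0, 1)), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```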
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help the attack against the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance, which is verified by extensive experiments.
The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has proven to be a state-of-the-art approach for improving adversarial robustness. Nonetheless, min-max optimization beyond AT has not been rigorously explored in the adversarial context. In this paper, we show how a general framework of min-max optimization over multiple domains can be leveraged to advance the design of different types of adversarial attacks. In particular, given a set of risk sources, minimizing the worst-case attack loss can be reformulated as a min-max problem by introducing domain weights that are maximized over the probability simplex of the domain set. We showcase this unified framework in three attack generation problems: attacking model ensembles, devising universal perturbations under multiple inputs, and crafting attacks resilient to data transformations. Extensive experiments demonstrate that our approach leads to improvements over existing heuristic strategies, as well as robustness improvements over state-of-the-art defense methods trained to be robust against multiple perturbation types. Furthermore, we find that the self-adjusted domain weights learned from our min-max framework can serve as a holistic tool to explain the attack-level difficulty across domains. Code is available at https://github.com/wangjksjtu/minmaxsod.
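A minimal sketch of the domain-weight reformulation: alternate a descent step on the weighted attack loss (over the perturbation) with a projected ascent step on the domain weights over the probability simplex. The simplex projection, step sizes, and the `losses_and_grads` interface are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def min_max_attack(losses_and_grads, x0, K, steps=50, lr_x=0.01, lr_w=0.1):
    """Alternate a descent step on the w-weighted attack loss (over x) with a
    projected ascent step on the domain weights w. `losses_and_grads(x)` is
    assumed to return (losses of shape [K], gradients of shape [K, dim]) for
    the K risk sources (e.g., K ensemble models or K data transformations)."""
    x, w = x0.copy(), np.full(K, 1.0 / K)
    for _ in range(steps):
        losses, grads = losses_and_grads(x)
        x = x - lr_x * (w[:, None] * grads).sum(axis=0)   # minimize weighted loss over x
        w = project_simplex(w + lr_w * losses)            # maximize over the simplex
        # (a real attack would also project x onto its perturbation constraint)
    return x, w
```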
We propose a new computationally efficient first-order algorithm for model-agnostic meta-learning (MAML). The key enabling technique is to interpret MAML as a bi-level optimization (BLO) problem and to leverage the sign-based SGD (signSGD) as the lower-level optimizer of the BLO. We show that MAML, through the lens of signSGD-oriented BLO, naturally yields an alternating optimization scheme that just requires first-order gradients of the learned meta-model. We term the resulting MAML algorithm Sign-MAML. Compared to the conventional first-order MAML (FO-MAML) algorithm, Sign-MAML is theoretically grounded, as it does not impose any assumption on the absence of second-order derivatives during meta-training. In practice, we show that Sign-MAML outperforms FO-MAML in various few-shot image classification tasks, and compared with MAML, it achieves a much more graceful tradeoff between classification accuracy and computational efficiency.
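The alternating scheme described above can be sketched as follows: per task, a clone of the meta-model is adapted with signSGD (the lower-level optimizer), and the query-set gradients at the adapted weights drive a first-order meta-update. This is a simplified sketch with assumed names and hyperparameters, not the authors' reference implementation.

```python
import copy
import torch

def sign_maml_step(model, meta_opt, tasks, inner_lr=0.01, inner_steps=5):
    """Sketch: per task, adapt a clone of the meta-model with signSGD
    (lower level), then accumulate first-order query-set gradients into the
    meta-model and take the upper-level step."""
    loss_fn = torch.nn.CrossEntropyLoss()
    meta_opt.zero_grad()
    for (x_spt, y_spt, x_qry, y_qry) in tasks:
        learner = copy.deepcopy(model)                    # task-specific fast weights
        for _ in range(inner_steps):                      # signSGD inner loop
            grads = torch.autograd.grad(loss_fn(learner(x_spt), y_spt),
                                        list(learner.parameters()))
            with torch.no_grad():
                for p, g in zip(learner.parameters(), grads):
                    p -= inner_lr * g.sign()              # only the gradient sign is used
        qry_grads = torch.autograd.grad(loss_fn(learner(x_qry), y_qry),
                                        list(learner.parameters()))
        for p, g in zip(model.parameters(), qry_grads):   # first-order meta-gradient
            p.grad = g.detach().clone() if p.grad is None else p.grad + g.detach()
    meta_opt.step()
```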
Model-agnostic meta-learning (MAML) is currently one of the dominant approaches for few-shot meta-learning. Despite its effectiveness, the optimization of MAML can be challenging due to the innate bilevel problem structure. Specifically, the loss landscape of MAML is much more complex, with possibly more saddle points and local minima, than its empirical-risk-minimization counterpart. To address this challenge, we leverage the recently invented sharpness-aware minimization and develop a sharpness-aware MAML approach that we term Sharp-MAML. We empirically demonstrate that Sharp-MAML and its computationally efficient variant can outperform popular existing MAML baselines (e.g., +12% accuracy on Mini-Imagenet). We complement the empirical study with a convergence-rate analysis and a generalization bound for Sharp-MAML. To the best of our knowledge, this is the first empirical and theoretical study on sharpness-aware minimization in the context of bilevel learning. The code is available at https://github.com/mominabbass/sharp-maml.
We introduce SubGD, a novel few-shot learning method that is based on the recent finding that stochastic gradient descent updates tend to live in a low-dimensional parameter subspace. In experimental and theoretical analyses, we show that models confined to a suitable predefined subspace generalize well for few-shot learning. A suitable subspace fulfills three criteria across the given tasks: it (a) allows to reduce the training error by gradient flow, (b) leads to models that generalize well, and (c) can be identified by stochastic gradient descent. SubGD identifies these subspaces from an eigendecomposition of the auto-correlation matrix of update directions across different tasks. Demonstrably, we can identify low-dimensional suitable subspaces for few-shot learning of dynamical systems, which have varying properties described by one or few parameters of the analytical system description. Such systems are ubiquitous among real-world applications in science and engineering. We experimentally corroborate the advantages of SubGD on three distinct dynamical systems problem settings, significantly outperforming popular few-shot learning methods both in terms of sample efficiency and performance.
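The subspace identification step can be sketched as follows: stack the (flattened) update directions collected on the training tasks, eigendecompose their auto-correlation matrix, and restrict gradients on a new task to the span of the leading eigenvectors. The hard projection and the flat-parameter representation are simplifying assumptions; for large networks one would avoid forming the full matrix.

```python
import numpy as np

def identify_subspace(update_dirs, k):
    """update_dirs: [n_updates, dim] array of flattened SGD update directions
    collected across training tasks. Returns the top-k eigenvectors of their
    auto-correlation matrix as a [dim, k] basis of the learning subspace."""
    C = update_dirs.T @ update_dirs / len(update_dirs)    # auto-correlation matrix
    _, eigvecs = np.linalg.eigh(C)                        # eigenvalues in ascending order
    return eigvecs[:, -k:]                                # keep the k leading directions

def project_gradient(grad, basis):
    """Restrict a new task's (flattened) gradient to the identified subspace."""
    return basis @ (basis.T @ grad)
```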
A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient (or optimization) based meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner-loop, by using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner loop optimizer. As a result, our approach is agnostic to the choice of inner loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint no more than that which is required to compute a single inner loop gradient and at no overall increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.
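The implicit meta-gradient can be sketched compactly when the adapted parameters are a single flat tensor: solve (I + H/lam) g = v approximately by conjugate gradient, where H is the Hessian of the inner (regularized) training loss at the adapted solution and v is the outer-loop test-loss gradient, using Hessian-vector products instead of materializing H. The flat-tensor representation, the fixed number of CG steps, and the function names are simplifying assumptions, not the authors' implementation.

```python
import torch

def implicit_meta_grad(train_loss_fn, test_loss_fn, phi_star, lam=1.0, cg_steps=5):
    """Approximate the implicit meta-gradient (I + H/lam)^{-1} v by conjugate
    gradient, where H is the Hessian of the inner training loss at the adapted
    parameters phi_star (assumed here to be a single flat tensor with
    requires_grad=True) and v is the outer-loop test-loss gradient."""
    v = torch.autograd.grad(test_loss_fn(phi_star), phi_star)[0].detach()

    def Ax(x):  # (I + H/lam) x via a Hessian-vector product, never forming H
        grad1 = torch.autograd.grad(train_loss_fn(phi_star), phi_star,
                                    create_graph=True)[0]
        hvp = torch.autograd.grad(grad1 @ x, phi_star)[0]
        return x + hvp.detach() / lam

    g, r = torch.zeros_like(v), v.clone()                 # CG with initial guess 0
    p = r.clone()
    for _ in range(cg_steps):
        Ap = Ax(p)
        alpha = (r @ r) / (p @ Ap)
        g, r_new = g + alpha * p, r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p, r = r_new + beta * p, r_new
    return g                                              # approximate meta-gradient
```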
Robustness evaluation against adversarial examples has become increasingly important to unveil the trustworthiness of the prevailing deep models in natural language processing (NLP). However, in contrast to the computer vision domain where the first-order projected gradient descent (PGD) is used as the benchmark approach to generate adversarial examples for robustness evaluation, there lacks a principled first-order gradient-based robustness evaluation framework in NLP. The emerging optimization challenges lie in 1) the discrete nature of textual inputs together with the strong coupling between the perturbation location and the actual content, and 2) the additional constraint that the perturbed text should be fluent and achieve a low perplexity under a language model. These challenges make the development of PGD-like NLP attacks difficult. To bridge the gap, we propose TextGrad, a new attack generator using gradient-driven optimization, supporting high-accuracy and high-quality assessment of adversarial robustness in NLP. Specifically, we address the aforementioned challenges in a unified optimization framework. And we develop an effective convex relaxation method to co-optimize the continuously-relaxed site selection and perturbation variables and leverage an effective sampling method to establish an accurate mapping from the continuous optimization variables to the discrete textual perturbations. Moreover, as a first-order attack generation method, TextGrad can be baked into adversarial training to further improve the robustness of NLP models. Extensive experiments are provided to demonstrate the effectiveness of TextGrad not only in attack generation for robustness evaluation but also in adversarial defense.
Many tasks in deep learning involve optimizing over the input to a network to minimize or maximize some objective; examples include optimization over the latent space of a generative model to match a target image, or adversarially perturbing an input to degrade classifier performance. Performing such optimization, however, is traditionally quite costly, as it entails a complete forward and backward pass through the network for each gradient step. In a separate line of work, a recent thread of research has developed the deep equilibrium (DEQ) model, a class of models that foregoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer. In this paper, we show that there is a natural synergy between these two settings. Although naively using DEQs for these optimization problems is expensive (owing to the time needed to compute a fixed point for each gradient step), we can leverage the fact that gradient-based optimization can itself be cast as a fixed point iteration to substantially improve the overall speed. That is, we simultaneously both solve for the DEQ fixed point and optimize over the network input, all within a single "augmented" DEQ model that jointly encodes the original network and the optimization process. Indeed, the procedure is fast enough that it allows us to efficiently train DEQ models for tasks traditionally relying on an "inner" optimization loop. We demonstrate this strategy on various tasks, such as training generative models while optimizing over latent codes, training models for inverse problems like denoising, adversarial training, and gradient-based meta-learning.
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
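A compact sketch of the described meta-training step is given below: one differentiable inner gradient step per task, using torch.func.functional_call so the outer update backpropagates through the adaptation. The single inner step, the loss choice, and the names are simplifying assumptions.

```python
import torch

def maml_outer_step(model, meta_opt, tasks, inner_lr=0.01):
    """Sketch of one MAML meta-update: per task, take one differentiable
    gradient step on the support set, then backpropagate the query-set loss
    through that step into the meta-parameters."""
    loss_fn = torch.nn.CrossEntropyLoss()
    meta_loss = 0.0
    for (x_spt, y_spt, x_qry, y_qry) in tasks:
        spt_loss = loss_fn(model(x_spt), y_spt)
        grads = torch.autograd.grad(spt_loss, list(model.parameters()),
                                    create_graph=True)    # keep graph for 2nd order
        adapted = {name: p - inner_lr * g                  # task-specific fast weights
                   for (name, p), g in zip(model.named_parameters(), grads)}
        qry_logits = torch.func.functional_call(model, adapted, (x_qry,))
        meta_loss = meta_loss + loss_fn(qry_logits, y_qry)
    meta_opt.zero_grad()
    (meta_loss / len(tasks)).backward()                    # outer (meta) update
    meta_opt.step()
```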
Designing powerful adversarial attacks is of paramount importance for the evaluation of $\ell_p$-bounded adversarial defenses. Projected Gradient Descent (PGD) is one of the most effective and conceptually simple algorithms to generate such adversaries. The search space of PGD is dictated by the steepest ascent directions of an objective. Despite the plethora of objective function choices, there is no universally superior option and robustness overestimation may arise from ill-suited objective selection. Driven by this observation, we postulate that the combination of different objectives through a simple loss alternating scheme renders PGD more robust towards design choices. We experimentally verify this assertion on a synthetic-data example and by evaluating our proposed method across 25 different $\ell_{\infty}$-robust models and 3 datasets. The performance improvement is consistent when compared to the single loss counterparts. In the CIFAR-10 dataset, our strongest adversarial attack outperforms all of the white-box components of AutoAttack (AA) ensemble, as well as the most powerful attacks existing in the literature, achieving state-of-the-art results in the computational budget of our study ($T=100$, no restarts).
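A sketch of the loss-alternating idea follows: a standard l_inf PGD loop that switches the maximized objective between cross-entropy and a margin-based loss across iterations. The particular pair of objectives, the schedule, and the step sizes are illustrative assumptions rather than the paper's exact configuration.

```python
import torch

def margin_loss(logits, y):
    """Negative margin between the true-class logit and the best other logit."""
    true = logits.gather(1, y[:, None]).squeeze(1)
    other = logits.scatter(1, y[:, None], float("-inf")).max(dim=1).values
    return (other - true).mean()

def pgd_alternating(model, x, y, eps=8/255, alpha=2/255, iters=100):
    """l_inf PGD that alternates the maximized objective every iteration."""
    losses = [lambda o, t: torch.nn.functional.cross_entropy(o, t), margin_loss]
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for t in range(iters):
        loss = losses[t % len(losses)](model(torch.clamp(x + delta, 0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)          # stay inside the l_inf ball
            delta.clamp_(-x, 1 - x)          # keep x + delta a valid image
    return (x + delta).detach()
```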
Regularization and transfer learning are two popular techniques to enhance generalization to unseen data, which is a fundamental problem of machine learning. Regularization techniques are versatile, as they are task- and architecture-agnostic, but they do not exploit a large amount of data. Transfer learning methods learn to transfer knowledge from one domain to another, but may not generalize across tasks and architectures, and may introduce new training costs for adapting to the target task. To bridge the gap between the two, we propose a transferable perturbation, MetaPerturb, which is meta-learned to improve generalization performance on unseen data. MetaPerturb is implemented as a set-based lightweight network that is agnostic to the size and the order of its input and is shared across layers. We then propose a meta-learning framework to jointly train the perturbation function over heterogeneous tasks. As MetaPerturb is a set function trained over diverse distributions across layers and tasks, it can generalize to heterogeneous tasks and architectures. We validate the efficacy and generality of MetaPerturb trained on a specific source domain and architecture by applying it to the training of diverse neural architectures, against various regularizers and fine-tuning. The results show that networks trained with MetaPerturb significantly outperform the baselines on most of the tasks and architectures, with a negligible increase in parameter size and without hyperparameter tuning.
Despite the efficiency and scalability of machine learning systems, recent studies have demonstrated that many classification methods, especially deep neural networks (DNNs), are vulnerable to adversarial examples; i.e., examples that are carefully crafted to fool a well-trained classification model while being indistinguishable from natural data to humans. This makes it potentially unsafe to apply DNNs or related methods in security-critical areas. Since this issue was first identified by Biggio et al. (2013) and Szegedy et al. (2014), much work has been done in this field, including the development of attack methods to generate adversarial examples and the construction of defense techniques to guard against them. This paper aims to introduce this topic and its latest developments to the statistical community, primarily focusing on the generation and defense of adversarial examples. Computing code (in Python and R) used in the numerical experiments is publicly available for readers to explore the surveyed methods. It is the hope of the authors that this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
Integrating knowledge across different domains is an essential feature of human learning. Learning paradigms such as transfer learning, meta-learning, and multi-task learning reflect the human learning process by exploiting prior knowledge for new tasks, encouraging faster learning and good generalization to new tasks. This article provides a detailed view of these learning paradigms along with a comparative analysis. The weakness of one learning algorithm turns out to be a strength of another, and thus merging them is a prevalent trait in the literature. This work delivers a literature review of articles that fuse two such algorithms to accomplish multiple tasks. A global generic learning network, an ensemble of meta-learning, transfer learning, and multi-task learning, is also introduced here, along with some open research questions and directions for future research.
Improving the robustness of deep neural networks (DNNs) against adversarial examples is an important yet challenging problem for secure deep learning. Across existing defense techniques, adversarial training with projected gradient descent (PGD) is among the most effective. Adversarial training solves a min-max optimization problem, with the inner maximization generating adversarial examples by maximizing the classification loss, and the outer minimization finding model parameters by minimizing the loss on the adversarial examples generated by the inner maximization. A criterion that measures how well the inner maximization is solved is therefore crucial for adversarial training. In this paper, we propose such a criterion, namely the First-Order Stationary Condition for constrained optimization (FOSC), to quantitatively evaluate the convergence quality of adversarial examples found in the inner maximization. With FOSC, we find that to ensure better robustness, it is essential to use adversarial examples with better convergence quality at the later stages of training. Yet at the early stages, high-convergence-quality adversarial examples are not necessary and may even lead to poor robustness. Based on these observations, we propose a dynamic training strategy to gradually increase the convergence quality of the generated adversarial examples, which significantly improves the robustness of adversarial training. Our theoretical and empirical results show the effectiveness of the proposed method.
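As a rough illustration, the FOSC value of an l_inf-constrained adversarial example can be computed from the loss gradient alone, as sketched below. The closed form used here (eps times the l1 norm of the gradient minus its inner product with the perturbation) follows the criterion as commonly stated, but the exact form should be checked against the paper before use.

```python
import torch

def fosc(model, x_adv, x_nat, y, eps):
    """FOSC value of adversarial examples x_adv (crafted from clean inputs x_nat)
    for the l_inf inner maximization of radius eps; smaller means the inner
    maximization is closer to a first-order stationary point."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0].flatten(1)
    diff = (x_adv.detach() - x_nat).flatten(1)
    return eps * grad.abs().sum(dim=1) - (diff * grad).sum(dim=1)
```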
Adversarial examples causing evasive predictions are widely used to evaluate and improve the robustness of machine learning models. However, current studies focus on supervised learning tasks, relying on the ground-truth data label, a targeted objective, or supervision from a trained classifier. In this paper, we propose a framework for generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation. Our framework exploits a mutual information neural estimator as an information-theoretic similarity measure to generate adversarial examples without supervision. We propose a new MinMax algorithm with provable convergence guarantees for the efficient generation of unsupervised adversarial examples. Our framework can also be extended to supervised adversarial examples. When using unsupervised adversarial examples as a simple plug-in data augmentation tool for model retraining, significant improvements are consistently observed across different unsupervised tasks and datasets, including data reconstruction, representation learning, and contrastive learning. Our results show novel methods and considerable advantages in studying and improving unsupervised machine learning via adversarial examples.
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, in which adversarial perturbations to the inputs can alter or manipulate the classification. To defend against such attacks, an effective and popular approach, known as adversarial training (AT), has been shown to mitigate the negative impact of adversarial attacks by virtue of a min-max robust training method. While effective, it remains unclear whether it can successfully be adapted to the distributed learning context. The power of distributed optimization over multiple machines enables us to scale up robust training over large models and datasets. Spurred by that, we propose distributed adversarial training (DAT), a large-batch adversarial training framework implemented over multiple machines. We show that DAT is general, in that it supports training over labeled and unlabeled data, multiple types of attack generation methods, and gradient compression operations. Theoretically, we provide, under standard conditions in optimization theory, the convergence rate of DAT to a first-order stationary point in general non-convex settings. Empirically, we demonstrate that DAT either matches or outperforms state-of-the-art robust accuracies and achieves a graceful training speedup (e.g., on ResNet-50 under ImageNet). Code is available at https://github.com/dat-2022/dat.
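A toy sketch of the structure described above, with workers simulated sequentially: each worker crafts one-step adversarial examples on its local batch, sign-compresses its robust-loss gradient, and the server aggregates by majority vote before updating the shared model. The compression and aggregation choices here are illustrative assumptions, not the exact DAT algorithm.

```python
import torch

def dat_style_step(model, optimizer, worker_batches, eps=8/255, alpha=10/255):
    """Sketch: per (simulated) worker, craft one-step adversarial examples on the
    local batch, compute the robust-loss gradient, and sign-compress it; the
    server aggregates the compressed gradients by majority vote."""
    loss_fn = torch.nn.CrossEntropyLoss()
    worker_grads = []
    for x, y in worker_batches:                          # one entry per worker
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        g = torch.autograd.grad(
            loss_fn(model(torch.clamp(x + delta, 0, 1)), y), delta)[0]
        x_adv = torch.clamp(
            x + torch.clamp(delta.detach() + alpha * g.sign(), -eps, eps), 0, 1)
        model.zero_grad()
        loss_fn(model(x_adv), y).backward()              # local robust-loss gradient
        worker_grads.append([p.grad.sign() for p in model.parameters()])
        model.zero_grad()
    optimizer.zero_grad()
    for i, p in enumerate(model.parameters()):           # majority-vote aggregation
        p.grad = torch.stack([g[i] for g in worker_grads]).sum(dim=0).sign()
    optimizer.step()
```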
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples. We find, however, that typical adaptive evaluations are incomplete. We demonstrate that thirteen defenses recently published at ICLR, ICML and NeurIPS-and which illustrate a diverse set of defense strategies-can be circumvented despite attempting to perform evaluations using adaptive attacks. While prior evaluation papers focused mainly on the end result-showing that a defense was ineffective-this paper focuses on laying out the methodology and the approach necessary to perform an adaptive attack. Some of our attack strategies are generalizable, but no single strategy would have been sufficient for all defenses. This underlines our key message that adaptive attacks cannot be automated and always require careful and appropriate tuning to a given defense. We hope that these analyses will serve as guidance on how to properly perform adaptive attacks against defenses to adversarial examples, and thus will allow the community to make further progress in building more robust models.
Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern on the robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability of generating barely noticeable (to both human and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack to DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models. (Pin-Yu Chen and Huan Zhang contribute equally to this work.)
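The core of the zeroth-order approach is a finite-difference gradient estimate built only from the model's confidence scores; a minimal coordinate-wise sketch (symmetric differences over a random subset of coordinates, a simplification of ZOO's full stochastic coordinate-descent machinery) is shown below. Each estimate costs two queries per sampled coordinate; a basic attack then takes signed or Adam-style steps on the estimated gradient within the allowed perturbation budget.

```python
import numpy as np

def zoo_grad_estimate(f, x, n_coords=128, h=1e-4):
    """Sketch: estimate the gradient of a black-box scalar loss f (built from the
    target model's output scores) at image x, using symmetric finite differences
    on a random subset of coordinates; unsampled coordinates get zero gradient."""
    flat = x.reshape(-1).astype(np.float64)
    g = np.zeros_like(flat)
    idx = np.random.choice(flat.size, size=min(n_coords, flat.size), replace=False)
    for i in idx:
        e = np.zeros_like(flat)
        e[i] = h
        g[i] = (f((flat + e).reshape(x.shape)) -
                f((flat - e).reshape(x.shape))) / (2 * h)
    return g.reshape(x.shape)
```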