We present AI2, the first sound and scalable analyzer for deep neural networks. Based on overapproximation, AI2 can automatically prove safety properties (e.g., robustness) of realistic neural networks (e.g., convolutional neural networks). The key insight behind AI2 is to phrase reasoning about safety and robustness of neural networks in terms of classic abstract interpretation, enabling us to leverage decades of advances in that area. Concretely, we introduce abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit activations (ReLU), as well as max pooling layers. This allows us to handle real-world neural networks, which are often built out of those types of layers. We present a complete implementation of AI2 together with an extensive evaluation on 20 neural networks. Our results demonstrate that: (i) AI2 is precise enough to prove useful specifications (e.g., robustness), (ii) AI2 can be used to certify the effectiveness of state-of-the-art defenses for neural networks, (iii) AI2 is significantly faster than existing analyzers based on symbolic analysis, which often take hours to verify simple fully connected networks, and (iv) AI2 can handle deep convolutional networks, which are beyond the reach of existing methods.
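To make the abstract-transformer idea concrete, here is a minimal sketch in the simple interval domain (AI2 itself uses richer domains such as zonotopes and polyhedra): sound bounds are propagated through an affine layer and a ReLU layer, and robustness is certified when the lower bound of the target logit exceeds the upper bound of the other logit. The small network and its weights are illustrative only.

```python
# Minimal sketch (not the AI2 implementation): propagating an interval
# abstraction through an affine layer followed by ReLU, using NumPy.
import numpy as np

def affine_transformer(lb, ub, W, b):
    """Sound interval bounds for y = W @ x + b given x in [lb, ub]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lb = W_pos @ lb + W_neg @ ub + b
    new_ub = W_pos @ ub + W_neg @ lb + b
    return new_lb, new_ub

def relu_transformer(lb, ub):
    """Abstract transformer for ReLU: clamp both bounds at zero."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

# Toy usage: try to certify that class 0 dominates class 1 on an L_inf ball.
x = np.array([0.5, -0.2]); eps = 0.05
lb, ub = x - eps, x + eps
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[2.0, 1.0], [-1.0, 0.5]]), np.zeros(2)
lb, ub = relu_transformer(*affine_transformer(lb, ub, W1, b1))
lb, ub = affine_transformer(lb, ub, W2, b2)
robust = lb[0] > ub[1]   # class-0 lower bound beats class-1 upper bound
print("certified:", robust)
```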
We introduce a scalable method for training robust neural networks based on abstract interpretation. We present several abstract transformers which balance efficiency with precision, and show that these can be used to train large neural networks that are certifiably robust to adversarial perturbations.
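As a rough illustration of certified training with an abstract domain, the sketch below propagates a differentiable interval abstraction through a small feed-forward model and minimizes the loss of the worst-case logits. The paper's transformers are more general; the helper names and layer handling here are illustrative assumptions, not taken from the authors' code.

```python
# Hedged sketch of certified training with a differentiable interval
# abstraction and a worst-case cross-entropy loss.
import torch
import torch.nn.functional as F

def interval_affine(lb, ub, linear):
    W, b = linear.weight, linear.bias
    center, radius = (lb + ub) / 2, (ub - lb) / 2
    new_center = F.linear(center, W, b)
    new_radius = F.linear(radius, W.abs())     # |W| propagates the radius
    return new_center - new_radius, new_center + new_radius

def robust_loss(model_layers, x, y, eps):
    lb, ub = x - eps, x + eps
    for layer in model_layers:
        if isinstance(layer, torch.nn.Linear):
            lb, ub = interval_affine(lb, ub, layer)
        else:                                  # assume ReLU
            lb, ub = lb.clamp(min=0), ub.clamp(min=0)
    # Worst-case logits: true class at its lower bound, others at their upper bound.
    one_hot = F.one_hot(y, lb.shape[-1]).bool()
    worst = torch.where(one_hot, lb, ub)
    return F.cross_entropy(worst, y)
```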
Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions that would result in the same class being assigned by a human, and define safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and/or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness.
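The following is a hedged sketch of the flavor of such an SMT encoding in Z3: a tiny ReLU network is written with `If` terms, the input region is constrained to a small box, and the negation of the invariance property is checked for satisfiability. The paper's exhaustive, layer-by-layer search with discretisation is considerably more elaborate; the weights below are made up for illustration.

```python
# Hedged sketch: encoding a tiny ReLU network and a local-robustness query in Z3.
from z3 import Real, Solver, If, And, sat

x0, x1 = Real('x0'), Real('x1')
# Hidden layer: h_i = ReLU(w_i . x + b_i), weights chosen for illustration.
h0 = If(1.0 * x0 - 1.0 * x1 + 0.5 > 0, 1.0 * x0 - 1.0 * x1 + 0.5, 0)
h1 = If(0.5 * x0 + 2.0 * x1 > 0, 0.5 * x0 + 2.0 * x1, 0)
# Output logits.
y0 = 2.0 * h0 + 1.0 * h1
y1 = -1.0 * h0 + 0.5 * h1

s = Solver()
# Region: an L_inf box of radius 0.05 around the point (0.5, -0.2).
s.add(And(x0 >= 0.45, x0 <= 0.55, x1 >= -0.25, x1 <= -0.15))
# Negation of the safety property "class 0 is always selected".
s.add(y1 >= y0)
if s.check() == sat:
    print("adversarial example:", s.model())   # a concrete counterexample
else:
    print("property holds on the region")
```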
Existing neural network verifiers compute a proof that each input is handled correctly under a given perturbation by propagating a symbolic abstraction of reachable values at each layer. This process is repeated from scratch independently for each input (e.g., image) and perturbation (e.g., rotation), leading to an expensive overall proof effort when handling an entire dataset. In this work, we introduce a new method for reducing this verification cost without losing precision based on a key insight that abstractions obtained at intermediate layers for different inputs and perturbations can overlap or contain each other. Leveraging our insight, we introduce the general concept of shared certificates, enabling proof effort reuse across multiple inputs to reduce overall verification costs. We perform an extensive experimental evaluation to demonstrate the effectiveness of shared certificates in reducing the verification cost on a range of datasets and attack specifications on image classifiers including the popular patch and geometric perturbations. We release our implementation at https://github.com/eth-sri/proof-sharing.
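A minimal sketch of the shared-certificate idea, using interval boxes in place of the paper's richer abstractions: if the abstraction reached at an intermediate layer for a new input is contained in a template region already proven safe from that layer onward, the remaining layers need not be re-analyzed. The function names and verifier callbacks are hypothetical.

```python
# Hedged sketch of proof sharing via containment of intermediate abstractions.
import numpy as np

class Template:
    def __init__(self, lb, ub):
        self.lb, self.ub = lb, ub          # box proven safe from layer k onward

    def contains(self, lb, ub):
        return np.all(self.lb <= lb) and np.all(ub <= self.ub)

def verify_with_sharing(propagate_to_k, propagate_rest, check_safe,
                        inputs, templates):
    for x in inputs:
        lb, ub = propagate_to_k(x)          # abstraction at layer k
        if any(t.contains(lb, ub) for t in templates):
            continue                        # proof reused, no further work
        lb, ub = propagate_rest(lb, ub)     # fall back to full verification
        if not check_safe(lb, ub):
            return False
    return True
```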
Neural networks have been widely used in security applications such as spam and phishing detection, intrusion prevention, and malware detection. However, such black-box methods often suffer from uncertainty and poor interpretability in practice. Moreover, neural networks themselves are often vulnerable to adversarial attacks. For these reasons, there is a strong demand for trustworthy and rigorous methods to verify the robustness of neural network models. Adversarial robustness, which concerns the reliability of a neural network when handling maliciously manipulated inputs, is one of the hottest topics in security and machine learning. In this work, we survey the existing literature on adversarial robustness verification of neural networks and collect 39 diverse research works spanning machine learning, security, and software engineering. We systematically analyze their approaches, including how robustness is formulated, which verification techniques are used, and the strengths and limitations of each technique. We provide a taxonomy from a formal verification perspective to offer a comprehensive understanding of the topic. We classify the existing techniques according to property specification, problem reduction, and reasoning strategies. We also demonstrate representative techniques applied in existing studies with sample models. Finally, we discuss open questions for future research.
Convolutional neural networks have gained vast popularity due to their excellent performance in computer vision, image processing, and other domains. Unfortunately, it is now well known that convolutional networks often produce erroneous results; for example, small perturbations to the inputs of these networks can lead to severe classification errors. Numerous verification approaches have been proposed in recent years to prove the absence of such errors, but these are typically geared toward fully connected networks and suffer from exacerbated scalability issues when applied to convolutional networks. To address this gap, we present the CNN-Abs framework, which is designed specifically for verifying convolutional networks. The core of CNN-Abs is an abstraction-refinement technique that simplifies the verification problem by removing convolutional connections in a way that soundly creates an over-approximation of the original problem; it restores these connections if the resulting problem becomes too abstract. CNN-Abs is designed to use existing verification engines as a backend, and our evaluation demonstrates that it can significantly boost the performance of a state-of-the-art DNN verification engine, reducing runtime by 15.7% on average.
Recently, graph neural networks (GNNs) have been applied to scheduling jobs on clusters, outperforming hand-crafted heuristics. Despite their impressive performance, concerns remain about whether these GNN-based job schedulers meet users' expectations with respect to other important properties, such as strategy-proofness, sharing incentive, and stability. In this work, we consider the formal verification of GNN-based job schedulers. We address several domain-specific challenges, such as networks that are deeper and specifications that are richer than those encountered when verifying image and NLP classifiers. We develop Vegas, the first general framework for verifying both single-step and multi-step properties of these schedulers, based on carefully designed algorithms that combine abstraction, refinement, solvers, and proof transfer. Our experimental results show that Vegas achieves significant speedups over previous methods when verifying important properties of GNN-based schedulers.
Deep neural networks have been shown to be vulnerable to adversarial attacks that perturb inputs based on semantic features. Existing robustness analyzers can reason about semantic-feature neighborhoods to increase the reliability of networks. However, despite the significant progress of these techniques, they still struggle to scale to deep networks and large neighborhoods. In this work, we introduce VeeP, an active-learning approach that splits the verification process into a series of smaller verification steps, each of which is submitted to an existing robustness analyzer. The key idea is to predict the next optimal step based on the previous steps. The optimal step is predicted by estimating the certification velocity and sensitivity via parametric regression. We evaluate VeeP on MNIST, Fashion-MNIST, CIFAR-10, and ImageNet and show that it can analyze neighborhoods of various features: brightness, contrast, hue, saturation, and lightness. We show that, on average, given a 90-minute timeout, VeeP verifies 96% of the maximally certifiable neighborhoods within 29 minutes, while existing splitting approaches verify, on average, 73% of the maximally certifiable neighborhoods within 58 minutes.
Neural networks are increasingly relied upon as components of complex safety-critical systems such as autonomous cars. There is high demand for tools and methods that embed neural network verification in a larger verification cycle. However, neural network verification is difficult due to the wide range of verification properties of interest, each of which can typically only be verified in a dedicated solver. In this paper, we show how Imandra, a functional programming language originally designed for the verification, validation, and simulation of financial infrastructure, can provide a holistic infrastructure for neural network verification. We develop a novel library, CheckINN, which formalizes neural networks in Imandra and covers different important facets of neural network verification.
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on a wide variety of tasks. However, recent studies have shown that DNNs are vulnerable to adversarial attacks, which raises great concerns when deploying these models in safety-critical applications such as autonomous driving. Different defense approaches have been proposed, including: a) empirical defenses, which can usually be adaptively attacked again without providing robustness certification; and b) certifiably robust approaches, which consist of robustness verification, providing a lower bound on the robust accuracy against any attack under certain conditions, and corresponding robust training approaches. In this paper, we systematize certifiably robust approaches as well as the related practical and theoretical implications and findings. We also provide the first comprehensive benchmark of existing robustness verification and training approaches on different datasets. In particular, we 1) provide a taxonomy for robustness verification and training approaches and summarize the methodologies of representative algorithms, 2) reveal the characteristics, strengths, limitations, and fundamental connections among these approaches, 3) discuss the current research progress, theoretical barriers, main challenges, and future directions of certifiably robust approaches for DNNs, and 4) provide an open-source unified platform to evaluate over 20 representative certifiably robust approaches for a wide range of DNNs.
This report summarizes the 3rd International Verification of Neural Networks Competition (VNN-COMP 2022), held as a part of the 5th Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), which was collocated with the 34th International Conference on Computer-Aided Verification (CAV). VNN-COMP is held annually to facilitate the fair and objective comparison of state-of-the-art neural network verification tools, encourage the standardization of tool interfaces, and bring together the neural network verification community. To this end, standardized formats for networks (ONNX) and specification (VNN-LIB) were defined, tools were evaluated on equal-cost hardware (using an automatic evaluation pipeline based on AWS instances), and tool parameters were chosen by the participants before the final test sets were made public. In the 2022 iteration, 11 teams participated on a diverse set of 12 scored benchmarks. This report summarizes the rules, benchmarks, participating tools, results, and lessons learned from this iteration of this competition.
We consider the problem of certifying the robustness of deep neural networks against real-world distribution shifts. To do so, we bridge the gap between hand-crafted specifications and realistic deployment settings by proposing a novel neuro-symbolic verification framework. A unique challenge arising in this setting is that existing verifiers cannot tightly approximate sigmoid activations, which are fundamental to many state-of-the-art generative models. To address this challenge, we propose a general meta-algorithm for handling sigmoid activations that leverages the classical notion of counterexample-guided abstraction refinement. The key idea is to "lazily" refine the abstraction of the sigmoid function to exclude spurious counterexamples found in previous abstractions, thus guaranteeing progress in the verification process while keeping the state space small. Experiments on the MNIST and CIFAR-10 datasets show that our framework significantly outperforms existing methods on a range of challenging distribution shifts.
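A small sketch of the lazy-refinement idea under simplifying assumptions: the sigmoid is over-approximated on each input interval by a box derived from its monotonicity, and a spurious counterexample triggers a split of the interval that contains it, which excludes the point from the refined abstraction. The paper's meta-algorithm drives a full verifier; the class and method names here are illustrative.

```python
# Hedged sketch of counterexample-guided refinement of a sigmoid abstraction.
import math
from bisect import bisect_right, insort

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class SigmoidAbstraction:
    def __init__(self, lo, hi):
        self.cuts = [lo, hi]                 # sorted breakpoints of the partition

    def admits(self, x, y, tol=1e-6):
        """Is (x, y) inside the current piecewise box over-approximation?
        On [l, u], sigmoid(l) <= sigmoid(x) <= sigmoid(u) by monotonicity."""
        i = bisect_right(self.cuts, x) - 1
        l = self.cuts[max(i, 0)]
        u = self.cuts[min(i + 1, len(self.cuts) - 1)]
        return sigmoid(l) - tol <= y <= sigmoid(u) + tol

    def refine(self, x_star):
        """Split the interval containing a spurious counterexample."""
        insort(self.cuts, x_star)

# Usage: a spurious point admitted by the coarse abstraction is excluded
# after one refinement step.
abstraction = SigmoidAbstraction(-4.0, 4.0)
x_star, y_star = 1.0, 0.1                    # y* != sigmoid(1.0) ~= 0.73
print(abstraction.admits(x_star, y_star))    # True: spurious under coarse bounds
abstraction.refine(x_star)
print(abstraction.admits(x_star, y_star))    # False: excluded after refinement
```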
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best-effort and have been shown to be vulnerable to sophisticated attacks. Recently, a set of certified defenses has been introduced, which provide guarantees of robustness to norm-bounded attacks. However, these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism, that provides a rigorous, generic, and flexible foundation for defense.
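A hedged sketch of the mechanism as described in the abstract: a Gaussian noise layer randomizes the prediction, the expected scores are estimated by Monte Carlo sampling, and a prediction is certified when the top expected score dominates the runner-up even under the worst-case shift permitted by an (eps, delta) differential-privacy guarantee. The model, the noise calibration, and the sampling budget are simplified placeholders, and the robustness check follows the paper's expected-output condition as I understand it.

```python
# Hedged sketch: noise layer + Monte Carlo expectation + DP-based robustness check.
import math
import torch

def expected_scores(model, x, sigma, n_draws=300):
    """Monte Carlo estimate of E[softmax(model(x + noise))]."""
    outs = []
    for _ in range(n_draws):
        noisy = x + sigma * torch.randn_like(x)       # Gaussian noise layer
        outs.append(torch.softmax(model(noisy), dim=-1))
    return torch.stack(outs).mean(dim=0)

def is_certified(scores, eps, delta):
    """Top class must dominate the runner-up after the worst-case shift of the
    expectations allowed by the (eps, delta)-DP guarantee."""
    top = scores.argmax(dim=-1)
    best = scores.gather(-1, top.unsqueeze(-1)).squeeze(-1)
    others = scores.clone()
    others.scatter_(-1, top.unsqueeze(-1), float('-inf'))
    second = others.max(dim=-1).values
    return best > math.exp(2 * eps) * second + (1 + math.exp(eps)) * delta
```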
The robustness of deep neural networks is crucial to modern AI-enabled systems and should be formally verified. Sigmoid-like neural networks have been adopted in a wide range of applications. Due to their non-linearity, sigmoid-like activation functions are usually over-approximated for efficient verification, which inevitably introduces imprecision. Considerable effort has been devoted to finding so-called tighter approximations that yield more precise verification results. However, existing definitions of tightness are heuristic and lack a theoretical foundation. We conduct a thorough empirical analysis of existing neuron-wise characterizations of tightness and reveal that they are superior only on specific neural networks. We then introduce the notion of network-wise tightness as a unified definition of tightness and show that computing network-wise tightness is a complex non-convex optimization problem. We bypass this complexity from different perspectives via two efficient, provably tightest approximations. The results demonstrate the promising performance of our approaches over the state of the art: (i) achieving up to a 251.28% improvement in certified lower robustness bounds; and (ii) yielding notably more precise verification results on convolutional networks.
While deep neural networks (DNNs) have demonstrated impressive performance in solving many challenging tasks, they are limited to resource-constrained devices owing to their demand for computation power and storage space. Quantization is one of the most promising techniques to address this issue by quantizing the weights and/or activation tensors of a DNN into lower bit-width fixed-point numbers. While quantization has been empirically shown to introduce minor accuracy loss, it lacks formal guarantees on that, especially when the resulting quantized neural networks (QNNs) are deployed in safety-critical applications. A majority of existing verification methods focus exclusively on individual neural networks, either DNNs or QNNs. While promising attempts have been made to verify the quantization error bound between DNNs and their quantized counterparts, they are not complete and, more importantly, do not support fully quantized neural networks, i.e., they only handle networks whose weights are quantized. To fill this gap, in this work, we propose a quantization error bound verification method (QEBVerif), where both weights and activation tensors are quantized. QEBVerif consists of two analyses: a differential reachability analysis (DRA) and a mixed-integer linear programming (MILP) based verification method. DRA performs difference analysis between the DNN and its quantized counterpart layer-by-layer to efficiently compute a tight quantization error interval. If it fails to prove the error bound, then we encode the verification problem into an equivalent MILP problem which can be solved by off-the-shelf solvers. Thus, QEBVerif is sound, complete, and arguably efficient. We implement QEBVerif in a tool and conduct extensive experiments, showing its effectiveness and efficiency.
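To illustrate the differential reachability analysis, here is a simplified sketch that propagates interval bounds on the activation difference between a network and its quantized counterpart through an affine layer and a ReLU. The rounding of activation tensors, which QEBVerif also tracks, is omitted, and the function names are illustrative assumptions.

```python
# Hedged sketch of a difference (differential reachability) analysis between
# a network and its quantized counterpart, layer by layer.
import numpy as np

def interval_matmul(W, lb, ub):
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lb + Wn @ ub, Wp @ ub + Wn @ lb

def diff_affine(x_lb, x_ub, d_lb, d_ub, W, b, Wq, bq):
    """Bounds on (Wq(x+d)+bq) - (Wx+b) = (Wq-W)x + Wq d + (bq-b),
    given interval bounds on the original activation x and the difference d."""
    t1_lb, t1_ub = interval_matmul(Wq - W, x_lb, x_ub)
    t2_lb, t2_ub = interval_matmul(Wq, d_lb, d_ub)
    return t1_lb + t2_lb + (bq - b), t1_ub + t2_ub + (bq - b)

def diff_relu(d_lb, d_ub):
    """ReLU is monotone and 1-Lipschitz, so the post-activation difference
    stays within [min(d_lb, 0), max(d_ub, 0)]."""
    return np.minimum(d_lb, 0.0), np.maximum(d_ub, 0.0)
```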
We develop and study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence (AI) systems including deep learning neural networks. In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself. Such a stealth attack could be conducted by a mischievous, corrupt or disgruntled member of a software development team. It could also be made by those wishing to exploit a "democratization of AI" agenda, where network architectures and trained parameter sets are shared publicly. We develop a range of new implementable attack strategies with accompanying analysis, showing that with high probability a stealth attack can be made transparent, in the sense that system performance is unchanged on a fixed validation set which is unknown to the attacker, while evoking any desired output on a trigger input of interest. The attacker only needs to have estimates of the size of the validation set and the spread of the AI's relevant latent space. In the case of deep learning neural networks, we show that a one-neuron attack is possible - a modification to the weights and bias associated with a single neuron - revealing a vulnerability arising from over-parameterization. We illustrate these concepts using state-of-the-art architectures on two standard image data sets. Guided by the theory and computational results, we also propose strategies to guard against stealth attacks.
Deep neural networks (DNNs) have become the technology of choice for tackling a variety of complex tasks. However, as highlighted by many recent studies, even imperceptible perturbations to correctly classified inputs can lead DNNs to misclassify them. This renders DNNs vulnerable to strategic input manipulation by attackers, as well as oversensitive to environmental noise. To mitigate this phenomenon, practitioners apply joint classification via ensembles of DNNs. By aggregating the classification outputs of different individual DNNs on the same input, ensemble-based classification reduces the risk of misclassification due to the specific realization of the stochastic training process of any single DNN. However, the effectiveness of a DNN ensemble depends heavily on its members not erring simultaneously on many different inputs. In this case study, we leverage recent advances in DNN verification to devise a methodology for identifying ensemble compositions that are less prone to simultaneous errors, even when the input is adversarially perturbed, resulting in more robust ensemble-based classification. Our proposed framework uses a DNN verifier as a backend and includes heuristics that help reduce the high complexity of directly verifying ensembles. More broadly, our work puts forth a novel, universal objective for formal verification that could potentially improve the robustness of real-world, deep-learning-based systems across a variety of application domains.
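A hedged sketch of the selection idea, with the DNN verifier abstracted behind a callback: member pairs are checked for verified joint failures, and the candidate composition with the fewest such pairs is chosen. The interface and the scoring heuristic are illustrative, not the paper's exact procedure.

```python
# Hedged sketch: choosing an ensemble composition that minimizes verified
# joint-failure pairs, using a verification backend passed as a callback.
from itertools import combinations

def joint_error_pairs(members, can_err_together):
    """Pairs (i, j) for which the verifier finds a perturbed input that both
    member networks misclassify."""
    return [(i, j) for i, j in combinations(range(len(members)), 2)
            if can_err_together(members[i], members[j])]

def pick_ensemble(candidate_compositions, members, can_err_together):
    """Choose the composition with the fewest verified joint-failure pairs."""
    def score(comp):
        subset = [members[i] for i in comp]
        return len(joint_error_pairs(subset, can_err_together))
    return min(candidate_compositions, key=score)
```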
With deep learning increasingly being applied in mission-critical systems, there is a growing need to formally guarantee the behavior of neural networks. Indeed, many approaches for verifying neural networks have been proposed recently, but these generally struggle with limited scalability or insufficient accuracy. A key component in many state-of-the-art verification schemes is the computation of lower and upper bounds on the values that neurons in the network can attain for a specific input domain; the tighter these bounds, the more likely verification is to succeed. Many common algorithms for computing these bounds are variations of the symbolic-bound-propagation method, and among these, approaches that utilize a process called back-substitution are particularly successful. In this paper, we present an approach for making back-substitution produce tighter bounds. To achieve this, we formulate and then minimize the imprecision errors incurred during back-substitution. Our technique is general, in the sense that it can be integrated into numerous existing symbolic-bound-propagation techniques with only minor modifications. We implement our approach as a proof-of-concept tool and obtain favorable results compared to state-of-the-art verifiers that perform back-substitution.
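The sketch below illustrates, under simplifying assumptions, why back-substitution tightens bounds: for two stacked affine layers, concretising after each layer (naive interval propagation) is compared with substituting the symbolic expression back to the input and concretising only once. The ReLU relaxation handled in the paper is omitted, and the weights are arbitrary.

```python
# Hedged sketch: naive interval propagation vs. back-substitution on two
# affine layers over an input box.
import numpy as np

def concretise(A, c, lb, ub):
    """Bounds of A @ x + c for x in [lb, ub]."""
    Ap, An = np.maximum(A, 0.0), np.minimum(A, 0.0)
    return Ap @ lb + An @ ub + c, Ap @ ub + An @ lb + c

lb, ub = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
W1, b1 = np.array([[1.0, 1.0], [1.0, -1.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, -1.0], [0.0, 1.0]]), np.zeros(2)

# Naive: concretise after each layer.
l1, u1 = concretise(W1, b1, lb, ub)
naive_l, naive_u = concretise(W2, b2, l1, u1)

# Back-substitution: compose symbolically, concretise once at the input.
back_l, back_u = concretise(W2 @ W1, W2 @ b1 + b2, lb, ub)

print(naive_l, naive_u)   # first output bounded by [-4, 4]
print(back_l, back_u)     # tighter: W2 @ W1 = [[0, 2], [1, -1]] gives [-2, 2]
```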
In the past few years, neural architecture search (NAS) has become an increasingly important tool within the deep learning community. Despite the many recent successes of NAS, however, most existing approaches operate within highly structured design spaces, and hence explore only a small fraction of the full search space of neural architectures while also requiring significant manual effort from domain experts. In this work, we develop techniques that enable efficient NAS in a significantly larger design space. To accomplish this, we propose to perform NAS in an abstract search space of program properties. Our key insights are as follows: (1) the abstract search space is significantly smaller than the original search space, and (2) architectures with similar program properties also have similar performance; thus, we can search more efficiently in the abstract search space. To enable this approach, we also propose a novel efficient synthesis procedure, which accepts a set of promising program properties, and returns a satisfying neural architecture. We implement our approach, $\alpha$NAS, within an evolutionary framework, where the mutations are guided by the program properties. Starting with a ResNet-34 model, $\alpha$NAS produces a model with slightly improved accuracy on CIFAR-10 but 96% fewer parameters. On ImageNet, $\alpha$NAS is able to improve over Vision Transformer (30% fewer FLOPS and parameters), ResNet-50 (23% fewer FLOPS, 14% fewer parameters), and EfficientNet (7% fewer FLOPS and parameters) without any degradation in accuracy.
Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the field of reliable machine learning. However, despite substantial effort, progress on this key challenge has stagnated, calling into question whether interval arithmetic is a viable path forward. In this paper, we present two fundamental results on the limitations of interval arithmetic for analyzing neural networks. Our main impossibility theorem states that for any neural network classifying just three points, there is a valid specification over these points that interval analysis cannot prove. Furthermore, in the restricted case of one-hidden-layer neural networks, we show a stronger impossibility result: given any radius $\alpha < 1$, there is a set of $O(\alpha^{-1})$ points with robust radius $\alpha$, pairwise separated by distance $2$, such that no one-hidden-layer network can be proven, via interval analysis, to classify them robustly.
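As a standalone illustration (not taken from the paper) of why interval arithmetic loses precision, the snippet below shows the classic dependency problem: interval evaluation of x - x returns a wide interval even though the expression is identically zero.

```python
# Illustration of the dependency problem underlying interval imprecision.
def interval_sub(a, b):
    """[a_lo, a_hi] - [b_lo, b_hi]"""
    return (a[0] - b[1], a[1] - b[0])

x = (0.0, 1.0)
print(interval_sub(x, x))   # (-1.0, 1.0), although x - x == 0 for every x
```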