As a new programming paradigm, deep neural networks (DNNs) are increasingly being deployed in practice, but the lack of robustness hinders their application in safety-critical domains. While there are techniques for formally verifying DNNs, they are limited in scalability and accuracy. In this paper, we propose a novel abstraction approach for scalable yet precise DNN verification. Specifically, we propose a novel abstraction that reduces the size of a DNN by over-approximation. The result of verifying the abstract DNN is always conclusive as long as no spurious counterexample is reported. To eliminate the spurious counterexamples introduced by abstraction, we propose a novel counterexample-guided refinement that refines the abstract DNN to exclude a given spurious counterexample while still over-approximating the original one. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For demonstration, we implement our approach using two promising, exact tools, Marabou and Planet, as the underlying verification engines, and evaluate it on the widely used benchmarks ACAS Xu, MNIST, and CIFAR-10. The results show that our approach can boost their performance by solving more verification problems and reducing verification time by up to 86.3% and 78.0%, respectively. Compared with the most relevant abstraction-based approach, our approach is 11.6-26.6 times faster.
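At a high level, the soundness argument behind such over-approximation-based verification can be summarized as follows (a generic formulation given only for illustration, not this paper's specific abstraction), where $N$ is the original DNN, $\tilde{N}$ the abstract DNN, $X$ the input region, $P$ the property, and $\mathcal{B}(\tilde{N})$ the set of input-output behaviors of $\tilde{N}$:

\[
\bigl(\forall x \in X:\ (x, N(x)) \in \mathcal{B}(\tilde{N})\bigr)
\;\Longrightarrow\;
\bigl(\tilde{N} \models P \;\Rightarrow\; N \models P\bigr).
\]

Consequently, a "safe" answer on $\tilde{N}$ transfers to $N$, whereas a counterexample $\tilde{x}$ found on $\tilde{N}$ is spurious iff $N(\tilde{x})$ still satisfies $P$; in that case the abstraction is refined to exclude $\tilde{x}$ while remaining an over-approximation of $N$, and verification is retried.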
Convolutional neural networks have gained vast popularity due to their excellent performance in the fields of computer vision, image processing, and others. Unfortunately, it is now well known that convolutional networks often produce erroneous results -- for example, small perturbations to the inputs of these networks can cause severe classification errors. Numerous verification approaches have been proposed in recent years to prove the absence of such errors, but these are typically geared towards fully connected networks and suffer from exacerbated scalability issues when applied to convolutional networks. To address this gap, we present here the CNN-ABS framework, which is specifically aimed at verifying convolutional networks. At the core of CNN-ABS is an abstraction-refinement technique, which simplifies the verification problem by removing convolutional connections in a way that creates an over-approximation of the original problem; if the resulting problem becomes too abstract, it restores these connections. CNN-ABS is designed to use existing verification engines as a backend, and our evaluation demonstrates that it can significantly boost the performance of a state-of-the-art DNN verification engine, reducing runtime by 15.7% on average.
With the increasing integration of neural networks as components in mission-critical systems, there is a growing need to ensure that they satisfy various safety and liveness requirements. In recent years, numerous sound and complete verification methods have been proposed, but these typically suffer from severe scalability limitations. Recent work has proposed enhancing such verification techniques with abstraction-refinement capabilities, which have been shown to boost scalability: instead of verifying a large and complex network, the verifier constructs and then verifies a much smaller network, whose correctness implies the correctness of the original network. A shortcoming of such a scheme is that if verifying the smaller network fails, the verifier needs to perform a refinement step that increases the size of the network being verified, and then start verifying the new network from scratch -- effectively "wasting" its earlier work on verifying the smaller network. In this paper, we present an enhancement to abstraction-based verification of neural networks based on residual reasoning: the process of using information acquired while verifying an abstract network in order to expedite the verification of a refined network. In essence, the method allows the verifier to store information about parts of the search space in which correct behavior has already been established, and lets it focus on areas where bugs might still be discovered. We implemented our approach as an extension to the Marabou verifier and obtained promising results.
The robustness of deep neural networks is crucial for modern AI-enabled systems and should be formally verified. Sigmoid-like neural networks have been adopted in a wide range of applications. Due to their non-linearity, sigmoid-like activation functions are usually over-approximated for efficient verification, which inevitably introduces imprecision. Considerable effort has been devoted to finding so-called tighter approximations that yield more precise verification results. However, existing definitions of tightness are heuristic and lack theoretical foundations. We conduct a thorough empirical analysis of existing neuron-wise characterizations of tightness and reveal that they are superior only on specific neural networks. We then introduce the notion of network-wise tightness as a unified definition of tightness and show that computing network-wise tightness is a complex non-convex optimization problem. We circumvent this complexity from different perspectives via two efficient approximations that are the tightest of their kind. The results demonstrate that our approach achieves promising performance over the state of the art: (i) up to a 251.28% improvement in certified lower robustness bounds; and (ii) notably more precise verification results on convolutional networks.
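For context, a typical sound linear relaxation of the sigmoid $\sigma(x) = 1/(1+e^{-x})$ over an interval $[l, u]$ (obtained from a preceding bound analysis) sandwiches it between two parallel affine functions; this is one classical construction, not necessarily one of the tightest approximations studied in the paper:

\[
\lambda = \min\bigl(\sigma'(l), \sigma'(u)\bigr), \qquad
\sigma(l) + \lambda\,(x - l) \;\le\; \sigma(x) \;\le\; \sigma(u) + \lambda\,(x - u)
\quad \text{for all } x \in [l, u].
\]

Soundness follows because $\sigma'$ is unimodal, so $\sigma'(x) \ge \lambda$ on $[l, u]$; different choices of slopes and intercepts trade precision in different ways, which is exactly what a tightness definition has to rank.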
While deep neural networks (DNNs) have demonstrated impressive performance in solving many challenging tasks, they are limited to resource-constrained devices owing to their demand for computation power and storage space. Quantization is one of the most promising techniques to address this issue by quantizing the weights and/or activation tensors of a DNN into lower bit-width fixed-point numbers. While quantization has been empirically shown to introduce only minor accuracy loss, it lacks formal guarantees on that, especially when the resulting quantized neural networks (QNNs) are deployed in safety-critical applications. A majority of existing verification methods focus exclusively on individual neural networks, either DNNs or QNNs. While promising attempts have been made to verify the quantization error bound between DNNs and their quantized counterparts, they are not complete and, more importantly, do not support fully quantized neural networks, i.e., they only handle the case where the weights alone are quantized. To fill this gap, in this work we propose a quantization error bound verification method (QEBVerif), where both weights and activation tensors are quantized. QEBVerif consists of two analyses: a differential reachability analysis (DRA) and a mixed-integer linear programming (MILP) based verification method. DRA performs difference analysis between the DNN and its quantized counterpart layer-by-layer to efficiently compute a tight quantization error interval. If it fails to prove the error bound, we encode the verification problem into an equivalent MILP problem which can be solved by off-the-shelf solvers. Thus, QEBVerif is sound, complete, and arguably efficient. We implement QEBVerif in a tool and conduct extensive experiments, showing its effectiveness and efficiency.
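To give a flavor of the MILP side, the standard mixed-integer encoding of a single ReLU neuron $y = \max(x, 0)$, using its concrete pre-activation bounds $l < 0 < u$ as big-M constants, is shown below; QEBVerif's actual encoding additionally has to model fixed-point rounding, which is omitted here:

\[
y \ge 0, \qquad y \ge x, \qquad y \le u \cdot a, \qquad y \le x - l\,(1 - a), \qquad a \in \{0, 1\},
\]

where $a = 1$ forces the active phase ($y = x$) and $a = 0$ the inactive phase ($y = 0$); an off-the-shelf MILP solver can then decide properties of the encoded network exactly.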
Recently, graph neural networks (GNNs) have been applied to scheduling jobs over clusters, performing better than hand-crafted heuristics. Despite their impressive performance, there are concerns over whether these GNN-based job schedulers meet users' expectations regarding other important properties, such as strategy-proofness, sharing incentive, and stability. In this work, we consider the formal verification of GNN-based job schedulers. We address several domain-specific challenges, such as networks that are deeper and specifications that are richer than those encountered when verifying image and NLP classifiers. We develop Vegas, the first general framework for verifying both single-step and multi-step properties of these schedulers, based on carefully designed algorithms that combine abstraction, refinement, solvers, and proof transfer. Our experimental results show that Vegas achieves significant speed-ups when verifying important properties of a GNN-based scheduler compared with previous methods.
Neural networks have been widely applied in security applications such as spam and phishing detection, intrusion prevention, and malware detection. However, such black-box methods often suffer from uncertainty and poor explainability in applications. Moreover, neural networks themselves are often vulnerable to adversarial attacks. For these reasons, there is a high demand for trustworthy and rigorous methods to verify the robustness of neural network models. Adversarial robustness, which concerns the reliability of a neural network when dealing with maliciously manipulated inputs, is one of the hottest topics in security and machine learning. In this work, we survey the existing literature on adversarial robustness verification of neural networks and collect 39 diverse research works across the machine learning, security, and software engineering domains. We systematically analyze their approaches, including how robustness is formulated, which verification techniques are used, and the strengths and limitations of each technique. We provide a taxonomy from a formal verification perspective for a comprehensive understanding of the topic. We classify the existing techniques based on property specification, problem reduction, and reasoning strategies. We also demonstrate representative techniques applied in existing studies using a sample model. Finally, we discuss open questions for future research.
With the increasing application of deep learning in mission-critical systems, there is a growing need to obtain formal guarantees about the behavior of neural networks. Indeed, many approaches for verifying neural networks have recently been proposed, but these typically struggle with limited scalability or insufficient precision. A key component of many state-of-the-art verification schemes is computing lower and upper bounds on the values that neurons in the network can take for a specific input domain -- and the tighter these bounds, the more likely the verification is to succeed. Many common algorithms for computing these bounds are variants of symbolic-bound propagation; among them, approaches that utilize a process called back-substitution are particularly successful. In this paper, we present a method for making back-substitution produce tighter bounds. To achieve this, we formulate and then minimize the imprecision errors incurred during back-substitution. Our technique is general, in the sense that it can be integrated into many existing symbolic-bound propagation techniques with only minor modifications. We implemented our method as a proof-of-concept tool and obtained favorable results compared with state-of-the-art verifiers that perform back-substitution.
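For readers unfamiliar with the technique, a generic back-substitution step works roughly as follows (the standard formulation; the paper's contribution lies in how the relaxation errors below are chosen and minimized). Suppose each layer $k$ admits elementwise affine relaxations $L_k(x^{(k-1)}) \le x^{(k)} \le U_k(x^{(k-1)})$. To upper-bound a linear form $c^\top x^{(K)}$, one substitutes $U_K$ where $c$ is positive and $L_K$ where it is negative, obtaining a new linear form over $x^{(K-1)}$; repeating this down to the input box $x^{(0)} \in [l, u]$ yields a concrete bound:

\[
c^\top x^{(K)} \;\le\; \hat{c}^{\top} x^{(0)} + \hat{d}
\;\le\; \hat{c}_+^{\top} u + \hat{c}_-^{\top} l + \hat{d},
\qquad \hat{c}_+ = \max(\hat{c}, 0),\; \hat{c}_- = \min(\hat{c}, 0).
\]

Each substitution through a non-linear layer introduces some slack, and the tighter each affine relaxation, the tighter the final bound.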
We consider the problem of certifying the robustness of deep neural networks against real-world distribution shifts. To do so, we bridge the gap between hand-crafted specifications and realistic deployment settings by proposing a novel neuro-symbolic verification framework. A unique challenge arising in this setting is that existing verifiers cannot tightly approximate sigmoid activations, which are crucial for many state-of-the-art generative models. To address this challenge, we propose a general meta-algorithm for handling sigmoid activations that leverages the classical notion of counterexample-guided abstraction refinement. The key idea is to "lazily" refine the abstraction of the sigmoid function to exclude spurious counterexamples found in the previous abstraction, thus guaranteeing progress in the verification process while keeping the state space small. Experiments on the MNIST and CIFAR-10 datasets show that our framework significantly outperforms existing methods on a range of challenging distribution shifts.
The adversarial input generation problem has become central in establishing the robustness and trustworthiness of deep neural nets, especially when they are used in safety-critical application domains such as autonomous vehicles and precision medicine. This is also practically challenging for multiple reasons: scalability is a common issue owing to large-sized networks, and the generated adversarial inputs often lack important qualities such as naturalness and output-impartiality. We relate this problem to the task of patching neural nets, i.e. applying small changes in some of the network's weights so that the modified net satisfies a given property. Intuitively, a patch can be used to produce an adversarial input because the effect of changing the weights can also be brought about by changing the inputs instead. This work presents a novel technique to patch neural networks and an innovative approach of using it to produce perturbations of inputs which are adversarial for the original net. We note that the proposed solution is significantly more effective than the prior state-of-the-art techniques.
Fairness of machine learning (ML) software has become a major concern in the recent past. Although recent research on testing and improving fairness has demonstrated impact on real-world software, providing fairness guarantees in practice is still lacking. Certification of ML models is challenging because of the complex decision-making process of the models. In this paper, we propose Fairify, an SMT-based approach to verify the individual fairness property of neural network (NN) models. Individual fairness ensures that any two similar individuals get similar treatment irrespective of their protected attributes, e.g., race, sex, age. Verifying this fairness property is hard because of the global checking and non-linear computation nodes in the NN. We propose a sound approach to make individual fairness verification tractable for developers. The key idea is that many neurons in the NN always remain inactive when a smaller part of the input domain is considered. So, Fairify leverages white-box access to the models in production and then applies formal-analysis-based pruning. Our approach adopts input partitioning and then prunes the NN for each partition to provide a fairness certification or a counterexample. We leverage interval arithmetic and activation heuristics of the neurons to perform the pruning as necessary. We evaluated Fairify on 25 real-world neural networks collected from four different sources, and demonstrated its effectiveness, scalability, and performance over baselines and closely related work. Fairify is also configurable based on the domain and size of the NN. Our novel formulation of the problem can answer targeted verification queries with relaxations and counterexamples, which has practical implications.
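The pruning idea can be illustrated with a small interval-arithmetic sketch (a simplified, hypothetical snippet written for illustration, not Fairify's actual code): on a given input partition, ReLU neurons whose pre-activation upper bound is non-positive are provably inactive and can be pruned, while those whose lower bound is non-negative behave purely linearly.

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Interval bounds of W x + b over the input box [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def classify_relu_neurons(W, b, lo, hi):
    """Label each ReLU neuron of one layer on the given partition:
    'inactive' (always outputs 0, prunable), 'active' (behaves as identity),
    or 'unstable' (must be kept together with its non-linearity)."""
    pre_lo, pre_hi = affine_bounds(W, b, lo, hi)
    labels = np.full(len(b), "unstable", dtype=object)
    labels[pre_hi <= 0] = "inactive"
    labels[pre_lo >= 0] = "active"
    return labels

# Toy example: one layer analyzed over the partition [0, 1]^3.
W = np.array([[1.0, -2.0, 0.5], [-1.0, -1.0, -1.0]])
b = np.array([-3.0, 0.5])
print(classify_relu_neurons(W, b, np.zeros(3), np.ones(3)))
# -> ['inactive' 'unstable']
```

Pruned and linearized neurons shrink the SMT query for each partition, which is what makes per-partition verification tractable.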
As neural networks (NNs) are increasingly introduced into safety-critical domains, there is a growing need to formally verify NNs before deployment. In this work, we focus on the formal verification problem of NN equivalence, which aims to prove that two NNs (e.g., an original network and a compressed version) exhibit equivalent behavior. Two approaches have been proposed for this: mixed-integer linear programming and interval propagation. While the first approach lacks scalability, the latter is only suitable for structurally similar NNs with small weight changes. The contribution of our paper is fourfold. First, we present a theoretical result showing that the epsilon-equivalence problem is coNP-complete. Second, we extend Tran et al.'s single-NN geometric path enumeration algorithm to the setting of multiple NNs. In a third step, we implement the extended algorithm for equivalence verification and evaluate the optimizations necessary for its practical use. Finally, we perform a comparative evaluation showing that our approach outperforms the previous state of the art, both for equivalence verification and for counterexample finding.
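One common formalization of the equivalence property referred to here is the following (a generic definition; the paper's exact notion may differ in the choice of norm or in using a top-1 variant):

\[
f \text{ and } g \text{ are } \varepsilon\text{-equivalent on } X
\quad\Longleftrightarrow\quad
\forall x \in X:\; \lVert f(x) - g(x) \rVert \le \varepsilon ,
\]

with the top-1 variant instead requiring $\arg\max_i f(x)_i = \arg\max_i g(x)_i$ for all $x \in X$.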
Deep neural networks have been shown to be vulnerable to adversarial attacks that perturb inputs based on semantic features. Existing robustness analyzers can reason about semantic feature neighborhoods to increase the reliability of networks. However, despite the significant progress made by these techniques, they still struggle to scale to deep networks and large neighborhoods. In this work, we introduce VeeP, an active-learning approach that splits the verification process into a series of smaller verification steps, each of which is submitted to an existing robustness analyzer. The key idea is to build on previous steps in order to predict the next optimal step. The optimal step is predicted by estimating the certification velocity and sensitivity via parametric regression. We evaluate VeeP on MNIST, Fashion-MNIST, CIFAR-10, and ImageNet and show that it can analyze neighborhoods of various features: brightness, contrast, hue, saturation, and lightness. We show that, on average, given a 90-minute timeout, VeeP verifies 96% of the maximally certifiable neighborhoods within 29 minutes, whereas existing splitting approaches verify, on average, 73% of the maximally certifiable neighborhoods within 58 minutes.
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
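At its core, the piecewise-linear constraint that must be handled on top of the simplex-based linear reasoning is, for each ReLU pair $(x, y)$:

\[
y = \mathrm{ReLU}(x) = \max(0, x)
\quad\Longleftrightarrow\quad
(x \le 0 \,\wedge\, y = 0) \;\vee\; (x \ge 0 \,\wedge\, y = x),
\]

so the solver treats all other constraints as ordinary linear ones and resolves each ReLU's disjunction lazily, falling back to a case split on the two phases only when needed.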
We present AI2, the first sound and scalable analyzer for deep neural networks. Based on overapproximation, AI2 can automatically prove safety properties (e.g., robustness) of realistic neural networks (e.g., convolutional neural networks). The key insight behind AI2 is to phrase reasoning about safety and robustness of neural networks in terms of classic abstract interpretation, enabling us to leverage decades of advances in that area. Concretely, we introduce abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit activations (ReLU), as well as max pooling layers. This allows us to handle real-world neural networks, which are often built out of those types of layers. We present a complete implementation of AI2 together with an extensive evaluation on 20 neural networks. Our results demonstrate that: (i) AI2 is precise enough to prove useful specifications (e.g., robustness), (ii) AI2 can be used to certify the effectiveness of state-of-the-art defenses for neural networks, (iii) AI2 is significantly faster than existing analyzers based on symbolic analysis, which often take hours to verify simple fully connected networks, and (iv) AI2 can handle deep convolutional networks, which are beyond the reach of existing methods.
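In the simplest abstract domain, the interval (box) domain, the abstract transformers mentioned above take a particularly readable form (shown here only as an illustration; AI2 also supports more precise domains such as zonotopes):

\[
\mathrm{ReLU}^{\#}([l, u]) = [\max(l, 0),\ \max(u, 0)], \qquad
\mathrm{MaxPool}^{\#}\bigl([l_1,u_1],\dots,[l_k,u_k]\bigr) = \bigl[\max_i l_i,\ \max_i u_i\bigr],
\]

and an affine layer $Wx + b$ maps the box $[l, u]$ to $[W_+ l + W_- u + b,\ W_+ u + W_- l + b]$, where $W_+$ and $W_-$ are the positive and negative parts of $W$.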
Deep neural networks (DNNs) are increasingly being employed in safety-critical systems, and there is an urgent need to guarantee their correctness. Consequently, the verification community has devised multiple techniques and tools for verifying DNNs. When DNN verifiers discover an input that triggers an error, that is easy to confirm; but when they report that no error exists, there is no way to ensure that the verification tool itself is not flawed. As multiple errors have already been observed in DNN verification tools, this calls the applicability of DNN verification into question. In this work, we present a novel mechanism for enhancing simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors. Our proof production is based on an efficient adaptation of the well-known Farkas' lemma, combined with mechanisms for handling piecewise-linear functions and numerical precision errors. As a proof of concept, we implemented our technique on top of the Marabou DNN verifier. Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases and requires only minimal overhead.
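The variant of Farkas' lemma underlying such unsatisfiability witnesses can be stated as follows (standard form; the paper's adaptation additionally handles ReLU case splits and numerical imprecision):

\[
Ax \le b \ \text{has no solution}
\quad\Longleftrightarrow\quad
\exists\, y \ge 0:\; y^{\top} A = 0 \ \wedge\ y^{\top} b < 0 .
\]

Checking the witness $y$ is easy: it only requires verifying non-negativity, one matrix-vector product, and one inner product, which is why such certificates enable an external, independent proof checker.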
Neural networks are increasingly applied in safety-critical domains; their verification is thus gaining importance. A large class of recent algorithms for proving input-output relations of feed-forward neural networks are based on linear relaxations and symbolic interval propagation. However, due to variable dependencies, the approximations deteriorate with increasing depth of the network. In this paper we present DPNeurifyFV, a novel branch-and-bound solver for ReLU networks with low-dimensional input space that is based on symbolic interval propagation with fresh variables and input-splitting. A new heuristic for choosing the fresh variables allows us to ameliorate the dependency problem, while our novel splitting heuristic, in combination with several other improvements, speeds up the branch-and-bound procedure. We evaluate our approach on the airborne collision avoidance networks ACAS Xu and demonstrate runtime improvements compared to state-of-the-art tools.
The tremendous progress of deep neural networks (DNNs) has led to state-of-the-art performance on a wide variety of tasks. However, recent studies have shown that DNNs are vulnerable to adversarial attacks, which raises serious concerns when deploying these models in safety-critical applications such as autonomous driving. Different defense approaches have been proposed, including: a) empirical defenses, which can usually be adaptively attacked again without providing robustness certification; and b) certifiably robust approaches, which consist of robustness verification, providing a lower bound on the robust accuracy against any attack under certain conditions, and corresponding robust training approaches. In this paper, we systematize certifiably robust approaches as well as the related practical and theoretical implications and findings. We also provide the first comprehensive benchmark of existing robustness verification and training approaches on different datasets. In particular, we 1) provide a taxonomy for robustness verification and training approaches and summarize the methodologies of representative algorithms, 2) reveal the characteristics, strengths, limitations, and fundamental connections among these approaches, 3) discuss the current research progress, theoretical barriers, main challenges, and future directions of certifiably robust approaches for DNNs, and 4) provide an open-sourced unified platform to evaluate more than 20 representative certifiably robust approaches on a wide range of DNNs.
This report summarizes the 3rd International Verification of Neural Networks Competition (VNN-COMP 2022), held as a part of the 5th Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), which was collocated with the 34th International Conference on Computer-Aided Verification (CAV). VNN-COMP is held annually to facilitate the fair and objective comparison of state-of-the-art neural network verification tools, encourage the standardization of tool interfaces, and bring together the neural network verification community. To this end, standardized formats for networks (ONNX) and specification (VNN-LIB) were defined, tools were evaluated on equal-cost hardware (using an automatic evaluation pipeline based on AWS instances), and tool parameters were chosen by the participants before the final test sets were made public. In the 2022 iteration, 11 teams participated on a diverse set of 12 scored benchmarks. This report summarizes the rules, benchmarks, participating tools, results, and lessons learned from this iteration of this competition.
In the past few years, neural architecture search (NAS) has become an increasingly important tool within the deep learning community. Despite the many recent successes of NAS, however, most existing approaches operate within highly structured design spaces, and hence explore only a small fraction of the full search space of neural architectures while also requiring significant manual effort from domain experts. In this work, we develop techniques that enable efficient NAS in a significantly larger design space. To accomplish this, we propose to perform NAS in an abstract search space of program properties. Our key insights are as follows: (1) the abstract search space is significantly smaller than the original search space, and (2) architectures with similar program properties also have similar performance; thus, we can search more efficiently in the abstract search space. To enable this approach, we also propose a novel efficient synthesis procedure, which accepts a set of promising program properties, and returns a satisfying neural architecture. We implement our approach, $\alpha$NAS, within an evolutionary framework, where the mutations are guided by the program properties. Starting with a ResNet-34 model, $\alpha$NAS produces a model with slightly improved accuracy on CIFAR-10 but 96% fewer parameters. On ImageNet, $\alpha$NAS is able to improve over Vision Transformer (30% fewer FLOPS and parameters), ResNet-50 (23% fewer FLOPS, 14% fewer parameters), and EfficientNet (7% fewer FLOPS and parameters) without any degradation in accuracy.