Saliency methods have emerged as a popular tool to highlight features in an input deemed relevant for the prediction of a learned model. Several saliency methods have been proposed, often guided by visual appeal on image data. In this work, we propose an actionable methodology to evaluate what kinds of explanations a given method can and cannot provide. We find that reliance solely on visual assessment can be misleading. Through extensive experiments we show that some existing saliency methods are independent both of the model and of the data generating process. Consequently, methods that fail the proposed tests are inadequate for tasks that are sensitive to either data or model, such as finding outliers in the data, explaining the relationship between inputs and outputs that the model learned, and debugging the model. We interpret our findings through an analogy with edge detection in images, a technique that requires neither training data nor model. Theory in the case of a linear model and a single-layer convolutional neural network supports our experimental findings. All code to replicate our findings is available at https://goo.gl/hBmhDt. (We refer here to the broad category of visualization and attribution methods aimed at interpreting trained models; these methods are often used for interpreting deep neural networks, particularly on image data.)
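A minimal sketch of the model-parameter randomization test implied by these findings, using a toy PyTorch classifier and plain gradient saliency (both our stand-ins, not the paper's exact setup): if the saliency map barely changes after the weights are re-initialized, the method is insensitive to the model.

```python
import torch
import torch.nn as nn
from scipy.stats import spearmanr

def gradient_saliency(model, x, target):
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.abs().sum(dim=1)[0]      # (H, W), summed over channels

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
x = torch.randn(1, 3, 32, 32)
before = gradient_saliency(model, x, target=0)

# Randomize the learned parameters; a model-sensitive saliency method
# should now produce a very different map.
for m in model.modules():
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.normal_(m.weight, std=0.1)

after = gradient_saliency(model, x, target=0)
rho, _ = spearmanr(before.flatten().numpy(), after.flatten().numpy())
print(f"Rank correlation after weight randomization: {rho:.3f}")
```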
Interpretability methods for deep neural networks have mainly focused on the sensitivity of the class score with respect to the original or perturbed input, usually measured with actual or modified gradients. Some approaches also use a model-agnostic scheme to understand the rationale behind every prediction. In this paper, we argue and demonstrate that the local geometry of the model parameter space relative to the input can also be beneficial for improved post-hoc explanations. To achieve this goal, we introduce an interpretability method called "geometrically-guided integrated gradients" that builds on gradients computed along a linear path, in the fashion traditionally used in the integrated gradients method. However, instead of integrating gradient information, our method explores the model's dynamic behavior from multiple scaled versions of the input and captures the best attribution for each input. Through extensive experiments we demonstrate that the proposed method outperforms vanilla and integrated gradients in both subjective and quantitative evaluations. We also propose a "model perturbation" sanity check to complement the traditionally used "model randomization" test.
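For context, a minimal sketch of the vanilla integrated-gradients computation along a linear path that the method above builds on; the geometric guidance itself is not reproduced here, and the function name is ours.

```python
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    # Riemann approximation of the path integral of gradients from the
    # baseline to the input (the "multiple scaled versions" of the input).
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    path = baseline + alphas * (x - baseline)        # (steps, C, H, W)
    path.requires_grad_(True)
    scores = model(path)[:, target].sum()
    grads = torch.autograd.grad(scores, path)[0]
    return (x - baseline) * grads.mean(dim=0, keepdim=True)
```

The resulting attributions approximately sum to the score difference between the input and the baseline, the completeness property integrated gradients are known for.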
We investigate whether three types of post hoc model explanations--feature attribution, concept activation, and training point ranking--are effective for detecting a model's reliance on spurious signals in the training data. Specifically, we consider the scenario where the spurious signal to be detected is unknown, at test-time, to the user of the explanation method. We design an empirical methodology that uses semi-synthetic datasets along with pre-specified spurious artifacts to obtain models that verifiably rely on these spurious training signals. We then provide a suite of metrics that assess an explanation method's reliability for spurious signal detection under various conditions. We find that the post hoc explanation methods tested are ineffective when the spurious artifact is unknown at test-time, especially for non-visible artifacts such as a background blur. Further, we find that feature attribution methods are susceptible to erroneously indicating dependence on spurious signals even when the model being explained does not rely on spurious artifacts. This finding casts doubt on the utility of these approaches, in the hands of a practitioner, for detecting a model's reliance on spurious signals.
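A minimal sketch of how such a semi-synthetic dataset can be constructed, with patch size, location, and fill value as our assumptions: a fixed artifact is planted into every training image of one class, so the resulting model verifiably relies on the spurious signal.

```python
import torch

def add_spurious_patch(images, labels, target_class, size=6, value=1.0):
    # Plant a constant corner patch into all images of `target_class`;
    # this patch is the pre-specified spurious artifact.
    out = images.clone()
    hit = labels == target_class
    out[hit, :, :size, :size] = value
    return out
```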
With the rise of deep neural networks, the challenge of explaining the predictions of these networks has become increasingly recognized. While many methods for explaining the decisions of deep neural networks exist, there is currently no consensus on how to evaluate them. On the other hand, robustness is a popular topic in deep learning research; until recently, however, it has hardly been discussed in connection with explainability. In this tutorial, we first present gradient-based interpretability methods. These techniques use the gradient signal to assign the burden of the decision to the input features. We then discuss how gradient-based methods can be evaluated for robustness and the role that adversarial robustness plays in obtaining meaningful explanations. We also discuss the limitations of gradient-based methods. Finally, we present best practices and the properties that should be examined before choosing an explainability method. We conclude with directions for future research in the area where robustness and explainability converge.
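A minimal sketch of the kind of robustness probe such evaluations rely on, with the names and perturbation scale as our assumptions: compare a gradient saliency map on an input against the map on a slightly perturbed copy; large changes under tiny perturbations signal a fragile explanation.

```python
import torch

def saliency(model, x, target):
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.abs()

def saliency_stability(model, x, target, eps=1e-2):
    s_clean = saliency(model, x, target)
    s_noisy = saliency(model, x + eps * torch.randn_like(x), target)
    # Cosine similarity close to 1 means the explanation is stable.
    return torch.nn.functional.cosine_similarity(
        s_clean.flatten(), s_noisy.flatten(), dim=0)
```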
In this paper, we introduce a new problem within the growing literature on interpretability for convolutional neural networks (CNNs). While previous work has focused on how to interpret CNNs visually, we ask what it is that we actually care to interpret, that is, which layers and neurons are worth our attention. Given the enormous size of modern deep learning architectures, automated, quantitative methods are needed to rank the relative importance of neurons in order to answer this question. We propose a new statistical method for ranking the hidden neurons in any convolutional layer of a network. We define importance as the maximal correlation between an activation map and the class score. We provide different ways in which this method can be used for visualization purposes on MNIST and ImageNet, and show a real-world application of our method to air pollution prediction from street-level images.
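A minimal sketch of the ranking criterion, assuming mean-pooled activations and Pearson correlation as a simple stand-in for the paper's maximal-correlation statistic:

```python
import torch

def rank_channels(model, layer, inputs, target):
    acts = {}
    def hook(module, inp, out):
        acts["a"] = out.detach()               # (N, C, h, w) activation maps
    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        scores = model(inputs)[:, target]      # (N,) class scores
    handle.remove()
    pooled = acts["a"].mean(dim=(2, 3))        # (N, C) mean activation per channel
    # Pearson correlation of each channel's pooled activation with the score.
    pc = (pooled - pooled.mean(0)) / (pooled.std(0) + 1e-8)
    sc = (scores - scores.mean()) / (scores.std() + 1e-8)
    corr = (pc * sc[:, None]).mean(0)          # (C,)
    return corr.abs().argsort(descending=True) # channels, most important first
```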
Explainability has been widely stated as a cornerstone of the responsible and trustworthy use of machine learning models. With the ubiquitous use of Deep Neural Network (DNN) models expanding to risk-sensitive and safety-critical domains, many methods have been proposed to explain the decisions of these models. Recent years have also seen concerted efforts that have shown how such explanations can be distorted (attacked) by minor input perturbations. While there have been many surveys that review explainability methods themselves, there has been no effort hitherto to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models. In this work, we present a comprehensive survey of methods that study, understand, attack, and defend explanations of DNN models. We also present a detailed review of different metrics used to evaluate explanation methods, as well as describe attributional attack and defense methods. We conclude with lessons and take-aways for the community towards ensuring robust explanations of DNN model predictions.
The emerging field of eXplainable Artificial Intelligence (XAI) aims to bring transparency to today's powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying where important features occur (but not providing information about what they represent), global explanation techniques visualize what concepts a model has generally learned to encode. Both types of method therefore provide only partial insights and leave the burden of interpreting the model's reasoning to the user. Only a few contemporary techniques aim to combine the principles behind both local and global XAI to obtain more informative explanations. Those methods, however, are often limited to specific model architectures or impose additional requirements on the training regime or on data and label availability, which renders post-hoc application to arbitrary pre-trained models practically impossible. In this work, we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives of XAI and thus allows answering both the "where" and "what" questions for individual predictions, without additional constraints. We further introduce the principle of Relevance Maximization for finding representative examples of encoded concepts based on their usefulness to the model, thereby lifting the dependency on the common practice of Activation Maximization and its limitations. We demonstrate the capabilities of our methods in various settings, showcasing that Concept Relevance Propagation and Relevance Maximization lead to more human-interpretable explanations and provide deep insights into the model's representations and reasoning through concept atlases, concept-composition analyses, and quantitative investigations of concept subspaces and their role in fine-grained decision making.
Recently, increasing attention has been drawn to the internal mechanisms of convolutional neural networks and the reasons why a network makes specific decisions. In this paper, we develop a novel post-hoc visual explanation method called Score-CAM based on class activation mapping. Unlike previous class activation mapping based approaches, Score-CAM gets rid of the dependence on gradients by obtaining the weight of each activation map through its forward passing score on the target class; the final result is obtained by a linear combination of the weights and activation maps. We demonstrate that Score-CAM achieves better visual performance and fairness for interpreting the decision-making process. Our approach outperforms previous methods on both recognition and localization tasks, and it also passes the sanity check. We also demonstrate its application as a debugging tool. The implementation is available.
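A minimal sketch of the Score-CAM weighting as described: each activation map is upsampled, normalized into a soft mask on the input, and weighted by the forward score of the target class on the masked input; no gradients are involved. The normalization details here are our assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def score_cam(model, conv_layer, x, target):
    acts = {}
    def hook(module, inp, out):
        acts["a"] = out.detach()
    handle = conv_layer.register_forward_hook(hook)
    with torch.no_grad():
        model(x)
        handle.remove()
        maps = F.interpolate(acts["a"], size=x.shape[-2:], mode="bilinear",
                             align_corners=False)[0]          # (C, H, W)
        # Normalize each map to [0, 1] so it can serve as a soft input mask.
        flat = maps.flatten(1)
        lo = flat.min(1).values[:, None, None]
        hi = flat.max(1).values[:, None, None]
        maps = (maps - lo) / (hi - lo + 1e-8)
        # Weight of each map = forward score of the target class on the masked input.
        weights = torch.stack([model(x * m)[0, target] for m in maps])
        cam = torch.relu((torch.softmax(weights, 0)[:, None, None] * maps).sum(0))
    return cam / (cam.max() + 1e-8)
```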
Self-supervised visual learning has revolutionized deep learning, becoming the next big challenge in the domain and rapidly closing the gap with supervised methods on large computer vision benchmarks. With current models and training data growing exponentially, explaining and understanding these models becomes pivotal. We study the problem of explainable artificial intelligence in the domain of self-supervised learning for vision tasks and present methods to understand networks trained with self-supervision and their inner workings. Given the huge diversity of self-supervised vision pretext tasks, we narrow our focus to paradigms that learn from two views of the same image and mainly aim to understand the pretext task. Our work centers on explaining similarity learning and is easily extendable to all other pretext tasks. We study two popular self-supervised vision models, SimCLR and Barlow Twins, and develop a total of six methods for visualizing and understanding them: perturbation-based methods (conditional occlusion, context-agnostic conditional occlusion, and pairwise occlusion), Interaction-CAM, feature visualization, model-difference visualization, averaged transforms, and pixel invariance. Finally, we evaluate these explanations by translating well-known evaluation metrics tailored to supervised image classification systems, which involve a single image, into the domain of self-supervised learning, which involves two images. Code is at: https://github.com/fawazsammani/xai-ssl
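A minimal sketch in the spirit of the occlusion-based probes above, adapted to similarity learning: slide an occluder over one view and record how much the embedding similarity to the second view drops. The patch size and zero-fill are our assumptions, not the paper's exact variants.

```python
import torch
import torch.nn.functional as F

def occlusion_similarity_map(encoder, x1, x2, patch=16):
    # Assumes H and W are divisible by `patch`.
    with torch.no_grad():
        z2 = F.normalize(encoder(x2), dim=1)
        base = (F.normalize(encoder(x1), dim=1) * z2).sum()   # cosine similarity
        H, W = x1.shape[-2:]
        heat = torch.zeros(H // patch, W // patch)
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                occ = x1.clone()
                occ[..., i:i + patch, j:j + patch] = 0
                sim = (F.normalize(encoder(occ), dim=1) * z2).sum()
                heat[i // patch, j // patch] = base - sim     # similarity drop
    return heat
```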
End-to-end neural NLP architectures are notoriously hard to understand, which has given rise to numerous efforts towards model explainability in recent years. An essential principle of model explanation is faithfulness: an explanation should accurately represent the reasoning process behind the model's prediction. This survey first discusses the definition and evaluation of faithfulness and its significance for explainability. We then introduce recent advances in faithful explanation by grouping methods into five categories: similarity-based methods, analysis of model-internal structures, backpropagation-based methods, counterfactual intervention, and self-explanatory models. Each category is illustrated through its representative studies, advantages, and shortcomings. Finally, we discuss all of the above methods in terms of their common virtues and limitations, and reflect on directions for future work towards faithful explainability. For researchers interested in studying interpretability, this survey offers an accessible and comprehensive overview of the field, laying the groundwork for further exploration. For users hoping to better understand their own models, it serves as an introductory manual that helps with choosing the most suitable explanation method.
Deep Neural Networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multi-layer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the "importance" of individual pixels with respect to the classification decision and allow a visualization in terms of a heatmap in pixel/input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012 and MIT Places data sets. Our main result is that the recently proposed Layer-wise Relevance Propagation (LRP) algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of neural network performance.
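A minimal sketch of the region-perturbation evaluation: flip the most relevant regions first and track how quickly the class score drops; a steeper drop means the heatmap ranked the truly important pixels first. Patch replacement by uniform noise is one of several possible choices.

```python
import torch

def morf_curve(model, x, heatmap, target, patch=8, steps=30):
    # `steps` must not exceed the number of patches in the image.
    x = x.clone()
    H, W = heatmap.shape
    # Score each patch by the total relevance inside it.
    scores = heatmap.unfold(0, patch, patch).unfold(1, patch, patch).sum(dim=(2, 3))
    order = scores.flatten().argsort(descending=True)
    cols = W // patch
    curve = []
    with torch.no_grad():
        for k in range(steps):
            idx = order[k].item()
            r, c = (idx // cols) * patch, (idx % cols) * patch
            x[..., r:r + patch, c:c + patch] = torch.rand(1, x.shape[1], patch, patch)
            curve.append(model(x)[0, target].item())
    return curve  # the area over this curve (AOPC) summarizes heatmap quality
```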
Saliency methods have been widely used to highlight important input features in model predictions. Most existing methods use backpropagation on a modified gradient function to generate saliency maps; as a result, noisy gradients can lead to unfaithful feature attributions. In this paper, we address this issue and introduce a saliency guided training procedure for neural networks that reduces the noisy gradients used in predictions while retaining the model's predictive performance. Our saliency guided training procedure iteratively masks features with small and potentially noisy gradients while maximizing the similarity of the model outputs for the masked and unmasked inputs. We apply the procedure to a variety of synthetic and real datasets from computer vision, natural language processing, and time series, across diverse neural architectures including recurrent neural networks, convolutional networks, and Transformers. Through qualitative and quantitative evaluations, we show that saliency guided training significantly improves model interpretability across domains while preserving predictive performance.
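A minimal sketch of one saliency-guided training step as described, with the loss weighting, the masking fraction, and the zero masking value as our assumptions: mask the lowest-gradient features, then train on the usual loss plus a KL term that keeps outputs for masked and unmasked inputs similar.

```python
import torch
import torch.nn.functional as F

def saliency_guided_step(model, opt, x, y, k_frac=0.5, lam=1.0):
    # 1) Gradient saliency of the true-class scores w.r.t. the input.
    x_req = x.clone().requires_grad_(True)
    model(x_req).gather(1, y[:, None]).sum().backward()
    grad = x_req.grad.abs().flatten(1)
    # 2) Mask the k least-salient (potentially noisy) input features.
    k = int(k_frac * grad.shape[1])
    low = grad.argsort(dim=1)[:, :k]
    x_masked = x.flatten(1).clone()
    x_masked.scatter_(1, low, 0.0)
    x_masked = x_masked.view_as(x)
    # 3) Usual loss plus output-similarity term for masked vs. unmasked input.
    opt.zero_grad()
    out, out_masked = model(x), model(x_masked)
    loss = F.cross_entropy(out, y) + lam * F.kl_div(
        F.log_softmax(out_masked, 1), F.softmax(out, 1), reduction="batchmean")
    loss.backward()
    opt.step()
    return loss.item()
```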
Post-hoc analysis is a popular category in eXplainable artificial intelligence (XAI) research. In particular, methods that generate heatmaps have been used to explain deep neural networks (DNNs), which are black-box models. Heatmaps can be appealing due to the intuitive and visual ways of understanding them, but assessing their quality is not straightforward. Different ways of assessing heatmap quality have their own merits and shortcomings. This paper introduces a synthetic dataset that can be generated ad hoc, along with ground-truth heatmaps, for more objective quantitative assessment. Each sample is an image of a cell with easily recognizable features that are distinguished from the localization ground-truth mask, hence facilitating a more transparent assessment of different XAI methods. Comparisons and recommendations are made, shortcomings are clarified, and suggestions are given for future research directions to handle the finer details of selected post-hoc analysis methods. Furthermore, mabCAM is introduced as a heatmap generation method compatible with our ground-truth heatmaps. The framework is easily generalizable and uses only standard deep learning components.
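A minimal sketch of scoring a generated heatmap against such a ground-truth mask; thresholded IoU is just one simple metric choice, and the threshold quantile is our assumption.

```python
import torch

def heatmap_iou(heatmap, gt_mask, q=0.9):
    # Binarize the heatmap at its q-th quantile, then compare with the mask.
    pred = heatmap >= torch.quantile(heatmap.flatten(), q)
    inter = (pred & gt_mask).sum().float()
    union = (pred | gt_mask).sum().float()
    return (inter / (union + 1e-8)).item()
```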
Saliency methods compute heat maps that highlight portions of an input that were most important for the label assigned to it by a deep net. Evaluations of saliency methods convert this heat map into a new masked input by retaining the k highest-ranked pixels of the original input and replacing the rest with "uninformative" pixels, and checking if the net's output is mostly unchanged. This is usually seen as an explanation of the output, but the current paper highlights reasons why this inference of causality may be suspect. Inspired by the logic concepts of completeness and soundness, it observes that the above type of evaluation focuses on completeness of the explanation but ignores soundness. New evaluation metrics are introduced to capture both notions, while staying in an intrinsic framework (i.e., using the dataset and the net, but no separately trained nets, human evaluations, etc.). A simple saliency method is described that matches or outperforms prior methods in the evaluations. Experiments also suggest new intrinsic justifications, based on soundness, for popular heuristic tricks such as TV regularization and upsampling.
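A minimal sketch of the masked-input evaluation discussed above: keep the k highest-ranked pixels, replace the rest with an uninformative value (here the per-channel mean, our choice), and check whether the prediction survives.

```python
import torch

def topk_masked_input(x, heatmap, k):
    flat = heatmap.flatten()
    thresh = flat.topk(k).values.min()
    keep = (heatmap >= thresh).to(x.dtype)        # 1 on the k retained pixels
    fill = x.mean(dim=(-2, -1), keepdim=True)     # "uninformative" filler
    return keep * x + (1 - keep) * fill

def completeness_check(model, x, heatmap, target, k):
    with torch.no_grad():
        masked = topk_masked_input(x, heatmap, k)
        return (model(masked).argmax(1) == target).item()
```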
State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally makes individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT) which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image, as well as the corresponding detections, in a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is more faithfully explained compared to other detectors regardless of the explanation method. Both quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
In order for machine learning to be trusted in many applications, it is critical to be able to reliably explain why the machine learning algorithm makes certain predictions. For this reason, a variety of methods have been developed recently to interpret neural network predictions by providing, for example, feature importance maps. For both scientific robustness and security reasons, it is important to know to what extent the interpretations can be altered by small systematic perturbations to the input data, which might be generated by adversaries or by measurement biases. In this paper, we demonstrate how to generate adversarial perturbations that produce perceptively indistinguishable inputs that are assigned the same predicted label, yet have very different interpretations. We systematically characterize the robustness of interpretations generated by several widely-used feature importance interpretation methods (feature importance maps, integrated gradients, and DeepLIFT) on ImageNet and CIFAR-10. In all cases, our experiments show that systematic perturbations can lead to dramatically different interpretations without changing the label. We extend these results to show that interpretations based on exemplars (e.g. influence functions) are similarly susceptible to adversarial attack. Our analysis of the geometry of the Hessian matrix gives insight on why robustness is a general challenge to current interpretation approaches.
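A minimal sketch of the attack idea, with the step size, iteration count, and objective weighting as our assumptions: ascend a loss that moves saliency mass into an attacker-chosen region while penalizing any change in the predicted score, so the label stays fixed but the interpretation moves. `region_mask` is a boolean tensor of the input's shape.

```python
import torch

def attack_interpretation(model, x, target, region_mask,
                          steps=100, eps=8 / 255, lr=1e-2):
    delta = torch.zeros_like(x, requires_grad=True)
    orig_score = model(x)[0, target].detach()
    for _ in range(steps):
        xp = x + delta
        score = model(xp)[0, target]
        # Differentiable saliency map (requires double backprop).
        sal = torch.autograd.grad(score, xp, create_graph=True)[0].abs()
        # Pull saliency into the chosen region; keep the score unchanged.
        loss = -sal[region_mask].sum() + 10.0 * (score - orig_score) ** 2
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()
            delta.clamp_(-eps, eps)     # keep the perturbation imperceptible
            delta.grad.zero_()
    return (x + delta).detach()
```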
The increasing availability of electronic health record (EHR) data and advances in deep learning (DL) techniques have sparked a surge of research interest in developing DL-based clinical decision support systems for diagnosis, prognosis, and treatment. Despite recognition of the value of deep learning in healthcare, the black-box nature of DL remains an impediment to its further adoption in real healthcare settings. Hence there is an emerging need for interpretable DL, which allows end users to evaluate model decisions so as to know whether to accept or reject predictions and recommendations before an action is taken. In this review, we focus on the interpretability of DL models in healthcare. We begin with an in-depth introduction to methods for interpretability, intended as a methods reference for future researchers and clinical practitioners in this field. Beyond the details of these methods, we also discuss their advantages and disadvantages and the scenarios each is suited to, so that interested readers know how to compare them and choose among them for use. Furthermore, we discuss how these methods, originally developed for general-domain problems, have been adapted and applied to healthcare problems, and how they can help physicians better understand these data-driven techniques. Overall, we hope this survey can help researchers and practitioners in both the artificial intelligence (AI) and clinical communities understand the methods available for enhancing the interpretability of their DL models and choose the optimal one accordingly.
Understanding the results of deep neural networks is an important step towards wider acceptance of deep learning algorithms. Many approaches address the problem of interpreting artificial neural networks, but they often provide divergent explanations; moreover, different hyperparameters of an explanation method can lead to conflicting interpretations. In this paper, we propose a technique for aggregating the feature attributions of different explanation algorithms using restricted Boltzmann machines (RBMs) to achieve a more reliable and robust interpretation of deep neural networks. Several challenging experiments on real-world datasets show that the proposed RBM method outperforms popular feature attribution methods and basic ensemble techniques.
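A minimal sketch of RBM-based aggregation in the spirit of the approach above: treat each pixel as a data point whose features are the normalized attributions from the different methods, fit a Bernoulli RBM, and read the hidden activation as the aggregated map. The single-hidden-unit choice and the normalization are our assumptions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

def rbm_aggregate(attributions):                          # list of (H, W) arrays
    X = np.stack([a.flatten() for a in attributions], axis=1)  # (H*W, n_methods)
    X = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-8)     # scale each method to [0, 1]
    rbm = BernoulliRBM(n_components=1, learning_rate=0.01,
                       n_iter=100, random_state=0)
    rbm.fit(X)
    agg = rbm.transform(X)[:, 0]                          # hidden activation per pixel
    return agg.reshape(attributions[0].shape)
```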
The interpretation of deep learning models is a challenge due to their size, complexity, and often opaque internal state. In addition, many systems, such as image classifiers, operate on low-level features rather than high-level concepts. To address these challenges, we introduce Concept Activation Vectors (CAVs), which provide an interpretation of a neural net's internal state in terms of human-friendly concepts. The key idea is to view the high-dimensional internal state of a neural net as an aid, not an obstacle. We show how to use CAVs as part of a technique, Testing with CAVs (TCAV), that uses directional derivatives to quantify the degree to which a user-defined concept is important to a classification result; for example, how sensitive a prediction of zebra is to the presence of stripes. Using the domain of image classification as a testing ground, we describe how CAVs may be used to explore hypotheses and generate insights for a standard image classification network as well as a medical application.
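A minimal sketch of TCAV as described: learn a CAV as the normal to a linear classifier separating concept activations from random activations, then report how often the directional derivative of the class logit along the CAV is positive. The `head` module, mapping layer activations to logits, is our assumption about how the network is split.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):        # (N, D) layer activations
    X = np.concatenate([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_[0]                               # normal to the decision boundary
    return torch.tensor(v / np.linalg.norm(v), dtype=torch.float32)

def tcav_score(head, layer_acts, cav, target):
    # Fraction of examples whose class logit increases along the CAV direction.
    acts = layer_acts.clone().requires_grad_(True)
    head(acts)[:, target].sum().backward()
    directional = acts.grad @ cav                  # one derivative per example
    return (directional > 0).float().mean().item()
```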
Explainability has become a central requirement for the development, deployment, and adoption of machine learning (ML) models, and we have yet to understand what explanation methods can and cannot do. Several factors, such as the data, the model prediction, the hyperparameters used in training the model, and random initialization, can all influence downstream explanations. While previous work empirically hinted that explanations (E) may have little relationship with the prediction (Y), there is a lack of conclusive study quantifying this relationship. Our work borrows tools from causal inference to systematically assay this relationship. More specifically, we measure the relationship between E and Y by measuring the treatment effect when intervening on their causal ancestors, i.e., the hyperparameters and inputs used to generate saliency-based Es or Ys. We discover that Y's relative direct influence on E follows an odd pattern: the influence is higher in the lowest-performing models than in mid-performing models, and it then decreases in the top-performing models. We believe our work is a promising first step towards providing better guidance for practitioners, who can make more informed decisions about utilizing these explanations by knowing what factors are at play and how they relate to their end task.