Vision transformers (ViTs) are quickly becoming the de-facto architecture for computer vision, yet we understand very little about why they work and what they learn. While existing studies visually analyze the mechanisms of convolutional neural networks, an analogous exploration of ViTs remains challenging. In this paper, we first address the obstacles to performing visualizations on ViTs. Assisted by these solutions, we observe that neurons in ViTs trained with language model supervision (e.g., CLIP) are activated by semantic concepts rather than visual features. We also explore the underlying differences between ViTs and CNNs, and we find that transformers detect image background features, just like their convolutional counterparts, but their predictions depend far less on high-frequency information. On the other hand, both architecture types behave similarly in the way features progress from abstract patterns in early layers to concrete objects in late layers. In addition, we show that ViTs maintain spatial information in all layers except the final layer. In contrast to previous works, we show that the last layer most likely discards the spatial information and behaves as a learned global pooling operation. Finally, we conduct large-scale visualizations on a wide range of ViT variants, including DeiT, CoaT, ConViT, PiT, Swin, and Twin, to validate the effectiveness of our method.
Vision Transformers (ViTs) have demonstrated impressive performance across various machine vision problems. These models are based on a multi-head self-attention mechanism that can flexibly attend to a sequence of image patches to encode contextual cues. An important question is how such flexibility in attending to image-wide context conditioned on a given patch can facilitate handling nuisances in natural images, e.g., severe occlusions, domain shifts, spatial permutations, and adversarial and natural perturbations. We systematically study this question through an extensive set of experiments encompassing three ViT families and comparisons with a high-performing convolutional neural network (CNN). We show and analyze the following intriguing properties of ViTs: (a) Transformers are highly robust to severe occlusions, perturbations, and domain shifts, e.g., they retain as high as 60% top-1 accuracy on ImageNet even after randomly occluding 80% of the image content. (b) The robust performance under occlusion is not due to a bias towards local textures; compared with CNNs, ViTs are significantly less biased towards textures. When properly trained to encode shape-based features, ViTs demonstrate a shape-recognition capability comparable to that of the human visual system, previously unmatched in the literature. (c) Using ViTs to encode shape representations leads to the interesting consequence of accurate semantic segmentation without pixel-level supervision. (d) Off-the-shelf features from a single ViT model can be combined to create a feature ensemble, leading to high accuracy across a range of classification datasets in both traditional and few-shot learning paradigms. We show that the effective features of ViTs are due to the flexible and dynamic receptive fields made possible by the self-attention mechanism.
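A minimal sketch of the random patch-occlusion protocol described above, assuming 224x224 inputs split into 16x16 patches; the model, data loader, and drop ratio are placeholders rather than the authors' exact setup.

```python
import torch

def occlude_random_patches(images, drop_ratio=0.8, patch_size=16):
    """Zero out a random subset of non-overlapping patches in each image."""
    b, c, h, w = images.shape
    ph, pw = h // patch_size, w // patch_size
    num_patches = ph * pw
    num_drop = int(drop_ratio * num_patches)
    out = images.clone()
    for i in range(b):
        drop_idx = torch.randperm(num_patches)[:num_drop]
        for idx in drop_idx.tolist():
            r, col = divmod(idx, pw)
            out[i, :, r*patch_size:(r+1)*patch_size,
                col*patch_size:(col+1)*patch_size] = 0.0
    return out

@torch.no_grad()
def occlusion_top1(model, loader, drop_ratio=0.8, device="cuda"):
    """Top-1 accuracy of a classifier when a fraction of patches is occluded."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images = occlude_random_patches(images, drop_ratio).to(device)
        preds = model(images).argmax(dim=-1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / total
```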
Vision Transformers (ViTs) have gained significant popularity in recent years and have proliferated into many applications. However, it is not well explored how varied their behavior is under different learning paradigms. We compare ViTs trained through different methods of supervision, and show that they learn a diverse range of behaviors in terms of their attention, representations, and downstream performance. We also discover ViT behaviors that are consistent across supervision, including the emergence of Offset Local Attention Heads. These are self-attention heads that attend to a token adjacent to the current token with a fixed directional offset, a phenomenon that to the best of our knowledge has not been highlighted in any prior work. Our analysis shows that ViTs are highly flexible and learn to process local and global information in different orders depending on their training method. We find that contrastive self-supervised methods learn features that are competitive with explicitly supervised features, and they can even be superior for part-level tasks. We also find that the representations of reconstruction-based models show non-trivial similarity to contrastive self-supervised models. Finally, we show how the "best" layer for a given task varies by both supervision method and task, further demonstrating the differing order of information processing in ViTs.
Mixup-based augmentation has been found effective for generalizing models during training, especially for Vision Transformers (ViTs), since they can easily overfit. However, previous mixup-based methods carry the underlying prior assumption that the linear interpolation ratio of the targets should stay the same as the ratio used in the input interpolation. This can lead to a strange phenomenon in which, due to the random process in augmentation, the mixed image sometimes contains no valid object, yet a response still remains in the label space. To bridge this gap between the input and label spaces, we propose TransMix, which mixes labels based on the attention maps of Vision Transformers: the confidence of a label is larger if its corresponding input is weighted more heavily by the attention map. TransMix is embarrassingly simple, can be implemented in a few lines of code, and introduces no extra parameters or FLOPs to ViT-based models. Experimental results show that our method consistently improves various ViT-based models on ImageNet classification. After pretraining with TransMix on ImageNet, ViT-based models also show better transferability to semantic segmentation, object detection, and instance segmentation. TransMix is also more robust when evaluated on four different benchmarks. Code will be made publicly available at https://github.com/beckschen/transmix.
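A hedged sketch of attention-guided label mixing in the spirit described above: after CutMix, the label weight is taken from the class token's attention mass on the pasted region rather than from the pixel area. Function names and the way the attention map is obtained are assumptions, not the official implementation.

```python
import torch
import torch.nn.functional as F

def attention_mix_lambda(attn_cls, cutmix_mask, patch_size=16):
    """
    attn_cls:    (B, N) attention of the class token over the N patch tokens.
    cutmix_mask: (B, 1, H, W) binary mask, 1 where pixels come from image B.
    Returns the per-sample label weight lam for image B.
    """
    # Downsample the pixel mask to the patch grid (row-major, matching attn_cls).
    patch_mask = F.avg_pool2d(cutmix_mask.float(), patch_size)  # (B, 1, H/ps, W/ps)
    patch_mask = patch_mask.flatten(1)                          # (B, N)
    lam = (attn_cls * patch_mask).sum(dim=1) / attn_cls.sum(dim=1).clamp(min=1e-6)
    return lam  # mixed target: lam * y_b + (1 - lam) * y_a
```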
We show how to augment any convolutional network with an attention-based global map to achieve non-local reasoning. We replace the final average pooling with an attention-based aggregation layer, akin to a single transformer block, that weighs how the patches contribute to the classification decision. We plug this learned aggregation layer into a simple patch-based convolutional network parameterized by two parameters (width and depth). In contrast with a pyramidal design, this architecture family maintains the input patch resolution across all layers. It yields a surprisingly competitive trade-off between accuracy and complexity, in particular in terms of memory consumption, as shown by our experiments on various computer vision tasks: object classification, image segmentation, and detection.
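A minimal sketch of replacing global average pooling with a learned, attention-based aggregation: a single query token cross-attends to the patch features and the result feeds the classifier. Dimensions and module names are illustrative assumptions, and this is a stand-in for the full transformer-block-style layer the abstract describes.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, dim=384, num_heads=6, num_classes=1000):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))   # learned query
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, patch_feats):                # patch_feats: (B, N, dim)
        b = patch_feats.size(0)
        q = self.cls_token.expand(b, -1, -1)       # (B, 1, dim)
        pooled, attn_weights = self.attn(q, patch_feats, patch_feats)
        pooled = self.norm(pooled.squeeze(1))
        # attn_weights shows how each patch contributes to the decision.
        return self.head(pooled), attn_weights
```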
Convolutional neural networks (CNNs) have so far been the de-facto model for visual data. Recent work has shown that (Vision) Transformer models (ViT) can achieve comparable or even superior performance on image classification tasks. This raises a central question: how are Vision Transformers solving these tasks? Are they acting like convolutional networks, or learning entirely different visual representations? Analyzing the internal representation structure of ViTs and CNNs on image classification benchmarks, we find striking differences between the two architectures, such as ViT having more uniform representations across all layers. We explore how these differences arise, finding crucial roles played by self-attention, which enables early aggregation of global information, and ViT residual connections, which strongly propagate features from lower to higher layers. We study the ramifications for spatial localization, demonstrating ViTs successfully preserve input spatial information, with noticeable effects from different classification methods. Finally, we study the effect of (pretraining) dataset scale on intermediate features and transfer learning, and conclude with a discussion on connections to new architectures such as the MLP-Mixer.
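The abstract does not spell out the similarity measure used for the layer-wise representation comparison; linear CKA is a common choice for this kind of analysis. A minimal sketch, assuming activations are flattened to matrices of shape (num_examples, features).

```python
import torch

def linear_cka(x, y):
    """Linear Centered Kernel Alignment between two activation matrices
    of shape (n_examples, d_x) and (n_examples, d_y)."""
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    hsic = ((y.t() @ x) ** 2).sum()          # ||Y^T X||_F^2
    norm_x = (x.t() @ x).norm(p="fro")
    norm_y = (y.t() @ y).norm(p="fro")
    return (hsic / (norm_x * norm_y)).item()
```

Computing this score for every pair of layers in a ViT and a CNN yields the kind of layer-by-layer similarity heatmap that such analyses typically report.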
The success of language Transformers is primarily attributed to the pretext task of masked language modeling (MLM), where text is first tokenized into semantically meaningful pieces. In this work, we study masked image modeling (MIM) and point out the advantages and challenges of using a semantically meaningful visual tokenizer. We present iBOT, a self-supervised framework that performs masked prediction with an online tokenizer. Specifically, we perform self-distillation on masked patch tokens, taking the teacher network as the online tokenizer, together with self-distillation on the class token to acquire visual semantics. The online tokenizer is jointly learned with the MIM objective, dispensing with a multi-stage training pipeline in which the tokenizer would need to be pretrained beforehand. We demonstrate the strength of iBOT by reaching 81.6% linear-probing accuracy and 86.3% fine-tuning accuracy on ImageNet-1K. Beyond state-of-the-art image classification results, we highlight emerging local semantic patterns, which help the model gain strong robustness against common corruptions and achieve leading results on dense downstream tasks such as object detection, instance segmentation, and semantic segmentation.
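A heavily simplified sketch of self-distillation on masked patch tokens: an EMA teacher (acting as the online tokenizer) sees the full image, the student sees a masked view, and a soft cross-entropy between their patch-token distributions is applied on the masked positions. Temperatures, centering, and all names are assumptions; the real framework adds further machinery.

```python
import torch
import torch.nn.functional as F

def masked_distill_loss(student_tokens, teacher_tokens, mask, temp_s=0.1, temp_t=0.04):
    """
    student_tokens, teacher_tokens: (B, N, K) projected patch-token logits.
    mask: (B, N) boolean, True where the student's input patch was masked.
    """
    t = F.softmax(teacher_tokens.detach() / temp_t, dim=-1)   # teacher "tokenizer" targets
    log_s = F.log_softmax(student_tokens / temp_s, dim=-1)
    loss = -(t * log_s).sum(dim=-1)                            # per-token cross-entropy
    return (loss * mask).sum() / mask.sum().clamp(min=1)       # average over masked tokens
```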
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequences as compared to recurrent networks, e.g., long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers, i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision, including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
Transformers have been recently adapted for large scale image classification, achieving high scores that shake up the long supremacy of convolutional neural networks. However, the optimization of image transformers has been little studied so far. In this work, we build and optimize deeper transformer networks for image classification. In particular, we investigate the interplay of architecture and optimization of such dedicated transformers. We make two transformer architecture changes that significantly improve the accuracy of deep transformers. This leads us to produce models whose performance does not saturate early with more depth: for instance, we obtain 86.5% top-1 accuracy on ImageNet when training with no external data, thus attaining the current SOTA with fewer FLOPs and parameters. Moreover, our best model establishes the new state of the art on ImageNet with Reassessed labels and on ImageNet-V2 / matched frequency, in the setting with no additional training data. We share our code and models.
The Transformer, an attention-based encoder-decoder architecture, has revolutionized the field of natural language processing. Inspired by this significant achievement, several pioneering works have recently adapted Transformer-like architectures to the computer vision (CV) field and have demonstrated their effectiveness on various CV tasks, relying on modeling capability competitive with modern convolutional neural networks. In this paper, we comprehensively review three hundred different vision Transformers for three fundamental CV tasks (classification, detection, and segmentation), and propose a taxonomy that organizes these methods according to their motivation, structure, and usage scenario. Because of differences in training settings and task orientation, we also evaluate these methods on different configurations, rather than only on various benchmarks, for easy and intuitive comparison. Furthermore, we reveal a series of essential but as-yet-unexploited aspects that may allow the Transformer to stand out from the many architectures, such as slack high-level semantic embeddings to bridge the gap between vision and sequential Transformers. Finally, three promising directions for future research are suggested for further investigation.
In recent years, novel architectural components for image classification have been developed, starting with the attention and patches used in Transformers. While prior works have analyzed the influence of some aspects of architectural components on robustness to adversarial attacks, especially for Vision Transformers, the understanding of the main factors remains limited. We compare several (non-)robust classifiers with different architectures and study their properties, including the effect of adversarial training on the interpretability of the learned features and robustness to unseen threat models. An ablation from ResNet to ConvNeXt reveals key architectural changes leading to $10\%$ higher $\ell_\infty$-robustness.
Transformers have become increasingly popular in a wide range of applications, including natural language processing (NLP), computer vision, and speech recognition, because of their powerful representational capacity. However, harnessing this representational capacity effectively requires large amounts of data, strong regularization, or both, to mitigate overfitting. Recently, the power of the Transformer has been unlocked by self-supervised pretraining strategies based on masked autoencoders, which rely on reconstructing masked inputs, either directly or contrastively, from the unmasked content. Such pretraining strategies, used in BERT models in NLP, Wav2Vec models in speech, and recently in MAE models in vision, force the model to learn the relationships between the content in different parts of the input through autoencoding-related objectives. In this paper, we propose a novel but surprisingly simple alternative to content reconstruction: predicting the position of content without being given its positional information. Doing so requires the Transformer to understand the positional relationships between different parts of the input from their content alone. This amounts to an efficient implementation in which the pretext task is a classification problem over all possible positions for each input token. We experiment on vision and speech benchmarks, where our approach brings improvements over strongly tuned supervised baselines and is competitive with modern unsupervised/self-supervised pretraining methods. Our method also allows Transformers trained without position embeddings to outperform ones trained with full positional information.
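A minimal sketch of the position-prediction pretext task described above: patch embeddings are fed to the encoder without positional embeddings, and a linear head classifies, for every token, which of the N grid positions it came from. The encoder, shapes, and names are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionPredictionHead(nn.Module):
    def __init__(self, dim=768, num_positions=196):
        super().__init__()
        self.head = nn.Linear(dim, num_positions)

    def forward(self, tokens):                 # tokens: (B, N, dim), content only
        return self.head(tokens)               # (B, N, num_positions)

def position_prediction_loss(logits):
    """Cross-entropy over all possible positions for each input token."""
    b, n, _ = logits.shape
    # Ground truth: token i was cut from grid position i.
    targets = torch.arange(n, device=logits.device).expand(b, n)
    return F.cross_entropy(logits.reshape(b * n, -1), targets.reshape(-1))
```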
Vision Transformers (ViTs) process input images as sequences of patches via self-attention, a radically different architecture from convolutional neural networks (CNNs). This makes it interesting to study the adversarial feature space of ViT models and its transferability. In particular, we observe that adversarial patterns found via conventional adversarial attacks exhibit very low black-box transferability, even for large ViT models. However, we show that this phenomenon is only due to sub-optimal attack procedures that do not leverage the true representational potential of ViTs. A deep ViT is composed of multiple blocks with a consistent architecture comprising self-attention and feed-forward layers, where each block is capable of independently producing a class token. Formulating an attack using only the last class token (the conventional approach) does not directly leverage the discriminative information stored in the earlier tokens, leading to poor adversarial transferability of ViTs. Using the compositional nature of ViT models, we enhance the transferability of existing attacks by introducing two novel strategies specific to the architecture of ViT models. (i) Self-ensemble: we propose a method to find multiple discriminative pathways by dissecting a single ViT model into an ensemble of networks, allowing class-specific information to be explicitly exploited at each ViT block. (ii) Token refinement: we propose refining the tokens to further enhance the discriminative capacity at each ViT block. Our token refinement systematically combines the class tokens with the structural information preserved in the patch tokens. An adversarial attack applied to such refined tokens, within the ensemble of classifiers found inside a single vision transformer, has noticeably higher transferability.
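A minimal sketch of the self-ensemble idea: the class token after every block is passed through the shared final norm and head, yielding an ensemble of classifiers from a single ViT. The attribute names (patch_embed, cls_token, pos_embed, blocks, norm, head) follow common ViT codebases and are assumptions about the specific model object, not the authors' code.

```python
import torch

def self_ensemble_logits(vit, x):
    """Collect class-token logits after each transformer block of a ViT."""
    tokens = vit.patch_embed(x)                                # (B, N, dim)
    cls = vit.cls_token.expand(tokens.size(0), -1, -1)
    tokens = torch.cat([cls, tokens], dim=1) + vit.pos_embed
    logits_per_block = []
    for block in vit.blocks:
        tokens = block(tokens)
        cls_tok = vit.norm(tokens[:, 0])                       # shared norm + head
        logits_per_block.append(vit.head(cls_tok))
    return torch.stack(logits_per_block)                       # (depth, B, classes)
```

An attack can then be crafted against the averaged (or per-block) logits instead of only the final classifier.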
Self-attention-based Transformer models have been dominating many computer vision tasks in the past few years. Their superb model quality heavily depends on excessively large labeled image datasets. To reduce the reliance on large labeled datasets, reconstruction-based masked autoencoders are gaining popularity; they learn high-quality transferable representations from unlabeled images. For the same purpose, recent weakly supervised image pretraining methods explore language supervision from the text captions accompanying images. In this work, we propose masked image pretraining on language-assisted representations, dubbed MILAN. Instead of predicting raw pixels or low-level features, our pretraining objective is to reconstruct image features carrying substantial semantic signals, obtained using caption supervision. Moreover, to accommodate our reconstruction target, we propose a more efficient prompting decoder architecture and a semantic-aware mask sampling mechanism, which further improve the transfer performance of the pretrained model. Experimental results demonstrate that MILAN delivers higher accuracy than previous works. When the masked autoencoder is pretrained and fine-tuned on the ImageNet-1K dataset with an input resolution of 224x224, MILAN achieves 85.4% top-1 accuracy with ViT-B/16, surpassing the previous state of the art by 1%. On the downstream semantic segmentation task, MILAN reaches 52.7 mIoU with a ViT-B/16 backbone on the ADE20K dataset, outperforming previous masked pretraining results by 4 points.
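A hedged sketch of the feature-reconstruction objective described above: instead of predicting pixels, the decoder output for masked patches is regressed onto features from a frozen, caption-supervised image encoder (e.g., a CLIP-style model). The loss form (cosine distance) and all names are assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def feature_reconstruction_loss(pred_feats, target_feats, mask):
    """
    pred_feats:   (B, N, D) decoder predictions for all patch positions.
    target_feats: (B, N, D) features of the same image from the frozen target encoder.
    mask:         (B, N) boolean, True where the patch was masked for the student.
    """
    pred = F.normalize(pred_feats, dim=-1)
    tgt = F.normalize(target_feats.detach(), dim=-1)
    per_patch = 1.0 - (pred * tgt).sum(dim=-1)            # cosine distance per patch
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)
```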
In this paper, we ask whether Vision Transformers (ViTs) can serve as an underlying architecture for improving the adversarial robustness of machine learning models against evasion attacks. While earlier works focused on improving convolutional neural networks, we show that ViTs are also highly suitable for adversarial training and achieve competitive performance. We accomplish this with a custom adversarial training recipe, discovered through rigorous ablation studies on a subset of the ImageNet dataset. The canonical training recipe for ViTs recommends strong data augmentation, in part to compensate for the attention modules' lack of vision inductive bias relative to convolutions. We show that this recipe achieves suboptimal performance when used for adversarial training. In contrast, we find that omitting all heavy data augmentation and adding a few additional ingredients ($\varepsilon$-warmup and larger weight decay) significantly boosts the performance of robust ViTs. We show that our recipe generalizes to different classes of ViT architectures and to large-scale models on the full ImageNet-1k. Furthermore, investigating the reasons for the robustness of our models, we show that it is easier to generate strong attacks during training when using our recipe, and that this leads to better robustness at test time. Finally, we further study a consequence of adversarial training by proposing a way to quantify the semantic nature of adversarial perturbations and highlighting its correlation with the robustness of the model. Overall, we recommend that the community avoid translating the canonical training recipe of ViTs directly to robust training, and rethink common training choices in the context of adversarial training.
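A hedged sketch of PGD adversarial training with a linear epsilon warm-up, one of the recipe components mentioned above. The step sizes, number of PGD steps, warm-up length, and epsilon budget are illustrative assumptions rather than the paper's exact values.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, steps=2, alpha=None):
    """Generate an L-infinity PGD adversarial example (pixel range assumed [0, 1])."""
    alpha = alpha if alpha is not None else 1.5 * eps / steps
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def train_epoch(model, loader, optimizer, epoch, eps_max=8/255, warmup_epochs=10, device="cuda"):
    model.train()
    eps = eps_max * min(1.0, (epoch + 1) / warmup_epochs)   # linear eps-warmup
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y, eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```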
Patch-based models, e.g., Vision Transformers (ViTs) and Mixers, have shown impressive results on various visual recognition tasks, supplanting classic convolutional networks. While the initial patch-based models (ViTs) treated all patches equally, recent studies reveal that incorporating inductive bias like spatiality benefits the representations. However, most prior works solely focused on the location of patches, overlooking the scene structure of images. Thus, we aim to further guide the interaction of patches using the object information. Specifically, we propose OAMixer (object-aware mixing layer), which calibrates the patch mixing layers of patch-based models based on the object labels. Here, we obtain the object labels in unsupervised or weakly-supervised manners, i.e., no additional human-annotation cost is necessary. Using the object labels, OAMixer computes a reweighting mask with a learnable scale parameter that intensifies the interaction of patches containing similar objects and applies the mask to the patch mixing layers. By learning an object-centric representation, we demonstrate that OAMixer improves the classification accuracy and background robustness of various patch-based models, including ViTs, MLP-Mixers, and ConvMixers. Moreover, we show that OAMixer enhances various downstream tasks, including large-scale classification, self-supervised learning, and multi-object recognition, verifying the generic applicability of OAMixer.
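A minimal sketch of an object-aware reweighting mask in this spirit: patches sharing the same (pseudo) object label get their pairwise interaction scaled up by a learnable factor before the patch-mixing layer. The class name, the exponential form, and the renormalization step are assumptions, not the official implementation.

```python
import torch
import torch.nn as nn

class ObjectAwareReweight(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(1.0))   # learnable intensity

    def forward(self, mixing_weights, object_labels):
        """
        mixing_weights: (B, N, N) patch-mixing weights (e.g., attention).
        object_labels:  (B, N) integer pseudo object label per patch.
        """
        same_obj = (object_labels.unsqueeze(1) == object_labels.unsqueeze(2)).float()
        reweight = torch.exp(self.scale * same_obj)     # >1 for same-object pairs
        out = mixing_weights * reweight
        # Re-normalize each row so the mixing weights still sum to one.
        return out / out.sum(dim=-1, keepdim=True).clamp(min=1e-6)
```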
We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeViT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. We release the code at https://github.com/facebookresearch/LeViT.
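A hedged sketch of an additive attention bias for positional information: a learnable, per-head bias indexed by the relative offset between query and key patches is added to the attention logits. The grid size, indexing scheme, and names are illustrative assumptions rather than LeViT's exact implementation.

```python
import torch
import torch.nn as nn

class RelativeAttentionBias(nn.Module):
    def __init__(self, grid_h=14, grid_w=14, num_heads=8):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(num_heads, (2*grid_h - 1) * (2*grid_w - 1)))
        # Precompute, for every (query, key) patch pair, the index of its relative offset.
        coords = torch.stack(torch.meshgrid(
            torch.arange(grid_h), torch.arange(grid_w), indexing="ij"), dim=-1).reshape(-1, 2)
        rel = coords[:, None, :] - coords[None, :, :]          # (N, N, 2)
        rel[..., 0] += grid_h - 1
        rel[..., 1] += grid_w - 1
        idx = rel[..., 0] * (2*grid_w - 1) + rel[..., 1]       # (N, N)
        self.register_buffer("idx", idx)

    def forward(self, attn_logits):                            # (B, heads, N, N)
        return attn_logits + self.bias[:, self.idx]            # broadcast over the batch
```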
Transformers, composed of multiple self-attention layers, hold strong promise as a generic learning primitive applicable to different data modalities, including recent breakthroughs in computer vision achieving state-of-the-art (SOTA) standard accuracy. What remains largely unexplored is their robustness evaluation and attribution. In this work, we study the robustness of Vision Transformers (ViTs) against common corruptions and perturbations, distribution shifts, and natural adversarial examples. We use six different, diverse ImageNet datasets concerning robust classification to conduct a comprehensive performance comparison of ViT models and a SOTA convolutional neural network (CNN), Big Transfer. Through a series of systematically designed experiments, we present analyses that provide both quantitative and qualitative indications of why ViTs are indeed more robust learners. For example, with fewer parameters and a similar dataset and pretraining combination, ViT reaches a top-1 accuracy of 28.10% on ImageNet-A, which is 4.3x higher than a comparable variant of BiT. Our analyses of image masking, Fourier spectrum sensitivity, and spread on the discrete cosine energy spectrum reveal properties of ViTs that contribute to improved robustness. Code to reproduce our experiments is available at https://git.io/j3vo0.
The multilayer perceptron (MLP), the first neural network structure to emerge, was a big hit. But constrained by hardware computing power and dataset size, it sank out of sight for decades. During this period, we witnessed a paradigm shift from manual feature extraction to CNNs with local receptive fields, and further to Transformers with global receptive fields based on the self-attention mechanism. This year (2021), with the introduction of MLP-Mixer, the MLP has re-entered the limelight and attracted extensive research from the computer vision community. Compared with the traditional MLP, it becomes deeper but changes the input from full flattening to patch flattening. Given its high performance and low need for vision-specific inductive bias, the community cannot help but wonder: will the MLP, the simplest structure with a global receptive field but no attention, become a new computer vision paradigm? To answer this question, this survey aims to provide a comprehensive overview of recent developments in vision deep MLP models. Specifically, we review these vision deep MLPs from subtle sub-module design to global network structure. We compare the receptive fields, computational complexity, and other properties of different network designs to clearly understand the development path of MLPs. The survey shows that the resolution sensitivity and computational density of MLPs remain unresolved, and that pure MLPs are gradually evolving toward being CNN-like. We suggest that the current data volume and computational power are not yet ready to embrace pure MLPs, and that artificial visual guidance remains important. Finally, we provide an analysis of open research directions and possible future work. We hope this effort will ignite further interest in the community and encourage better vision-tailored designs for today's neural networks.
Vision Transformers (ViTs) are emerging as the state-of-the-art architecture for image recognition. While recent studies suggest that ViTs are more robust than their convolutional counterparts, our experiments find that ViTs are overly reliant on local features (e.g., nuisances and texture) and fail to make adequate use of global context (e.g., shape and structure). As a result, ViTs fail to generalize to out-of-distribution, real-world data. To address this deficiency, we present a simple and effective architecture modification to the ViT input layer by adding discrete tokens produced by a vector-quantized encoder. Unlike the standard continuous pixel tokens, discrete tokens are invariant under small perturbations and individually contain less information, which encourages ViTs to learn global information that is invariant. Experimental results demonstrate that adding discrete representations to four architecture variants strengthens ViT robustness by up to 12% across seven ImageNet robustness benchmarks, while maintaining the performance on ImageNet.
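A simplified sketch of combining discrete tokens with the ViT input: each patch embedding is quantized to its nearest codebook entry and the code embedding is added to the usual continuous patch embedding. The codebook size, the combination rule, and the use of patch embeddings (rather than a pretrained vector-quantized pixel encoder) are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DiscreteTokenEmbed(nn.Module):
    def __init__(self, dim=768, codebook_size=1024):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, patch_embed):                       # (B, N, dim), continuous
        # Nearest-neighbour quantization against the codebook (no gradient to codes).
        with torch.no_grad():
            book = self.codebook.weight.unsqueeze(0).expand(patch_embed.size(0), -1, -1)
            codes = torch.cdist(patch_embed, book).argmin(dim=-1)   # (B, N) token ids
        discrete = self.codebook(codes)                   # (B, N, dim) discrete embedding
        return patch_embed + discrete                     # fed to the transformer blocks
```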