With the development of the self-attention mechanism, Transformer models have demonstrated excellent performance in the computer vision domain. However, the massive computation brought by the full attention mechanism becomes a heavy burden on memory consumption, and this memory limitation in turn reduces the room for improving Transformer models. To remedy this problem, we propose a novel memory-economical attention mechanism named Couplformer, which decouples the attention map into two sub-matrices and generates the alignment scores from spatial information. A series of image classification tasks at different scales is used to evaluate the effectiveness of the model. Experimental results show that on the ImageNet-1K classification task, the Couplformer can significantly reduce memory consumption by 28% compared with the regular Transformer while maintaining sufficient accuracy, and outperforms it by 0.92% when occupying the same memory footprint. As a result, the Couplformer can serve as an efficient backbone for visual tasks and offers researchers a novel perspective on the attention mechanism.
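The abstract does not spell out the exact coupling, so the following is only a rough sketch of the general idea of replacing one large attention map with two much smaller ones, here a row-wise and a column-wise map over the 2D token grid; the pooling scheme and einsum layout are assumptions for illustration, not the Couplformer algorithm.

```python
import torch

def coupled_attention_sketch(q, k, v, h, w):
    """Illustrative 'coupling' of the attention map into two small maps.

    Instead of forming the full (h*w) x (h*w) attention matrix, build one
    attention map over rows and one over columns of the 2D token grid and
    apply them in turn, so memory scales with h*h + w*w rather than (h*w)^2.
    q, k, v: (batch, h*w, dim) token features laid out on an h x w grid.
    """
    b, n, d = q.shape
    assert n == h * w
    qg, kg, vg = (t.reshape(b, h, w, d) for t in (q, k, v))

    # row-wise map: pool over columns, attend across rows -> (b, h, h)
    q_row, k_row = qg.mean(dim=2), kg.mean(dim=2)
    a_row = torch.softmax(q_row @ k_row.transpose(1, 2) / d ** 0.5, dim=-1)

    # column-wise map: pool over rows, attend across columns -> (b, w, w)
    q_col, k_col = qg.mean(dim=1), kg.mean(dim=1)
    a_col = torch.softmax(q_col @ k_col.transpose(1, 2) / d ** 0.5, dim=-1)

    out = torch.einsum('bij,bjwd->biwd', a_row, vg)   # mix along rows
    out = torch.einsum('bkl,bhld->bhkd', a_col, out)  # mix along columns
    return out.reshape(b, n, d)

x = torch.randn(2, 14 * 14, 64)
print(coupled_attention_sketch(x, x, x, h=14, w=14).shape)  # torch.Size([2, 196, 64])
```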
Self-attention shows outstanding competence in capturing long-range relationships and enhances performance on vision tasks such as image classification and image captioning. However, the self-attention module relies heavily on dot-product multiplication and dimension alignment among the query-key-value features, which causes two problems: (1) the dot-product multiplication results in exhaustive and redundant computation; (2) since the visual feature map usually appears as a multi-dimensional tensor, reshaping the tensor feature to fit the dimension alignment may destroy the internal structure of the tensor feature map. To address these problems, this paper proposes a self-attention plug-in module together with its variants, namely Synthesizing Tensor Transformations (STT), for processing image tensor features directly. Instead of computing the dot product among the query, key, and value, the basic STT is composed of tensor transformations that learn synthetic attention weights from the visual information. The effectiveness of the STT series is validated on image classification and image captioning. Experiments show that the proposed STTs achieve competitive performance while maintaining robustness compared with self-attention on the vision tasks.
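As a rough illustration of attention weights that are synthesized from the token features themselves rather than from a query-key dot product, here is a minimal sketch in the spirit of the idea above; the single linear "synthesizer" layer and the fixed token count are assumptions for illustration, not the STT tensor transformations.

```python
import torch
import torch.nn as nn

class SyntheticAttentionSketch(nn.Module):
    """Attention weights synthesized directly from each token's own features
    by a learned transformation, with no query-key dot product and no
    reshaping for dimension alignment. Purely illustrative; the fixed token
    count and single linear layer are not the STT tensor transformations."""
    def __init__(self, dim=64, n_tokens=196):
        super().__init__()
        self.synth = nn.Linear(dim, n_tokens)  # token feature -> one attention row
        self.value = nn.Linear(dim, dim)

    def forward(self, x):                             # x: (B, N, D)
        attn = torch.softmax(self.synth(x), dim=-1)   # (B, N, N), no dot product
        return attn @ self.value(x)

layer = SyntheticAttentionSketch()
print(layer(torch.randn(2, 196, 64)).shape)           # torch.Size([2, 196, 64])
```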
With the rise of Transformers as the standard for language processing and their advancements in computer vision, there has been a corresponding growth in parameter size and amounts of training data. Many have come to believe that, because of this, Transformers are not suitable for small amounts of data. This trend raises concerns such as the limited availability of data in certain scientific domains and the exclusion of researchers with limited resources from the field. In this paper, we aim to present an approach for small-scale learning by introducing Compact Transformers. We show for the first time that, with the right size and convolutional tokenization, Transformers can avoid overfitting and outperform state-of-the-art CNNs on small datasets. Our models are flexible in model size and can have as few as 0.28M parameters while achieving competitive results. Our best model reaches 98% accuracy when trained on CIFAR-10 with only 3.7M parameters, which is a significant improvement in data efficiency over previous Transformer-based models, being over 10x smaller than other transformers and 15% the size of ResNet50 while achieving similar performance. CCT also outperforms many modern CNN-based approaches, and even some NAS-based approaches. In addition, we obtain a new SOTA on Flowers-102 with 99.76% top-1 accuracy, and improve upon the existing baseline on ImageNet (82.71% accuracy with 29% of the ViT parameters) as well as on NLP tasks. Our simple and compact design for Transformers makes them more feasible to study for those with limited computing resources and/or dealing with small datasets, while extending existing research efforts in data-efficient transformers. Our code and pre-trained models are publicly available at https://github.com/shi-labs/compact-transformers.
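A minimal sketch of the convolutional tokenization mentioned above: a small convolution/pooling stack produces the token sequence instead of a hard, non-overlapping patch split. The layer sizes and the single conv block are illustrative choices, not the exact CCT configuration.

```python
import torch
import torch.nn as nn

class ConvTokenizer(nn.Module):
    """Convolutional tokenizer in the spirit of Compact Transformers: a small
    conv stack replaces the non-overlapping patch split, and the resulting
    feature map is flattened into a token sequence. Layer sizes here are
    illustrative, not the paper's exact configuration."""
    def __init__(self, in_ch=3, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, embed_dim, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        feat = self.conv(x)                     # (B, D, H/2, W/2)
        return feat.flatten(2).transpose(1, 2)  # (B, N, D) token sequence

tok = ConvTokenizer()
print(tok(torch.randn(4, 3, 32, 32)).shape)     # torch.Size([4, 256, 128])
```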
Vision transformers have shown great success on numerous computer vision tasks. However, their central component, softmax attention, prohibits vision transformers from scaling up to high-resolution images, since both its computational complexity and memory footprint are quadratic. Although linear attention was introduced in natural language processing (NLP) tasks to mitigate a similar issue, directly applying existing linear attention to vision transformers may not lead to satisfactory results. We investigate this problem and find that computer vision tasks focus more on local information than NLP tasks do. Based on this observation, we present Vicinity Attention, which introduces a locality bias into vision transformers with linear complexity. Specifically, for each image patch, we adjust its attention weights according to the 2D Manhattan distance measured from its neighbouring patches, so that neighbouring patches receive stronger attention than distant ones. Moreover, since our Vicinity Attention requires the token length to be much larger than the feature dimension to show its efficiency advantage, we further propose a new Vicinity Vision Transformer (VVT) structure that reduces the feature dimension without sacrificing accuracy. We conduct extensive experiments on the CIFAR-100, ImageNet-1K and ADE20K datasets to validate the effectiveness of our method. As the input resolution increases, our GFLOPs grow more slowly than those of previous transformer-based and convolution-based networks. In particular, our approach achieves state-of-the-art image classification accuracy with 50% fewer parameters than previous approaches.
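To make the locality bias concrete, here is a deliberately naive O(N²) sketch that down-weights attention between patches by their 2D Manhattan distance on the grid; the exponential-decay form and the decay constant are assumptions for illustration, and the paper's contribution is precisely to obtain this kind of bias at linear complexity.

```python
import torch

def vicinity_reweighted_attention(q, k, v, h, w, decay=0.1):
    """Naive O(N^2) illustration of the locality bias: attention between
    patches is down-weighted by their 2D Manhattan distance on the h x w
    grid. The paper achieves the same bias with linear complexity; this
    quadratic form only demonstrates the weighting idea."""
    b, n, d = q.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()   # (N, 2)
    manhattan = (coords[:, None, :] - coords[None, :, :]).abs().sum(-1)  # (N, N)

    scores = q @ k.transpose(1, 2) / d ** 0.5        # (B, N, N)
    scores = scores - decay * manhattan              # nearer patches favoured
    return torch.softmax(scores, dim=-1) @ v

x = torch.randn(2, 7 * 7, 32)
print(vicinity_reweighted_attention(x, x, x, h=7, w=7).shape)  # torch.Size([2, 49, 32])
```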
We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (i.e. shift, scale, and distortion invariance) while maintaining the merits of Transformers (i.e. dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (e.g. ImageNet-22k) and fine-tuned to downstream tasks. Pretrained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding, a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks. Code will be released at https: //github.com/leoxiaobin/CvT.
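As a concrete sketch of the convolutional projection idea, the snippet below replaces the usual linear Q/K/V projection with a depthwise-separable convolution over the 2D token grid; the kernel size, normalization, and layer order are assumptions for illustration rather than a reproduction of the CvT block.

```python
import torch
import torch.nn as nn

class ConvProjection(nn.Module):
    """Sketch of a convolutional projection in the spirit of CvT: queries,
    keys and values are produced by a depthwise convolution over the 2D
    token grid instead of a plain linear layer, so local spatial context
    enters the projection. Kernel size / norm choices are illustrative."""
    def __init__(self, dim, kernel_size=3, stride=1):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, stride=stride,
                      padding=kernel_size // 2, groups=dim, bias=False),  # depthwise
            nn.BatchNorm2d(dim),
            nn.Conv2d(dim, dim, kernel_size=1),                           # pointwise
        )

    def forward(self, tokens, h, w):            # tokens: (B, N, D) with N = h*w
        b, n, d = tokens.shape
        grid = tokens.transpose(1, 2).reshape(b, d, h, w)
        out = self.proj(grid)
        return out.flatten(2).transpose(1, 2)   # back to a (B, N', D) sequence

proj = ConvProjection(dim=64)
q = proj(torch.randn(2, 14 * 14, 64), h=14, w=14)
print(q.shape)   # torch.Size([2, 196, 64])
```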
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
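For readers who want the recipe in code, below is a minimal, heavily shrunken sketch of the patch-embedding plus Transformer-encoder pipeline described above; the dimensions, depth, and convolutional patch split are illustrative stand-ins, not the ViT-B/16 configuration.

```python
import torch
import torch.nn as nn

class MiniViT(nn.Module):
    """Minimal sketch of the pure-transformer recipe: split the image into
    fixed-size patches, linearly embed them, prepend a class token, add
    positional embeddings, and run a standard Transformer encoder.
    Hyper-parameters are illustrative, far smaller than the real ViT."""
    def __init__(self, img=32, patch=8, dim=64, depth=2, heads=4, classes=10):
        super().__init__()
        n = (img // patch) ** 2
        self.to_patches = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                                     # (B, 3, H, W)
        p = self.to_patches(x).flatten(2).transpose(1, 2)     # (B, N, D)
        p = torch.cat([self.cls.expand(p.size(0), -1, -1), p], dim=1) + self.pos
        return self.head(self.encoder(p)[:, 0])               # classify from class token

print(MiniViT()(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```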
Vision transformers have been successfully applied to image recognition tasks due to their ability to capture long-range dependencies within an image. However, there are still gaps in both performance and computational cost between transformers and existing convolutional neural networks (CNNs). In this paper, we aim to address this issue and develop a network that can outperform not only the canonical transformers but also the high-performance convolutional models. We propose a new transformer-based hybrid network that takes advantage of transformers to capture long-range dependencies and of CNNs to model local features. Furthermore, we scale it to obtain a family of models called CMT, which achieves much better accuracy and efficiency than previous convolution-based and transformer-based models. In particular, our CMT-S achieves 83.5% top-1 accuracy on ImageNet, while being 14x and 2x smaller in FLOPs than the existing DeiT and EfficientNet, respectively. The proposed CMT-S also generalizes well to CIFAR10 (99.2%), CIFAR100 (91.7%), Flowers (98.7%) and other challenging vision datasets such as COCO (44.3% mAP), at considerably lower computational cost.
Transformer is a new kind of neural architecture which encodes the input data as powerful features via the attention mechanism. Basically, the visual transformers first divide the input images into several local patches and then calculate both representations and their relationship. Since natural images are of high complexity with abundant detail and color information, the granularity of the patch dividing is not fine enough for excavating features of objects in different scales and locations. In this paper, we point out that the attention inside these local patches are also essential for building visual transformers with high performance and we explore a new architecture, namely, Transformer iN Transformer (TNT). Specifically, we regard the local patches (e.g., 16×16) as "visual sentences" and present to further divide them into smaller patches (e.g., 4×4) as "visual words". The attention of each word will be calculated with other words in the given visual sentence with negligible computational costs. Features of both words and sentences will be aggregated to enhance the representation ability. Experiments on several benchmarks demonstrate the effectiveness of the proposed TNT architecture, e.g., we achieve an 81.5% top-1 accuracy on the ImageNet, which is about 1.7% higher than that of the state-of-the-art visual transformer with similar computational cost.
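The block below is a simplified sketch of the word/sentence structure described above: an inner attention mixes the visual words inside each patch, their aggregate is folded into the sentence embedding, and an outer attention mixes the sentences. The dimensions and the single-linear aggregation are assumptions for illustration, not the official TNT block.

```python
import torch
import torch.nn as nn

class TNTBlockSketch(nn.Module):
    """Illustrative (not the official) TNT-style block: inner attention among
    the 'visual words' inside each patch, aggregation of word features into
    the patch-level 'visual sentence' embedding, then outer attention among
    the sentences."""
    def __init__(self, word_dim=24, sent_dim=96, words_per_sent=16, heads=4):
        super().__init__()
        self.inner = nn.MultiheadAttention(word_dim, heads, batch_first=True)
        self.outer = nn.MultiheadAttention(sent_dim, heads, batch_first=True)
        self.word2sent = nn.Linear(word_dim * words_per_sent, sent_dim)

    def forward(self, words, sents):
        # words: (B, S, W, Dw) -- W visual words per sentence
        # sents: (B, S, Ds)    -- one embedding per visual sentence
        b, s, w, dw = words.shape
        flat = words.reshape(b * s, w, dw)
        flat = flat + self.inner(flat, flat, flat)[0]        # word-level attention
        words = flat.reshape(b, s, w, dw)
        sents = sents + self.word2sent(words.reshape(b, s, w * dw))
        sents = sents + self.outer(sents, sents, sents)[0]   # sentence-level attention
        return words, sents

blk = TNTBlockSketch()
w = torch.randn(2, 196, 16, 24)     # 196 sentences of 16 words each
s = torch.randn(2, 196, 96)
print([t.shape for t in blk(w, s)])
```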
Transformer architectures are now central to sequence modeling tasks. At their heart is the attention mechanism, which enables effective modeling of long-term dependencies in a sequence. Recently, transformers have been successfully applied in the computer vision domain, where 2D images are first segmented into patches and then treated as 1D sequences. Such linearization, however, impairs the notion of spatial locality in images, which bears important visual clues. To bridge this gap, we propose ripple attention, a sub-quadratic attention mechanism for vision transformers. Built upon recent kernel-based efficient attention mechanisms, we design a novel dynamic programming algorithm that weights the contribution of each token to a query according to their relative spatial distance in the 2D space, in linear observed time. Extensive experiments and analyses demonstrate the effectiveness of ripple attention on various visual tasks.
Recently, vision transformers have become very popular. However, deploying them in many applications is computationally expensive, partly due to the softmax layer in the attention block. We introduce a simple but effective, softmax-free attention block, SimA, which normalizes the query and key matrices with a simple $\ell_1$-norm instead of using a softmax layer. The attention block in SimA is then a simple multiplication of three matrices, so SimA can dynamically change the order of the computation at test time to achieve computation that is linear in the number of tokens or in the number of channels. We empirically show that SimA, applied to three SOTA variants of transformers, DeiT, XCiT, and CvT, achieves on-par accuracy compared to the SOTA models without requiring the softmax layer. Interestingly, changing SimA from multi-head to single-head has only a small effect on accuracy, which simplifies the attention block further. The code is available at https://github.com/ucdvision/sima.
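Because the abstract specifies the mechanism at a high level, a short sketch is easy to give: $\ell_1$-normalize Q and K, then exploit matrix associativity to pick the cheaper multiplication order. The normalization axis chosen here is an assumption; the rest follows directly from the description above.

```python
import torch

def sima_style_attention(q, k, v, eps=1e-6):
    """Sketch of a softmax-free attention in the spirit of SimA: the query
    and key matrices are L1-normalized (here along the token axis; the exact
    axis/details follow the paper and may differ), and attention becomes a
    product of three matrices whose evaluation order can be chosen to be
    linear in tokens or in channels."""
    q = q / (q.abs().sum(dim=1, keepdim=True) + eps)   # (B, N, D)
    k = k / (k.abs().sum(dim=1, keepdim=True) + eps)

    b, n, d = q.shape
    if n <= d:      # linear in channels: (Q K^T) V costs O(N^2 D)
        return (q @ k.transpose(1, 2)) @ v
    else:           # linear in tokens:   Q (K^T V) costs O(N D^2)
        return q @ (k.transpose(1, 2) @ v)

x = torch.randn(2, 196, 64)
print(sima_style_attention(x, x, x).shape)   # torch.Size([2, 196, 64])
```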
The recently developed vision transformer (ViT) has achieved promising results on image classification compared to convolutional neural networks. Inspired by this, in this paper, we study how to learn multi-scale feature representations in transformer models for image classification. To this end, we propose a dual-branch transformer to combine image patches (i.e., tokens in a transformer) of different sizes to produce stronger image features. Our approach processes small-patch and large-patch tokens with two separate branches of different computational complexity and these tokens are then fused purely by attention multiple times to complement each other. Furthermore, to reduce computation, we develop a simple yet effective token fusion module based on cross attention, which uses a single token for each branch as a query to exchange information with other branches. Our proposed cross-attention only requires linear time for both computational and memory complexity instead of quadratic time otherwise. Extensive experiments demonstrate that our approach performs better than or on par with several concurrent works on vision transformer, in addition to efficient CNN models. For example, on the ImageNet1K dataset, with some architectural changes, our approach outperforms the recent DeiT by a large margin of 2% with a small to moderate increase in FLOPs and model parameters. Our source codes and models are available at https://github.com/IBM/CrossViT.
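A minimal sketch of the cross-attention fusion described above, where one branch's class token is the sole query against the other branch's patch tokens, making the exchange linear in the token count; the dimensions and the omitted branch-to-branch projection layers are simplifications for illustration.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Sketch of the cross-attention token fusion: the class token of one
    branch acts as the only query and attends to the patch tokens of the
    other branch, so the cost is linear in the number of tokens. The
    projections between branch dimensions are omitted for brevity."""
    def __init__(self, dim=96, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, cls_a, tokens_b):
        # cls_a: (B, 1, D) class token of branch A; tokens_b: (B, N, D) of branch B
        fused, _ = self.attn(query=cls_a, key=tokens_b, value=tokens_b)
        return cls_a + fused          # branch A's class token, enriched by branch B

fuse = CrossAttentionFusion()
out = fuse(torch.randn(2, 1, 96), torch.randn(2, 196, 96))
print(out.shape)   # torch.Size([2, 1, 96])
```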
There still remains an extreme performance gap between Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) when training from scratch on small datasets, which is concluded to the lack of inductive bias. In this paper, we further consider this problem and point out two weaknesses of ViTs in inductive biases, that is, the spatial relevance and diverse channel representation. First, on spatial aspect, objects are locally compact and relevant, thus fine-grained feature needs to be extracted from a token and its neighbors. While the lack of data hinders ViTs to attend the spatial relevance. Second, on channel aspect, representation exhibits diversity on different channels. But the scarce data can not enable ViTs to learn strong enough representation for accurate recognition. To this end, we propose Dynamic Hybrid Vision Transformer (DHVT) as the solution to enhance the two inductive biases. On spatial aspect, we adopt a hybrid structure, in which convolution is integrated into patch embedding and multi-layer perceptron module, forcing the model to capture the token features as well as their neighboring features. On channel aspect, we introduce a dynamic feature aggregation module in MLP and a brand new "head token" design in multi-head self-attention module to help re-calibrate channel representation and make different channel group representation interacts with each other. The fusion of weak channel representation forms a strong enough representation for classification. With this design, we successfully eliminate the performance gap between CNNs and ViTs, and our DHVT achieves a series of state-of-the-art performance with a lightweight model, 85.68% on CIFAR-100 with 22.8M parameters, 82.3% on ImageNet-1K with 24.0M parameters. Code is available at https://github.com/ArieSeirack/DHVT.
Vision transformers (ViT) serve as powerful vision models. Unlike the convolutional neural networks that dominated vision research in previous years, vision transformers enjoy the ability to capture long-range dependencies in the data. Nonetheless, an integral part of any transformer architecture, the self-attention mechanism, suffers from high latency and inefficient memory utilization, making it less suitable for high-resolution input images. To alleviate these shortcomings, hierarchical vision models apply self-attention locally on non-overlapping windows. This relaxation reduces the complexity to be linear in the input size; however, it limits cross-window interaction, hurting model performance. In this paper, we propose a new shift-invariant local attention layer, called query and attend (QnA), which aggregates the input locally in an overlapping manner, much like convolutions. The key idea behind QnA is to introduce learned queries, which allow a fast and efficient implementation. We verify the effectiveness of our layer by incorporating it into a hierarchical vision transformer model. We show improvements in speed and memory complexity while achieving accuracy comparable to state-of-the-art models. Finally, our layer scales especially well with window size, requiring up to 10x less memory while being faster than existing methods.
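The snippet below is one plausible, simplified reading of the learned-query idea: a single learned query attends to each overlapping k×k window of keys/values, yielding one output per location, analogous to a convolution. The single-head form, fixed window size, and unfold-based implementation are all assumptions for illustration, not the QnA code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedQueryLocalAttention(nn.Module):
    """One simplified reading of 'learned queries': a single learned query
    vector attends to each overlapping k x k window of key/value features,
    producing one output per spatial location, much like a convolution.
    Single-head and unfold-based; not the QnA implementation."""
    def __init__(self, dim=64, window=3):
        super().__init__()
        self.window = window
        self.query = nn.Parameter(torch.randn(dim))
        self.to_kv = nn.Linear(dim, 2 * dim)

    def forward(self, x):                                   # x: (B, D, H, W)
        b, d, h, w = x.shape
        kv = self.to_kv(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        k, v = kv.chunk(2, dim=1)                           # each (B, D, H, W)

        def windows(t):                                     # -> (B, H*W, k*k, D)
            cols = F.unfold(t, self.window, padding=self.window // 2)
            return cols.reshape(b, d, self.window ** 2, h * w).permute(0, 3, 2, 1)

        k_win, v_win = windows(k), windows(v)
        attn = ((k_win @ self.query) / d ** 0.5).softmax(dim=-1)   # (B, H*W, k*k)
        out = (attn.unsqueeze(-2) @ v_win).squeeze(-2)             # (B, H*W, D)
        return out.permute(0, 2, 1).reshape(b, d, h, w)

layer = LearnedQueryLocalAttention()
print(layer(torch.randn(2, 64, 14, 14)).shape)   # torch.Size([2, 64, 14, 14])
```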
Vision transformers (ViTs) have pushed the state of the art on various visual recognition tasks via patch-wise image tokenization followed by stacked self-attention operations. Employing the self-attention module, however, incurs quadratic complexity in both computation and memory usage. Various attempts have therefore been made in natural language processing to approximate self-attention with linear complexity. However, an in-depth analysis in this work shows that they are either theoretically flawed or empirically ineffective for visual recognition. We identify that their limitations are rooted in keeping the softmax self-attention during approximation. Specifically, conventional self-attention is computed by normalizing the scaled dot product between token feature vectors, and keeping this softmax operation challenges any subsequent linearization effort. Based on this insight, a softmax-free transformer (abbreviated as SOFT) is proposed for the first time. To remove the softmax operator from self-attention, a Gaussian kernel function is adopted to replace the dot-product similarity. This enables the full self-attention matrix to be approximated via a low-rank matrix decomposition. The robustness of our approximation is achieved by computing its Moore-Penrose inverse with a Newton-Raphson method. Furthermore, an efficient symmetric normalization is introduced on the low-rank self-attention to enhance model generalizability and transferability. Extensive experiments on ImageNet, COCO and ADE20K show that our SOFT significantly improves the computational efficiency of existing ViT variants. Crucially, the linear complexity permits much longer token sequences, resulting in a superior trade-off between accuracy and complexity.
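The two ingredients named in the abstract, a Gaussian-kernel similarity and a Newton-Raphson iteration for the Moore-Penrose inverse, can be sketched directly; the landmark sampling and the Nyström-style reconstruction below are illustrative assumptions about how the low-rank decomposition is assembled, not the exact SOFT algorithm.

```python
import torch

def gaussian_kernel(q, k):
    """Gaussian-kernel similarity used in place of dot-product + softmax:
    exp(-||q_i - k_j||^2 / 2). Pairwise distances are O(N^2) here; the paper
    avoids this via a low-rank (sampled/landmark) approximation."""
    return torch.exp(-0.5 * torch.cdist(q, k) ** 2)

def moore_penrose_pinv(a, iters=6):
    """Newton-Raphson (Newton-Schulz) iteration for the Moore-Penrose
    pseudo-inverse, as mentioned in the abstract: X_{t+1} = X_t (2I - A X_t),
    applied here to the small landmark-by-landmark kernel block."""
    x = a.transpose(-1, -2) / (a.abs().sum(-1).max() * a.abs().sum(-2).max())
    eye = torch.eye(a.shape[-1], device=a.device)
    for _ in range(iters):
        x = x @ (2 * eye - a @ x)
    return x

# Low-rank reconstruction of the full N x N kernel matrix from m landmark
# tokens -- an illustration of the decomposition idea, not the SOFT code.
q = torch.randn(196, 64)
landmarks = q[::14]                              # m = 14 sampled tokens
k_nm = gaussian_kernel(q, landmarks)             # (N, m)
k_mm = gaussian_kernel(landmarks, landmarks)     # (m, m)
approx = k_nm @ moore_penrose_pinv(k_mm) @ k_nm.T   # ~ full (N, N) kernel
print(approx.shape)   # torch.Size([196, 196])
```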
Transformers are built upon multi-head scaled dot-product attention and positional encoding, which aim to learn feature representations and token dependencies. In this work, we focus on enhancing distinctive representations by learning to augment the feature maps with the self-attention mechanism in Transformers. Specifically, we propose horizontal attention to re-weight the multi-head outputs of the scaled dot-product attention before dimensionality reduction, and vertical attention to adaptively re-calibrate channel-wise responses by explicitly modelling inter-dependencies among different channels. We demonstrate that Transformer models equipped with the two attentions generalize well across different supervised learning tasks, with only a small additional computational overhead. The proposed horizontal and vertical attentions are highly modular and can be inserted into various Transformer models to further improve performance. Our code is available in the supplementary material.
Transformers have achieved great success in natural language processing. Due to the powerful capability of the self-attention mechanism in transformers, researchers have developed vision transformers for a variety of computer vision tasks, such as image recognition, object detection, image segmentation, pose estimation, and 3D reconstruction. This paper presents a comprehensive overview of the literature on the different architecture designs and training tricks (including self-supervised learning) for vision transformers. Our goal is to provide a systematic review that highlights open research opportunities.
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
With the popularity of Transformer architectures in computer vision, the research focus has shifted towards developing computationally efficient designs. Window-based local attention is one of the major techniques adopted in recent works. These methods begin with a very small patch size and small embedding dimensions, and then perform strided convolutions (patch merging) to reduce the feature map size and increase the embedding dimensions, hence forming a pyramidal, convolutional-neural-network (CNN)-like design. In this work, we investigate local and global information modelling in transformers by presenting a novel isotropic architecture that adopts local windows and special tokens, called super tokens, for self-attention. Specifically, a single super token is assigned to each image window and captures the rich local details of that window. These tokens are then used for cross-window communication and global representation learning. Hence, most of the learning in the higher layers is independent of the number of image patches $(N)$ and is based solely on the super tokens $(N/M^2)$, where $M^2$ is the window size. On standard ImageNet-1K image classification, the proposed super-token-based transformer (STT-S25) achieves 83.5% accuracy, which is on par with the Swin transformer (Swin-B) while using about half the parameters (49M) and providing twice the inference throughput. The proposed super token transformer offers a lightweight and promising backbone for visual recognition tasks.
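A simplified sketch of the two-stage mixing implied above: each window's patch tokens attend locally together with their super token, and the super tokens alone then attend globally across windows. The module below is an illustration under these assumptions, not the STT architecture itself.

```python
import torch
import torch.nn as nn

class SuperTokenMixerSketch(nn.Module):
    """Illustrative two-stage mixing in the spirit of the super-token idea:
    each window's tokens attend locally together with one 'super token' that
    summarizes the window, and the super tokens alone then attend globally
    for cross-window communication. Simplified; not the STT code."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.local = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, windows, supers):
        # windows: (B, W, M2, D) -- W windows of M^2 patch tokens
        # supers:  (B, W, D)     -- one super token per window
        b, w, m2, d = windows.shape
        seq = torch.cat([supers.reshape(b * w, 1, d),
                         windows.reshape(b * w, m2, d)], dim=1)
        seq = seq + self.local(seq, seq, seq)[0]                        # local, per window
        supers = seq[:, 0].reshape(b, w, d)
        supers = supers + self.global_attn(supers, supers, supers)[0]   # global, W tokens
        windows = seq[:, 1:].reshape(b, w, m2, d)
        return windows, supers

mix = SuperTokenMixerSketch()
wins, sups = mix(torch.randn(2, 49, 16, 64), torch.randn(2, 49, 64))
print(wins.shape, sups.shape)
```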
Vision Transformers (ViTs) have become a dominant paradigm for visual representation learning with self-attention operators. Although these operators provide flexibility to the model with their adjustable attention kernels, they suffer from inherent limitations: (1) the attention kernel is not discriminative enough, resulting in high redundancy of the ViT layers, and (2) the complexity in computation and memory is quadratic in the sequence length. In this paper, we propose a novel attention operator, called lightweight structure-aware attention (LiSA), which has a better representation power with log-linear complexity. Our operator learns structural patterns by using a set of relative position embeddings (RPEs). To achieve log-linear complexity, the RPEs are approximated with fast Fourier transforms. Our experiments and ablation studies demonstrate that ViTs based on the proposed operator outperform self-attention and other existing operators, achieving state-of-the-art results on ImageNet, and competitive results on other visual understanding benchmarks such as COCO and Something-Something-V2. The source code of our approach will be released online.
Facial Expression Recognition (FER) in the wild is an extremely challenging task. Recently, some Vision Transformers (ViT) have been explored for FER, but most of them perform inferiorly compared to Convolutional Neural Networks (CNN). This is mainly because the new proposed modules are difficult to converge well from scratch due to lacking inductive bias and easy to focus on the occlusion and noisy areas. TransFER, a representative transformer-based method for FER, alleviates this with multi-branch attention dropping but brings excessive computations. On the contrary, we present two attentive pooling (AP) modules to pool noisy features directly. The AP modules include Attentive Patch Pooling (APP) and Attentive Token Pooling (ATP). They aim to guide the model to emphasize the most discriminative features while reducing the impacts of less relevant features. The proposed APP is employed to select the most informative patches on CNN features, and ATP discards unimportant tokens in ViT. Being simple to implement and without learnable parameters, the APP and ATP intuitively reduce the computational cost while boosting the performance by ONLY pursuing the most discriminative features. Qualitative results demonstrate the motivations and effectiveness of our attentive poolings. Besides, quantitative results on six in-the-wild datasets outperform other state-of-the-art methods.
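As an illustration of attentive token pooling without learnable parameters, the sketch below scores tokens by a parameter-free similarity to the class token and keeps only the top-k; the specific scoring function and keep ratio are assumptions for illustration, not the ATP module.

```python
import torch
import torch.nn as nn

class AttentiveTokenPooling(nn.Module):
    """Sketch of the token-pooling idea: tokens are scored (here by a
    parameter-free cosine similarity to the class token, an assumption for
    illustration) and only the top-k most relevant ones are kept, discarding
    occluded/noisy regions without any extra learnable weights."""
    def __init__(self, keep_ratio=0.5):
        super().__init__()
        self.keep_ratio = keep_ratio

    def forward(self, cls_tok, tokens):
        # cls_tok: (B, D); tokens: (B, N, D)
        scores = torch.cosine_similarity(tokens, cls_tok.unsqueeze(1), dim=-1)  # (B, N)
        k = max(1, int(tokens.size(1) * self.keep_ratio))
        idx = scores.topk(k, dim=1).indices                                     # (B, k)
        return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))

pool = AttentiveTokenPooling()
print(pool(torch.randn(2, 64), torch.randn(2, 196, 64)).shape)  # torch.Size([2, 98, 64])
```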