There remains an extreme performance gap between Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) when training from scratch on small datasets, which is attributed to the lack of inductive bias. In this paper, we further consider this problem and point out two weaknesses of ViTs in inductive biases, namely spatial relevance and diverse channel representation. First, on the spatial aspect, objects are locally compact and relevant, so fine-grained features need to be extracted from a token and its neighbors, yet the lack of data hinders ViTs from attending to this spatial relevance. Second, on the channel aspect, representations exhibit diversity across different channels, but scarce data does not allow ViTs to learn representations strong enough for accurate recognition. To this end, we propose the Dynamic Hybrid Vision Transformer (DHVT) as a solution that strengthens both inductive biases. On the spatial aspect, we adopt a hybrid structure in which convolution is integrated into the patch embedding and multi-layer perceptron modules, forcing the model to capture token features as well as their neighboring features. On the channel aspect, we introduce a dynamic feature aggregation module in the MLP and a brand new "head token" design in the multi-head self-attention module to help re-calibrate channel representations and make different channel-group representations interact with each other. The fusion of weak channel representations forms a representation strong enough for classification. With this design, we successfully eliminate the performance gap between CNNs and ViTs, and our DHVT achieves a series of state-of-the-art results with lightweight models: 85.68% on CIFAR-100 with 22.8M parameters and 82.3% on ImageNet-1K with 24.0M parameters. Code is available at https://github.com/ArieSeirack/DHVT.
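To make the spatial-side idea above concrete, here is a rough PyTorch sketch of a token MLP with a depthwise convolution inserted between its two linear layers, so each token is mixed with its spatial neighbors. The module name, dimensions, and the token-to-grid reshaping are illustrative assumptions, not the authors' exact DHVT implementation.

```python
import torch
import torch.nn as nn

class ConvEnhancedMLP(nn.Module):
    """Token MLP with a depthwise conv between the two linear layers (illustrative sketch)."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        # Depthwise 3x3 conv mixes each token with its spatial neighbors.
        self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3,
                                padding=1, groups=hidden_dim)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x, h, w):
        # x: (B, N, C) patch tokens, with h * w == N
        b, n, _ = x.shape
        x = self.act(self.fc1(x))
        x = x.transpose(1, 2).reshape(b, -1, h, w)   # token sequence -> 2D feature map
        x = self.dwconv(x)
        x = x.flatten(2).transpose(1, 2)             # back to token sequence
        return self.fc2(x)

tokens = torch.randn(2, 14 * 14, 192)
print(ConvEnhancedMLP(192, 384)(tokens, 14, 14).shape)  # torch.Size([2, 196, 192])
```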
With the growing popularity of Transformer architectures in computer vision, the research focus has shifted towards developing computationally efficient designs. Window-based local attention is one of the major techniques adopted in recent works. These approaches begin with very small patch sizes and small embedding dimensions and then perform strided convolutions (patch merging) to reduce the feature map size and increase the embedding dimension, thereby forming a pyramidal design resembling convolutional neural networks (CNNs). In this work, we investigate local and global information modeling in Transformers by presenting a new isotropic architecture that adopts local windows and special tokens, called super tokens, for self-attention. Specifically, a single super token is assigned to each image window, which captures the rich local details of that window. These tokens are then employed for cross-window communication and global representation learning. Hence, most of the learning is independent of the image patches $(N)$ at higher layers, and the additional embeddings are learned based only on the super tokens $(N/M^2)$, where $M^2$ is the window size. In standard image classification on ImageNet-1K, the proposed super-token-based transformer (STT-S25) achieves 83.5% accuracy, which is equivalent to Swin Transformer (Swin-B) with roughly half the number of parameters (49M) and double the inference throughput. The proposed super token transformer offers a lightweight and promising backbone for visual recognition tasks.
We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (i.e. shift, scale, and distortion invariance) while maintaining the merits of Transformers (i.e. dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (e.g. ImageNet-22k) and fine-tuned to downstream tasks. Pretrained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding, a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks. Code will be released at https://github.com/leoxiaobin/CvT.
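The convolutional projection described above can be pictured as replacing the linear Q/K/V projections with depthwise-separable convolutions applied to the token map. The sketch below is a simplified, hypothetical rendering of that idea (single head, no stride on K/V, no class-token handling), not the released CvT code.

```python
import torch
import torch.nn as nn

def conv_proj(dim):
    # Depthwise 3x3 conv followed by a pointwise (1x1) conv, as a separable projection.
    return nn.Sequential(
        nn.Conv2d(dim, dim, 3, padding=1, groups=dim),
        nn.BatchNorm2d(dim),
        nn.Conv2d(dim, dim, 1),
    )

class ConvProjectionAttention(nn.Module):
    """Single-head self-attention whose Q/K/V come from convolutional projections (sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = conv_proj(dim), conv_proj(dim), conv_proj(dim)
        self.scale = dim ** -0.5

    def forward(self, x):                          # x: (B, C, H, W) token map
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.k(x).flatten(2).transpose(1, 2)
        v = self.v(x).flatten(2).transpose(1, 2)
        attn = (q @ k.transpose(1, 2)) * self.scale
        out = attn.softmax(dim=-1) @ v             # (B, HW, C)
        return out.transpose(1, 2).reshape(b, c, h, w)

x = torch.randn(1, 64, 14, 14)
print(ConvProjectionAttention(64)(x).shape)        # torch.Size([1, 64, 14, 14])
```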
Vision transformers have been successfully applied to image recognition tasks due to their ability to capture long-range dependencies within an image. However, there are still gaps in both performance and computational cost between transformers and existing convolutional neural networks (CNNs). In this paper, we aim to address this issue and develop a network that can outperform not only the canonical transformers, but also high-performance convolutional models. We propose a new transformer-based hybrid network by taking advantage of transformers to capture long-range dependencies and of CNNs to model local features. Furthermore, we scale it to obtain a family of models, called CMT, which achieve better accuracy and efficiency than previous convolution-based and transformer-based models. In particular, our CMT-S achieves 83.5% top-1 accuracy on ImageNet, while being 14x and 2x smaller in FLOPs than the existing DeiT and EfficientNet, respectively. The proposed CMT-S also generalizes well on CIFAR10 (99.2%), CIFAR100 (91.7%), Flowers (98.7%), and other challenging vision datasets such as COCO (44.3% mAP), with considerably less computational cost.
We present a neat yet effective recursive operation on vision transformers that can improve parameter utilization without involving additional parameters. This is achieved by sharing weights across the depth of transformer networks. The proposed method can obtain a substantial gain (~2%) simply using naïve recursive operations, requires no special or sophisticated knowledge for designing principles of networks, and introduces minimal computational overhead to the training procedure. To reduce the additional computation caused by the recursive operation while maintaining superior accuracy, we introduce an approximating method through multiple sliced group self-attentions across recursive layers, which can reduce the cost consumption by 10~30% with minimal performance loss. We call our model Sliced Recursive Transformer (SReT), which is compatible with a broad range of other designs for efficient vision transformers. Our best model establishes significant improvement on ImageNet over state-of-the-art methods while containing fewer parameters. The proposed sliced recursive operation allows us to build a transformer with more than 100 or even 1000 layers while still keeping a small size (13~15M), to avoid difficulties in optimization when the model size is too large. The flexible scalability shows great potential for scaling up and constructing extremely deep and large-dimensionality vision transformers. Our code and models are available at https://github.com/szq0214/sret.
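The recursive weight sharing described above boils down to applying the same transformer block several times per "layer" instead of stacking independent blocks. A minimal sketch, assuming a standard `nn.TransformerEncoderLayer` as the shared block (the real SReT additionally uses sliced group self-attention and learnable scaling, omitted here):

```python
import torch
import torch.nn as nn

class RecursiveBlock(nn.Module):
    """Apply one shared transformer block `loops` times: extra depth, no extra parameters."""
    def __init__(self, dim, heads=4, loops=2):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                dim_feedforward=dim * 4,
                                                batch_first=True)
        self.loops = loops

    def forward(self, x):
        for _ in range(self.loops):   # weights are reused across recursions
            x = self.block(x)
        return x

tokens = torch.randn(2, 196, 192)
layer = RecursiveBlock(192, loops=3)
print(sum(p.numel() for p in layer.parameters()))  # parameter count is independent of `loops`
print(layer(tokens).shape)                         # torch.Size([2, 196, 192])
```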
Compared with the vanilla transformer, the window-based transformer offers a better trade-off between accuracy and efficiency. Although the window-based transformer has made great progress, its long-range modeling capability is limited by the size of the local window and the window connection scheme. To address this problem, we propose a novel Token Transformer (TT). The core mechanism of TT is the addition of a Class (CLS) token in each local window to summarize the window information; we refer to this type of token interaction as CLS Attention. These CLS tokens interact spatially with the tokens in each window to enable long-range modeling. In order to preserve the hierarchical design of the window-based transformer, we design a Feature Inheritance Module (FIM) in each phase of TT to deliver the local window information from the previous phase to the CLS token in the next phase. In addition, we design a Spatial-Channel Feedforward Network (SCFFN) in TT, which can mix CLS tokens and embedded tokens in both the spatial and channel domains without additional parameters. Extensive experiments show that our TT achieves competitive results with few parameters in image classification and downstream tasks.
Transformer is a new kind of neural architecture which encodes the input data as powerful features via the attention mechanism. Basically, visual transformers first divide the input images into several local patches and then calculate both their representations and their relationships. Since natural images are of high complexity with abundant detail and color information, the granularity of the patch dividing is not fine enough for excavating features of objects at different scales and locations. In this paper, we point out that the attention inside these local patches is also essential for building visual transformers with high performance, and we explore a new architecture, namely, Transformer iN Transformer (TNT). Specifically, we regard the local patches (e.g., 16×16) as "visual sentences" and propose to further divide them into smaller patches (e.g., 4×4) as "visual words". The attention of each word will be calculated with the other words in the given visual sentence at negligible computational cost. Features of both words and sentences will be aggregated to enhance the representation ability. Experiments on several benchmarks demonstrate the effectiveness of the proposed TNT architecture, e.g., we achieve an 81.5% top-1 accuracy on ImageNet, which is about 1.7% higher than that of the state-of-the-art visual transformer with similar computational cost.
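One way to read the "visual words inside visual sentences" idea is a small inner attention over sub-patches of each patch, whose output is folded back into the outer patch token before the usual outer attention. The sketch below is a heavily simplified, assumed rendering of that structure (shapes and the fusion by a linear projection are illustrative, not the released TNT code):

```python
import torch
import torch.nn as nn

class TNTBlockSketch(nn.Module):
    """Inner attention over 'visual words', fused into 'visual sentence' tokens (sketch)."""
    def __init__(self, outer_dim=192, inner_dim=12, words_per_patch=16, heads=4):
        super().__init__()
        self.inner_attn = nn.MultiheadAttention(inner_dim, num_heads=2, batch_first=True)
        self.fuse = nn.Linear(words_per_patch * inner_dim, outer_dim)
        self.outer_attn = nn.MultiheadAttention(outer_dim, num_heads=heads, batch_first=True)

    def forward(self, words, sentences):
        # words: (B * num_patches, words_per_patch, inner_dim)
        # sentences: (B, num_patches, outer_dim)
        b, n, _ = sentences.shape
        w, _ = self.inner_attn(words, words, words)                  # word-level attention
        words = words + w
        sentences = sentences + self.fuse(words.reshape(b, n, -1))   # inject word info
        s, _ = self.outer_attn(sentences, sentences, sentences)      # sentence-level attention
        return words, sentences + s

words = torch.randn(2 * 196, 16, 12)
sentences = torch.randn(2, 196, 192)
w_out, s_out = TNTBlockSketch()(words, sentences)
print(w_out.shape, s_out.shape)  # torch.Size([392, 16, 12]) torch.Size([2, 196, 192])
```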
Since the recent success of Vision Transformers (ViTs), explorations toward transformer-style architectures have triggered the resurgence of modern ConvNets. In this work, we explore the representation ability of DNNs through the lens of interaction complexities. We empirically show that interaction complexity is an overlooked but essential indicator for visual recognition. Accordingly, a new family of efficient ConvNets, named MogaNet, is presented to pursue informative context mining in pure ConvNet-based models, with preferable complexity-performance trade-offs. In MogaNet, interactions across multiple complexities are facilitated and contextualized by leveraging two specially designed aggregation blocks in both spatial and channel interaction spaces. Extensive studies are conducted on ImageNet classification, COCO object detection, and ADE20K semantic segmentation tasks. The results demonstrate that our MogaNet establishes new state-of-the-art over other popular methods in mainstream scenarios and all model scales. Typically, the lightweight MogaNet-T achieves 80.0\% top-1 accuracy with only 1.44G FLOPs using a refined training setup on ImageNet-1K, surpassing ParC-Net-S by 1.4\% accuracy but saving 59\% (2.04G) FLOPs.
Vision transformers (ViTs) are usually considered less light-weight than convolutional neural networks (CNNs) due to the lack of inductive bias. Recent works therefore treat convolutions as a plug-and-play module and embed them into various ViT counterparts. In this paper, we argue that convolutional kernels perform information aggregation to connect all tokens; however, such explicit aggregation would actually be unnecessary for light-weight ViTs if it could work in a more homogeneous way. Inspired by this, we present LightViT as a new family of light-weight ViTs that achieves a better accuracy-efficiency balance upon pure transformer blocks without convolution. Concretely, we introduce a global yet efficient aggregation scheme into both the self-attention and feed-forward network (FFN) of ViTs, where additional learnable tokens are introduced to capture global dependencies, and bi-dimensional channel and spatial attention is imposed on the token embeddings. Experiments show that our models achieve significant improvements on image classification, object detection, and semantic segmentation tasks. For example, our LightViT-T achieves 78.7% accuracy on ImageNet with only 0.7G FLOPs, outperforming PVTv2-B0 by 8.2% while being 11% faster on GPU. Code is available at https://github.com/hunto/lightvit.
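The global aggregation scheme mentioned above can be caricatured as a small set of learnable tokens that first gather information from all patch tokens and then broadcast it back, so every token receives global context without full pairwise attention among patches. The sketch below is a hypothetical rendering of that two-step exchange, not the LightViT implementation.

```python
import torch
import torch.nn as nn

class GlobalTokenAggregation(nn.Module):
    """A few learnable global tokens gather from, then broadcast to, all patch tokens (sketch)."""
    def __init__(self, dim, num_global=8, heads=4):
        super().__init__()
        self.global_tokens = nn.Parameter(torch.randn(1, num_global, dim) * 0.02)
        self.gather = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.broadcast = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                   # x: (B, N, C) patch tokens
        g = self.global_tokens.expand(x.shape[0], -1, -1)
        g, _ = self.gather(g, x, x)                         # global tokens attend to patches
        out, _ = self.broadcast(x, g, g)                    # patches read the global summary back
        return x + out

tokens = torch.randn(2, 196, 192)
print(GlobalTokenAggregation(192)(tokens).shape)  # torch.Size([2, 196, 192])
```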
Dense computer vision tasks such as object detection and segmentation require effective multi-scale feature representation for detecting or classifying objects or regions of varying sizes. While Convolutional Neural Networks (CNNs) have been the dominant architectures for such tasks, recently introduced Vision Transformers (ViTs) aim to replace them as a backbone. Similar to CNNs, ViTs build a simple multi-stage structure (i.e., fine-to-coarse) for multi-scale representation with single-scale patches. In this work, taking a different perspective from existing Transformers, we explore multi-scale patch embedding and a multi-path structure, constructing the Multi-Path Vision Transformer (MPViT). MPViT embeds features of the same size (i.e., sequence length) with patches of different scales simultaneously by using overlapping convolutional patch embedding. Tokens of different scales are then independently fed into the Transformer encoders via multiple paths, and the resulting features are aggregated, enabling both fine and coarse feature representations at the same feature level. Thanks to the diverse, multi-scale feature representations, our MPViTs, scaling from Tiny (5M) to Base (73M), consistently achieve superior performance over state-of-the-art Vision Transformers on ImageNet classification, object detection, instance segmentation, and semantic segmentation. These extensive results demonstrate that MPViT can serve as a versatile backbone network for various vision tasks. Code will be made publicly available at https://git.io/mpvit.
Transformers have offered a new methodology of designing neural networks for visual recognition. Compared to convolutional networks, Transformers enjoy the ability of referring to global features at each stage, yet the attention module brings higher computational overhead that obstructs the application of Transformers to processing high-resolution visual data. This paper aims to alleviate the conflict between efficiency and flexibility, for which we propose a specialized token for each region that serves as a messenger (MSG). Hence, by manipulating these MSG tokens, visual information can be flexibly exchanged across regions, and the computational complexity is reduced. We then integrate the MSG tokens into a multi-scale architecture named MSG-Transformer. In standard image classification and object detection, MSG-Transformer achieves competitive performance and accelerates inference on both GPU and CPU. Code is available at https://github.com/hustvl/msg-transformer.
We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute, whereas local self-attention often limits the field of interaction of each token. To address this issue, we develop the Cross-Shaped Window self-attention mechanism, which computes self-attention in horizontal and vertical stripes in parallel to form a cross-shaped window, with each stripe obtained by splitting the input feature into stripes of equal width. We provide a mathematical analysis of the effect of the stripe width and vary the stripe width for different layers of the Transformer network, which achieves strong modeling capability while limiting the computation cost. We also introduce Locally-enhanced Positional Encoding (LePE), which handles local positional information better than existing encoding schemes. LePE naturally supports arbitrary input resolutions and is thus especially effective and friendly for downstream tasks. Incorporating these designs together with a hierarchical structure, CSWin Transformer demonstrates competitive performance on common vision tasks. Specifically, it achieves 85.4% top-1 accuracy on ImageNet-1K without any extra training data or labels, 53.9 box AP and 46.4 mask AP on COCO detection, and 52.2 mIoU on the ADE20K semantic segmentation task, surpassing the previous state-of-the-art by +1.2, +2.0, +1.4, and +2.0 respectively under similar FLOPs settings. By further pretraining on the larger dataset ImageNet-21K, we achieve 87.5% top-1 accuracy on ImageNet-1K and high segmentation performance on ADE20K with 55.7 mIoU. Code and models are available at https://github.com/microsoft/cswin-transformer.
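A rough sketch of the stripe idea: split the channels into two halves, compute attention within horizontal stripes for one half and within vertical stripes for the other, then concatenate. Stripe width, multi-head handling, Q/K/V projections, and LePE are omitted; this is an assumed simplification, not the released CSWin code.

```python
import torch
import torch.nn as nn

def stripe_attention(x, sw, vertical=False):
    """Self-attention restricted to stripes of width `sw` (tokens used directly as q/k/v; sketch)."""
    b, h, w, c = x.shape
    if vertical:
        x = x.transpose(1, 2)                      # treat columns as rows
        h, w = w, h
    x = x.reshape(b * (h // sw), sw * w, c)        # one stripe = one attention group
    attn = (x @ x.transpose(1, 2)) * (c ** -0.5)
    out = attn.softmax(dim=-1) @ x
    out = out.reshape(b, h, w, c)
    return out.transpose(1, 2) if vertical else out

class CrossShapedWindowSketch(nn.Module):
    """Half the channels attend in horizontal stripes, half in vertical stripes."""
    def __init__(self, dim, stripe_width=2):
        super().__init__()
        self.sw = stripe_width
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (B, H, W, C)
        c = x.shape[-1] // 2
        h_part = stripe_attention(x[..., :c], self.sw, vertical=False)
        v_part = stripe_attention(x[..., c:], self.sw, vertical=True)
        return self.proj(torch.cat([h_part, v_part], dim=-1))

x = torch.randn(2, 8, 8, 64)
print(CrossShapedWindowSketch(64)(x).shape)   # torch.Size([2, 8, 8, 64])
```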
In recent computer vision research, the advent of the Vision Transformer (ViT) has rapidly revolutionized various architectural design efforts: ViT achieved state-of-the-art image classification performance using the self-attention found in natural language processing, and MLP-Mixer achieved competitive performance using simple multi-layer perceptrons. In contrast, several studies have also suggested that carefully redesigned convolutional neural networks (CNNs) can achieve advanced performance comparable to ViT without resorting to these new ideas. Against this background, there is growing interest in which inductive biases are suitable for computer vision. Here we propose Sequencer, a novel and competitive architecture alternative to ViT that provides a new perspective on these issues. Unlike ViTs, Sequencer models long-range dependencies using LSTMs rather than self-attention layers. We also propose a two-dimensional version of the Sequencer module, where an LSTM is decomposed into vertical and horizontal LSTMs to enhance performance. Despite its simplicity, several experiments demonstrate that Sequencer performs impressively well: Sequencer2D-L, with 54M parameters, achieves 84.6% top-1 accuracy on ImageNet-1K only. Not only that, we show that it has good transferability and robust resolution adaptability on a double-resolution band.
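A minimal sketch of the two-dimensional Sequencer idea, assuming bidirectional LSTMs run along the rows and columns of the token grid, with their outputs concatenated and projected back; the layer names and fusion are assumptions, not the paper's exact BiLSTM2D module:

```python
import torch
import torch.nn as nn

class BiLSTM2DSketch(nn.Module):
    """Mix tokens with a vertical and a horizontal bidirectional LSTM (illustrative)."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.lstm_h = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.lstm_v = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(4 * hidden, dim)

    def forward(self, x):                       # x: (B, H, W, C) token grid
        b, h, w, c = x.shape
        rows, _ = self.lstm_h(x.reshape(b * h, w, c))                       # scan along width
        cols, _ = self.lstm_v(x.permute(0, 2, 1, 3).reshape(b * w, h, c))   # scan along height
        rows = rows.reshape(b, h, w, -1)
        cols = cols.reshape(b, w, h, -1).permute(0, 2, 1, 3)
        return self.proj(torch.cat([rows, cols], dim=-1))                   # back to (B, H, W, C)

x = torch.randn(2, 14, 14, 192)
print(BiLSTM2DSketch(192, 48)(x).shape)  # torch.Size([2, 14, 14, 192])
```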
Vision Transformers have shown great promise recently for many vision tasks due to the insightful architecture design and attention mechanism. By revisiting the self-attention responses in Transformers, we empirically observe two interesting issues. First, Vision Transformers present a query-irrelevant behavior at deep layers, where the attention maps exhibit nearly consistent contexts in global scope, regardless of the query patch position (also head-irrelevant). Second, the attention maps are intrinsically sparse: a few tokens dominate the attention weights, and introducing the knowledge from ConvNets would largely smooth the attention and enhance the performance. Motivated by the above observations, we generalize the self-attention formulation to abstract a query-irrelevant global context directly and further integrate the global context into convolutions. The resulting model, a Fully Convolutional Vision Transformer (i.e., FCViT), purely consists of convolutional layers and firmly inherits the merits of both the attention mechanism and convolutions, including dynamic property, weight sharing, and short- and long-range feature modeling. Experimental results demonstrate the effectiveness of FCViT. With less than 14M parameters, our FCViT-S12 outperforms the related work ResT-Lite by 3.7% top-1 accuracy on ImageNet-1K. When scaling FCViT to larger models, we still perform better than the previous state-of-the-art ConvNeXt with even fewer parameters. FCViT-based models also demonstrate promising transferability to downstream tasks, like object detection, instance segmentation, and semantic segmentation. Codes and models are made available at: https://github.com/ma-xu/FCViT.
Vision Transformers (ViTs) have recently dominated a range of computer vision tasks, yet they suffer from low training data efficiency and inferior local semantic representation capability without appropriate inductive biases. Convolutional neural networks (CNNs) inherently capture region-aware semantics, inspiring researchers to introduce CNNs back into the architecture of ViTs to provide desirable inductive biases for ViTs. However, is the locality achieved by the micro-level CNNs embedded in ViTs good enough? In this paper, we investigate this problem by deeply exploring how the macro architecture of hybrid CNNs/ViTs enhances the performance of hierarchical ViTs. In particular, we study the role of the token embedding layers, alias convolutional embedding (CE), and systematically reveal how CE injects desirable inductive bias into ViTs. Besides, we apply the optimal CE configuration to 4 recently released state-of-the-art ViTs, effectively boosting their corresponding performance. Finally, a family of efficient hybrid CNNs/ViTs, dubbed CETNet, is released, which can serve as a generic vision backbone. Specifically, CETNet achieves 84.9% top-1 accuracy on ImageNet-1K (training from scratch), 48.6% box mAP on the COCO benchmark, and 51.6% mIoU on ADE20K, substantially improving the performance of the corresponding state-of-the-art baselines.
Transformers, which are popular for language modeling, have recently been explored for solving vision tasks, e.g., the Vision Transformer (ViT) for image classification. The ViT model splits each image into a sequence of tokens with fixed length and then applies multiple Transformer layers to model their global relation for classification. However, ViT achieves inferior performance to CNNs when trained from scratch on a midsize dataset like ImageNet. We find it is because: 1) the simple tokenization of input images fails to model the important local structure such as edges and lines among neighboring pixels, leading to low training sample efficiency; 2) the redundant attention backbone design of ViT is restrictive under a fixed computation budget and limited training samples. To overcome such limitations, we propose a new Tokens-To-Token Vision Transformer (T2T-ViT), which incorporates 1) a layer-wise Tokens-to-Token (T2T) transformation to progressively structurize the image to tokens by recursively aggregating neighboring Tokens into one Token (Tokens-to-Token), such that local structure represented by surrounding tokens can be modeled and the token length can be reduced; 2) an efficient backbone with a deep-narrow structure for the transformer, motivated by CNN architecture design after empirical study. Notably, T2T-ViT reduces the parameter count and MACs of vanilla ViT by half while achieving more than 3.0% improvement when trained from scratch on ImageNet. It also outperforms ResNets and achieves comparable performance with MobileNets by directly training on ImageNet. For example, T2T-ViT with a size comparable to ResNet50 (21.5M parameters) can achieve 83.3% top-1 accuracy at image resolution 384×384. (Code: https://github.com/yitu-opensource/t2t-vit)
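The layer-wise Tokens-to-Token step can be pictured as re-assembling tokens into a 2D map, unfolding overlapping neighborhoods so that each new token concatenates its neighbors, and then projecting back to the embedding dimension. A hedged sketch of that operation using `nn.Unfold` (kernel size, stride, and dimensions are illustrative):

```python
import torch
import torch.nn as nn

class T2TStepSketch(nn.Module):
    """One Tokens-to-Token step: aggregate each token with its neighbors, shrink token count."""
    def __init__(self, dim, out_dim, kernel=3, stride=2):
        super().__init__()
        self.unfold = nn.Unfold(kernel_size=kernel, stride=stride, padding=1)
        self.proj = nn.Linear(dim * kernel * kernel, out_dim)

    def forward(self, x, h, w):                           # x: (B, N, C), with N == h * w
        b = x.shape[0]
        x = x.transpose(1, 2).reshape(b, -1, h, w)        # tokens -> feature map
        x = self.unfold(x)                                # (B, C*k*k, new_N): neighbors concatenated
        x = x.transpose(1, 2)                             # (B, new_N, C*k*k)
        return self.proj(x)                               # shorter sequence, new embedding

x = torch.randn(2, 56 * 56, 64)
out = T2TStepSketch(64, 96)(x, 56, 56)
print(out.shape)   # torch.Size([2, 784, 96]) -- 28x28 tokens after aggregation
```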
Transformers have shown great potential in computer vision tasks. A common belief is that their attention-based token mixer module contributes most to their competence. However, recent works show that the attention-based module in Transformers can be replaced by spatial MLPs and the resulting models still perform quite well. Based on this observation, we hypothesize that the general architecture of the Transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in Transformers with an embarrassingly simple spatial pooling operator to conduct only basic token mixing. Surprisingly, we observe that the derived model, termed PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing the well-tuned vision transformer / MLP-like baselines DeiT-B / ResMLP-B24 by 0.3% / 1.1% accuracy with 35% / 52% fewer parameters and 48% / 60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from Transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent Transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer can serve as a starting baseline for future MetaFormer architecture design. Code is available at https://github.com/sail-sg/poolformer.
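The pooling token mixer referred to above is essentially average pooling with the identity subtracted (so only neighbor information is mixed in), dropped into the slot where attention would normally sit. A minimal sketch under that reading:

```python
import torch
import torch.nn as nn

class PoolingTokenMixer(nn.Module):
    """Replace attention with average pooling; subtracting x keeps only the neighbor signal."""
    def __init__(self, pool_size=3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1, padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x):          # x: (B, C, H, W) token map
        return self.pool(x) - x

x = torch.randn(2, 64, 14, 14)
mixer = PoolingTokenMixer()
print(mixer(x).shape)              # torch.Size([2, 64, 14, 14]); no learnable parameters
```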
Vision Transformers (ViTs) serve as powerful vision models. Unlike convolutional neural networks, which dominated vision research in previous years, vision transformers enjoy the ability to capture long-range dependencies in the data. Nonetheless, the self-attention mechanism, an integral component of any transformer architecture, suffers from high latency and inefficient memory utilization, making it less suitable for high-resolution input images. To alleviate these shortcomings, hierarchical vision models apply self-attention locally over non-interleaving windows. This relaxation reduces the complexity with respect to the input size; however, it limits cross-window interaction, harming the model performance. In this paper, we propose a new shift-invariant local attention layer, called query and attend (QnA), which aggregates the input locally in an overlapping manner, much like convolutions. The key idea behind QnA is to introduce learned queries, which allow a fast and efficient implementation. We verify the effectiveness of our layer by incorporating it into a hierarchical vision transformer model. We show improvements in speed and memory complexity while achieving accuracy comparable to state-of-the-art models. Finally, our layer scales especially well with the window size, requiring up to x10 less memory while being faster than existing methods.
The recently developed vision transformer (ViT) has achieved promising results on image classification compared to convolutional neural networks. Inspired by this, in this paper, we study how to learn multi-scale feature representations in transformer models for image classification. To this end, we propose a dual-branch transformer to combine image patches (i.e., tokens in a transformer) of different sizes to produce stronger image features. Our approach processes small-patch and large-patch tokens with two separate branches of different computational complexity and these tokens are then fused purely by attention multiple times to complement each other. Furthermore, to reduce computation, we develop a simple yet effective token fusion module based on cross attention, which uses a single token for each branch as a query to exchange information with other branches. Our proposed cross-attention only requires linear time for both computational and memory complexity instead of quadratic time otherwise. Extensive experiments demonstrate that our approach performs better than or on par with several concurrent works on vision transformer, in addition to efficient CNN models. For example, on the ImageNet1K dataset, with some architectural changes, our approach outperforms the recent DeiT by a large margin of 2% with a small to moderate increase in FLOPs and model parameters. Our source codes and models are available at https://github.com/IBM/CrossViT.
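The cross-attention fusion described above uses the CLS token of one branch as the only query against the patch tokens of the other branch, which is why its cost is linear in the number of tokens. A simplified sketch, assuming a single head and omitting the projections between branch dimensions:

```python
import torch
import torch.nn as nn

class CrossAttentionFusionSketch(nn.Module):
    """Fuse two branches: branch A's CLS token queries branch B's patch tokens (sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.scale = dim ** -0.5

    def forward(self, cls_a, tokens_b):
        # cls_a: (B, 1, C) query; tokens_b: (B, N, C) keys/values from the other branch
        q = self.q(cls_a)
        k, v = self.kv(tokens_b).chunk(2, dim=-1)
        attn = (q @ k.transpose(1, 2)) * self.scale      # (B, 1, N): linear in N
        return cls_a + attn.softmax(dim=-1) @ v          # updated CLS token, (B, 1, C)

cls_a = torch.randn(2, 1, 192)
tokens_b = torch.randn(2, 196, 192)
print(CrossAttentionFusionSketch(192)(cls_a, tokens_b).shape)  # torch.Size([2, 1, 192])
```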
We propose the Global Context Vision Transformer (GC ViT), a novel architecture that enhances parameter and compute utilization. Our method leverages global self-attention modules, joint with local self-attention, to effectively yet efficiently model both long- and short-range spatial interactions, without the need for expensive operations such as computing attention masks or shifting local windows. In addition, we address the lack of inductive bias in ViTs by proposing to use a modified fused inverted residual block in our architecture. Our proposed GC ViT achieves state-of-the-art results in image classification, object detection, and semantic segmentation tasks. On the ImageNet-1K dataset for classification, the tiny, small, and base variants of GC ViT with 28M, 51M, and 90M parameters achieve 83.2%, 83.9%, and 84.4% top-1 accuracy, respectively, surpassing comparably-sized prior art such as the CNN-based ConvNeXt and ViT-based Swin Transformer by a large margin. Pre-trained GC ViT backbones on the downstream tasks of object detection, instance segmentation, and semantic segmentation using the MS COCO and ADE20K datasets consistently outperform prior work, sometimes by large margins. Code is available at https://github.com/nvlabs/gcvit.