Vision Transformers (ViTs) outperform convolutional neural networks (CNNs) in several vision tasks thanks to their global modeling capability. However, ViTs lack the inductive biases inherent to convolution, which makes them require large amounts of training data. As a result, ViTs do not perform as well as CNNs on small datasets, such as those in medicine and science. We experimentally found that masked autoencoders (MAE) can make the transformer focus more on the image itself, thus alleviating the data-hungry issue of ViTs to some extent. Yet the current MAE model is too complex, resulting in over-fitting on small datasets, so a gap still remains between MAEs trained on small datasets and advanced CNN models. Therefore, we investigated how to reduce the decoder complexity in MAE and found a more suitable architectural configuration for small datasets. In addition, we designed a location prediction task and a contrastive learning task to introduce localization and invariance characteristics into MAE. Our contrastive learning task not only enables the model to learn high-level visual information but also allows the training of MAE's class token, something most MAE improvement efforts do not consider. Extensive experiments show that our method achieves state-of-the-art performance on standard small datasets as well as medical datasets with few samples, compared to currently popular masked image modeling (MIM) and vision transformers for small datasets. The code and models are available at https://github.com/Talented-Q/SDMAE.
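As a rough illustration of the two auxiliary objectives mentioned above, the hedged sketch below attaches a contrastive loss to the class tokens of two augmented views (so the class token is actually trained) and a location-prediction head that classifies each visible patch's position. All module names, sizes, and the loss weighting are assumptions for illustration, not the released SDMAE code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

enc_dim, num_patches = 256, 196
proj = nn.Linear(enc_dim, 128)                 # projection for the class-token contrastive task
loc_head = nn.Linear(enc_dim, num_patches)     # location prediction over all patch positions

def auxiliary_losses(cls_a, cls_b, visible_tokens, visible_positions, temperature=0.1):
    # cls_a, cls_b: (B, enc_dim) class tokens from two views of the same images
    za, zb = F.normalize(proj(cls_a), dim=-1), F.normalize(proj(cls_b), dim=-1)
    logits = za @ zb.t() / temperature
    contrastive = F.cross_entropy(logits, torch.arange(za.size(0)))  # positives on the diagonal

    # visible_tokens: (B, V, enc_dim); visible_positions: (B, V) ground-truth patch indices
    location = F.cross_entropy(loc_head(visible_tokens).transpose(1, 2), visible_positions)
    return contrastive + location

loss = auxiliary_losses(torch.randn(4, enc_dim), torch.randn(4, enc_dim),
                        torch.randn(4, 49, enc_dim), torch.randint(0, num_patches, (4, 49)))
```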
In recent years, self-supervised learning (SSL) has attracted increasing attention in pathology image analysis. Compared with contrastive learning, which requires careful design, building SSL from the generative paradigm of masked autoencoders (MAE) may be a simpler approach. In this paper, we introduce MAE and verify the effect of visible patches on pathology image classification. Based on this, a novel SD-MAE model is proposed to enhance SSL with self-distillation on top of the raw MAE. Besides the reconstruction loss on the masked image patches, SD-MAE further imposes a self-distillation loss on the visible patches. It transfers the knowledge induced by the decoder's global attention to the parts of the model that only exploit local attention. We apply SD-MAE to two public pathology image datasets. Experiments show that SD-MAE is highly competitive compared with other SSL methods. Our code will be released soon.
Masked image modeling (MIM) has achieved promising results on various vision tasks. However, the limited discriminability of the learned representations shows that there is still much to do to make a stronger vision learner. Towards this goal, we propose Contrastive Masked Autoencoders (CMAE), a new self-supervised pre-training method for learning more comprehensive and capable vision representations. By elaboratively unifying contrastive learning (CL) and masked image modeling (MIM), CMAE leverages their respective advantages and learns representations with both strong instance discriminability and local perceptibility. Specifically, CMAE consists of two branches, where the online branch is an asymmetric encoder-decoder and the target branch is a momentum-updated encoder. During training, the online encoder reconstructs original images from the latent representations of masked images to learn holistic features. The target encoder, fed with the full images, enhances feature discriminability via contrastive learning with its online counterpart. To make CL compatible with MIM, CMAE introduces two new components, i.e., pixel shifting for generating plausible positive views and a feature decoder for complementing the features of contrastive pairs. Thanks to these novel designs, CMAE effectively improves the representation quality and transfer performance over its MIM counterpart. CMAE achieves state-of-the-art performance on highly competitive benchmarks of image classification, semantic segmentation, and object detection. Notably, CMAE-Base achieves $85.3\%$ top-1 accuracy on ImageNet and $52.5\%$ mIoU on ADE20K, surpassing previous best results by $0.7\%$ and $1.8\%$, respectively. The code will be made publicly available.
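The sketch below illustrates, under stated assumptions, two of the ingredients described in this abstract: a "pixel shifting" view generation that produces a mildly shifted positive crop for the target branch, and an InfoNCE-style contrastive loss between online and momentum-branch features. Shift range, temperature, and feature shapes are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def pixel_shift_view(images, max_shift=16, size=224):
    # images: (B, C, H, W) with H, W >= size + max_shift
    dx, dy = torch.randint(0, max_shift + 1, (2,))
    return images[:, :, dy:dy + size, dx:dx + size]   # slightly shifted positive crop

def info_nce(online_feats, target_feats, temperature=0.07):
    # online_feats, target_feats: (B, D) projected global features of the two branches
    q = F.normalize(online_feats, dim=-1)
    k = F.normalize(target_feats, dim=-1)
    logits = q @ k.t() / temperature                   # (B, B) similarity to all targets
    labels = torch.arange(q.size(0), device=q.device)  # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

images = torch.randn(4, 3, 240, 240)
shifted = pixel_shift_view(images)                     # would be fed to the momentum/target encoder
loss = info_nce(torch.randn(4, 128, requires_grad=True), torch.randn(4, 128))
```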
Masked image modeling (MIM) has shown huge potential for self-supervised learning over the past year. Benefiting from the universal vision transformer backbone, MIM learns self-supervised visual representations by masking a part of the image patches while attempting to recover the missing pixels. Most previous works mask patches of the image randomly, which underutilizes the semantic information that is beneficial to visual representation learning. On the other hand, due to the large size of the backbone, most previous works have to spend much time on pre-training. In this paper, we propose \textbf{Attention-driven Masking and Throwing Strategy} (AMT), which could solve both problems above. We first leverage the self-attention mechanism to obtain the semantic information of the image automatically during training, without using any supervised methods. The masking strategy can be guided by this information to mask areas selectively, which is helpful for representation learning. Moreover, a redundant patch throwing strategy is proposed, which makes learning more efficient. As a plug-and-play module for masked image modeling, AMT improves the linear probing accuracy of MAE by $2.9\% \sim 5.9\%$ on CIFAR-10/100, STL-10, Tiny ImageNet, and ImageNet-1K, and also improves the fine-tuning accuracy of MAE and SimMIM. This design additionally achieves superior performance on downstream detection and segmentation tasks.
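A minimal sketch of an attention-driven masking-and-throwing step in the spirit of the strategy above: per-patch attention scores (e.g., CLS-to-patch attention averaged over heads) bias which patches get masked, and the least attended patches are thrown away entirely. The concrete sampling rule and ratios are assumptions, not the paper's specification.

```python
import torch

def mask_and_throw(attn_scores, mask_ratio=0.5, throw_ratio=0.25):
    """attn_scores: (B, N) non-negative per-patch attention scores."""
    B, N = attn_scores.shape
    n_throw = int(N * throw_ratio)
    n_mask = int(N * mask_ratio)

    order = attn_scores.argsort(dim=1)                 # ascending: least attended first
    thrown = order[:, :n_throw]                        # redundant patches dropped from training

    # Sample masked patches among the remaining ones, biased toward high attention.
    kept = order[:, n_throw:]
    kept_scores = torch.gather(attn_scores, 1, kept)
    probs = kept_scores / kept_scores.sum(dim=1, keepdim=True).clamp_min(1e-8)
    masked = torch.gather(kept, 1, torch.multinomial(probs, n_mask))  # patches to reconstruct

    return thrown, masked

thrown, masked = mask_and_throw(torch.rand(4, 196))
```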
This paper explores a better codebook for BERT pre-training of vision transformers. Recent work has successfully transferred BERT pre-training from NLP to the vision domain. It directly adopts a simple discrete VAE as the visual tokenizer, but has not yet considered the semantic level of the resulting visual tokens. By contrast, the discrete tokens in the NLP field are naturally highly semantic. This difference motivates us to learn a perceptual codebook. We surprisingly find a simple yet effective idea: enforcing perceptual similarity during dVAE training. We demonstrate that the visual tokens generated by the proposed perceptual codebook indeed exhibit better semantic meaning and subsequently help pre-training achieve superior transfer performance on various downstream tasks. For example, we achieve 84.5% top-1 accuracy on ImageNet-1K with a ViT-B backbone, outperforming the competitive method BEiT by +1.3 with the same pre-training epochs. It also improves object detection and segmentation by +1.3 box AP and +1.0 mask AP, and semantic segmentation on ADE20K by +1.0 mIoU. Code and models will be available at https://github.com/microsoft/PeCo.
Autoregressive language modeling (ALM) has been used successfully for self-supervised pre-training in natural language processing (NLP). However, this paradigm has not achieved results comparable to other self-supervised approaches in computer vision (e.g., contrastive learning, masked image modeling). In this paper, we try to find the reason why autoregressive modeling does not work well on vision tasks. To tackle this problem, we fully analyze the limitations of visual autoregressive methods and propose a novel stochastic autoregressive image modeling (named SAIM) with two simple designs. First, we employ a stochastic permutation strategy to generate effective and robust image context, which is critical for vision tasks. Second, we create a parallel encoder-decoder training process in which the encoder serves a role similar to the standard vision transformer, focusing on learning the whole contextual information, while the decoder predicts the content of the current position, so that the encoder and decoder can reinforce each other. By introducing stochastic prediction and the parallel encoder-decoder, SAIM significantly improves the performance of autoregressive image modeling. Our method achieves the best accuracy (83.9%) on the vanilla ViT-Base model among methods using only ImageNet-1K data. Transfer performance on downstream tasks also shows that our model achieves competitive performance.
Recently, self-supervised masked autoencoders (MAE) have attracted unprecedented attention for their impressive representation learning ability. However, the pretext task, masked image modeling (MIM), reconstructs the missing local patches and lacks a global understanding of the image. This paper extends MAE to a fully supervised setting by adding a supervised classification branch, thereby enabling MAE to learn global features effectively from golden labels. The proposed Supervised MAE (SupMAE) only exploits a visible subset of image patches for classification, unlike the standard supervised pre-training where all image patches are used. Through experiments, we demonstrate that SupMAE is not only more training-efficient but also learns more robust and transferable features. Specifically, SupMAE achieves performance comparable to MAE using only 30% of the compute when evaluated on ImageNet with the ViT-B/16 model. SupMAE's robustness on ImageNet variants and its transfer learning performance outperform MAE and the standard supervised pre-training counterparts. Code will be made publicly available.
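A hedged sketch of the supervised branch described here: the encoder output for the visible patch subset is pooled and classified, and the resulting cross-entropy term is added to the usual MAE reconstruction loss. The pooling choice and loss weight are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, enc_dim = 1000, 256
classifier = nn.Linear(enc_dim, num_classes)

def supmae_loss(visible_latent, recon_loss, labels, weight=1.0):
    """visible_latent: (B, N_visible, enc_dim) encoder output for visible patches only."""
    pooled = visible_latent.mean(dim=1)                    # global average pool over visible tokens
    cls_loss = F.cross_entropy(classifier(pooled), labels)
    return recon_loss + weight * cls_loss

loss = supmae_loss(torch.randn(4, 49, enc_dim), torch.tensor(0.5),
                   torch.randint(0, num_classes, (4,)))
```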
This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervised task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pre-training and shows promising scaling behavior.
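The recipe above translates into a short sketch: mask 75% of patches at random, run the encoder on the visible patches only, and let a light decoder reconstruct the full patch sequence, with the loss computed on the masked patches. Layer counts, widths, and the omission of positional embeddings are simplifications for illustration, not the released MAE code.

```python
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    def __init__(self, num_patches=196, patch_dim=768, enc_dim=256, dec_dim=128, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(patch_dim, enc_dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(enc_dim, nhead=4, batch_first=True), num_layers=4)
        self.enc_to_dec = nn.Linear(enc_dim, dec_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dec_dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dec_dim, nhead=4, batch_first=True), num_layers=2)
        self.pred = nn.Linear(dec_dim, patch_dim)        # reconstruct raw patch pixels

    def forward(self, patches):                          # patches: (B, N, patch_dim); pos. emb. omitted
        B, N, D = patches.shape
        n_keep = int(N * (1 - self.mask_ratio))
        ids = torch.rand(B, N, device=patches.device).argsort(dim=1)   # random shuffle per image
        keep, masked = ids[:, :n_keep], ids[:, n_keep:]

        visible = torch.gather(patches, 1, keep.unsqueeze(-1).expand(-1, -1, D))
        latent = self.encoder(self.embed(visible))       # encoder sees visible patches only

        # Decoder input: encoded visible tokens plus mask tokens, restored to original order.
        dec_tokens = torch.cat([self.enc_to_dec(latent),
                                self.mask_token.expand(B, N - n_keep, -1)], dim=1)
        restore = ids.argsort(dim=1)
        dec_tokens = torch.gather(dec_tokens, 1,
                                  restore.unsqueeze(-1).expand(-1, -1, dec_tokens.size(-1)))
        recon = self.pred(self.decoder(dec_tokens))

        # Loss only on the masked patches, as described above.
        target = torch.gather(patches, 1, masked.unsqueeze(-1).expand(-1, -1, D))
        recon_masked = torch.gather(recon, 1, masked.unsqueeze(-1).expand(-1, -1, D))
        return ((recon_masked - target) ** 2).mean()

loss = TinyMAE()(torch.randn(2, 196, 768))
```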
We propose bootstrapped masked autoencoders (BootMAE), a new approach for vision BERT pre-training. BootMAE improves the original masked autoencoder (MAE) with two core designs: 1) a momentum encoder that provides online features as extra BERT prediction targets; 2) a target-aware decoder that tries to reduce the pressure on the encoder to memorize target-specific information. The first design is motivated by the observation that using a pre-trained MAE to extract features as the BERT prediction target for masked tokens achieves better pre-training performance. Therefore, we add a momentum encoder in parallel with the original MAE encoder, which bootstraps the pre-training performance by using its own representation as the BERT prediction target. In the second design, we convey target-specific information (e.g., pixel values of unmasked patches) directly into the decoder to reduce the pressure on the encoder to memorize the target-specific information. Thus, the encoder focuses on semantic modeling, which is the goal of BERT pre-training, and does not need to waste its capacity on memorizing information about unmasked tokens related to the prediction targets. Through extensive experiments, our BootMAE achieves $84.2\%$ top-1 accuracy on ImageNet-1K, a $+0.8\%$ improvement over MAE under the same pre-training epochs. BootMAE also gains $+1.0$ mIoU on ADE20K semantic segmentation and $+1.3$ box AP, $+1.4$ mask AP improvements on object detection and segmentation on the COCO dataset. Code is released at https://github.com/lightdxy/bootmae.
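A small sketch of the momentum-encoder ingredient: an exponential-moving-average (EMA) copy of the encoder supplies feature targets for the BERT-style prediction. The EMA rate and the stand-in linear "encoder" are assumptions; in BootMAE the predictions would come from a decoder rather than directly from the encoder as shown here.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Linear(768, 256)                       # stand-in for the ViT encoder
momentum_encoder = copy.deepcopy(encoder)
for p in momentum_encoder.parameters():
    p.requires_grad = False

@torch.no_grad()
def ema_update(online, target, m=0.996):
    for p, tp in zip(online.parameters(), target.parameters()):
        tp.mul_(m).add_(p, alpha=1 - m)

patches = torch.randn(4, 196, 768)
predictions = encoder(patches)                      # stand-in for the model's feature predictions
with torch.no_grad():
    targets = momentum_encoder(patches)             # bootstrapped feature targets

feature_loss = F.smooth_l1_loss(predictions, targets)
ema_update(encoder, momentum_encoder)
```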
Pre-training video transformers on large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). We are inspired by the recent ImageMAE and propose customized video tube masking with an extremely high ratio. This simple design makes video reconstruction a more challenging self-supervision task, thus encouraging the extraction of more effective video representations during pre-training. We obtain three important findings on SSVP: (1) An extremely high masking ratio (i.e., 90% to 95%) still yields favorable performance of VideoMAE; the temporally redundant video content enables a higher masking ratio than that of images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. (3) VideoMAE shows that data quality is more important than data quantity for SSVP; the domain shift between pre-training and target datasets is an important issue. Notably, our VideoMAE with the vanilla ViT backbone can achieve 85.8% on Kinetics-400, 75.3% on Something-Something V2, 90.8% on UCF101, and 61.1% on HMDB51 without using any extra data. Code is available at https://github.com/mcg-nju/videomae.
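A sketch of the tube masking mentioned above: a single spatial patch mask is sampled per clip at a roughly 90% ratio and repeated across all frames, so masked positions form "tubes" in time. Shapes and helper names are illustrative assumptions.

```python
import torch

def tube_mask(batch, frames, patches_per_frame, mask_ratio=0.9):
    n_mask = int(patches_per_frame * mask_ratio)
    spatial_mask = torch.zeros(batch, patches_per_frame, dtype=torch.bool)
    ids = torch.rand(batch, patches_per_frame).argsort(dim=1)[:, :n_mask]
    spatial_mask.scatter_(1, ids, torch.ones_like(ids, dtype=torch.bool))
    # The same spatial mask is repeated over the temporal axis.
    return spatial_mask.unsqueeze(1).expand(-1, frames, -1)   # (B, T, N), True = masked

mask = tube_mask(batch=2, frames=8, patches_per_frame=196)
print(mask.float().mean())   # ~0.9, identical across frames
```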
Masked autoencoders are scalable vision learners, as the title of MAE \cite{He2022masked} suggests, indicating that self-supervised learning (SSL) in vision may follow a trajectory similar to that in NLP. Specifically, generative pretext tasks with masked prediction (e.g., BERT) have become the de facto standard SSL practice in NLP. By contrast, their discriminative counterparts (e.g., contrastive learning) buried early attempts at generative methods in vision; however, the success of masked image modeling has revived the masked autoencoder (often termed the denoising autoencoder in the past). As a milestone in bridging the gap with BERT in NLP, masked autoencoders have attracted unprecedented attention for SSL in vision and beyond. This work conducts a comprehensive survey of masked autoencoders to shed insight on promising directions for SSL. As the first to review SSL with masked autoencoders, this work focuses on its application in vision by discussing its historical development, recent progress, and implications for diverse applications.
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
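A sketch of a Global Response Normalization layer of the kind described here: a per-channel global response is aggregated over the spatial dimensions, normalized across channels, and used to recalibrate the features with a residual connection. The channels-last layout and the epsilon value are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.eps = eps

    def forward(self, x):                                           # x: (N, H, W, C)
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)           # global response per channel
        nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)        # divisive normalization across channels
        return self.gamma * (x * nx) + self.beta + x                # recalibration plus residual

y = GRN(dim=64)(torch.randn(2, 14, 14, 64))
```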
Masked image modelling (e.g., Masked AutoEncoder) and contrastive learning (e.g., Momentum Contrast) have shown impressive performance on unsupervised visual representation learning. This work presents Masked Contrastive Representation Learning (MACRL) for self-supervised visual pre-training. In particular, MACRL leverages the effectiveness of both masked image modelling and contrastive learning. We adopt an asymmetric setting for the siamese network (i.e., an encoder-decoder structure in both branches), where one branch uses a higher mask ratio and stronger data augmentation, while the other adopts weaker data corruptions. We optimize a contrastive learning objective based on the learned features from the encoders in both branches. Furthermore, we minimize the $L_1$ reconstruction loss according to the decoders' outputs. In our experiments, MACRL presents superior results on various vision benchmarks, including CIFAR-10, CIFAR-100, Tiny-ImageNet, and two other ImageNet subsets. Our framework provides unified insights into self-supervised visual pre-training and future research.
We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT, developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e., image patches (such as 16x16 pixels) and visual tokens (i.e., discrete tokens). We first "tokenize" the original image into visual tokens. Then we randomly mask some image patches and feed them into the backbone Transformer. The pre-training objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder. Experimental results on image classification and semantic segmentation show that our model achieves competitive results with previous pre-training methods. For example, base-size BEiT achieves 83.2% top-1 accuracy on ImageNet-1K, outperforming from-scratch DeiT training (81.8%) with the same setup. Moreover, large-size BEiT obtains 86.3% using only ImageNet-1K, even outperforming ViT-L with supervised pre-training on ImageNet-22K (85.2%). The code and pretrained models are available at https://aka.ms/beit.
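A hedged sketch of the BEiT-style objective: patches are replaced by a mask embedding and the backbone predicts the discrete visual-token id of each masked patch with cross-entropy. The token ids here are random placeholders standing in for the output of a pre-trained image tokenizer (e.g., a dVAE), and all sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, num_patches, patch_dim, dim = 8192, 196, 768, 256
backbone = nn.Sequential(
    nn.Linear(patch_dim, dim),
    nn.TransformerEncoder(nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=4),
)
token_head = nn.Linear(dim, vocab_size)

patches = torch.randn(4, num_patches, patch_dim)
visual_tokens = torch.randint(0, vocab_size, (4, num_patches))  # placeholder tokenizer output
mask = torch.rand(4, num_patches) < 0.4                         # ~40% of patches are masked

mask_embedding = nn.Parameter(torch.zeros(patch_dim))           # learnable [MASK] patch embedding
corrupted = torch.where(mask.unsqueeze(-1), mask_embedding, patches)

logits = token_head(backbone(corrupted))                        # (B, N, vocab)
loss = F.cross_entropy(logits[mask], visual_tokens[mask])       # predict tokens at masked positions
```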
The combination of transformers and the masked image modeling (MIM) pre-training framework has shown great potential in various vision tasks. However, the pre-training computational budget is too heavy and prevents MIM from becoming a practical training paradigm. This paper presents FastMIM, a simple and generic framework for expediting masked image modeling with the following two steps: (i) pre-training vision backbones with low-resolution input images; and (ii) reconstructing Histograms of Oriented Gradients (HOG) features instead of the original RGB values of the input images. In addition, we propose FastMIM-P to progressively enlarge the input resolution during the pre-training stage to further enhance the transfer results of models with high capacity. We point out that: (i) a wide range of input resolutions in the pre-training phase can lead to similar performances in the fine-tuning phase and downstream tasks such as detection and segmentation; (ii) the shallow layers of the encoder are more important during pre-training, and discarding the last several layers can speed up the training stage with no harm to fine-tuning performance; (iii) the decoder should match the size of the selected network; and (iv) HOG is more stable than RGB values when the resolution changes. Equipped with FastMIM, all kinds of vision backbones can be pre-trained in an efficient way. For example, we can achieve 83.8%/84.1% top-1 accuracy on ImageNet-1K with ViT-B/Swin-B as backbones. Compared to previous relevant approaches, we can achieve comparable or better top-1 accuracy while accelerating the training procedure by $\sim$5$\times$. Code can be found at https://github.com/ggjy/FastMIM.pytorch.
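The two steps above can be sketched as follows, under stated assumptions: downsample the input before feeding the backbone, and compute HOG features (here via scikit-image) as the regression target instead of raw RGB. The HOG parameters, interpolation mode, and flat-vector target layout are assumptions; in practice the HOG target would be arranged per masked patch.

```python
import torch
import torch.nn.functional as F
from skimage.feature import hog

images = torch.rand(2, 3, 224, 224)
low_res = F.interpolate(images, size=128, mode="bicubic", align_corners=False)  # cheap pre-training input

# HOG target computed on the grayscale low-resolution image.
gray = low_res.mean(dim=1)                                     # (B, 128, 128)
hog_targets = torch.stack([
    torch.from_numpy(hog(img.numpy(), orientations=9,
                         pixels_per_cell=(8, 8), cells_per_block=(1, 1))).float()
    for img in gray
])                                                             # (B, hog_dim) regression target
```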
Transformers have become increasingly popular in a wide range of applications, including natural language processing (NLP), computer vision, and speech recognition, because of their powerful representational capacity. However, harnessing this representational capacity effectively requires a large amount of data, strong regularization, or both, to mitigate overfitting. Recently, the power of Transformers has been unlocked by self-supervised pretraining strategies based on masked autoencoders, which rely on reconstructing masked inputs, directly or contrastively, from unmasked content. Such pretraining strategies have been used in NLP with the BERT model, in speech with the wav2vec models, and recently in vision with the MAE model, which forces the model to learn the relationships between content in different parts of the input using autoencoding-related objectives. In this paper, we propose a novel but surprisingly simple alternative to content reconstruction: predicting the location of content without providing positional information for it. Doing so requires the Transformer to understand the positional relationships between different parts of the input from the content alone. This amounts to an efficient implementation where the pretext task is a classification problem among all possible positions for each input token. We experiment on vision and speech benchmarks; our approach brings improvements over strong supervised training baselines and is comparable to modern unsupervised/self-supervised pretraining methods. Our method also enables Transformers trained without position embeddings to outperform those trained with full position information.
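A minimal sketch of the pretext task described here, under assumptions about model sizes: patch contents are encoded without positional embeddings, and a linear head classifies each token's position among all N possibilities.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_patches, patch_dim, dim = 196, 768, 256
encoder = nn.Sequential(
    nn.Linear(patch_dim, dim),
    nn.TransformerEncoder(nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=4),
)
position_head = nn.Linear(dim, num_patches)   # classification over all possible positions

patches = torch.randn(8, num_patches, patch_dim)          # (B, N, D); no position embeddings added
perm = torch.rand(8, num_patches).argsort(dim=1)          # shuffle so token order carries no signal
shuffled = torch.gather(patches, 1, perm.unsqueeze(-1).expand(-1, -1, patch_dim))

logits = position_head(encoder(shuffled))                 # (B, N, N): a position distribution per token
loss = F.cross_entropy(logits.reshape(-1, num_patches), perm.reshape(-1))  # target = true position
```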
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or can only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way for developing small vision Transformer models, that is, exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
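A sketch of token-relation distillation in the spirit of finding 1): rather than matching CLS tokens or raw features, the student matches the teacher's softmax-normalized query-key relation maps via a KL term. The exact relation definition and loss are assumptions and may not match the released TinyMIM code.

```python
import torch
import torch.nn.functional as F

def token_relations(q, k):
    # q, k: (B, heads, N, d) per-head query/key projections of an attention block
    return F.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)   # (B, heads, N, N)

def relation_distill_loss(student_q, student_k, teacher_q, teacher_k):
    s_rel = token_relations(student_q, student_k)
    with torch.no_grad():
        t_rel = token_relations(teacher_q, teacher_k)
    # KL divergence between teacher and student relation distributions per token.
    return F.kl_div(s_rel.clamp_min(1e-8).log(), t_rel, reduction="batchmean")

B, H, N, d = 2, 6, 196, 64
loss = relation_distill_loss(torch.randn(B, H, N, d, requires_grad=True), torch.randn(B, H, N, d),
                             torch.randn(B, H, N, d), torch.randn(B, H, N, d))
```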
Transformers have achieved great success in natural language processing. Due to the strong capability of the self-attention mechanism in Transformers, researchers have developed vision Transformers for various computer vision tasks, such as image recognition, object detection, image segmentation, pose estimation, and 3D reconstruction. This paper presents a comprehensive overview of the literature on the different architecture designs and training tricks (including self-supervised learning) for vision Transformers. Our goal is to provide a systematic review with open research opportunities.
Recently, masked image modeling (MIM), which requires the target model to recover the masked part of the input image, has received much attention in self-supervised learning (SSL). Although MIM-based pre-training methods achieve new state-of-the-art performance when transferred to many downstream tasks, visualizations show that the learned representations are less separable, especially compared with those based on contrastive-learning pre-training. This motivates us to consider whether the linear separability of MIM pre-trained representations can be further improved, thereby improving pre-training performance. Since MIM and contrastive learning tend to utilize different data augmentations and training strategies, combining these two pretext tasks is not trivial. In this work, we propose a novel and flexible pre-training framework, named MimCo, which combines MIM and contrastive learning through two-stage pre-training. Specifically, MimCo takes a pre-trained contrastive-learning model as the teacher model and is pre-trained with two types of learning targets: patch-level and image-level reconstruction losses. Extensive transfer experiments on downstream tasks demonstrate the superior performance of our MimCo pre-training framework. Taking ViT-S as an example, when using a pre-trained MoCov3-ViT-S as the teacher model, MimCo needs only 100 epochs of pre-training to achieve 82.53% top-1 finetuning accuracy on ImageNet-1K, which outperforms the state-of-the-art self-supervised learning counterparts.
Self-supervised learning methods are gaining increasing traction in computer vision due to their recent success in reducing the gap with supervised learning. In natural language processing (NLP), self-supervised learning and Transformers are already the methods of choice. The recent literature suggests that Transformers are also becoming increasingly popular in computer vision. So far, vision Transformers have been shown to work well when pre-trained either with large-scale supervised data or with some kind of co-supervision, e.g., in terms of teacher networks. These supervised pre-trained vision Transformers achieve very good results on downstream tasks with minimal changes. In this work, we investigate the merits of self-supervised learning for pre-training image/vision Transformers and then using them for downstream classification tasks. We propose Self-supervised vision Transformers (SiT) and discuss several self-supervised training mechanisms to obtain a pretext model. The architectural flexibility of SiT allows us to use it as an autoencoder and work with multiple self-supervised tasks seamlessly. We show that a pre-trained SiT can be fine-tuned for downstream classification tasks on small-scale datasets consisting of a few thousand images rather than several million. The proposed approach is evaluated on standard datasets using common protocols. The results demonstrate the strength of Transformers and their suitability for self-supervised learning. We outperform existing self-supervised learning methods by a large margin. We also observe that SiT is good for few-shot learning and show that it learns useful representations by simply training a linear classifier on top of the features learned by SiT. Pre-training, fine-tuning, and evaluation code will be available at: https://github.com/sara-ahmed/sit.