This paper presents a detailed study of improving visual representations for vision-language (VL) tasks and develops an improved object detection model to provide object-centric representations of images. Compared to the most widely used bottom-up and top-down model [2], the new model is larger, better designed for VL tasks, and pre-trained on much larger training corpora that combine multiple public annotated object detection datasets. It can therefore generate representations of a richer collection of visual objects and concepts. While previous VL research has focused mainly on improving the vision-language fusion model and left the object detection model untouched, we show that visual features matter significantly in VL models. In our experiments we feed the visual features generated by the new object detection model into a Transformer-based VL fusion model, OSCAR [21], and utilize an improved approach, OSCAR+, to pre-train the VL model and fine-tune it on a wide range of downstream VL tasks. Our results show that the new visual features significantly improve performance across all VL tasks, creating new state-of-the-art results on seven public benchmarks. Code, models and pre-extracted features are released at https://github.com/pzzhang/VinVL.
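As a hedged illustration of how such object-centric features are typically packaged for a VL fusion model, the sketch below concatenates detector region features with normalized box geometry before they enter the cross-modal transformer; the 6-dimensional geometry layout and all sizes are assumptions for illustration, not the released VinVL format.

```python
# Minimal sketch (not the released VinVL code): packaging detector outputs as
# position-augmented region features for a VL fusion model.
import torch

def build_visual_inputs(region_feats, boxes, image_size):
    """region_feats: (N, D) pooled features from the object detector.
    boxes: (N, 4) as (x1, y1, x2, y2) in pixels; image_size: (H, W)."""
    H, W = image_size
    x1, y1, x2, y2 = boxes.unbind(dim=-1)
    geom = torch.stack(
        [x1 / W, y1 / H, x2 / W, y2 / H, (x2 - x1) / W, (y2 - y1) / H], dim=-1
    )
    # Concatenate appearance and geometry; the fusion model projects this vector
    # to its hidden size before self-attention over [text; regions].
    return torch.cat([region_feats, geom], dim=-1)  # (N, D + 6)

feats = build_visual_inputs(torch.randn(10, 2048), torch.rand(10, 4) * 500, (480, 640))
print(feats.shape)  # torch.Size([10, 2054])
```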
Large-scale pre-training methods for learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute-force manner, in this paper we propose a new learning method, Oscar, which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected and are often mentioned in the paired text. We pre-train an Oscar model on a public corpus of 6.5 million text-image pairs and fine-tune it on downstream tasks, creating new state-of-the-art results on six well-established vision-language understanding and generation tasks.
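To make the anchor-point idea concrete, here is a minimal sketch of the (word tokens, object tags, region features) triple that an Oscar-style encoder consumes; the class name, vocabulary size, and feature dimensions are placeholders, not the released implementation.

```python
# Hedged sketch of Oscar-style input construction: caption words and detected
# object tags share the text vocabulary, and region features are projected into
# the same hidden space, then all three segments form one sequence.
import torch
import torch.nn as nn

class OscarStyleInput(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, region_dim=2054):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, hidden)  # shared by words and object tags
        self.region_proj = nn.Linear(region_dim, hidden)  # projects detector region features

    def forward(self, word_ids, tag_ids, region_feats):
        # Tags act as anchor points shared between the text and image sides.
        return torch.cat(
            [self.word_emb(word_ids), self.word_emb(tag_ids), self.region_proj(region_feats)],
            dim=1,
        )  # (batch, n_words + n_tags + n_regions, hidden)

m = OscarStyleInput()
out = m(torch.randint(0, 30522, (2, 12)), torch.randint(0, 30522, (2, 5)), torch.randn(2, 5, 2054))
print(out.shape)  # torch.Size([2, 22, 768])
```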
We present GLIPv2, a grounded VL understanding model that serves both localization tasks (e.g., object detection, instance segmentation) and vision-language (VL) understanding tasks (e.g., VQA, image captioning). GLIPv2 elegantly unifies localization pre-training and vision-language pre-training (VLP) with three pre-training tasks: phrase grounding as a VL reformulation of the detection task, region-word contrastive learning as a novel region-word-level contrastive learning task, and masked language modeling. This unification not only simplifies the previous multi-stage VLP procedure but also enables mutual benefit between the localization and understanding tasks. Experimental results show that a single GLIPv2 model (with all model weights shared) achieves near-SOTA performance on a variety of localization and understanding tasks. The model also shows (1) strong zero-shot and few-shot adaptation performance on open-vocabulary object detection tasks and (2) superior grounding capability on VL understanding tasks. Code will be released at https://github.com/microsoft/glip.
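A hedged sketch of the region-word alignment at the heart of grounding-as-detection: each region proposal is scored against every token of the text prompt in a shared embedding space, in place of a fixed classification head. Function names, dimensions, and the temperature are illustrative assumptions, not the GLIPv2 code.

```python
# Generic region-word alignment scoring used to cast detection as grounding.
import torch
import torch.nn.functional as F

def region_word_alignment(region_feats, word_feats, temperature=0.07):
    """region_feats: (R, D), word_feats: (T, D), both already projected to a shared space."""
    region_feats = F.normalize(region_feats, dim=-1)
    word_feats = F.normalize(word_feats, dim=-1)
    return region_feats @ word_feats.t() / temperature  # (R, T) alignment logits

logits = region_word_alignment(torch.randn(100, 256), torch.randn(16, 256))
# Training would supervise these logits with the ground-truth region-to-token map
# (positives where a region matches a phrase token, negatives elsewhere).
print(logits.shape)  # torch.Size([100, 16])
```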
We study joint learning of Convolutional Neural Network (CNN) and Transformer for vision-language pre-training (VLPT), which aims to learn cross-modal alignments from millions of image-text pairs. State-of-the-art approaches extract salient image regions and align regions with words step by step. As region-based visual features usually represent only parts of an image, it is challenging for existing vision-language models to fully understand the semantics of the paired natural language. In this paper, we propose SOHO to "See Out of tHe bOx": it takes a whole image as input and learns vision-language representations in an end-to-end manner. SOHO does not require bounding box annotations, which enables inference 10 times faster than region-based approaches. In particular, SOHO learns to extract comprehensive yet compact image features through a visual dictionary (VD) that facilitates cross-modal understanding. The VD is designed to represent consistent visual abstractions of similar semantics. It is updated on the fly and utilized in our proposed pre-training task, Masked Visual Modeling (MVM). We conduct experiments on four well-established vision-language tasks following standard VLPT settings. Notably, SOHO achieves absolute gains of 2.0% R@1 on the MSCOCO text retrieval 5k test split, 1.5% accuracy on the NLVR2 test-P split, and 6.7% accuracy on the SNLI-VE test split.
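The sketch below illustrates one plausible form of such a visual dictionary: grid features are assigned to their nearest entry and entries are refreshed with a moving average. The dictionary size, momentum, and update rule are assumptions rather than SOHO's exact implementation.

```python
# Toy visual dictionary: nearest-entry assignment plus an on-the-fly EMA update.
import torch

class VisualDictionary:
    def __init__(self, num_entries=2048, dim=768, momentum=0.99):
        self.embed = torch.randn(num_entries, dim)
        self.momentum = momentum

    def assign(self, feats):
        """feats: (N, dim) grid features -> index of the nearest dictionary entry."""
        return torch.cdist(feats, self.embed).argmin(dim=-1)  # (N,)

    def update(self, feats, indices):
        # Exponential-moving-average refresh of the entries hit in this batch.
        for k in indices.unique():
            mean_feat = feats[indices == k].mean(dim=0)
            self.embed[k] = self.momentum * self.embed[k] + (1 - self.momentum) * mean_feat

vd = VisualDictionary()
feats = torch.randn(49, 768)   # e.g., a 7x7 grid of CNN features
idx = vd.assign(feats)
vd.update(feats, idx)
# Masked Visual Modeling (MVM) would then mask some grid positions and ask the
# model to predict the dictionary index of each masked position.
```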
Vision-language (VL) pre-training has recently received considerable attention. However, most existing end-to-end pre-training approaches either only aim to tackle VL tasks such as image-text retrieval, visual question answering (VQA), and image captioning that test high-level understanding of images, or only target region-level understanding for tasks such as phrase grounding and object detection. We present FIBER (Fusion-In-the-Backbone-based transformER), a new VL model architecture that can seamlessly handle both types of tasks. Instead of having dedicated transformer layers for fusion after the uni-modal backbones, FIBER pushes multimodal fusion deep into the model by inserting cross-attention into the image and text backbones, bringing gains in terms of memory and performance. In addition, unlike previous work that is pre-trained either only on image-text data or only on fine-grained data with box-level annotations, we present a two-stage pre-training strategy that uses both kinds of data efficiently: (i) coarse-grained pre-training based on image-text data, followed by (ii) fine-grained pre-training based on image-text-box data. We conduct comprehensive experiments on a wide range of VL tasks, from VQA, image captioning, and retrieval to phrase grounding, referring expression comprehension, and object detection. Using deep multimodal fusion coupled with the two-stage pre-training, FIBER provides consistent performance improvements over strong baselines across all tasks, often outperforming methods that use magnitudes more data. Code is available at https://github.com/microsoft/fiber.
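A minimal sketch of the fusion-in-the-backbone idea, assuming a gated cross-attention branch added inside a transformer block so that tokens of one modality can attend to the other; layer norms are omitted and the zero-initialized gate is an assumption, not a detail taken from the FIBER code.

```python
# Cross-attention inserted into a backbone block (layer norms omitted for brevity).
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # starts as a no-op so the backbone is preserved
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x, other):
        x = x + self.self_attn(x, x, x)[0]                        # usual backbone self-attention
        x = x + self.gate * self.cross_attn(x, other, other)[0]   # fusion with the other modality
        return x + self.mlp(x)

block = FusionBlock()
image_tokens, text_tokens = torch.randn(2, 196, 768), torch.randn(2, 20, 768)
print(block(image_tokens, text_tokens).shape)  # torch.Size([2, 196, 768])
```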
Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodal inputs are simultaneously processed for joint visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR2.
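The conditional-masking idea can be sketched in a few lines: at each pre-training step, tokens are masked in only one modality so the model conditions on a fully observed copy of the other. The mask rate and the coin-flip sampling below are placeholders, not UNITER's exact schedule.

```python
# Conditional masking: mask language OR regions, never both in the same example.
import torch

def conditional_mask(text_len, num_regions, mask_prob=0.15):
    """Return boolean masks (text_mask, region_mask); only one of them is non-empty."""
    if torch.rand(()) < 0.5:                      # mask language, condition on the full image
        text_mask = torch.rand(text_len) < mask_prob
        region_mask = torch.zeros(num_regions, dtype=torch.bool)
    else:                                         # mask regions, condition on the full text
        text_mask = torch.zeros(text_len, dtype=torch.bool)
        region_mask = torch.rand(num_regions) < mask_prob
    return text_mask, region_mask

t_mask, r_mask = conditional_mask(text_len=16, num_regions=36)
print(t_mask.sum().item(), r_mask.sum().item())
```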
This paper presents a unified Vision-Language Pre-training (VLP) model. The model is unified in that (1) it can be fine-tuned for either vision-language generation (e.g., image captioning) or understanding (e.g., visual question answering) tasks, and (2) it uses a shared multi-layer transformer network for both encoding and decoding, which differs from many existing methods where the encoder and decoder are implemented using separate models. The unified VLP model is pre-trained on a large amount of image-text pairs using the unsupervised learning objectives of two tasks: bidirectional and sequence-to-sequence (seq2seq) masked vision-language prediction. The two tasks differ solely in the context that the prediction conditions on, which is controlled by using specific self-attention masks for the shared transformer network. To the best of our knowledge, VLP is the first reported model that achieves state-of-the-art results on both vision-language generation and understanding tasks, as disparate as image captioning and visual question answering, across three challenging benchmark datasets: COCO Captions, Flickr30k Captions, and VQA 2.0. The code and the pre-trained models are available at https://github.com/LuoweiZhou/VLP.
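Since the two objectives differ only in the self-attention mask, a small sketch makes the mechanism concrete: a bidirectional mask lets every position attend everywhere, while a seq2seq mask restricts each text position to the image tokens plus the preceding text and hides the text from the image tokens. The segment ordering and shapes are illustrative assumptions.

```python
# Building the two self-attention masks a shared transformer can switch between.
import torch

def build_attention_mask(num_image_tokens, num_text_tokens, seq2seq: bool):
    """Return an (n, n) boolean mask where True means attention is allowed."""
    n = num_image_tokens + num_text_tokens
    mask = torch.ones(n, n, dtype=torch.bool)        # bidirectional: attend everywhere
    if seq2seq:
        t0 = num_image_tokens
        mask[:t0, t0:] = False                        # image context does not see the text
        causal = torch.tril(torch.ones(num_text_tokens, num_text_tokens)).bool()
        mask[t0:, t0:] = causal                       # each text token sees only earlier text
    return mask

print(build_attention_mask(3, 4, seq2seq=True).int())
```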
In this paper, we propose a single UniFied transfOrmer (UFO) that is capable of processing either unimodal inputs (e.g., an image or language) or multimodal inputs (e.g., the concatenation of an image and a question) for vision-language (VL) representation learning. Existing approaches typically design an individual network for each modality and/or a specific fusion network for multimodal tasks. To simplify the network architecture, we use a single transformer network and enforce multi-task learning during VL pre-training, which includes an image-text contrastive loss, an image-text matching loss, and masked language modeling losses based on bidirectional and seq2seq attention masks. The same transformer network is used as the image encoder, the text encoder, or the fusion network in different pre-training tasks. Empirically, we observe less conflict among the different tasks and achieve new state-of-the-art results on visual question answering, COCO image captioning (cross-entropy optimization), and nocaps (in SPICE). On other downstream tasks, e.g., image-text retrieval, we also achieve competitive performance.
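A toy sketch of the single-network idea: the same transformer weights serve as the image encoder, the text encoder, or the fusion module, depending on which sequence is fed in. Dimensions and layer counts are placeholders, not the UFO configuration.

```python
# One shared transformer playing three roles, selected by the input sequence.
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
shared = nn.TransformerEncoder(layer, num_layers=12)

image_tokens = torch.randn(2, 196, 768)   # patch embeddings
text_tokens = torch.randn(2, 20, 768)     # word embeddings

img_only = shared(image_tokens)                                # image-encoder role
txt_only = shared(text_tokens)                                 # text-encoder role
fused = shared(torch.cat([image_tokens, text_tokens], dim=1))  # fusion role
print(img_only.shape, txt_only.shape, fused.shape)
```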
Great progress has been made in recent years on developing better image captioning models, yet most of them rely on a separate object detector to extract regional features. Recent vision-language studies are shifting towards a detector-free trend by leveraging grid representations for more flexible model training and faster inference speed. However, this development is primarily focused on image understanding tasks and remains less investigated for caption generation. In this paper, we are concerned with a better-performing detector-free image captioning model and propose a pure vision-transformer-based image captioning model, dubbed ViTCAP, in which grid representations are used without extracting regional features. To improve performance, we introduce a novel Concept Token Network (CTN) to predict semantic concepts and then incorporate them into end-to-end captioning. In particular, the CTN is built on a vision transformer and is designed to predict concept tokens through a classification task, whose rich semantic information greatly benefits the captioning task. Compared with previous detector-based models, ViTCAP drastically simplifies the architecture while achieving competitive performance on various challenging image captioning datasets. In particular, ViTCAP reaches 138.1 CIDEr on the COCO-caption Karpathy split, and 93.8 and 108.6 CIDEr on the nocaps and Google-CC captioning datasets, respectively.
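One plausible reading of the Concept Token Network is sketched below: pooled ViT grid features are scored against a concept vocabulary, and the top-scoring concept embeddings are handed to the caption decoder alongside the grid features. The head design, vocabulary size, and top-k are assumptions, not the ViTCAP implementation.

```python
# A concept-token head: multi-label concept classification over pooled grid features.
import torch
import torch.nn as nn

class ConceptTokenHead(nn.Module):
    def __init__(self, dim=768, num_concepts=1000, top_k=20):
        super().__init__()
        self.classifier = nn.Linear(dim, num_concepts)    # multi-label concept scores
        self.concept_emb = nn.Embedding(num_concepts, dim)
        self.top_k = top_k

    def forward(self, grid_feats):                         # (B, N, dim) ViT grid features
        scores = self.classifier(grid_feats.mean(dim=1))   # (B, num_concepts)
        top = scores.topk(self.top_k, dim=-1).indices      # predicted semantic concepts
        return self.concept_emb(top)                       # (B, top_k, dim) concept tokens

head = ConceptTokenHead()
concept_tokens = head(torch.randn(2, 196, 768))
# The caption decoder would attend to [grid features; concept tokens].
print(concept_tokens.shape)  # torch.Size([2, 20, 768])
```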
The visual question answering (VQA) task uses visual images and language analysis to answer textual questions about an image. It has been a popular research topic with an increasing number of real-world applications over the last decade. This paper describes our recent research on AliceMind-MMU (ALIbaba's Collection of Encoder-decoders from the Machine IntelligeNce lab of Damo academy, for MultiMedia Understanding), which obtains similar or even slightly better results than humans on VQA. This is achieved by systematically improving the VQA pipeline, including: (1) pre-training with comprehensive visual and textual feature representations; (2) effective cross-modal interaction with learning to attend; and (3) a novel knowledge mining framework with specialized expert modules for complex VQA tasks. Treating different types of visual questions with the corresponding expertise plays an important role in boosting the performance of our VQA architecture up to the human level. Extensive experiments and analyses are conducted to demonstrate the effectiveness of the new research work.
Vision-and-language (VL) pre-training has proven to be highly effective on various VL downstream tasks. While recent work has shown that fully transformer-based VL models can be more efficient than previous region-feature-based methods, their performance on downstream tasks often degrades significantly. In this paper, we present METER (Multimodal End-to-end TransformER), through which we systematically investigate how to design and pre-train a fully transformer-based VL model in an end-to-end manner. Specifically, we dissect the model design along multiple dimensions: vision encoders (e.g., CLIP-ViT, Swin Transformer), text encoders (e.g., RoBERTa, DeBERTa), multimodal fusion (e.g., merged attention vs. co-attention), architectural design (e.g., encoder-only vs. encoder-decoder), and pre-training objectives (e.g., masked image modeling). We conduct comprehensive experiments on a wide range of VL tasks and provide insights on how to train a performant VL transformer while maintaining fast inference speed. Notably, METER achieves an accuracy of 77.64% on the VQAv2 test-std set using only 4M images for pre-training, surpassing the state-of-the-art region-feature-based VinVL model by +1.04% and outperforming the previous best fully transformer-based ALBEF model by +1.6%.
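The two fusion variants compared above can be contrasted in a few lines: merged attention runs one self-attention over the concatenated image and text tokens, while co-attention keeps separate streams that cross-attend to each other. This is a generic toy illustration, not the METER code.

```python
# Merged attention vs. co-attention on toy image/text token sequences.
import torch
import torch.nn as nn

dim, heads = 768, 12
self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

img = torch.randn(2, 196, dim)
txt = torch.randn(2, 20, dim)

# Merged attention: one joint sequence, one self-attention.
joint = torch.cat([img, txt], dim=1)
merged, _ = self_attn(joint, joint, joint)

# Co-attention: text queries attend to image keys/values (the image stream does the symmetric step).
txt_coattn, _ = cross_attn(txt, img, img)
print(merged.shape, txt_coattn.shape)
```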
We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks (visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval) by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models, achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.
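A minimal sketch of a co-attentional layer in this spirit: each stream computes queries from itself but takes keys and values from the other stream. Layer norms and feed-forward sub-blocks are omitted, and all sizes are placeholders rather than ViLBERT's configuration.

```python
# Two-stream co-attention: visual queries with linguistic keys/values, and vice versa.
import torch
import torch.nn as nn

class CoAttentionLayer(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.vis_attends_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_attends_vis = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vis, txt):
        vis_out = vis + self.vis_attends_txt(vis, txt, txt)[0]  # visual queries, linguistic keys/values
        txt_out = txt + self.txt_attends_vis(txt, vis, vis)[0]  # linguistic queries, visual keys/values
        return vis_out, txt_out

layer = CoAttentionLayer()
vis, txt = layer(torch.randn(2, 36, 768), torch.randn(2, 20, 768))
print(vis.shape, txt.shape)
```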
General-purpose vision (GPV) systems are models designed to solve a wide variety of visual tasks without requiring architectural changes. Today, GPVs primarily learn both skills and concepts from large fully supervised datasets. Scaling GPVs to tens of thousands of concepts by acquiring data to learn each concept for every skill quickly becomes prohibitive. This work presents an effective and inexpensive alternative: learn skills from supervised datasets, learn concepts from web image search, and leverage a key characteristic of GPVs, the ability to transfer visual knowledge across skills. We use a dataset of 1M+ images spanning 10K+ visual concepts to demonstrate webly supervised concept expansion for two existing GPVs (GPV-1 and VL-T5) on three benchmarks: five COCO-based datasets (80 primary concepts), a newly curated series of five datasets based on the OpenImages and VisualGenome repositories (~500 concepts), and a web-derived dataset (10K+ concepts). We also propose a new architecture, GPV-2, which supports a variety of tasks, from vision tasks such as classification and localization, to vision+language tasks such as QA and captioning, to more niche ones such as human-object interaction detection. GPV-2 benefits greatly from web data and outperforms GPV-1 and VL-T5 across these benchmarks. Our data, code, and web demo are available at https://prior.allenai.org/projects/gpv2.
The availability of large-scale image captioning and visual question answering datasets has contributed significantly to recent successes in vision-and-language pre-training. However, these datasets are often collected with over-restrictive requirements inherited from their original target tasks (e.g., image caption generation), which limit the resulting dataset scale and diversity. We take a step further in pushing the limits of vision-and-language pre-training data by relaxing the data collection pipeline used in Conceptual Captions 3M (CC3M) [70] and introduce Conceptual 12M (CC12M), a dataset with 12 million image-text pairs specifically meant to be used for vision-and-language pre-training. We perform an analysis of this dataset and benchmark its effectiveness against CC3M on multiple downstream tasks, with an emphasis on long-tail visual recognition. Our results clearly illustrate the benefit of scaling up pre-training data for vision-and-language tasks, as indicated by the new state-of-the-art results on both the nocaps and Conceptual Captions benchmarks.
Vision-Language Transformers can be learned without human labels (e.g., class labels, bounding boxes, etc.). Existing work, whether explicitly utilizing bounding boxes or patches, assumes that the visual backbone must first be trained on ImageNet class prediction before being integrated into a multimodal linguistic pipeline. We show that this is not necessary and introduce a new model Vision-Language from Captions (VLC) built on top of Masked Auto-Encoders that does not require this supervision. In fact, in a head-to-head comparison between ViLT, the current state-of-the-art patch-based vision-language transformer which is pretrained with supervised object classification, and our model, VLC, we find that our approach (1) outperforms ViLT on standard benchmarks, (2) provides more interpretable and intuitive patch visualizations, and (3) is competitive with many larger models that utilize ROIs trained on annotated bounding-boxes.
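The label-free visual pre-training that VLC builds on can be sketched via masked-autoencoder-style patch masking: a random subset of patches is hidden and must be reconstructed, with no ImageNet class labels involved. The mask ratio and patch count below are placeholders.

```python
# Random patch masking in the masked-auto-encoder style (no class labels needed).
import torch

def random_patch_mask(num_patches=196, mask_ratio=0.75):
    """Return (visible_indices, masked_indices) for one image."""
    perm = torch.randperm(num_patches)
    num_keep = int(num_patches * (1 - mask_ratio))
    return perm[:num_keep].sort().values, perm[num_keep:].sort().values

visible, masked = random_patch_mask()
print(len(visible), len(masked))  # 49 147
```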
With the development of transformers, pre-trained models have advanced at a breakneck pace in recent years. They dominate the mainstream techniques in natural language processing (NLP) and computer vision (CV). How to adapt pre-training to vision-and-language (V-L) learning and improve downstream task performance has become a focus of multimodal learning. In this paper, we review the recent progress in vision-language pre-trained models (VL-PTMs). As the core content, we first briefly introduce several ways of encoding raw images and text into single-modal embeddings before pre-training. We then dive into the mainstream architectures of VL-PTMs for modeling the interaction between text and image representations. We further present widely used pre-training tasks, and then introduce some common downstream tasks. We finally conclude the paper and propose some promising research directions. Our survey aims to provide researchers with a synthesis of, and pointers to, related research.
Vision-and-language pre-training has become the prevalent approach for tackling multimodal downstream tasks. The current trend is to move towards ever larger models and pre-training datasets. In the long run, this computational headlong rush does not seem reasonable for moving towards sustainable solutions, and it de facto excludes academic laboratories with limited resources. In this work, we propose a new framework, dubbed ViCHA, that efficiently exploits the input data to boost learning through: (a) a new hierarchical cross-modal alignment loss, (b) a new self-supervised scheme based on masked image modeling, and (c) leveraging image-level annotations, called visual concepts, obtained with existing foundation models such as CLIP to improve the performance of the image encoder. Although pre-trained on four times less data, our ViCHA strategy outperforms other approaches on downstream tasks such as image-text retrieval, VQA, visual reasoning, visual entailment, and visual grounding. The code will be made publicly available here: https://github.com/mshukor/vicha
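One plausible way to obtain such image-level visual concepts, sketched under the assumption that the Hugging Face transformers CLIP API is used, is to score a small concept vocabulary against each image and keep the top matches; ViCHA's actual extraction pipeline, vocabulary, and thresholds may differ, and the image path and concept list below are hypothetical.

```python
# Hypothetical sketch: tagging an image with its top-scoring concepts via CLIP.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

concepts = ["a dog", "a frisbee", "a park", "a car", "a snowboard"]  # toy vocabulary
image = Image.open("example.jpg")                                    # placeholder path

inputs = processor(text=concepts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits_per_image = model(**inputs).logits_per_image  # (1, num_concepts)
probs = logits_per_image.softmax(dim=-1)

# Keep the top-k concepts as image-level "visual concept" annotations.
topk = probs[0].topk(k=3)
visual_concepts = [concepts[i] for i in topk.indices.tolist()]
print(visual_concepts)
```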
In recent years, we have witnessed significant performance gains on the image captioning task from vision-language pre-training (VLP). Scale is believed to be an important factor in this progress. However, most existing work only focuses on pre-training transformers of moderate size (e.g., 12 or 24 layers) on roughly 4 million images. In this paper, we present LEMON, a large-scale image captioner, and provide the first empirical study of the scaling behavior of VLP for image captioning. We use the state-of-the-art VinVL model as our reference model, which consists of an image feature extractor and a transformer model, and scale the transformer both up and down, with model sizes ranging from 13 to 675 million parameters. On the data side, we conduct experiments with up to 200 million image-text pairs automatically collected from the web based on the alt attribute of images (dubbed ALT200M). Extensive analysis helps characterize the performance trends as the model size and the pre-training data size increase. We also compare different training recipes, especially for training on large-scale noisy data. As a result, LEMON achieves new state-of-the-art results on several major image captioning benchmarks, including COCO Caption, nocaps, and Conceptual Captions. We also show that LEMON can generate captions with long-tail visual concepts when used in a zero-shot manner.
Vision-language pre-training is an emerging and fast-developing research topic that transfers multi-modal knowledge from resource-rich pre-training tasks to resource-limited downstream tasks. Unlike existing works that predominantly learn a single generic encoder, we present a pre-trainable Universal Encoder-DEcoder Network (Uni-EDEN) to facilitate both vision-language perception (e.g., visual question answering) and generation (e.g., image captioning). Uni-EDEN is a two-stream transformer-based structure consisting of three modules: object and sentence encoders that separately learn the representations of each modality, and a sentence decoder that enables both multi-modal reasoning and sentence generation via inter-modal interaction. Considering that the linguistic description of an image can span different granularities, from simple to comprehensive, i.e., individual labels, phrases, and natural sentences, we pre-train Uni-EDEN through multi-granular vision-language proxy tasks: Masked Object Classification (MOC), Masked Region Phrase Generation (MRPG), Image-Sentence Matching (ISM), and Masked Sentence Generation (MSG). In this way, Uni-EDEN is endowed with the power of both multi-modal representation extraction and language modeling. Extensive experiments demonstrate the compelling generalizability of Uni-EDEN by fine-tuning it on four vision-language perception and generation downstream tasks.
In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structures (uni/multi-modal encoders/decoders) and depends on external modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture to one image encoder and one text decoder under a single language modeling task. We also scale up the pre-training data and the model size to boost model performance. Without bells and whistles, our GIT establishes new state of the art on 12 challenging benchmarks. For instance, our model surpasses human performance for the first time on TextCaps (138.2 vs. 125.5 in CIDEr). Furthermore, we present a new scheme of generation-based image classification and scene text recognition, achieving decent performance on standard benchmarks.
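A toy sketch of the simplified layout: one image encoder produces visual tokens that a single autoregressive text decoder consumes under a language-modeling objective. The modules below are generic stand-ins, not the GIT architecture, and the sizes and start token are placeholders.

```python
# One image encoder + one autoregressive text decoder, with greedy decoding.
import torch
import torch.nn as nn

dim, vocab = 768, 30522
image_encoder = nn.Linear(2048, dim)                       # stand-in for a vision transformer
decoder_layer = nn.TransformerDecoderLayer(d_model=dim, nhead=12, batch_first=True)
text_decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
word_emb, lm_head = nn.Embedding(vocab, dim), nn.Linear(dim, vocab)

image_feats = image_encoder(torch.randn(1, 196, 2048))     # visual tokens used as context
tokens = torch.tensor([[101]])                             # placeholder start token
for _ in range(5):                                         # greedily decode 5 tokens
    T = tokens.size(1)
    causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
    hidden = text_decoder(word_emb(tokens), image_feats, tgt_mask=causal)
    next_tok = lm_head(hidden[:, -1]).argmax(-1, keepdim=True)
    tokens = torch.cat([tokens, next_tok], dim=1)
print(tokens)
```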