Although much work has been done on developing models to address the problem of visual question answering, the ability of these models to relate the question to image features remains under-explored. We present an empirical study of different feature extraction methods paired with different loss functions. We propose a new dataset for the task of visual question answering, in which multiple image inputs have only one ground truth, and benchmark our results on it. Our final model, which uses ResNet + RCNN image features and BERT embeddings and is inspired by stacked attention networks, achieves 39% word accuracy and 99% image accuracy on the CLEVR + TinyImageNet dataset.
Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.
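For orientation, the sketch below illustrates the top-down step described above in a generic form: given bottom-up region features (assumed to come from a Faster R-CNN detector) and a question encoding, a learned scorer produces softmax weights over regions and pools a single attended feature. All layer sizes and names are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of top-down attention over
# bottom-up region features: a detector is assumed to have produced K region
# feature vectors per image, and the question encoding scores and softly
# weights those regions. Dimensions are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownAttention(nn.Module):
    def __init__(self, region_dim=2048, question_dim=1024, hidden_dim=512):
        super().__init__()
        self.proj_v = nn.Linear(region_dim, hidden_dim)
        self.proj_q = nn.Linear(question_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, regions, question):
        # regions: (B, K, region_dim) bottom-up features, question: (B, question_dim)
        joint = torch.tanh(self.proj_v(regions) + self.proj_q(question).unsqueeze(1))
        alpha = F.softmax(self.score(joint), dim=1)        # (B, K, 1) region weights
        return (alpha * regions).sum(dim=1), alpha         # attended feature, weights

# Usage with random stand-ins for detector and question-encoder outputs
att = TopDownAttention()
v_hat, alpha = att(torch.randn(2, 36, 2048), torch.randn(2, 1024))
```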
Vision-and-language tasks have gained popularity in the research community, but the focus is still mainly on English. We propose a pipeline that utilizes English-only vision-language models to train a monolingual model in a target language. We propose to extend OSCAR+, a model that leverages object tags as anchor points for learning image-text alignments, to train on visual question answering datasets in different languages. We present a novel knowledge distillation approach that uses parallel sentences to train the model in other languages. Compared with other models that include the target language in their pre-training corpora, we can leverage an existing English model to transfer the knowledge to the target language using significantly fewer resources. We also release a large-scale visual question answering dataset in Japanese and Hindi. Although we restrict our work to visual question answering, our model can be extended to any sequence-level classification task, and it can also be extended to other languages. This paper focuses on two languages for the visual question answering task - Japanese and Hindi. Our pipeline outperforms the current state-of-the-art models by relative increases of 4.4% and 13.4% in accuracy, respectively.
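As a rough illustration of the cross-lingual distillation idea, the snippet below shows a generic knowledge-distillation objective over parallel sentences: the target-language student is trained to match the English teacher's answer distribution. The temperature, answer-vocabulary size, and loss form are assumptions, not the paper's exact formulation.

```python
# A hedged sketch of distillation with parallel sentences (generic KD, not the
# authors' exact objective): the student sees the Hindi/Japanese sentence, the
# frozen English teacher sees the parallel English sentence, and the student is
# pushed toward the teacher's softened output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction='batchmean') * T * T

# 3129 is just an illustrative answer-vocabulary size
loss = distillation_loss(torch.randn(4, 3129), torch.randn(4, 3129))
```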
The Visual Question Answering (VQA) task leverages visual images and language analysis to answer textual questions about an image. It has been a popular research topic with an increasing number of real-world applications over the last decade. This paper describes our recent research on AliceMind-MMU (Alibaba's encoder-decoder models from the Machine Intelligence Lab of DAMO Academy for MultiMedia Understanding), which obtains similar or even slightly better results than humans on VQA. This is achieved by systematically improving the VQA pipeline, including: (1) pre-training with comprehensive visual and textual feature representations; (2) effective cross-modal interaction with learning to attend; and (3) a novel knowledge mining framework with specialized expert modules for the complex VQA task. Treating different types of visual questions with the corresponding expertise plays an important role in boosting the performance of our VQA architecture up to the human level. Extensive experiments and analysis are conducted to demonstrate the effectiveness of the new research work.
A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling "where to look" or visual attention, it is equally important to model "what words to listen to" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via novel 1-dimensional convolutional neural networks (CNNs). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.
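A minimal sketch of parallel co-attention in this spirit, assuming d-dimensional question-word features Q and image-region features V: an affinity matrix couples every word with every region, and one attention distribution per modality is derived from it. Dimensions and projections are illustrative, not the published model.

```python
# A rough sketch of parallel co-attention: build a word-region affinity matrix,
# derive per-word and per-region attention weights from it, and pool an
# attended summary for each modality. Shapes below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelCoAttention(nn.Module):
    def __init__(self, dim=512, hidden=256):
        super().__init__()
        self.W_b = nn.Linear(dim, dim, bias=False)   # affinity bilinear term
        self.W_q = nn.Linear(dim, hidden, bias=False)
        self.W_v = nn.Linear(dim, hidden, bias=False)
        self.w_hq = nn.Linear(hidden, 1)
        self.w_hv = nn.Linear(hidden, 1)

    def forward(self, Q, V):
        # Q: (B, T, dim) word features, V: (B, K, dim) region features
        C = torch.tanh(torch.bmm(self.W_b(Q), V.transpose(1, 2)))    # (B, T, K) affinity
        H_q = torch.tanh(self.W_q(Q) + torch.bmm(C, self.W_v(V)))    # words informed by regions
        H_v = torch.tanh(self.W_v(V) + torch.bmm(C.transpose(1, 2), self.W_q(Q)))
        a_q = F.softmax(self.w_hq(H_q), dim=1)                       # (B, T, 1) question attention
        a_v = F.softmax(self.w_hv(H_v), dim=1)                       # (B, K, 1) image attention
        return (a_q * Q).sum(1), (a_v * V).sum(1)                    # attended summaries

coatt = ParallelCoAttention()
q_hat, v_hat = coatt(torch.randn(2, 14, 512), torch.randn(2, 36, 512))
```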
Visual Question Answering (VQA) requires a fine-grained and simultaneous understanding of both the visual content of images and the textual content of questions. Therefore, designing an effective 'co-attention' model to associate key words in questions with key objects in images is central to VQA performance. So far, most successful attempts at co-attention learning have been achieved by using shallow models, and deep co-attention models show little improvement over their shallow counterparts. In this paper, we propose a deep Modular Co-Attention Network (MCAN) that consists of Modular Co-Attention (MCA) layers cascaded in depth. Each MCA layer models the self-attention of questions and images, as well as the guided-attention of images, jointly using a modular composition of two basic attention units. We quantitatively and qualitatively evaluate MCAN on the benchmark VQA-v2 dataset and conduct extensive ablation studies to explore the reasons behind MCAN's effectiveness. Experimental results demonstrate that MCAN significantly outperforms the previous state-of-the-art. Our best single model delivers 70.63% overall accuracy on the test-dev set. Code is available at https://github.com/MILVLG/mcan-vqa.
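The sketch below approximates one Modular Co-Attention (MCA) layer using PyTorch's multi-head attention as a stand-in for the paper's self-attention (SA) and guided-attention (GA) units; cascading such layers in depth gives a deep co-attention encoder. Layer sizes and the exact unit composition are assumptions.

```python
# A condensed sketch of one MCA-style layer: question self-attention, image
# self-attention, then question-guided attention over image regions. This is a
# generic approximation, not the released MCAN code.
import torch
import torch.nn as nn

class MCALayer(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.sa_q = nn.MultiheadAttention(dim, heads, batch_first=True)  # words attend to words
        self.sa_v = nn.MultiheadAttention(dim, heads, batch_first=True)  # regions attend to regions
        self.ga_v = nn.MultiheadAttention(dim, heads, batch_first=True)  # regions attend to words

    def forward(self, q_feats, v_feats):
        q, _ = self.sa_q(q_feats, q_feats, q_feats)
        v, _ = self.sa_v(v_feats, v_feats, v_feats)
        v, _ = self.ga_v(v, q, q)                 # guided attention: question steers image features
        return q, v

# Cascading MCA layers in depth yields the deep co-attention encoder
layers = nn.ModuleList([MCALayer() for _ in range(6)])
q, v = torch.randn(2, 14, 512), torch.randn(2, 36, 512)
for layer in layers:
    q, v = layer(q, v)
```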
This paper presents a detailed study of improving visual representations for vision language (VL) tasks and develops an improved object detection model to provide object-centric representations of images. Compared to the most widely used bottom-up and top-down model [2], the new model is bigger, better-designed for VL tasks, and pre-trained on much larger training corpora that combine multiple public annotated object detection datasets. Therefore, it can generate representations of a richer collection of visual objects and concepts. While previous VL research focuses mainly on improving the vision-language fusion model and leaves the object detection model improvement untouched, we show that visual features matter significantly in VL models. In our experiments we feed the visual features generated by the new object detection model into a Transformer-based VL fusion model OSCAR [21], and utilize an improved approach OSCAR+ to pre-train the VL model and fine-tune it on a wide range of downstream VL tasks. Our results show that the new visual features significantly improve the performance across all VL tasks, creating new state-of-the-art results on seven public benchmarks. Code, models and pre-extracted features are released at https://github.com/pzzhang/VinVL.
Attention networks in multimodal learning provide an efficient way to utilize given visual information selectively. However, the computational cost to learn attention distributions for every pair of multimodal input channels is prohibitively expensive. To solve this problem, co-attention builds two separate attention distributions for each modality, neglecting the interaction between multimodal inputs. In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly. BAN considers bilinear interactions among two groups of input channels, while low-rank bilinear pooling extracts the joint representations for each pair of channels. Furthermore, we propose a variant of multimodal residual networks to exploit the eight-attention maps of the BAN efficiently. We quantitatively and qualitatively evaluate our model on visual question answering (VQA 2.0) and Flickr30k Entities datasets, showing that BAN significantly outperforms previous methods and achieves new state-of-the-art results on both datasets.
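A simplified sketch of a single bilinear attention map in this spirit: region and word features are projected into a shared low-rank space, every (region, word) pair is scored to form the attention map, and a joint representation is pooled from it. The rank, dimensions, and single-glimpse setup are assumptions rather than BAN's full multi-glimpse design.

```python
# A simplified bilinear-attention sketch: a K x T attention map over
# (region, word) pairs plus low-rank bilinear pooling of the joint feature.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearAttention(nn.Module):
    def __init__(self, v_dim=2048, q_dim=768, rank=512):
        super().__init__()
        self.U = nn.Linear(v_dim, rank)    # projects image regions
        self.V = nn.Linear(q_dim, rank)    # projects question words
        self.p = nn.Linear(rank, 1)        # scores each (region, word) pair

    def forward(self, v, q):
        # v: (B, K, v_dim), q: (B, T, q_dim)
        hv, hq = self.U(v), self.V(q)                                    # (B, K, r), (B, T, r)
        logits = self.p(hv.unsqueeze(2) * hq.unsqueeze(1)).squeeze(-1)   # (B, K, T)
        A = F.softmax(logits.flatten(1), dim=1).view_as(logits)          # bilinear attention map
        joint = torch.einsum('bkt,bkr,btr->br', A, hv, hq)               # low-rank bilinear pooling
        return joint, A

ban = BilinearAttention()
joint, A = ban(torch.randn(2, 36, 2048), torch.randn(2, 14, 768))
```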
Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pre-training tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR2, and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pretraining strategies significantly contribute to our strong results; and also present several attention visualizations for the different encoders.
Contrastive Language-Image Pre-training (CLIP) has shown remarkable success in learning with cross-modal supervision from extensive amounts of image-text pairs collected online. Thus far, the effectiveness of CLIP has mainly been studied in general-domain multimodal problems. This work evaluates the effectiveness of CLIP for the task of Medical Visual Question Answering (MedVQA). To this end, we present PubMedCLIP, a fine-tuned version of CLIP for the medical domain based on PubMed articles. Our experiments are conducted on two MedVQA benchmark datasets and investigate two MedVQA methods, MEVF (mixture of enhanced visual features) and QCR (question answering via conditional reasoning). For each of these, we assess the merits of visual representation learning using PubMedCLIP, the original CLIP, and the state-of-the-art MAML (Model-Agnostic Meta-Learning) networks pre-trained only on visual data. We open-source the code for our MedVQA pipeline and the pre-training of PubMedCLIP. CLIP and PubMedCLIP achieve improvements in comparison to MAML's visual encoder. PubMedCLIP obtains the best results, with gains in overall accuracy of up to 3%. Individual examples illustrate the strengths of PubMedCLIP compared to the previously widely used MAML networks. Visual representation learning with language supervision in PubMedCLIP leads to noticeable improvements for MedVQA. Our experiments reveal distributional differences between the two MedVQA benchmark datasets that have not been addressed in previous work and that cause different back-end visual encoders in PubMedCLIP to exhibit different behavior on these datasets. Moreover, we witness fundamental performance differences between VQA in the general versus the medical domain.
Answering semantically complicated questions about an image is challenging in the Visual Question Answering (VQA) task. Although the image can be well represented by deep learning, the question is always simply embedded and cannot well indicate its meaning. Besides, the visual and textual features have a gap across different modalities, and it is hard to align and utilize cross-modality information. In this paper, we focus on these two problems and propose a Graph Matching Attention (GMA) network. Firstly, it not only builds a graph for the image but also constructs a graph for the question in terms of both syntactic and embedding information. Next, we explore intra-modality relationships with a dual-stage graph encoder and then present a bilateral cross-modality graph matching attention mechanism to infer the relationships between the image and the question. The updated cross-modality features are then sent to the answer prediction module for final answer prediction. Experiments demonstrate that our network achieves state-of-the-art performance on the GQA dataset and the VQA 2.0 dataset. Ablation studies verify the effectiveness of each module in our GMA network.
Visual Question Answering (VQA) has witnessed tremendous progress in recent years. However, most efforts have only focused on the 2D image question answering task. In this paper, we present the first attempt at extending VQA to the 3D domain, which can facilitate artificial intelligence's perception of 3D real-world scenarios. Different from image-based VQA, 3D Question Answering (3DQA) takes a colored point cloud as input and requires both appearance and 3D geometry comprehension ability to answer 3D-related questions. To this end, we propose a novel transformer-based 3DQA framework, "3DQA-TR", which consists of two encoders for exploiting the appearance and geometry information, respectively. The multi-modal information about appearance, geometry, and the linguistic question can finally attend to each other via a 3D-linguistic BERT to predict the target answers. To verify the effectiveness of our proposed 3DQA framework, we further develop the first 3DQA dataset, "ScanQA", which is built upon the ScanNet dataset and contains ~6K questions and ~30K answers for 806 scenes. Extensive experiments on this dataset demonstrate the clear superiority of our proposed 3DQA framework over existing VQA frameworks, as well as the effectiveness of our major designs. Our code and dataset will be made publicly available to facilitate research in this direction.
Visual Question Answering (VQA) in surgery is largely unexplored. Expert surgeons are scarce and are often overloaded with clinical and academic workloads. This overload often limits their time for answering questionnaires from patients, medical students or junior residents related to surgical procedures. At times, students and junior residents also refrain from asking too many questions during classes to reduce disruption. While computer-aided simulators and recordings of past surgical procedures are available for them to observe and improve their skills, they still heavily rely on medical experts to answer their questions. Having a surgical VQA system as a reliable "second opinion" could act as a backup and ease the load on medical experts in answering these questions. The lack of annotated medical data and the presence of domain-specific terms have limited the exploration of VQA for surgical procedures. In this work, we design a surgical VQA task that answers questionnaires about surgical procedures based on the surgical scene. Extending the MICCAI Endoscopic Vision Challenge 2018 dataset and a workflow recognition dataset, we introduce two surgical VQA datasets with classification-based and sentence-based answers. To perform surgical VQA, we employ vision-text transformer models. We further introduce a residual MLP-based VisualBERT encoder model that enforces interaction between visual and text tokens, improving performance on classification-based answering. Furthermore, we study the influence of the number of input image patches and of temporal visual features on model performance in both classification-based and sentence-based answering.
Artificial Intelligence (AI) and its applications have sparked extraordinary interest in recent years. This achievement can be ascribed in part to advances in AI subfields including Machine Learning (ML), Computer Vision (CV), and Natural Language Processing (NLP). Deep learning, a sub-field of machine learning that employs artificial neural network concepts, has enabled the most rapid growth in these domains. The integration of vision and language has sparked a lot of attention as a result. The tasks have been created in such a way that they properly exemplify the concepts of deep learning. In this review paper, we provide a thorough and extensive review of state-of-the-art approaches and key model design principles, and discuss existing datasets, methods, their problem formulations, and evaluation measures for VQA and visual reasoning tasks, in order to understand vision and language representation learning. We also present some potential future paths in this field of research, with the hope that our study may generate new ideas and novel approaches to handle existing difficulties and develop new applications.
Large-scale pretraining is fast becoming the norm in Vision-Language (VL) modeling. However, prevailing VL approaches are limited by the requirement for labeled data and the use of complex multi-step pretraining objectives. We present MAGMA - a simple method for augmenting generative language models with additional modalities using adapter-based finetuning. Building on Frozen, we train a series of VL models that autoregressively generate text from arbitrary combinations of visual and textual input. The pretraining is entirely end-to-end using a single language modeling objective, simplifying optimization compared to previous approaches. Importantly, the language model weights remain unchanged during training, allowing for the transfer of encyclopedic knowledge and in-context learning abilities from language pretraining. MAGMA outperforms Frozen on open-ended generative tasks, achieving state-of-the-art results on the OKVQA benchmark and competitive results on a range of other popular VL benchmarks, while pretraining on 0.2% of the number of samples used to train SimVLM.
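To make the adapter-based recipe concrete, here is a minimal sketch: a small bottleneck adapter wrapped around an otherwise frozen language-model block, so that only adapter parameters (and, in the full method, a visual prefix encoder, not shown) receive gradients. The bottleneck size and placement are assumptions, not MAGMA's exact configuration.

```python
# A minimal adapter-finetuning sketch: the wrapped block's weights are frozen
# and only the residual bottleneck adapter is trainable.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim=1024, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))   # residual bottleneck

class AdaptedBlock(nn.Module):
    def __init__(self, frozen_block, dim=1024):
        super().__init__()
        for p in frozen_block.parameters():
            p.requires_grad = False                     # keep LM weights unchanged
        self.block, self.adapter = frozen_block, Adapter(dim)

    def forward(self, h):
        return self.adapter(self.block(h))

# Wrap a stand-in "language model layer"; only adapter params are trainable
block = AdaptedBlock(nn.TransformerEncoderLayer(1024, 8, batch_first=True))
out = block(torch.randn(2, 16, 1024))
```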
In recent years, multi-modal transformers have shown significant progress in vision-language tasks such as Visual Question Answering (VQA), outperforming previous architectures by a considerable margin. This improvement in VQA is often attributed to the rich interactions between the vision and language streams. In this work, we investigate the efficacy of co-attention transformer layers in helping the network focus on relevant regions while answering the question. We generate visual attention maps using the question-conditioned image attention scores in these co-attention layers. We evaluate the effect of the following critical components on the visual attention of a state-of-the-art VQA model: (i) the number of object region proposals, (ii) question part-of-speech (POS) tags, (iii) question semantics, (iv) the number of co-attention layers, and (v) answer accuracy. We compare the neural network attention maps against human attention maps both qualitatively and quantitatively. Our findings indicate that co-attention transformer modules are crucial for attending to relevant regions of the image given a question. Importantly, we observe that the semantic meaning of the question is not what drives visual attention, but rather specific keywords in the question do. Our work sheds light on the function and interpretation of co-attention transformer layers, highlights gaps in current networks, and can guide the development of future VQA models and networks that simultaneously process visual and language streams.
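A hedged sketch of the kind of analysis described above: the question-to-image attention weights of a co-attention (cross-attention) layer are read out and averaged over heads and question tokens to obtain a per-region visual attention map. The layer, dimensions, and token counts are stand-ins, not the authors' model, and `average_attn_weights` assumes a recent PyTorch version.

```python
# Extract head-averaged question-to-image attention weights from a generic
# cross-attention layer and collapse them into one score per image region.
import torch
import torch.nn as nn

cross_attn = nn.MultiheadAttention(embed_dim=768, num_heads=12, batch_first=True)
q_tokens = torch.randn(1, 14, 768)    # question-stream tokens (stand-in)
v_regions = torch.randn(1, 36, 768)   # image-region tokens (stand-in)

# attn has shape (B, T_q, K): one weight per (question token, image region) pair
_, attn = cross_attn(q_tokens, v_regions, v_regions,
                     need_weights=True, average_attn_weights=True)
visual_map = attn.mean(dim=1)         # (B, K): one score per image region
```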
Writing reports by analyzing medical images is error-prone for inexperienced practitioners and experienced physicians alike. In this work, we present RepsNet, which adapts pre-trained vision and language models to interpret medical images and generate automated reports in natural language. RepsNet consists of an encoder-decoder model: the encoder aligns the images with natural language descriptions via contrastive learning, while the decoder predicts answers by conditioning on the encoded images and the prior context of descriptions retrieved by nearest-neighbor search. We frame the problem in a visual question answering setting to handle both categorical and descriptive natural language answers. We perform experiments on two challenging tasks on radiology image datasets: medical visual question answering (VQA-RAD) and report generation (IU-Xray). Results show that RepsNet outperforms state-of-the-art methods, with 81.08% classification accuracy on VQA-RAD 2018 and a 0.58 BLEU-1 score on IU-Xray. Supplementary details are available at https://sites.google.com/view/repsnet
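The encoder's contrastive alignment step can be pictured with a generic symmetric InfoNCE objective, sketched below under the assumption of paired image and report embeddings within a batch; the temperature and embedding size are placeholders, not RepsNet's exact loss.

```python
# A small sketch of contrastive image-text alignment: matching image/report
# embeddings in a batch are pulled together, mismatched pairs pushed apart.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    img_emb = F.normalize(img_emb, dim=-1)        # (B, d)
    txt_emb = F.normalize(txt_emb, dim=-1)        # (B, d)
    logits = img_emb @ txt_emb.t() / temperature  # (B, B) similarity of every pair
    targets = torch.arange(img_emb.size(0))       # the i-th image matches the i-th report
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = contrastive_alignment_loss(torch.randn(8, 512), torch.randn(8, 512))
```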
Generalizing beyond one's experiences plays a significant role in developing practical AI systems. It has been shown that current Visual Question Answering (VQA) models are over-dependent on language priors in the train set (spurious correlations between question types and their most frequent answers) and perform poorly on out-of-distribution (OOD) test sets. This behavior limits their generalizability and restricts their use in real-world situations. This paper shows that the sequence model architecture used in the question encoder has a significant role in the generalizability of VQA models. To demonstrate this, we perform a detailed analysis of various existing RNN-based and Transformer-based question encoders, and in addition, we propose a novel Graph Attention Network (GAT)-based question encoder. Our study finds that a better choice of sequence model in the question encoder improves the generalizability of VQA models even without using any additional, relatively complex bias-mitigation approaches.
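As a bare-bones illustration of a graph-attention question encoder, the sketch below implements a generic single-head GAT layer over word nodes restricted by an adjacency mask (for example, from a dependency parse). It is not the paper's exact encoder; the masking scheme and dimensions are assumptions.

```python
# A generic single-head graph-attention layer over question-word nodes:
# each word attends only to its neighbours in a given adjacency matrix.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    def __init__(self, in_dim=300, out_dim=512):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (B, N, in_dim) word embeddings, adj: (B, N, N) 0/1 adjacency (no all-zero rows)
        h = self.W(x)                                                # (B, N, out_dim)
        pairs = torch.cat([h.unsqueeze(2).expand(-1, -1, h.size(1), -1),
                           h.unsqueeze(1).expand(-1, h.size(1), -1, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))                  # (B, N, N) raw scores
        e = e.masked_fill(adj == 0, float('-inf'))                   # keep only graph edges
        alpha = F.softmax(e, dim=-1)
        return torch.relu(alpha @ h)                                 # aggregated node features

enc = GraphAttentionLayer()
out = enc(torch.randn(2, 10, 300), torch.ones(2, 10, 10))
```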
This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use the semantic representation of a question as a query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates how the SAN progressively locates, layer by layer, the relevant visual clues that lead to the answer of the question.
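A compact sketch of one stacked-attention step along the lines described above: the current query vector attends over spatial image features, and the attended visual summary is added back to refine the query; stacking the step queries the image multiple times. Feature sizes below are placeholders.

```python
# One attention step of a stacked attention network (sketch, not the paper's
# code): score image regions against the current query, pool them, and refine
# the query with the attended visual summary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionStep(nn.Module):
    def __init__(self, img_dim=512, q_dim=512, hidden=256):
        super().__init__()
        self.W_I = nn.Linear(img_dim, hidden, bias=False)
        self.W_Q = nn.Linear(q_dim, hidden)
        self.W_P = nn.Linear(hidden, 1)

    def forward(self, v, u):
        # v: (B, R, img_dim) region/grid features, u: (B, q_dim) current query
        h = torch.tanh(self.W_I(v) + self.W_Q(u).unsqueeze(1))
        p = F.softmax(self.W_P(h), dim=1)          # where to look at this step
        return (p * v).sum(dim=1) + u              # refined query for the next step

steps = nn.ModuleList([AttentionStep() for _ in range(2)])  # a two-layer SAN
v, u = torch.randn(2, 196, 512), torch.randn(2, 512)
for step in steps:
    u = step(v, u)                                 # the final u feeds the answer classifier
```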
Text-VQA aims at answering questions that require understanding the textual cues in an image. Despite the great progress of existing Text-VQA methods, their performance suffers from insufficient human-labeled question-answer (QA) pairs. However, we observe that, in general, the scene text is not fully exploited in the existing datasets - only a small portion of the text in each image participates in the annotated QA activities. This results in a huge waste of useful information. To address this deficiency, we develop a new method to generate high-quality and diverse QA pairs by explicitly utilizing the existing rich text available in the scene context of each image. Specifically, we propose TAG, a text-aware visual question-answer generation architecture that learns to produce meaningful and accurate QA samples using a multimodal transformer. The architecture exploits underexplored scene text information and enhances the scene understanding of Text-VQA models by combining the generated QA pairs with the initial training data. Extensive experimental results on two well-known Text-VQA benchmarks (TextVQA and ST-VQA) demonstrate that our proposed TAG effectively enlarges the training data and helps improve Text-VQA performance without extra labeling effort. Moreover, our model outperforms state-of-the-art approaches that are pre-trained with extra large-scale data. Code will be made publicly available.