Considerable progress has been made in domain generalization (DG), which aims to learn a generalizable model from multiple annotated source domains for unseen target domains. However, in many real-world scenarios, obtaining annotations for sufficient source datasets can be prohibitively expensive. To escape the dilemma between domain generalization and annotation costs, we introduce in this paper a new task named Label-Efficient Domain Generalization (LEDG), which enables model generalization with label-limited source domains. To address this challenging task, we propose a novel framework called Collaborative Exploration and Generalization (CEG), which jointly optimizes active exploration and semi-supervised generalization. Specifically, in active exploration, to explore class and domain discriminability while avoiding information divergence and redundancy, we query the labels of the samples with the highest overall ranking of class uncertainty, domain representativeness, and information diversity. In semi-supervised generalization, we design mixup-based intra- and inter-domain knowledge augmentation to expand domain knowledge and generalize domain invariance. We unify active exploration and semi-supervised generalization in a collaborative way and promote mutual enhancement between them, boosting model generalization with limited annotations. Extensive experiments show that CEG yields outstanding generalization performance. In particular, CEG can even achieve results competitive with previous DG methods that use fully labeled data, while using only a 5% annotation budget on the PACS dataset.
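As a rough illustration of the active-exploration step, the sketch below ranks unlabeled samples by a sum of three normalized criteria (class uncertainty, domain representativeness, information diversity) and queries the top-scoring ones. The specific scoring functions and the equal weighting are assumptions for illustration, not the paper's exact formulation.

```python
# A minimal sketch of the active-exploration query step (assumed scoring).
import numpy as np

def entropy(probs, eps=1e-12):
    """Class uncertainty: entropy of the predicted class distribution."""
    return -(probs * np.log(probs + eps)).sum(axis=1)

def query_samples(class_probs, domain_scores, features, budget):
    """Rank unlabeled samples by uncertainty + representativeness + diversity
    and return the indices of the top-`budget` samples to annotate."""
    uncertainty = entropy(class_probs)                        # class discriminability
    representativeness = domain_scores                        # e.g. similarity to a domain prototype (assumed)
    # diversity: distance to the mean feature discourages redundant queries (assumed)
    diversity = np.linalg.norm(features - features.mean(0), axis=1)

    def norm(x):  # rescale each criterion to [0, 1] before summing
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    overall = norm(uncertainty) + norm(representativeness) + norm(diversity)
    return np.argsort(-overall)[:budget]

# toy usage with random stand-in data
probs = np.random.dirichlet(np.ones(7), size=100)
dom = np.random.rand(100)
feats = np.random.randn(100, 128)
print(query_samples(probs, dom, feats, budget=5))
```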
Recently, a variety of vision Transformers have been developed for their capability of modeling long-range dependencies. In current Transformer-based backbones for medical image segmentation, convolutional layers are either replaced with pure Transformers, or Transformers are added to the deepest encoder to learn global context. However, there remain mainly two challenges from a scale perspective: (1) intra-scale problem: existing methods lack the ability to extract local-global cues at each scale, which may affect signal propagation for small objects; (2) inter-scale problem: existing methods fail to explore distinctive information from multiple scales, which may hinder representation learning for objects with widely varying sizes, shapes, and locations. To address these limitations, we propose a novel backbone, namely ScaleFormer, with two appealing designs: (1) a scale-wise intra-scale Transformer is designed to couple CNN-based local features with Transformer-based global cues at each scale, where row-wise and column-wise global dependencies can be extracted by a lightweight dual-axis MSA; (2) a simple and effective spatial-aware inter-scale Transformer is designed to interact with consensus regions across multiple scales, which can highlight cross-scale dependencies and resolve complex scale variations. Experimental results on different benchmarks demonstrate that our ScaleFormer outperforms the current state-of-the-art methods. The code is publicly available at: https://github.com/zjugivelab/scaleformer.
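The dual-axis MSA of design (1) can be approximated as row-wise attention followed by column-wise attention over a feature map, which keeps the attention cost linear in each spatial dimension. The PyTorch sketch below is a minimal rendering of that idea; the head count and the coupling with CNN features are omitted, and the details are assumptions rather than the paper's exact module.

```python
# A rough sketch of a lightweight dual-axis MSA: attention within rows, then
# within columns, instead of over all H*W positions at once.
import torch
import torch.nn as nn

class DualAxisMSA(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                     # x: (B, C, H, W)
        b, c, h, w = x.shape
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)     # attend within each row
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)

        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)     # attend within each column
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)   # back to (B, C, H, W)

# toy usage
print(DualAxisMSA(64)(torch.randn(2, 64, 16, 16)).shape)      # torch.Size([2, 64, 16, 16])
```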
Multimodal sentiment analysis and depression estimation are two important research topics that aim to predict human mental states from multimodal data. Previous research has focused on developing effective fusion strategies for exchanging and integrating mind-related information across different modalities. Recently, MLP-based techniques have achieved great success in various computer vision tasks. Inspired by this, we explore a multimodal approach from a feature-mixing perspective in this study. To this end, we introduce CubeMLP, a multimodal feature processing framework based entirely on MLPs. CubeMLP consists of three independent MLP units, each with two affine transformations. CubeMLP accepts all relevant modality features as input and mixes them across three axes. After processing with CubeMLP, the mixed multimodal features are flattened for task prediction. Our experiments are conducted on the sentiment analysis datasets CMU-MOSI and CMU-MOSEI, and the depression estimation dataset AVEC2019. The results show that CubeMLP can achieve state-of-the-art performance at a much lower computational cost.
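The axis-mixing idea can be illustrated with three small MLPs, each mixing a (batch, sequence, modality, channel) feature cube along one axis. The hidden sizes and the exact placement of the two affine transformations in the sketch below are assumptions; it only shows the mixing pattern, not CubeMLP's precise layer design.

```python
# A minimal sketch of per-axis MLP mixing over a multimodal feature cube.
import torch
import torch.nn as nn

class AxisMLP(nn.Module):
    """Two affine transformations with a nonlinearity, applied over one axis."""
    def __init__(self, size, hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(size, hidden), nn.GELU(), nn.Linear(hidden, size))

    def forward(self, x, axis):
        x = x.transpose(axis, -1)            # move the target axis to the last dim
        x = self.net(x)
        return x.transpose(axis, -1)

class CubeMixer(nn.Module):
    def __init__(self, seq_len, n_modal, channels):
        super().__init__()
        self.seq_mlp = AxisMLP(seq_len, 2 * seq_len)      # mix across time steps
        self.mod_mlp = AxisMLP(n_modal, 2 * n_modal)      # mix across modalities
        self.ch_mlp = AxisMLP(channels, 2 * channels)     # mix across channels

    def forward(self, x):                    # x: (B, L, M, C)
        x = self.seq_mlp(x, 1)
        x = self.mod_mlp(x, 2)
        return self.ch_mlp(x, 3)

feats = torch.randn(8, 50, 3, 64)            # 3 modalities (e.g. text/audio/video)
print(CubeMixer(50, 3, 64)(feats).shape)     # torch.Size([8, 50, 3, 64])
```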
While U-Net has achieved great success in medical image segmentation tasks, it lacks the ability to explicitly model long-range dependencies. Vision Transformers have therefore recently emerged as alternative segmentation architectures for their innate ability to capture long-range correlations through self-attention (SA). However, Transformers usually rely on large-scale pre-training and have high computational complexity. Moreover, SA can only model self-affinities within a single sample, ignoring potential correlations across the whole dataset. To address these problems, we propose a novel Transformer module named Mixed Transformer Module (MTM) for simultaneous inter- and intra-affinity learning. MTM first calculates self-affinities efficiently through our well-designed Local-Global Gaussian-Weighted Self-Attention (LGG-SA). Then, it mines inter-connections between data samples through External Attention (EA). Using MTM, we construct a U-shaped model named Mixed Transformer U-Net (MT-UNet) for accurate medical image segmentation. We evaluate our method on two different public datasets, and the experimental results show that it achieves better performance than other state-of-the-art methods. The code is available at: https://github.com/dootmaan/mt-unet.
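External Attention (EA) replaces the key/value projections of self-attention with two small learnable memories shared across all samples, which is how dataset-level correlations can be mined. Below is a minimal sketch of that mechanism; the memory size is chosen arbitrarily and is not taken from the paper.

```python
# A sketch of External Attention: attention against two learnable external
# memories shared across all samples, with the usual double normalization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalAttention(nn.Module):
    def __init__(self, dim, mem_size=64):
        super().__init__()
        self.mk = nn.Linear(dim, mem_size, bias=False)   # external key memory
        self.mv = nn.Linear(mem_size, dim, bias=False)   # external value memory

    def forward(self, x):                  # x: (B, N, C) token features
        attn = self.mk(x)                  # (B, N, S): affinity to memory slots
        attn = F.softmax(attn, dim=1)      # normalize over tokens
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # double normalization
        return self.mv(attn)               # (B, N, C)

print(ExternalAttention(128)(torch.randn(2, 196, 128)).shape)  # torch.Size([2, 196, 128])
```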
Domain generalization (DG) exploits multiple labeled source datasets to train a generalizable model for unseen target domains. However, due to expensive annotation costs, the requirement of labeling all source data is hard to meet in real-world applications. In this paper, we investigate the Single Labeled Domain Generalization (SLDG) task, where only one source domain is labeled, which is more practical and challenging than the Conventional Domain Generalization (CDG) task. A major obstacle in the SLDG task is a bias toward poor generalization: the discriminative information in the labeled source dataset may contain domain-specific bias, constraining the generalization of the trained model. To address this challenging task, we propose a novel method called Domain-Specific Bias Filtering (DSBF), which initializes a discriminative model with the labeled source data and then filters out its domain-specific bias with the unlabeled source data for generalization improvement. We divide the filtering process into (1) feature extractor debiasing via K-means clustering-based semantic feature re-extraction, and (2) classifier calibration via attention-guided semantic feature projection. DSBF unifies the exploration of the labeled and the unlabeled source data to enhance the discriminability and generalization of the trained model, resulting in a highly generalizable model. We further provide a theoretical analysis to verify the proposed domain-specific bias filtering process. Extensive experiments on multiple datasets show the superiority of DSBF in tackling both the challenging SLDG task and the CDG task.
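A minimal sketch of the clustering-driven re-extraction step follows, assuming the unlabeled source features are grouped with K-means into as many clusters as there are classes; how the resulting centers are then used to debias the feature extractor is only hinted at in the comments and is not spelled out here.

```python
# K-means over unlabeled source features, yielding semantic centers and
# per-sample assignments that can drive feature re-extraction downstream.
import numpy as np
from sklearn.cluster import KMeans

def cluster_semantic_centers(unlabeled_features, num_classes):
    """Cluster unlabeled features into `num_classes` groups and return the
    cluster centers plus per-sample assignments (pseudo semantic labels)."""
    km = KMeans(n_clusters=num_classes, n_init=10, random_state=0)
    assignments = km.fit_predict(unlabeled_features)
    return km.cluster_centers_, assignments

feats = np.random.randn(500, 256)                  # stand-in unlabeled source features
centers, pseudo_labels = cluster_semantic_centers(feats, num_classes=7)
print(centers.shape, pseudo_labels.shape)          # (7, 256) (500,)
```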
Instrumental variables (IVs), sources of treatment randomization that are conditionally independent of the outcome, play an important role in causal inference with unobserved confounders. However, existing IV-based counterfactual prediction methods require well-predefined IVs, while finding valid IVs in many real-world scenarios is an art rather than a science. Moreover, predefined hand-crafted IVs could be weak or erroneous by violating the conditions of valid IVs. These thorny facts hinder the application of IV-based counterfactual prediction methods. In this paper, we propose a novel Automatic Instrumental Variable decomposition (AutoIV) algorithm to automatically generate representations serving the role of IVs from observed variables (IV candidates). Specifically, we let the learned IV representations satisfy the relevance condition with the treatment and the exclusion condition with the outcome via mutual information maximization and minimization constraints, respectively. We also learn confounder representations by encouraging them to be relevant to both the treatment and the outcome. The IV and confounder representations compete for information under their respective constraints in an adversarial game, which allows us to obtain valid IV representations for IV-based counterfactual prediction. Extensive experiments demonstrate that our method generates valid IV representations for accurate IV-based counterfactual prediction.
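A heavily simplified sketch of the decomposition idea follows: two encoders split the observed covariates into IV and confounder representations, and plain predictive losses stand in for the mutual-information constraints (relevance to the treatment, exclusion from the outcome). The real method optimizes explicit MI bounds in an adversarial game; everything below, including every layer size and loss term, is an illustrative assumption.

```python
# A simplified stand-in for the IV/confounder decomposition with crude
# predictive-loss surrogates for the relevance and exclusion constraints.
import torch
import torch.nn as nn

class AutoIVSketch(nn.Module):
    def __init__(self, x_dim, z_dim=16):
        super().__init__()
        self.enc_iv = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
        self.enc_cf = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
        self.t_from_iv = nn.Linear(z_dim, 1)      # relevance: z_iv should predict treatment
        self.t_from_cf = nn.Linear(z_dim, 1)      # confounder is relevant to treatment ...
        self.y_from_cf = nn.Linear(z_dim + 1, 1)  # ... and to the outcome (given treatment)
        self.y_from_iv = nn.Linear(z_dim + 1, 1)  # adversary: exclusion wants this to fail

    def losses(self, x, t, y):
        z_iv, z_cf = self.enc_iv(x), self.enc_cf(x)
        mse = nn.functional.mse_loss
        rel_iv = mse(self.t_from_iv(z_iv), t)
        rel_cf = mse(self.t_from_cf(z_cf), t) + mse(self.y_from_cf(torch.cat([z_cf, t], 1)), y)
        excl = mse(self.y_from_iv(torch.cat([z_iv, t], 1)), y)
        # Encoder objective: keep z_iv predictive of t, z_cf predictive of t and y,
        # while making y hard to predict from z_iv (hence the minus sign); the
        # y_from_iv head itself would be trained separately to minimize `excl`.
        return rel_iv + rel_cf - excl

model = AutoIVSketch(x_dim=10)
x, t, y = torch.randn(32, 10), torch.randn(32, 1), torch.randn(32, 1)
print(model.losses(x, t, y).item())
```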
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
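One plausible, simple form of global photometric alignment is matching the per-channel mean and standard deviation of a source image to target-domain statistics. The sketch below illustrates only this image-level idea and is not the paper's exact alignment operator.

```python
# Channel-wise mean/std alignment of a source image toward target statistics.
import numpy as np

def photometric_align(src_img, tgt_mean, tgt_std, eps=1e-6):
    """src_img: (H, W, 3) float array in [0, 1]; tgt_mean/tgt_std: per-channel stats."""
    src_mean = src_img.mean(axis=(0, 1))
    src_std = src_img.std(axis=(0, 1))
    aligned = (src_img - src_mean) / (src_std + eps) * tgt_std + tgt_mean
    return np.clip(aligned, 0.0, 1.0)

src = np.random.rand(480, 640, 3).astype(np.float32)
aligned = photometric_align(src, tgt_mean=np.array([0.4, 0.45, 0.5]),
                            tgt_std=np.array([0.2, 0.2, 0.25]))
print(aligned.shape)
```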
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. In terms of temporal artifacts, self-attention based TimeSFormer is improved to detect temporal artifacts. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
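A toy illustration of the saliency-aware aggregation idea: per-pixel artifact-strength maps are pooled with a visual-saliency map as the weight, so artifacts in salient regions dominate the score. The plain weighted sum below is an assumption, not the SSTAM formulation.

```python
# Saliency-weighted pooling of per-pixel artifact-strength maps.
import numpy as np

def saliency_weighted_score(artifact_maps, saliency, eps=1e-8):
    """artifact_maps: dict name -> (H, W) strength map; saliency: (H, W) weights."""
    weights = saliency / (saliency.sum() + eps)
    return {name: float((m * weights).sum()) for name, m in artifact_maps.items()}

h, w = 120, 160
maps = {a: np.random.rand(h, w) for a in ["blurring", "blocking", "bleeding", "ringing"]}
print(saliency_weighted_score(maps, np.random.rand(h, w)))
```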
Image virtual try-on aims at replacing the cloth on a personal image with a garment image (in-shop clothes), and has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the character of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of the occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth warps to an unreasonable body part. Based on this in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which generates semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts sharpened semantic parsing on the try-on person. Aided by semantics guidance and a pose prior, textures of various complexity are selectively blended with human parts in a copy-and-paste manner. Then, the Generative Module (GM) takes charge of synthesizing the final try-on image and learning de-occlusion jointly. In comparison with state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects.
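A rough sketch of how a semantics-guided copy-and-paste mixup could simulate an occlusion: donor pixels are blended into the try-on image only inside a chosen semantic region. The blend ratio and the hypothetical arm mask below are illustrative assumptions, not the module's actual parameters.

```python
# Semantic-mask-guided copy-and-paste blending to simulate an occlusion.
import numpy as np

def semantic_mixup(tryon_img, donor_img, semantic_mask, alpha=0.7):
    """tryon_img, donor_img: (H, W, 3); semantic_mask: (H, W) in {0, 1}."""
    mask = semantic_mask[..., None].astype(np.float32)
    blended = alpha * donor_img + (1 - alpha) * tryon_img
    return (1 - mask) * tryon_img + mask * blended

h, w = 256, 192
tryon = np.random.rand(h, w, 3)
donor = np.random.rand(h, w, 3)
arm_mask = np.zeros((h, w))
arm_mask[100:180, 40:80] = 1          # hypothetical arm region
print(semantic_mixup(tryon, donor, arm_mask).shape)
```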
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separate approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples part features from things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term our model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ and design a new part-whole cross-attention scheme to further boost part segmentation quality. We design a new part-whole interaction method using masked cross attention. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the previous Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
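A compact sketch of masked cross attention in the Mask2Former spirit: part queries attend to pixel features, with positions outside a predicted foreground mask suppressed by a large negative bias. The dimensions and the mask source below are illustrative assumptions, not the exact Panoptic-PartFormer++ design.

```python
# Masked cross attention: queries attend only to pixels allowed by attn_mask.
import torch
import torch.nn.functional as F

def masked_cross_attention(queries, pixel_feats, attn_mask):
    """queries: (B, Q, C); pixel_feats: (B, N, C); attn_mask: (B, Q, N) bool,
    True where attention is allowed."""
    scale = queries.shape[-1] ** -0.5
    logits = torch.einsum("bqc,bnc->bqn", queries, pixel_feats) * scale
    logits = logits.masked_fill(~attn_mask, -1e9)   # block masked-out pixels
    attn = F.softmax(logits, dim=-1)
    return torch.einsum("bqn,bnc->bqc", attn, pixel_feats)

q = torch.randn(2, 10, 64)                 # 10 part queries
feats = torch.randn(2, 32 * 32, 64)        # flattened pixel features
mask = torch.rand(2, 10, 32 * 32) > 0.5    # hypothetical per-query foreground mask
print(masked_cross_attention(q, feats, mask).shape)  # torch.Size([2, 10, 64])
```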