Video-based unsupervised domain adaptation (VUDA) methods improve the robustness of video models, enabling them to be applied to action recognition tasks across different environments. However, these methods require constant access to the source data during adaptation, while in many real-world applications the subjects and scenes in the source video domain should be irrelevant to those in the target video domain. With the increasing emphasis on data privacy, methods that require access to source data raise serious privacy concerns. To cope with this concern, a more practical domain adaptation scenario is formulated as source-free video domain adaptation (SFVDA). Although a few methods exist for source-free domain adaptation (SFDA) on image data, they yield degraded performance in SFVDA owing to the multi-modal nature of videos and the presence of additional temporal features. In this paper, we propose a novel Attentive Temporal Consistent Network (ATCoN) that addresses SFVDA by learning temporal consistency, guaranteed by two novel consistency objectives, namely feature consistency and source prediction consistency, performed across local temporal features. ATCoN further constructs effective overall features by attending to local temporal features based on prediction confidence. Empirical results demonstrate the state-of-the-art performance of ATCoN across various cross-domain action recognition benchmarks.
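As a rough illustration of the confidence-based attention described above, the following is a minimal PyTorch sketch that aggregates local temporal (clip) features by prediction confidence and adds a simple feature-consistency term; all function and variable names are illustrative assumptions, not taken from the ATCoN code.

```python
import torch
import torch.nn.functional as F

def attend_clips(clip_feats, clip_logits):
    """clip_feats: (B, K, D) local temporal features of K clips;
    clip_logits: (B, K, C) per-clip class predictions."""
    probs = F.softmax(clip_logits, dim=-1)          # (B, K, C)
    conf = probs.max(dim=-1).values                 # (B, K) prediction confidence
    attn = F.softmax(conf, dim=-1).unsqueeze(-1)    # (B, K, 1) attention over clips
    overall = (attn * clip_feats).sum(dim=1)        # (B, D) overall feature
    # a simple feature-consistency penalty: local features should agree
    # with the attended overall feature
    consistency = F.mse_loss(
        clip_feats, overall.unsqueeze(1).expand_as(clip_feats))
    return overall, consistency
```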
To enable video models to be applied seamlessly across different environments, various video unsupervised domain adaptation (VUDA) methods have been proposed to improve the robustness and transferability of video models. Despite improved model robustness, these VUDA methods still require access to both the source data and the source model parameters for adaptation, raising serious concerns over data privacy and model portability. To cope with the above concerns, this paper first formulates black-box video domain adaptation (BVDA), a more realistic yet challenging scenario where the source video model is provided only as a black-box predictor. While a few methods for black-box domain adaptation (BDA) have been proposed in the image domain, they cannot be applied to the video domain, since the video modality has more complicated temporal features that are harder to align. To address BVDA, we propose a novel endo- and exo-temporal regularized network (EXTERN) that applies a mask-to-mix strategy together with two video-tailored regularizations, an endo-temporal regularization and an exo-temporal regularization performed over clip and temporal features, while distilling knowledge from the predictions obtained from the black-box predictor. Empirical results demonstrate the state-of-the-art performance of EXTERN across various cross-domain closed-set and partial-set action recognition benchmarks, even surpassing most existing video domain adaptation methods that do have access to source data.
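Since only the black-box predictor's outputs are available in BVDA, the core distillation step can be sketched as below; this is a minimal, assumption-laden illustration (the mask-to-mix strategy and the two temporal regularizations are omitted), not the EXTERN implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def query_black_box(black_box, video):
    # only soft predictions are exposed; no parameters, no gradients
    return F.softmax(black_box(video), dim=-1)

def distillation_loss(student_logits, teacher_probs):
    # KL divergence between the target model and the black-box teacher
    log_q = F.log_softmax(student_logits, dim=-1)
    return F.kl_div(log_q, teacher_probs, reduction="batchmean")
```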
Domain adaptation (DA) approaches address domain shift and enable networks to be applied to different scenarios. Although various image DA approaches have been proposed in recent years, there is limited research towards video DA. This is partly due to the complexity of adapting the different modalities of features in videos, which include the correlation features extracted as long-term dependencies of pixels across spatiotemporal dimensions. The correlation features are highly associated with action classes and have proven effective for accurate video feature extraction in the supervised action recognition task. Yet correlation features of the same action differ across domains due to domain shift. We therefore propose a novel Adversarial Correlation Adaptation Network (ACAN) that aligns action videos by aligning pixel correlations. ACAN aims to minimize the discrepancy of correlation information across domains, termed the Pixel Correlation Discrepancy (PCD). Additionally, video DA research is limited by the lack of cross-domain video datasets with large domain shifts. We therefore introduce a novel HMDB-ARID dataset with a larger domain shift caused by a larger statistical difference between domains. This dataset is built in an effort to leverage current datasets for dark video classification. Empirical results demonstrate the state-of-the-art performance of our proposed ACAN on both existing and the new video DA datasets.
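A Gram-style correlation statistic gives a feel for what a correlation discrepancy could look like; the sketch below correlates channels over spatiotemporal positions and penalizes the source-target gap, which is only a loose stand-in for the paper's PCD, under our own naming.

```python
import torch
import torch.nn.functional as F

def correlation_map(feat):
    """feat: (B, C, T, H, W) spatiotemporal features."""
    b, c = feat.shape[:2]
    x = F.normalize(feat.reshape(b, c, -1), dim=2)  # flatten T*H*W positions
    return torch.bmm(x, x.transpose(1, 2))          # (B, C, C) correlations

def correlation_discrepancy(src_feat, tgt_feat):
    cs = correlation_map(src_feat).mean(dim=0)
    ct = correlation_map(tgt_feat).mean(dim=0)
    return (cs - ct).pow(2).mean()
```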
Although action recognition has achieved impressive results over recent years, both the collection and the annotation of video training data remain time-consuming and cost-intensive. Image-to-video adaptation has therefore been proposed to exploit label-free web image sources for adapting to unlabeled target videos. This poses two major challenges: (1) the spatial domain shift between web images and video frames; and (2) the modality gap between image and video data. To address these challenges, we propose Cycle Domain Adaptation (CycDA), a cycle-based approach for unsupervised image-to-video domain adaptation that, on the one hand, leverages the joint spatial information in images and videos and, on the other hand, trains an independent spatio-temporal model to bridge the modality gap. We alternate between spatial and spatio-temporal learning, with knowledge transfer between the two in each cycle. We evaluate our approach on benchmark datasets for image-to-video as well as mixed-source domain adaptation, achieving state-of-the-art results and demonstrating the benefits of our cyclic adaptation.
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different, well-labeled source domain to a new unlabeled target domain. Most existing UDA methods require access to the source data and are thus not applicable when the data are confidential and cannot be shared due to privacy concerns. This paper aims to tackle a realistic setting where only a trained source classification model is available, instead of access to the source data. To effectively utilize the source model for adaptation, we propose a novel approach called Source HypOthesis Transfer (SHOT), which learns the feature extraction module for the target domain by fitting the target data features to the frozen source classification module (representing the classification hypothesis). Specifically, SHOT exploits both information maximization and self-supervised learning for feature extraction module learning, ensuring that the target features are implicitly aligned, through the same hypothesis, with the features of the unseen source data. Furthermore, we propose a new labeling transfer strategy, which separates the target data into two splits based on the confidence of predictions (labeling information) and then employs semi-supervised learning to improve the accuracy of the less confident predictions in the target domain. We denote the labeling transfer as SHOT++ when the predictions are obtained by SHOT. Extensive experiments on both digit classification and object recognition tasks show that SHOT and SHOT++ achieve results surpassing or comparable to the state of the art, demonstrating the effectiveness of our approaches for various visual domain adaptation problems. Code is available at \url{https://github.com/tim-learn/shot-plus}.
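The labeling-transfer step can be pictured as a confidence-based split of the target set; the sketch below is an assumption-level illustration (the 0.9 threshold and all names are ours, not from the SHOT++ code).

```python
import torch
import torch.nn.functional as F

def labeling_transfer_split(logits, threshold=0.9):
    """Split target samples into confident / less-confident subsets."""
    probs = F.softmax(logits, dim=-1)
    conf, pseudo = probs.max(dim=-1)
    confident = conf >= threshold
    # confident samples act as "labeled" data for semi-supervised learning;
    # the remaining samples stay unlabeled
    return pseudo[confident], confident, ~confident
```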
Semi-supervised domain adaptation (SSDA) is a challenging problem that requires methods to overcome both (1) overfitting towards poorly annotated data and (2) distribution shift across domains. Unfortunately, a simple combination of domain adaptation (DA) and semi-supervised learning (SSL) methods often fails to address these two objectives, because the training data are biased towards the labeled samples. In this paper, we introduce an adaptive structure learning method to regularize the cooperation of SSL and DA. Inspired by multi-view learning, our proposed framework consists of a shared feature encoder network and two classifier networks trained for contradictory purposes. One of the classifiers is applied to group target features to improve intra-class density, enlarging the gap between categorical clusters for robust representation learning. The other classifier, acting as a regularizer, attempts to scatter the source features to enhance the smoothness of the decision boundary. The iteration of target clustering and source expansion makes the target features well enclosed inside the dilated boundary of the corresponding source points. To jointly address cross-domain feature alignment and partially labeled data learning, we apply maximum mean discrepancy (MMD) distance minimization and self-training (ST) to project the contradictory structures into a shared view for a reliable final decision. Experimental results on standard SSDA benchmarks, including DomainNet and Office-Home, demonstrate the accuracy and robustness of our method over state-of-the-art approaches.
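For the MMD minimization mentioned above, a standard Gaussian-kernel estimate looks as follows; bandwidth handling is simplified to a single sigma, which is an assumption rather than the paper's choice.

```python
import torch

def gaussian_mmd(x, y, sigma=1.0):
    """x: (N, D) source features; y: (M, D) target features."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    # biased MMD^2 estimate under a single Gaussian kernel
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```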
The prime challenge in unsupervised domain adaptation (DA) is to mitigate the domain shift between the source and target domains. Prior DA works show that pretext tasks can mitigate this domain shift by learning domain-invariant representations. In practice, however, we find that most existing pretext tasks are ineffective against other established techniques. We therefore theoretically analyze how and when a subsidiary pretext task can be leveraged to assist the goal task of a given DA problem, and we develop objective subsidiary-task suitability criteria. Based on these criteria, we devise a novel sticker intervention process and cast sticker classification as a supervised subsidiary DA problem concurrent with the goal task's unsupervised DA. Our approach not only improves goal task adaptation performance, but also facilitates privacy-oriented source-free DA, i.e., without concurrent source-target access. Experiments on the standard Office-31, Office-Home, DomainNet, and VisDA benchmarks demonstrate our superiority for both single-source and multi-source source-free DA. Our approach also complements existing source-free works, achieving leading performance.
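The sticker intervention itself is simple to picture: paste a labeled sticker onto the input and classify it as the subsidiary task. The sketch below is illustrative only; the placement policy and names are our assumptions, not the paper's procedure.

```python
import torch

def paste_sticker(image, sticker, sticker_label, top=0, left=0):
    """image: (C, H, W); sticker: (C, h, w) with h <= H and w <= W."""
    out = image.clone()
    _, h, w = sticker.shape
    out[:, top:top + h, left:left + w] = sticker
    # subsidiary supervision: which sticker class was pasted?
    return out, sticker_label
```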
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain. Prior UDA methods typically require access to the source data when learning to adapt the model, making them risky and inefficient for decentralized private data. This work tackles a practical setting where only a trained source model is available and investigates how we can effectively utilize such a model without source data to solve UDA problems. We propose a simple yet generic representation learning framework, named Source HypOthesis Transfer (SHOT). SHOT freezes the classifier module (hypothesis) of the source model and learns the target-specific feature extraction module by exploiting both information maximization and self-supervised pseudo-labeling to implicitly align representations from the target domains to the source hypothesis. To verify its versatility, we evaluate SHOT in a variety of adaptation cases including closed-set, partial-set, and open-set domain adaptation. Experiments indicate that SHOT yields state-of-the-art results across multiple domain adaptation benchmarks.
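The information-maximization objective is standard enough to sketch: make each prediction confident (low entropy) while keeping the batch-level prediction diverse. A minimal version under our own naming, not the SHOT code:

```python
import torch
import torch.nn.functional as F

def information_maximization(logits, eps=1e-8):
    probs = F.softmax(logits, dim=-1)
    # per-sample entropy: push each prediction to be confident
    ent = -(probs * (probs + eps).log()).sum(dim=-1).mean()
    # diversity: keep the mean prediction close to uniform
    mean_p = probs.mean(dim=0)
    div = (mean_p * (mean_p + eps).log()).sum()  # negative entropy of marginal
    return ent + div
```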
Unsupervised video domain adaptation is a practical yet challenging task. In this work, we tackle it for the first time from a disentanglement view. Our key idea is to disentangle the domain-related information from the data during adaptation. Specifically, we consider the generation of cross-domain videos from two sets of latent factors, one encoding the static, domain-related information and the other encoding the temporal- and semantics-related information. A Transfer Sequential VAE (TranSVAE) framework is then developed to model such generation. To better serve adaptation, we further propose several objectives to constrain the latent factors in TranSVAE. Extensive experiments on the UCF-HMDB, Jester, and Epic-Kitchens datasets verify the effectiveness and superiority of TranSVAE compared with several state-of-the-art methods. Code is publicly available at https://github.com/ldkong1205/transvae.
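The two-factor generation idea can be caricatured by an encoder that emits one static, per-video latent and one dynamic, per-frame latent; the architecture below is a guess for illustration, not the TranSVAE model.

```python
import torch
import torch.nn as nn

class TwoFactorEncoder(nn.Module):
    def __init__(self, feat_dim, zd_dim=64, zt_dim=64):
        super().__init__()
        self.static = nn.Linear(feat_dim, 2 * zd_dim)         # per-video z_d
        self.dynamic = nn.GRU(feat_dim, 2 * zt_dim, batch_first=True)

    def forward(self, frame_feats):                           # (B, T, feat_dim)
        s = self.static(frame_feats.mean(dim=1))              # pool over time
        d, _ = self.dynamic(frame_feats)                      # per-frame z_t
        z_static = self._sample(*s.chunk(2, dim=-1))          # domain-related
        z_dynamic = self._sample(*d.chunk(2, dim=-1))         # semantics-related
        return z_static, z_dynamic

    @staticmethod
    def _sample(mu, logvar):
        # reparameterization trick
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()
```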
Assuming that the source label space subsumes the target one, partial video domain adaptation (PVDA) is a more general and practical scenario for cross-domain video classification problems. The main challenge of PVDA is to mitigate the negative transfer caused by the source-only outlier classes. To tackle this challenge, a key step is to aggregate target predictions to assign class weights, up-weighing target classes and down-weighing outlier classes. However, incorrect predictions of class weights can mislead the network and lead to negative transfer. Previous works improve class-weight accuracy by using temporal features and attention mechanisms, but these methods may fall short of producing accurate class weights when the domain shift is significant, as in most real-world scenarios. To address these challenges, we propose the Multi-modality Cluster-calibrated partial Adversarial Network (MCAN). MCAN enhances video feature extraction with multi-modal features from multiple temporal scales to form more robust overall features. It utilizes a novel class-weight calibration method to alleviate the negative transfer caused by incorrect class weights. The calibration method attempts to identify and weigh correct and incorrect predictions using the distributional information implied by unsupervised clustering. Extensive experiments are conducted on prevailing PVDA benchmarks, and the proposed MCAN achieves significant improvements over state-of-the-art PVDA methods.
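The uncalibrated baseline for class weights in partial-set adaptation is simply the averaged target prediction, which the cluster calibration then corrects; a sketch of that baseline, with illustrative names:

```python
import torch
import torch.nn.functional as F

def aggregate_class_weights(target_logits):
    """target_logits: (N, C) predictions over the whole target set."""
    probs = F.softmax(target_logits, dim=-1)
    w = probs.mean(dim=0)          # shared classes receive larger mass
    return w / w.max()             # normalize so the largest weight is 1
```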
Systems for person re-identification (ReID) can achieve a high accuracy when trained on large fully-labeled image datasets. However, the domain shift typically associated with diverse operational capture conditions (e.g., camera viewpoints and lighting) may translate to a significant decline in performance. This paper focuses on unsupervised domain adaptation (UDA) for video-based ReID - a relevant scenario that is less explored in the literature. In this scenario, the ReID model must adapt to a complex target domain defined by a network of diverse video cameras based on tracklet information. State-of-the-art methods cluster unlabeled target data, yet domain shifts across target cameras (sub-domains) can lead to poor initialization of clustering methods that propagates noise across epochs, thus preventing the ReID model from accurately associating samples of the same identity. In this paper, a UDA method is introduced for video person ReID that leverages knowledge on video tracklets, and on the distribution of frames captured over target cameras, to improve the performance of CNN backbones trained using pseudo-labels. Our method relies on an adversarial approach, where a camera-discriminator network is introduced to extract discriminant camera-independent representations, facilitating the subsequent clustering. In addition, a weighted contrastive loss is proposed to leverage the confidence of clusters, and mitigate the risk of incorrect identity associations. Experimental results obtained on three challenging video-based person ReID datasets - PRID2011, iLIDS-VID, and MARS - indicate that our proposed method can outperform related state-of-the-art methods. Our code is available at: \url{https://github.com/dmekhazni/CAWCL-ReID}
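The adversarial camera-discriminator component is typically built on a gradient-reversal layer; below is a minimal sketch under our own naming (the paper's weighted contrastive loss is not reproduced here).

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None  # reverse gradients into the backbone

def camera_adversarial_loss(features, camera_ids, discriminator, lam=1.0):
    # the discriminator tries to predict the camera; reversed gradients
    # push the backbone towards camera-independent representations
    reversed_feats = GradReverse.apply(features, lam)
    return F.cross_entropy(discriminator(reversed_feats), camera_ids)
```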
Video semantic segmentation has achieved great progress under the supervision of large amounts of labeled training data. However, domain-adaptive video segmentation, which can mitigate the data labeling constraint by adapting from a labeled source domain to an unlabeled target domain, has been largely neglected. We design Temporal Pseudo Supervision (TPS), a simple and effective method that explores the idea of consistency training for learning effective representations from unlabeled target videos. Unlike traditional consistency training, which builds consistency in the spatial space, we explore consistency training in the spatiotemporal space by enforcing model consistency across augmented video frames, which helps to learn from more diverse target data. Specifically, we design cross-frame pseudo-labeling to provide pseudo supervision from previous video frames while learning from augmented current video frames. Cross-frame pseudo-labeling encourages the network to produce high-certainty predictions, which effectively facilitates consistency training with cross-frame augmentation. Extensive experiments over multiple public datasets show that TPS is simpler to implement, more stable to train, and achieves superior video segmentation accuracy compared with the state of the art.
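Cross-frame pseudo-labeling reduces to: predict on a previous frame, then supervise the augmented current frame. The sketch below drops the inter-frame alignment details for brevity and uses assumed names.

```python
import torch
import torch.nn.functional as F

def cross_frame_pseudo_loss(model, prev_frame, curr_frame_aug):
    """prev_frame, curr_frame_aug: (B, 3, H, W) video frames."""
    with torch.no_grad():
        pseudo = model(prev_frame).argmax(dim=1)   # (B, H, W) pseudo labels
    logits = model(curr_frame_aug)                 # (B, C, H, W)
    return F.cross_entropy(logits, pseudo)
```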
Over the last few years, unsupervised domain adaptation (UDA) techniques have gained remarkable importance and popularity in computer vision. However, compared with the extensive literature available for images, the video domain remains relatively unexplored. Meanwhile, the performance of action recognition models is severely affected by domain shift. In this paper, we propose a simple and novel UDA approach for video action recognition. Our approach leverages recent advances in spatio-temporal transformers to build a robust source model that generalizes better to the target domain. Furthermore, our architecture learns domain-invariant features thanks to the introduction of a novel alignment loss term derived from the Information Bottleneck principle. We report results on two video action recognition benchmarks for UDA, showing state-of-the-art performance on HMDB$\leftrightarrow$UCF, as well as on Kinetics$\rightarrow$NEC-Drone, which is more challenging. This demonstrates the effectiveness of our method in handling different levels of domain shift. The source code is available at https://github.com/vturrisi/udavt.
Existing video domain adaptation (DA) methods need to store all temporal combinations of video frames or pair the source and target videos, which is memory-expensive and does not scale to long videos. To address these limitations, we propose a memory-efficient graph-based video DA approach. Our method first models each source or target video by a graph: nodes represent video frames, and edges represent the temporal or visual similarity relationships between frames. We use a graph attention network to learn the weight of individual frames and simultaneously align the source and target videos into a domain-invariant graph feature space. Instead of storing a large number of sub-videos, our method constructs only one graph, with a graph attention mechanism, per video, which greatly reduces the memory cost. Extensive experiments show that, compared with state-of-the-art methods, we achieve superior performance while reducing the memory cost significantly.
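The per-video graph can be sketched as a frame-similarity adjacency plus a learned attention pooling; the snippet below is a single-head simplification of what a graph attention network would do, with hypothetical names.

```python
import torch
import torch.nn.functional as F

def frame_graph(frame_feats):
    """frame_feats: (T, D) features of one video's frames."""
    x = F.normalize(frame_feats, dim=-1)
    return x @ x.t()                               # (T, T) visual-similarity edges

def attention_pool(frame_feats, score_proj):
    """score_proj: e.g. torch.nn.Linear(D, 1), learns per-frame weights."""
    scores = score_proj(frame_feats).squeeze(-1)   # (T,)
    attn = F.softmax(scores, dim=0)
    return (attn.unsqueeze(-1) * frame_feats).sum(dim=0)   # (D,) video feature
```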
State-of-the-art 3D semantic segmentation models are trained on off-the-shelf public benchmarks, but they often face a major challenge when deployed to a new domain. In this paper, we propose an Active-and-Adaptive Segmentation (ADAS) baseline to enhance the weak cross-domain generalization ability of a well-trained 3D segmentation model and bridge the point distribution gap between domains. Specifically, before the cross-domain adaptation stage begins, ADAS performs an active sampling operation to select a maximally-informative subset from both the source and target domains for effective adaptation, reducing the adaptation difficulty under 3D scenarios. Benefiting from the rise of multi-modal 2D-3D datasets, ADAS utilizes a cross-modal attention-based feature fusion module that can extract a representative pair of image features and point features to achieve a bi-directional image-point feature interaction for safer adaptation. Experimentally, ADAS is verified to be effective in many cross-domain settings, including: 1) Unsupervised Domain Adaptation (UDA), meaning that all samples from the target domain are unlabeled; 2) Unsupervised Few-shot Domain Adaptation (UFDA), meaning that only a few unlabeled samples are available in the unlabeled target domain; 3) Active Domain Adaptation (ADA), meaning that the target samples selected by ADAS are manually annotated. The results demonstrate that ADAS achieves a significant accuracy gain by easily coupling with self-training methods or off-the-shelf UDA works.
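A common proxy for "maximally-informative" selection is prediction entropy; the sketch below illustrates that flavor of active sampling and is our assumption, not necessarily the ADAS criterion.

```python
import torch
import torch.nn.functional as F

def select_informative(logits, budget):
    """Pick the `budget` samples the model is least certain about."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
    return entropy.topk(budget).indices
```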
As a line of research into the efficient use of data, multi-source unsupervised domain adaptation transfers knowledge from multiple source domains with labeled data to an unlabeled target domain. However, both the distribution discrepancy between different domains and the noisy pseudo-labels in the target domain lead to performance bottlenecks for multi-source unsupervised domain adaptation methods. In light of this, we propose an approach that integrates Attention-Driven domain fusion and Noise-Tolerant learning (ADNT) to address the two issues mentioned above. First, we establish a contrary attention structure to fuse information across features and to induce domain transfer. By this means, the discriminability of the features can be significantly improved while the domain discrepancy is reduced. Second, based on the features obtained with unsupervised domain adaptation training, we design an adaptive reverse cross-entropy loss, which directly imposes constraints on the generation of pseudo-labels. Finally, combining these two approaches, experimental results on several benchmarks further validate the effectiveness of our proposed ADNT and demonstrate performance superior to state-of-the-art methods.
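A plain (non-adaptive) reverse cross-entropy term, known to be more tolerant of noisy pseudo-labels than standard cross-entropy, can be sketched as follows; the adaptive weighting of ADNT is not reproduced, and the log-zero constant is a common implementation convention, not the paper's value.

```python
import torch
import torch.nn.functional as F

def reverse_cross_entropy(logits, pseudo_labels, num_classes, log_zero=-4.0):
    probs = F.softmax(logits, dim=-1)
    one_hot = F.one_hot(pseudo_labels, num_classes).float()
    # log of the (one-hot) label distribution, with log(0) clamped
    log_labels = torch.where(one_hot > 0,
                             torch.zeros_like(one_hot),
                             torch.full_like(one_hot, log_zero))
    return -(probs * log_labels).sum(dim=-1).mean()
```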
Unsupervised domain adaptation (UDA) has successfully addressed the domain shift problem for visual applications. Yet these approaches may have limited performance on time-series data for the following reasons. First, they mainly rely on a large-scale dataset (i.e., ImageNet) for source pretraining, which is not applicable to time-series data. Second, they ignore the temporal dimension of the feature space of the source and target domains during the domain alignment step. Last, most prior UDA methods can only align global features without considering the fine-grained class distribution of the target domain. To address these limitations, we propose a SeLf-supervised AutoRegressive Domain Adaptation (SLARDA) framework. In particular, we first design a self-supervised learning module that utilizes forecasting as an auxiliary task to improve the transferability of source features. Second, we propose a novel autoregressive domain adaptation technique that incorporates the temporal dependency of both source and target features during domain alignment. Finally, we develop an ensemble teacher model to align the class-wise distribution in the target domain via a confident pseudo-labeling approach. Extensive experiments have been conducted on three real-world time-series applications with 30 cross-domain scenarios. The results demonstrate that our proposed SLARDA method significantly outperforms state-of-the-art approaches for time-series domain adaptation.
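The forecasting auxiliary task amounts to predicting the next timestep's feature from the encoded history; a minimal sketch with assumed architecture choices, not the SLARDA module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForecastingAux(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, feat_dim)

    def forward(self, feats):                    # (B, T, D) time-series features
        h, _ = self.rnn(feats[:, :-1])           # encode the history
        pred = self.head(h[:, -1])               # forecast the last step
        return F.mse_loss(pred, feats[:, -1])    # self-supervised loss
```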
Unsupervised domain adaptation (UDA) via deep learning has attracted considerable attention for tackling domain-shift problems caused by distribution discrepancy across different domains. Existing UDA approaches depend heavily on the accessibility of source domain data, which is usually limited in practical scenarios due to privacy protection, data storage and transmission costs, and computational burden. To tackle this issue, many source-free unsupervised domain adaptation (SFUDA) methods have been proposed recently, which transfer knowledge from a pre-trained source model to an unlabeled target domain without access to the source data. A comprehensive review of these works on SFUDA is of great significance. In this paper, we provide a timely and systematic literature review of existing SFUDA approaches from a technical perspective. Specifically, we categorize current SFUDA studies into two groups, i.e., white-box SFUDA and black-box SFUDA, and further divide them into finer subcategories based on the different learning strategies they use. We also investigate the challenges of the methods in each subcategory, discuss the advantages/disadvantages of white-box and black-box SFUDA methods, review the commonly used benchmark datasets, and summarize the popular techniques for improving the generalizability of models learned without using source data. We finally discuss several promising future directions in this field.
Most existing multi-source domain adaptation (MSDA) methods minimize the distance between multiple source-target domain pairs via feature distribution alignment, an approach borrowed from the single-source setting. However, with diverse source domains, aligning pairwise feature distributions is challenging and can even be counterproductive for MSDA. In this paper, we introduce a novel approach: transferable attribute learning. The motivation is simple: although different domains can have drastically different visual appearances, they contain the same set of classes, characterized by the same set of attributes; an MSDA model should therefore focus on learning the most transferable attributes of the target domain. Adopting this approach, we propose a Domain Attention Consistency network, dubbed DAC-Net. The key design is a feature channel attention module, which aims to identify transferable features (attributes). Importantly, the attention module is supervised by a consistency loss imposed on the distributions of channel attention weights between the source and target domains. Moreover, to facilitate discriminative feature learning on target data, we combine pseudo-labeling with a class compactness loss that minimizes the distance between target features and the classifier's weight vectors. Extensive experiments on three MSDA benchmarks show that our DAC-Net achieves new state-of-the-art performance on all of them.
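The key design can be caricatured as squeeze-and-excitation channel attention plus a divergence between the domains' mean attention distributions; the sketch below is our reading of that design, not the DAC-Net code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction),
                                nn.ReLU(),
                                nn.Linear(channels // reduction, channels))

    def forward(self, feat):                                # (B, C, H, W)
        w = torch.sigmoid(self.fc(feat.mean(dim=(2, 3))))   # (B, C) weights
        return feat * w[:, :, None, None], w

def attention_consistency(w_src, w_tgt, eps=1e-8):
    # match the mean channel-attention distributions of the two domains
    p = F.normalize(w_src.mean(dim=0), p=1, dim=0)
    q = F.normalize(w_tgt.mean(dim=0), p=1, dim=0)
    return F.kl_div((q + eps).log(), p, reduction="sum")    # KL(p || q)
```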
This work introduces the novel task of Source-free Multi-target Domain Adaptation and proposes an adaptation framework comprising \textbf{Co}nsistency with \textbf{N}uclear-Norm Maximization and \textbf{Mix}Up knowledge distillation (\textit{CoNMix}) as a solution to this problem. The main motive of this work is to solve Single and Multi-target Domain Adaptation (SMTDA) in the source-free paradigm, which enforces the constraint that labeled source data is not available during target adaptation due to various privacy-related restrictions on data sharing. The source-free approach leverages target pseudo labels, which can be noisy, to improve target adaptation. We introduce consistency between label-preserving augmentations and utilize pseudo label refinement methods to reduce noisy pseudo labels. Further, we propose a novel MixUp Knowledge Distillation (MKD) for better generalization on multiple target domains using various source-free STDA models. We also show that the Vision Transformer (VT) backbone gives better feature representation with improved domain transferability and class discriminability. Our proposed framework achieves state-of-the-art (SOTA) results in various paradigms of source-free STDA and MTDA settings on popular domain adaptation datasets like Office-Home, Office-Caltech, and DomainNet. Project Page: https://sites.google.com/view/conmix-vcl
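Nuclear-norm maximization on the batch prediction matrix is a compact objective to sketch: it simultaneously encourages confident and diverse target predictions. A minimal version follows (the MixUp distillation part is omitted, and all names are ours).

```python
import torch
import torch.nn.functional as F

def nuclear_norm_loss(logits):
    """logits: (B, C) target-batch predictions."""
    probs = F.softmax(logits, dim=-1)
    # maximizing the nuclear norm promotes discriminability and diversity,
    # so we return its negation as a loss to minimize
    return -torch.linalg.matrix_norm(probs, ord="nuc")
```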