When facing changing environments in the real world, the lightweight model on client devices suffers from severe performance drops under distribution shifts. The main limitations of the existing device model are that (1) it cannot be updated due to the computation limits of the device, and (2) the lightweight model has limited generalization ability. Meanwhile, recent large models have shown strong generalization capability on the cloud, but they cannot be deployed on client devices due to computation constraints. To enable the device model to deal with changing environments, we propose a new learning paradigm of Cloud-Device Collaborative Continual Adaptation, which encourages collaboration between cloud and device and improves the generalization of the device model. Based on this paradigm, we further propose an Uncertainty-based Visual Prompt Adapted (U-VPA) teacher-student model to transfer the generalization capability of the large model on the cloud to the device model. Specifically, we first design an Uncertainty Guided Sampling (UGS) strategy to continuously screen challenging data and transmit the most out-of-distribution samples from the device to the cloud. Then we propose a Visual Prompt Learning Strategy with Uncertainty guided updating (VPLU) to specifically deal with the selected samples that exhibit larger distribution shifts. We transmit the visual prompts to the device and concatenate them with the incoming data to pull the device testing distribution closer to the cloud training distribution. We conduct extensive experiments on two object detection datasets with continually changing environments. Our proposed U-VPA teacher-student framework outperforms previous state-of-the-art test-time adaptation and device-cloud collaboration methods. The code and datasets will be released.
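A minimal sketch of how uncertainty-guided sample selection of this kind could work, assuming the uncertainty score is the entropy of the device detector's class posteriors (the paper's exact UGS criterion may differ); all function and variable names below are illustrative:

```python
import torch
import torch.nn.functional as F

def select_uncertain_samples(logits_per_image, budget):
    """Rank a batch of device-side images by predictive entropy and return
    the indices of the `budget` most uncertain (most OOD-looking) images
    to upload to the cloud.

    logits_per_image: list of tensors, each (num_boxes, num_classes),
        holding the device detector's classification logits per image.
    """
    scores = []
    for logits in logits_per_image:
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        # one scalar uncertainty per image: mean entropy over its boxes
        scores.append(entropy.mean())
    scores = torch.stack(scores)
    return torch.topk(scores, k=min(budget, len(scores))).indices
```

Only the selected images would be transmitted, which keeps the device-to-cloud bandwidth bounded by `budget` per batch.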
Continual Test-Time Adaptation (CTTA) aims to adapt the source model to continually changing unlabeled target domains without access to the source data. Existing methods mainly focus on model-based adaptation in a self-training manner, such as predicting pseudo labels for new domain datasets. Since pseudo labels are noisy and unreliable, these methods suffer from catastrophic forgetting and error accumulation when dealing with dynamic data distributions. Motivated by prompt learning in NLP, in this paper we propose to learn an image-level visual domain prompt for target domains while keeping the source model parameters frozen. During testing, the changing target datasets can be adapted to the source model by reformulating the input data with the learned visual prompts. Specifically, we devise two types of prompts, i.e., domain-specific prompts and domain-agnostic prompts, to extract current domain knowledge and maintain the domain-shared knowledge during continual adaptation. Furthermore, we design a homeostasis-based prompt adaptation strategy to suppress domain-sensitive parameters in domain-invariant prompts to learn domain-shared knowledge more effectively. This transition from the model-dependent paradigm to the model-free one enables us to bypass the catastrophic forgetting and error accumulation problems. Experiments show that our proposed method achieves significant performance gains over state-of-the-art methods on four widely-used benchmarks, including CIFAR-10C, CIFAR-100C, ImageNet-C, and VLCS.
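As a rough illustration of what an image-level visual prompt can look like, here is a sketch under the assumption that the prompt is an additive learnable patch applied to a fixed-width image border, which is one common instantiation; the paper's exact parameterization may differ, and all names are illustrative:

```python
import torch
import torch.nn as nn

class BorderVisualPrompt(nn.Module):
    """Learnable additive prompt on a fixed-width image border,
    leaving the frozen source model untouched."""

    def __init__(self, image_size=224, pad=16):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(3, image_size, image_size))
        mask = torch.zeros(1, image_size, image_size)
        mask[:, :pad, :] = 1   # top border
        mask[:, -pad:, :] = 1  # bottom border
        mask[:, :, :pad] = 1   # left border
        mask[:, :, -pad:] = 1  # right border
        self.register_buffer("mask", mask)

    def forward(self, x):
        # x: (B, 3, H, W) target-domain images; only `prompt` is trainable
        return x + self.prompt * self.mask
```

During continual adaptation only the prompt parameters would be updated (e.g., by a consistency or entropy objective), while the source model stays frozen.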
Vision-centric Bird's-Eye-View (BEV) perception has shown promising potential and attracted increasing attention in autonomous driving. Recent works mainly focus on improving efficiency or accuracy but neglect the domain shift problem, resulting in severe degradation of transfer performance. Through extensive observations, we identify significant domain gaps in scene, weather, and day-night changing scenarios and make the first attempt to solve the domain adaptation problem for multi-view 3D object detection. Since BEV perception approaches are usually complicated and contain several components, the domain shift accumulation across multiple latent spaces makes BEV domain adaptation challenging. In this paper, we propose a novel Multi-level Multi-space Alignment Teacher-Student ($M^{2}ATS$) framework to ease the domain shift accumulation, which consists of a Depth-Aware Teacher (DAT) and a Multi-space Feature Aligned (MFA) student model. Specifically, the DAT model adopts uncertainty guidance to sample reliable depth information in the target domain. After constructing domain-invariant BEV perception, it transfers pixel- and instance-level knowledge to the student model. To further alleviate the domain shift at the global level, the MFA student model is introduced to align task-relevant multi-space features of the two domains. To verify the effectiveness of $M^{2}ATS$, we conduct BEV 3D object detection experiments on four cross-domain scenarios and achieve state-of-the-art performance (e.g., +12.6% NDS and +9.1% mAP on Day-Night). Code and dataset will be released.
Domain adaptive object detection (DAOD) aims to alleviate transfer performance degradation caused by the cross-domain discrepancy. However, most existing DAOD methods are dominated by computationally intensive two-stage detectors, which are not the first choice for industrial applications. In this paper, we propose a novel semi-supervised domain adaptive YOLO (SSDA-YOLO) based method to improve cross-domain detection performance by integrating the compact one-stage detector YOLOv5 with domain adaptation. Specifically, we adapt the knowledge distillation framework with the Mean Teacher model to assist the student model in obtaining instance-level features of the unlabeled target domain. We also utilize the scene style transfer to cross-generate pseudo images in different domains for remedying image-level differences. In addition, an intuitive consistency loss is proposed to further align cross-domain predictions. We evaluate our proposed SSDA-YOLO on public benchmarks including PascalVOC, Clipart1k, Cityscapes, and Foggy Cityscapes. Moreover, to verify its generalization, we conduct experiments on yawning detection datasets collected from various classrooms. The results show considerable improvements of our method in these DAOD tasks. Our code is available on \url{https://github.com/hnuzhy/SSDA-YOLO}.
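The Mean Teacher component referred to above is typically realized as an exponential-moving-average (EMA) copy of the student; a generic sketch (not the authors' exact code, and the momentum value is illustrative):

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Update the teacher detector as an EMA of the student's weights."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
```

The consistency loss can then compare student predictions on one domain's (or style-transferred) images against the teacher's predictions on the corresponding images from the other domain.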
Models should be able to adapt to unseen data during test-time to avoid performance drops caused by inevitable distribution shifts in real-world deployment scenarios. In this work, we tackle the practical yet challenging test-time adaptation (TTA) problem, where a model adapts to the target domain without accessing the source data. We propose a simple recipe called \textit{Data-efficient Prompt Tuning} (DePT) with two key ingredients. First, DePT plugs visual prompts into the vision Transformer and only tunes these source-initialized prompts during adaptation. We find that such parameter-efficient finetuning can efficiently adapt the model representation to the target domain without overfitting to the noise in the learning objective. Second, DePT bootstraps the source representation to the target domain by memory bank-based online pseudo-labeling. A hierarchical self-supervised regularization specially designed for prompts is jointly optimized to alleviate error accumulation during self-training. With much fewer tunable parameters, DePT demonstrates not only state-of-the-art performance on major adaptation benchmarks VisDA-C, ImageNet-C, and DomainNet-126, but also superior data efficiency, i.e., adaptation with only 1\% or 10\% of the data without much performance degradation compared to 100\% data. In addition, DePT is versatile and can be extended to online or multi-source TTA settings.
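A sketch of plugging tunable prompt tokens into a vision Transformer, assuming the prompts are prepended to the patch-token sequence at the input (a common visual prompt tuning formulation; the exact per-layer placement in DePT may differ, and all names are illustrative):

```python
import torch
import torch.nn as nn

class PromptedViTInput(nn.Module):
    """Prepend learnable prompt tokens to frozen patch embeddings."""

    def __init__(self, embed_dim=768, num_prompts=50):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)

    def forward(self, patch_tokens):
        # patch_tokens: (B, N, D) from the frozen patch-embedding layer
        b = patch_tokens.size(0)
        prompts = self.prompts.expand(b, -1, -1)
        # only `prompts` receives gradients during test-time adaptation
        return torch.cat([prompts, patch_tokens], dim=1)
```

Because only the prompt tokens are tuned, the number of adapted parameters stays small, which is what makes the low-data (1\%/10\%) adaptation regime feasible.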
Despite their recent success, deep neural networks continue to underperform when facing distribution shifts at test time. Many recently proposed approaches try to counter this by aligning the model to the new distribution prior to inference. With no labels available, an unsupervised objective is required to adapt the model to the observed test data. In this paper, we propose Test-Time Self-Training (TeST): a technique that takes as input a source-trained model and a novel data distribution at test time, and learns invariant and robust representations using a student-teacher framework. We find that models adapted with TeST significantly improve over baseline test-time adaptation algorithms. TeST achieves performance competitive with modern domain adaptation algorithms while accessing 5-10x less data at adaptation time. We conduct experiments on a variety of benchmarks for two tasks, object detection and image segmentation, and find that TeST sets a new state of the art for test-time domain adaptation algorithms.
Recently, DEtection TRansformer (DETR), an end-to-end object detection pipeline, has achieved promising performance. However, it requires large-scale labeled data and suffers from domain shift, especially when no labeled data is available in the target domain. To solve this problem, we propose MTTrans, an end-to-end cross-domain detection Transformer based on the mean teacher framework, which can fully exploit unlabeled target-domain data in object detection training and transfer knowledge between domains via pseudo labels. We further propose a comprehensive multi-level feature alignment to improve the pseudo labels generated by the mean teacher framework, leveraging the cross-scale self-attention mechanism in Deformable DETR. Image and object features are aligned at the local, global, and instance levels with Domain Query-based Feature Alignment (DQFA), Bi-level Graph-based Prototype Alignment (BGPA), and Token-wise Image Feature Alignment (TIFA). In turn, the unlabeled target-domain data pseudo-labeled and made available for object detection training by the mean teacher framework leads to better feature extraction and alignment. Thus, the mean teacher framework and the comprehensive multi-level feature alignment can be optimized iteratively and mutually on top of the Transformer architecture. Extensive experiments show that our proposed method achieves state-of-the-art performance in three domain adaptation scenarios, notably improving the result on the Sim10K-to-Cityscapes scenario from 52.6 mAP to 57.9 mAP. The code will be released.
We address the problem of domain adaptation in object detection, where there is a significant domain shift between a source domain (with supervision) and a target domain (without supervision). As a widely adopted domain adaptation approach, the self-training teacher-student framework (a student model learns from pseudo labels generated by a teacher model) has yielded remarkable accuracy gains on the target domain. However, it still suffers from the large number of low-quality pseudo labels (e.g., false positives) generated by the teacher due to its bias toward the source domain. To address this issue, we propose a self-training framework called Adaptive Unbiased Teacher (AUT), which leverages adversarial learning and weak-strong data augmentation to address the domain shift. Specifically, we employ feature-level adversarial training in the student model, ensuring that features extracted from the source and target domains share similar statistics. This enables the student model to capture domain-invariant features. Furthermore, we apply weak-strong augmentation and mutual learning between the teacher model on the target domain and the student model on both domains. This enables the teacher model to gradually benefit from the student model without suffering from the domain shift. We show that AUT outperforms all existing approaches and even the Oracle (fully supervised) model by a large margin. For example, we achieve 50.9% (49.3%) mAP on Foggy Cityscapes (Clipart1k), which is 9.2% (5.2%) and 8.2% (11.0%) higher than the previous state of the art and the Oracle, respectively.
Test-time adaptation is the problem of adapting a source pre-trained model using test inputs from a target domain without access to source domain data. Most of the existing approaches address the setting in which the target domain is stationary. Moreover, these approaches are prone to making erroneous predictions with unreliable uncertainty estimates when distribution shifts occur. Hence, test-time adaptation in the face of non-stationary target domain shift becomes a problem of significant interest. To address these issues, we propose a principled approach, PETAL (Probabilistic lifElong Test-time Adaptation with seLf-training prior), which looks into this problem from a probabilistic perspective using a partly data-dependent prior. A student-teacher framework, where the teacher model is an exponential moving average of the student model, naturally emerges from this probabilistic perspective. In addition, the knowledge from the posterior distribution obtained for the source task acts as a regularizer. To handle catastrophic forgetting in the long term, we also propose a data-driven model parameter resetting mechanism based on the Fisher information matrix (FIM). Moreover, improvements in experimental results suggest that FIM-based data-driven parameter restoration contributes to reducing error accumulation and maintaining the knowledge of recent domains by restoring only the irrelevant parameters. In terms of predictive error rate as well as uncertainty-based metrics such as Brier score and negative log-likelihood, our method achieves better results than the current state of the art for online lifelong test-time adaptation across various benchmarks, such as CIFAR-10C, CIFAR-100C, ImageNet-C, and ImageNet-3DCC.
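A minimal sketch of a Fisher-information-based reset of the kind described, assuming a diagonal FIM approximated from accumulated squared gradients and a reset of the parameters with the lowest Fisher values back to their source weights (the selection rule and all names are assumptions, not PETAL's exact procedure):

```python
import torch

@torch.no_grad()
def fim_reset(model, source_state, fisher_diag, reset_fraction=0.01):
    """Restore the source values of the parameters deemed least relevant
    (smallest diagonal Fisher information) to curb error accumulation.

    fisher_diag: dict name -> tensor of the same shape as the parameter,
        e.g. accumulated squared gradients of the adaptation loss.
    source_state: dict name -> source (pre-adaptation) parameter tensor.
    """
    for name, param in model.named_parameters():
        fisher = fisher_diag[name].flatten()
        k = max(1, int(reset_fraction * fisher.numel()))
        idx = torch.topk(fisher, k, largest=False).indices  # least informative
        flat = param.flatten().clone()
        flat[idx] = source_state[name].flatten()[idx]
        param.copy_(flat.view_as(param))
```

Resetting only a small, low-information fraction per step keeps recently acquired domain knowledge intact while anchoring the model to its source solution.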
Domain adaptive object detection (DAOD) aims to improve the generalization ability of detectors when the training and test data come from different domains. Considering the significant domain gap, some typical methods, e.g., CycleGAN-based ones, adopt an intermediate domain to progressively bridge the source and target domains. However, the CycleGAN-based intermediate domain lacks pixel- or instance-level supervision for object detection, which leads to semantic differences. To address this problem, in this paper we introduce a Frequency Spectrum Augmentation Consistency (FSAC) framework with four different low-frequency filter operations. In this way, we can obtain a series of augmented data as the intermediate domain. Concretely, we propose a two-stage optimization framework. In the first stage, we utilize all the original and augmented source data to train an object detector. In the second stage, augmented source and target data with pseudo labels are adopted to perform self-training for prediction consistency. A teacher model optimized with the mean teacher is used to further refine the pseudo labels. In the experiments, we evaluate our method on single- and compound-target DAOD, which demonstrates its effectiveness.
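One way to realize a low-frequency filter operation of the kind mentioned above is to manipulate the image's 2D FFT; the sketch below swaps the low-frequency amplitude of a source image with that of a target image while keeping the source phase. This is a common frequency-domain augmentation and is only assumed here; the four concrete operations in the paper may differ.

```python
import torch

def low_freq_swap(src_img, tgt_img, beta=0.1):
    """Replace the low-frequency amplitude of `src_img` with that of
    `tgt_img`, keeping the source phase. Both tensors are (C, H, W).
    `beta` controls the size of the swapped low-frequency square.
    """
    src_fft = torch.fft.fftshift(torch.fft.fft2(src_img), dim=(-2, -1))
    tgt_fft = torch.fft.fftshift(torch.fft.fft2(tgt_img), dim=(-2, -1))

    src_amp, src_phase = src_fft.abs(), src_fft.angle()
    tgt_amp = tgt_fft.abs()

    _, h, w = src_img.shape
    bh, bw = int(h * beta), int(w * beta)
    cy, cx = h // 2, w // 2
    # swap only the centered low-frequency block of the amplitude spectrum
    src_amp[:, cy - bh:cy + bh, cx - bw:cx + bw] = \
        tgt_amp[:, cy - bh:cy + bh, cx - bw:cx + bw]

    mixed = torch.polar(src_amp, src_phase)
    mixed = torch.fft.ifftshift(mixed, dim=(-2, -1))
    return torch.fft.ifft2(mixed).real
```

Varying the filter (e.g., the block size or which amplitudes are kept) yields the series of intermediate-domain images used for training and consistency self-training.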
Deep learning has achieved notable success in 3D object detection with the advent of large-scale point cloud datasets. However, severe performance degradation on previously trained classes, i.e., catastrophic forgetting, remains a critical issue for real-world deployment when the number of classes is unknown or may vary. Moreover, existing 3D class-incremental detection methods are developed for the single-domain scenario and fail when encountering domain shift caused by different datasets, varying environments, etc. In this paper, we identify the unexplored yet valuable scenario of class-incremental learning under domain shift, and propose a novel 3D domain adaptive class-incremental object detection framework, DA-CIL, in which we design a novel dual-domain copy-paste augmentation method to construct multiple augmented domains for diversifying training distributions, thereby facilitating gradual domain adaptation. Then, multi-level consistency is explored to facilitate dual-teacher knowledge distillation from different domains for domain adaptive class-incremental learning. Extensive experiments on various datasets demonstrate the effectiveness of the proposed method over baselines in the domain adaptive class-incremental learning scenario.
Domain adaptation aims to transfer the learned shared knowledge from a source domain to a new environment, i.e., the target domain. A common practice is to train a model on the labeled source-domain data and the unlabeled target-domain data. However, the learned model is usually biased due to the strong supervision on the source domain. Most researchers adopt an early-stopping strategy to prevent overfitting, but when to stop training remains a challenging problem since there is no target-domain validation set. In this paper, we propose an efficient bootstrapping method, called AdaBoost Student, which explicitly learns complementary models during training and liberates users from empirical early stopping. AdaBoost Student combines deep model learning with the conventional training strategy, i.e., adaptive boosting, and enables interaction between the learned models and the data sampler. We adopt an adaptive data sampler to progressively facilitate learning on hard samples and aggregate the "weak" models to prevent overfitting. Extensive experiments show that (1) without the need to worry about the stopping time, AdaBoost Student provides a robust solution via effective complementary model learning during training, and (2) AdaBoost Student is orthogonal to most domain adaptation methods and can be combined with existing approaches to further improve state-of-the-art performance. We have achieved competitive results on three widely-used scene segmentation domain adaptation benchmarks.
Unsupervised domain adaptation (UDA) via deep learning has attracted considerable attention for tackling domain-shift problems caused by distribution discrepancy across different domains. Existing UDA approaches highly depend on the accessibility of source domain data, which is usually limited in practical scenarios due to privacy protection, data storage and transmission cost, and computation burden. To tackle this issue, many source-free unsupervised domain adaptation (SFUDA) methods have been proposed recently, which perform knowledge transfer from a pre-trained source model to the unlabeled target domain with source data inaccessible. A comprehensive review of these works on SFUDA is of great significance. In this paper, we provide a timely and systematic literature review of existing SFUDA approaches from a technical perspective. Specifically, we categorize current SFUDA studies into two groups, i.e., white-box SFUDA and black-box SFUDA, and further divide them into finer subcategories based on the different learning strategies they use. We also investigate the challenges of methods in each subcategory, discuss the advantages/disadvantages of white-box and black-box SFUDA methods, review the commonly used benchmark datasets, and summarize the popular techniques for improving the generalizability of models learned without using source data. We finally discuss several promising future directions in this field.
Unsupervised source-free domain adaptation methods aim to train a model for the target domain utilizing a pretrained source-domain model and unlabeled target-domain data, where the source data may not be accessible due to intellectual property or privacy issues. These methods frequently utilize self-training with pseudo-labeling thresholded by prediction confidence. In a source-free scenario, the only supervision comes from target data, and thresholding limits the contribution of self-training. In this study, we utilize self-training with a mean-teacher approach. The student network is trained with all predictions of the teacher network. Instead of thresholding the predictions, the gradients calculated from the pseudo-labels are weighted based on the reliability of the teacher's predictions. We propose a novel method that uses proxy-based metric learning to estimate reliability. We train a metric network on the encoder features of the teacher network. Since the teacher is updated with the moving average, the encoder feature space changes slowly. Therefore, the metric network can be updated during training, which enables end-to-end training. We also propose a metric-based online ClassMix method to augment the input of the student network, where the patches to be mixed are decided based on the metric reliability. We evaluate our method in synthetic-to-real and cross-city scenarios. The benchmarks show that our method significantly outperforms the existing state-of-the-art methods.
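A sketch of weighting per-pixel pseudo-label gradients by a reliability score instead of hard thresholding, as described above; the reliability tensor here is a placeholder for the score produced by the proxy-based metric network, which is not reproduced, and the function name is illustrative:

```python
import torch
import torch.nn.functional as F

def reliability_weighted_ce(student_logits, teacher_logits, reliability):
    """Cross-entropy against teacher pseudo labels, weighted per pixel.

    student_logits, teacher_logits: (B, C, H, W)
    reliability: (B, H, W) in [0, 1], e.g. from a metric network on the
        teacher's encoder features (assumed given here).
    """
    pseudo_labels = teacher_logits.argmax(dim=1)                 # (B, H, W)
    loss = F.cross_entropy(student_logits, pseudo_labels, reduction="none")
    return (reliability * loss).mean()
```

Every teacher prediction contributes to the gradient, but unreliable ones contribute proportionally less, which is the stated alternative to confidence thresholding.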
Neuromorphic spike cameras generate data streams with high temporal resolution in a bio-inspired way, which have great potential for real-world applications such as autonomous driving. In contrast to RGB streams, spike streams have an inherent advantage in overcoming motion blur, leading to more accurate depth estimation for high-speed objects. However, it is almost impossible to train a spike depth estimation network in a supervised manner, since obtaining paired depth labels for temporally dense spike streams is extremely laborious and challenging. In this paper, instead of building a spike stream dataset with full depth labels, we transfer knowledge from open-source RGB datasets (e.g., KITTI) and estimate spike depth in an unsupervised manner. The key challenges of this problem lie in the modality gap between the RGB and spike modalities, and the domain gap between the labeled source RGB and unlabeled target spike domains. To overcome these challenges, we introduce a cross-modality cross-domain (BiCross) framework for unsupervised spike depth estimation. Our method narrows the enormous gap between source RGB and target spike by introducing an intermediate simulated source spike domain. To be specific, for the cross-modality phase, we propose a novel Coarse-to-Fine Knowledge Distillation (CFKD) to transfer image-level and pixel-level knowledge from source RGB to source spike. This design leverages the abundant semantic information of the RGB modality and the dense temporal information of the spike modality, respectively. For the cross-domain phase, we introduce an Uncertainty Guided Mean-Teacher (UGMT) to generate reliable pseudo labels with uncertainty estimation, alleviating the shift between the source spike and target spike domains. Furthermore, we propose a Global-Level Feature Alignment (GLFA) method to align the features between the two domains and generate more reliable pseudo labels.
Unsupervised domain adaptation (UDA) aims to adapt a model trained on the labeled source domain to the unlabeled target domain. Existing UDA-based semantic segmentation approaches always reduce the domain shifts at the pixel level, feature level, and output level. However, almost all of them largely neglect the contextual dependency, which is generally shared across different domains, leading to less-desired performance. In this paper, we propose a novel Context-Aware Mixup (CAMix) framework for domain adaptive semantic segmentation, which exploits this important clue of context dependency as explicit prior knowledge in a fully end-to-end trainable manner to enhance the adaptability toward the target domain. Firstly, we present a contextual mask generation strategy by leveraging the accumulated spatial distributions and prior contextual relationships. The generated contextual mask is critical in this work and guides the context-aware domain mixup at three different levels. Besides, provided the context knowledge, we further introduce a consistency loss to penalize the inconsistency between the mixed student prediction and the mixed teacher prediction, which alleviates the negative transfer of the adaptation, e.g., early performance degradation. Extensive experiments and analyses demonstrate the effectiveness of our method against the state-of-the-art approaches on widely-used UDA benchmarks.
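A rough sketch of mask-guided domain mixing in the spirit described above, assuming a binary context mask decides which pixels come from the source image and which from the target image; the mask-generation strategy itself is the paper's contribution and is not reproduced here, and all names are illustrative:

```python
import torch

def masked_domain_mix(src_img, tgt_img, src_label, tgt_pseudo_label, mask):
    """Mix source and target images (and labels) with a binary mask.

    src_img, tgt_img: (C, H, W) float images.
    src_label, tgt_pseudo_label: (H, W) long class maps.
    mask: (H, W) with 1 where source content is kept, 0 elsewhere.
    """
    m = mask.float().unsqueeze(0)                   # (1, H, W) for broadcasting
    mixed_img = m * src_img + (1.0 - m) * tgt_img   # (C, H, W)
    lbl_mask = mask.long()
    mixed_lbl = lbl_mask * src_label + (1 - lbl_mask) * tgt_pseudo_label
    return mixed_img, mixed_lbl
```

The student's prediction on the mixed image can then be penalized for deviating from the correspondingly mixed teacher prediction.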
In this paper, we aim to adapt a pre-trained convolutional neural network to domain shifts at test time. We do so continually with the incoming stream of test batches, without labels. The existing literature mostly operates on artificial shifts obtained via adversarial perturbations of test images. Motivated by this, we evaluate the state of the art on two realistic and challenging sources of domain shift, namely contextual and semantic shifts. Contextual shifts correspond to the environment types, e.g., a model pre-trained on indoor contexts has to adapt to outdoor contexts on CORe-50 [7]. Semantic shifts correspond to the capture types, e.g., a model pre-trained on natural images has to adapt to cliparts, sketches, and paintings on DomainNet [10]. We include in our analysis recent techniques such as Prediction-Time Batch Normalization (BN) [8], Test Entropy Minimization (TENT) [16], and Continual Test-Time Adaptation (CoTTA) [17]. Our findings are threefold: i) test-time adaptation methods perform better and forget less on contextual shifts compared to semantic shifts, ii) TENT outperforms the other methods on short-term adaptation, whereas CoTTA outperforms the other methods on long-term adaptation, and iii) BN is the most reliable and robust.
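The prediction-time BN baseline mentioned above amounts to normalizing with statistics computed from the incoming test batch instead of the stored training statistics. A compact sketch of the standard recipe follows; this is the common configuration, not necessarily the exact setup of [8]:

```python
import torch.nn as nn

def use_test_batch_stats(model):
    """Make all BatchNorm layers normalize with statistics of the current
    test batch rather than the running (training) statistics."""
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            module.train()                     # use batch statistics
            module.track_running_stats = False
            module.running_mean = None         # discard training statistics
            module.running_var = None
    return model
```

Its simplicity (no parameter updates at all) is one reason it tends to be the most reliable option in the comparison above.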
Recent developments in deep learning models that capture the complex temporal patterns of crop phenology from satellite image time series (SITS) have greatly advanced crop classification. However, when applied to target regions spatially different from the training regions, these models perform poorly without any target labels, due to the temporal shift of crop phenology between regions. To address this unsupervised cross-region adaptation setting, existing methods learn domain-invariant features without any target supervision, rather than the temporal shift itself. As a consequence, these techniques offer only limited benefits for SITS. In this paper, we propose TimeMatch, a new unsupervised domain adaptation method for SITS that directly accounts for the temporal shift. TimeMatch consists of two components: 1) temporal shift estimation, which estimates the temporal shift of the unlabeled target region with a source-trained model, and 2) TimeMatch learning, which combines temporal shift estimation with semi-supervised learning to adapt a classifier to an unlabeled target region. We also introduce an open-access dataset for cross-region adaptation with SITS from four different regions in Europe. On this dataset, we demonstrate that TimeMatch outperforms all competing methods by 11% in F1-score across five different adaptation scenarios, setting a new state of the art for cross-region adaptation.
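A sketch of how a temporal shift between regions could be estimated with a source-trained model, under the assumption that the shift is chosen by scoring the model's average confidence on temporally shifted copies of the target time series; the actual TimeMatch estimator may differ, and all names are illustrative:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def estimate_temporal_shift(model, target_sits, candidate_shifts):
    """Pick the shift (in time steps) that maximizes the source model's
    mean top-1 confidence on the unlabeled target time series.

    target_sits: (B, T, C) satellite image time series features.
    candidate_shifts: iterable of integer shifts, e.g. range(-30, 31).
    """
    best_shift, best_conf = 0, -1.0
    for shift in candidate_shifts:
        shifted = torch.roll(target_sits, shifts=shift, dims=1)
        probs = F.softmax(model(shifted), dim=-1)
        conf = probs.max(dim=-1).values.mean().item()
        if conf > best_conf:
            best_shift, best_conf = shift, conf
    return best_shift
```

The estimated shift can then be applied to the target data (or to the pseudo-labeling step) before semi-supervised adaptation.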
The cross-domain performance of automatic speech recognition (ASR) can be severely hampered by the mismatch between training and testing distributions. Since the target domain usually lacks labeled data, and domain shifts exist at both the acoustic and linguistic levels, it is challenging to perform unsupervised domain adaptation (UDA) for ASR. Previous work has shown that self-supervised learning (SSL) or pseudo-labeling (PL) is effective for UDA by exploiting the self-supervision of unlabeled data. However, these self-supervisions also suffer performance degradation under mismatched domain distributions, which previous work has failed to address. This work proposes a systematic UDA framework that fully utilizes unlabeled data with self-supervision in the pre-training and fine-tuning paradigm. On the one hand, we apply continued pre-training and data replay techniques to mitigate the domain mismatch of the SSL pre-trained model. On the other hand, we propose a domain-adaptive fine-tuning approach based on the PL technique with three unique modifications: first, we design a dual-branch PL method to decrease the sensitivity to erroneous pseudo labels; second, we devise an uncertainty-aware confidence filtering strategy to improve pseudo-label correctness; third, we introduce a two-step PL approach to incorporate target-domain linguistic knowledge, thereby yielding more accurate target-domain pseudo labels. Experimental results on various cross-domain scenarios demonstrate that the proposed approach effectively boosts cross-domain performance and significantly outperforms previous approaches.
Unsupervised domain adaptation (UDA) aims at reducing the domain gap between training and testing data and is, in most cases, carried out in an offline manner. However, domain changes may occur continuously and unpredictably during deployment (e.g., sudden weather changes). In such conditions, deep neural networks witness dramatic drops in accuracy, and offline adaptation may not be enough to counter them. In this paper, we tackle Online Domain Adaptation (OnDA) for semantic segmentation. We design a pipeline robust to domain shifts occurring either gradually or suddenly, and we evaluate it in rainy and foggy scenarios. Our experiments show that our framework can effectively adapt to new domains during deployment, while not being affected by catastrophic forgetting of previous domains.