The recent neural radiance field (NeRF) representation has achieved great success in novel view synthesis and 3D reconstruction. However, NeRF models suffer from catastrophic forgetting when continuously trained on streaming data without revisiting previous training data. This limitation prohibits applying existing NeRF models to scenarios where images arrive sequentially. In view of this, we explore the task of incremental learning for the neural radiance field representation in this work. We first propose a student-teacher pipeline to mitigate catastrophic forgetting: at the end of each incremental step, the student becomes the teacher, and this teacher then guides the training of the student in the next step. In this way, the student network learns new information from the streaming data while simultaneously retaining old knowledge from the teacher network. Since the teacher network is trained only on the old data, not all of its information is helpful; we therefore introduce a random inquirer and an uncertainty-based filter to select the useful information. We conduct experiments on the NeRF-synthetic360 and NeRF-real360 datasets, where our approach significantly outperforms the baselines by 7.3% and 25.2% in PSNR, respectively. Furthermore, we show that our approach can be applied to the large-scale camera-facing-outwards dataset ScanNet, where we surpass the baseline by 60.0% in PSNR.
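A minimal PyTorch sketch of the distillation step described above, with a toy radiance field and assumed names (TinyNeRF, incremental_step, tau); it illustrates the student-teacher-plus-filter idea, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNeRF(nn.Module):
    """Toy radiance field: ray origin+direction (6-D) -> RGB + uncertainty."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(6, 128), nn.ReLU(), nn.Linear(128, 4))

    def forward(self, rays):                     # rays: (N, 6)
        out = self.mlp(rays)
        rgb = torch.sigmoid(out[:, :3])          # predicted color
        uncertainty = F.softplus(out[:, 3])      # predicted per-ray uncertainty
        return rgb, uncertainty

def incremental_step(student, teacher, new_rays, new_rgb, tau=0.1):
    """One step: supervise on streaming data, distill on randomly inquired
    rays, but keep only teacher outputs the teacher is confident about."""
    pred_rgb, _ = student(new_rays)              # loss on newly arriving data
    loss_new = F.mse_loss(pred_rgb, new_rgb)

    # Random inquirer: query the frozen teacher at randomly sampled rays.
    inquiry_rays = torch.rand_like(new_rays) * 2 - 1
    with torch.no_grad():
        t_rgb, t_unc = teacher(inquiry_rays)
    keep = t_unc < tau                           # uncertainty-based filter
    s_rgb, _ = student(inquiry_rays)
    loss_old = F.mse_loss(s_rgb[keep], t_rgb[keep]) if keep.any() else 0.0
    return loss_new + loss_old

# At the end of each incremental step, the student becomes the next teacher:
# teacher.load_state_dict(student.state_dict()); teacher.requires_grad_(False)
```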
Domain shift widely exists in the visual world, and modern deep neural networks commonly suffer severe performance degradation under it due to their poor generalization ability, which limits real-world applications. The domain shift mainly lies in the limited source environmental variations and the large distribution gap between source and unseen target data. To this end, we propose a unified framework, Style-HAllucinated Dual consistEncy learning (SHADE), to handle such domain shift in various visual tasks. Specifically, SHADE is built on two consistency constraints: Style Consistency (SC) and Retrospection Consistency (RC). SC enriches the source situations and encourages the model to learn consistent representations across style-diversified samples. RC leverages general visual knowledge to prevent the model from overfitting to the source data and thus largely keeps the representations consistent between the source and general visual models. Furthermore, we present a novel style hallucination module (SHM) to generate the style-diversified samples that are essential to consistency learning. SHM selects basis styles from the source distribution, enabling the model to dynamically generate diverse and realistic samples during training. Extensive experiments demonstrate that our versatile SHADE significantly enhances generalization in various visual recognition tasks, including image classification, semantic segmentation, and object detection, with different architectures, i.e., ConvNets and Transformers.
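One plausible form of the style hallucination module, sketched in PyTorch; the Dirichlet mixing of basis styles is an assumption of this sketch rather than the paper's exact rule:

```python
import torch

def hallucinate_style(feat, basis_mean, basis_std, eps=1e-6):
    """Re-stylize a feature map with a random convex combination of basis
    styles, in the spirit of SHM. feat: (B, C, H, W); basis_*: (K, C)."""
    B, C, _, _ = feat.shape
    mu = feat.mean(dim=(2, 3), keepdim=True)              # content statistics
    sigma = feat.std(dim=(2, 3), keepdim=True) + eps
    normalized = (feat - mu) / sigma                      # strip source style

    # Sample mixing weights over the K basis styles for each image.
    K = basis_mean.shape[0]
    w = torch.distributions.Dirichlet(torch.ones(K)).sample((B,))  # (B, K)
    new_mu = (w @ basis_mean).view(B, C, 1, 1)
    new_sigma = (w @ basis_std).view(B, C, 1, 1)
    return normalized * new_sigma + new_mu                # hallucinated style
```

Style consistency then compares the model's predictions on the original and hallucinated versions of the same sample.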
The existing state-of-the-art method for audio-visual conditioned video prediction uses the latent codes of the audio-visual frames from a multimodal stochastic network and a frame encoder to predict the next visual frame. However, directly inferring the per-pixel intensity of the next visual frame from the latent codes is extremely challenging because of the high-dimensional image space. To this end, we propose to decouple audio-visual conditioned video prediction into motion and appearance modeling. The first part is a multimodal motion estimation module that learns motion information as optical flow from the given audio-visual clip. The second part is a context-aware refinement module that uses the predicted optical flow to warp the current visual frame into the next one and refines it based on the given audio-visual context. Experimental results show that our method achieves competitive results on existing benchmarks.
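The warping half of the pipeline is standard backward warping with the predicted flow; a self-contained sketch (warp_frame is an assumed name):

```python
import torch
import torch.nn.functional as F

def warp_frame(frame, flow):
    """Warp the current frame toward the next one with predicted optical flow.
    frame: (B, 3, H, W); flow: (B, 2, H, W) in pixels."""
    B, _, H, W = frame.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                              # shifted coords
    # Normalize to [-1, 1] as grid_sample expects.
    coords_x = 2 * coords[:, 0] / (W - 1) - 1
    coords_y = 2 * coords[:, 1] / (H - 1) - 1
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)        # (B, H, W, 2)
    return F.grid_sample(frame, sample_grid, align_corners=True)
```

The refinement module would then take this warped frame plus the audio-visual context and predict a residual correction.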
Semantic segmentation of 3D indoor scenes has achieved remarkable performance under the supervision of large-scale annotated data. However, previous works rely on the assumption that the training and testing data follow the same distribution, and may suffer performance degradation when evaluated on out-of-distribution scenes. To alleviate both the annotation cost and the performance degradation, this paper introduces the synthetic-to-real domain generalization setting to this task. The domain gap between synthetic and real-world point cloud data mainly lies in their different layouts and point patterns. To address these problems, we first propose a clustering instance mix (CINMix) augmentation technique to diversify the layouts of the source data. In addition, we augment the point patterns of the source data and introduce non-parametric multi-prototypes to ameliorate the intra-class variance enlarged by the augmented point patterns. The multi-prototypes model the intra-class variance and rectify the global classifier in both the training and inference stages. Experiments on the synthetic-to-real benchmark demonstrate that both CINMix and multi-prototypes narrow the distribution gap and thus improve the generalization ability on real-world datasets.
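A hedged sketch of how a non-parametric multi-prototype classifier could score point features, with multiple prototypes per class absorbing the enlarged intra-class variance; the names and the cosine-similarity scoring are assumptions:

```python
import torch
import torch.nn.functional as F

def multi_prototype_logits(feats, prototypes, temperature=0.1):
    """Score each point feature against several prototypes per class and keep
    the best-matching prototype as the class score, so one class can cover
    multiple modes. feats: (N, D); prototypes: (C, M, D)."""
    feats = F.normalize(feats, dim=-1)
    protos = F.normalize(prototypes, dim=-1)
    sim = torch.einsum("nd,cmd->ncm", feats, protos)   # cosine similarities
    logits = sim.max(dim=-1).values / temperature      # best prototype per class
    return logits                                      # (N, C)
```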
Open world object detection aims to detect objects that are absent from the object classes of the training data as unknown objects, without explicit supervision. Furthermore, the exact classes of the unknown objects must be identified, without catastrophic forgetting of the previously known classes, when the corresponding annotations of the unknown objects are given incrementally. In this paper, we propose a two-stage training approach named Open World DETR for open world object detection based on Deformable DETR. In the first stage, we pre-train a model on the current annotated data to detect objects of the current known classes, and concurrently train an additional binary classifier to classify predictions into foreground or background. This helps the model build unbiased feature representations that facilitate the detection of unknown classes in the subsequent process. In the second stage, we fine-tune the class-specific components of the model with a multi-view self-labeling strategy and a consistency constraint. Furthermore, we alleviate catastrophic forgetting when the annotations of the unknown classes become available incrementally by using knowledge distillation and exemplar replay. Experimental results on PASCAL VOC and MS-COCO show that our proposed method outperforms other state-of-the-art open world object detection methods by a large margin.
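A toy sketch of the two forgetting remedies named above, combining exemplar replay with logit distillation from the frozen previous-stage model; the (logits, loss) model interface here is an assumption of the sketch, not Deformable DETR's API:

```python
import torch
import torch.nn.functional as F

def incremental_loss(model, old_model, images, targets, exemplars, T=2.0):
    """Current detection loss + (i) replay of stored exemplars and
    (ii) distillation of the previous model's class logits."""
    # Detection loss on new annotations plus replayed exemplar images.
    logits_new, det_loss_new = model(images, targets)
    _, det_loss_replay = model(*exemplars)        # exemplars = (imgs, targets)

    # Distill the frozen previous-stage model to preserve known classes.
    with torch.no_grad():
        logits_old, _ = old_model(images, targets)
    distill = F.kl_div(F.log_softmax(logits_new / T, dim=-1),
                       F.softmax(logits_old / T, dim=-1),
                       reduction="batchmean") * T * T
    return det_loss_new + det_loss_replay + distill
```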
Explicit neural surface representations allow the encoded surface to be extracted accurately and efficiently at arbitrary precision, as well as the analytic derivation of differential-geometric properties such as surface normals and curvature. These desirable properties, which are absent in their implicit counterparts, make them well suited to various applications in computer vision, graphics, and robotics. However, state-of-the-art works are limited in the topologies they can effectively describe, and they introduce distortion when reconstructing complex surfaces as well as model inefficiency. In this work, we present Minimal Neural Atlas, a novel atlas-based explicit neural surface representation. At its core is a fully learnable parametric domain, given by an implicit probabilistic occupancy field defined on an open square of the parametric space. In contrast, prior works typically predefine the parametric domain. This added flexibility enables the charts to admit arbitrary topologies and boundaries. As a result, our representation can learn a minimal atlas of 3 charts for surfaces of arbitrary topology, including closed and open surfaces with arbitrarily connected components, with a parameterization of minimal distortion. Our experiments support this hypothesis and show that our reconstructions are more accurate in terms of overall geometry, owing to the disentanglement of concerns over topology and geometry.
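A toy sketch of one chart of such an atlas: a learnable occupancy field over the open unit square masks the valid UV samples before a second network maps them to 3D; all sizes and names are illustrative:

```python
import torch
import torch.nn as nn

class Chart(nn.Module):
    """One atlas chart: an occupancy field over the unit square decides which
    UV samples belong to the chart, and a map sends the survivors to 3-D."""
    def __init__(self, hidden=128):
        super().__init__()
        self.occupancy = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 1))
        self.embedding = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 3))

    def forward(self, n_samples=1024, threshold=0.5):
        uv = torch.rand(n_samples, 2)                       # open unit square
        p_occ = torch.sigmoid(self.occupancy(uv)).squeeze(-1)
        points = self.embedding(uv)                         # UV -> surface
        return points[p_occ > threshold]                    # keep occupied UVs

# A minimal atlas uses at most 3 such charts; their union covers the surface.
surface = torch.cat([Chart()(n_samples=1024) for _ in range(3)], dim=0)
```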
Current supervised cross-domain image retrieval methods achieve excellent performance. However, the cost of data collection and labeling imposes an intractable barrier to practical deployment in real applications. In this paper, we investigate the unsupervised cross-domain image retrieval task, where class labels and pairing annotations are no longer prerequisites for training. This is an extremely challenging task because there is no supervision for in-domain feature representation learning or cross-domain alignment. We address both challenges by introducing: 1) a new cluster-wise contrastive learning mechanism to help extract class-semantics-aware features, and 2) a novel distance-of-distance loss to effectively measure and minimize the domain discrepancy without any external supervision. Experiments on the Office-Home and DomainNet datasets consistently show the superior image retrieval accuracy of our framework compared with state-of-the-art methods. Our source code is available at https://github.com/conghuihu/ucdir.
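One plausible reading of the cluster-wise contrastive mechanism, pulling each feature toward its own cluster centroid and away from the others; a hedged sketch, not the released code:

```python
import torch
import torch.nn.functional as F

def cluster_contrastive_loss(feats, cluster_ids, centroids, temperature=0.07):
    """Contrast each feature against all cluster centroids, with its own
    centroid as the positive. feats: (N, D); cluster_ids: (N,) long;
    centroids: (K, D), e.g. from k-means over the unlabeled features."""
    feats = F.normalize(feats, dim=-1)
    centroids = F.normalize(centroids, dim=-1)
    logits = feats @ centroids.t() / temperature    # (N, K) similarities
    return F.cross_entropy(logits, cluster_ids)     # own centroid = positive
```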
Since Intersection-over-Union (IoU)-based optimization maintains consistency between the final IoU prediction metric and the loss, it has been widely used in both the regression and classification branches of single-stage 2D object detectors. Recently, several 3D object detection methods have adopted IoU-based optimization, directly replacing 2D IoU with 3D IoU. However, such a direct computation in 3D is very costly due to its complex implementation and inefficient backward operations. Moreover, 3D IoU-based optimization is sub-optimal because it is sensitive to rotation, which can cause training instability and deteriorated detection performance. In this paper, we propose a novel Rotation-Decoupled IoU (RDIoU) method that alleviates the rotation-sensitivity problem and produces a more efficient optimization objective than 3D IoU during training. Specifically, our RDIoU simplifies the complex interaction of regression parameters by decoupling the rotation variable into an independent term, while preserving the geometry of 3D IoU. By incorporating RDIoU into both the regression and classification branches, the network is encouraged to learn more precise bounding boxes and to simultaneously overcome the misalignment problem between classification and regression. Extensive experiments on the benchmark KITTI and Waymo Open datasets validate that our RDIoU method can bring substantial improvements to single-stage 3D object detection.
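An illustrative reading of rotation decoupling: fold the rotation variable in as an independent fourth axis-aligned dimension so the overlap stays simple and differentiable; the constant k and this exact construction are assumptions of the sketch, not the paper's formulation:

```python
import torch

def rd_iou(a, b, k=1.0):
    """Read each box as (x, y, z, l, w, h, theta) and treat it as an
    axis-aligned 4-D box with center (x, y, z, k*theta) and sides
    (l, w, h, k), so rotation contributes an independent overlap term
    instead of rotating the geometry. a, b: (..., 7)."""
    ctr_a = torch.cat([a[..., :3], k * a[..., 6:7]], dim=-1)   # 4-D centers
    ctr_b = torch.cat([b[..., :3], k * b[..., 6:7]], dim=-1)
    dim_a = torch.cat([a[..., 3:6], torch.full_like(a[..., :1], k)], dim=-1)
    dim_b = torch.cat([b[..., 3:6], torch.full_like(b[..., :1], k)], dim=-1)

    lo = torch.maximum(ctr_a - dim_a / 2, ctr_b - dim_b / 2)
    hi = torch.minimum(ctr_a + dim_a / 2, ctr_b + dim_b / 2)
    inter = (hi - lo).clamp(min=0).prod(dim=-1)                # 4-D overlap
    union = dim_a.prod(dim=-1) + dim_b.prod(dim=-1) - inter
    return inter / union.clamp(min=1e-7)
```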
In this paper, we consider the problem of domain generalization in semantic segmentation, which aims to learn a robust model using only labeled synthetic (source) data. The model is expected to perform well on unseen real (target) domains. Our study finds that variations in image style can largely affect the model's performance, and that style features can be well represented by the channel-wise mean and standard deviation of an image. Inspired by this, we propose a novel adversarial style augmentation (AdvStyle) approach, which dynamically generates hard stylized images during training and can thus effectively prevent the model from overfitting to the source domain. Specifically, AdvStyle regards the style features as learnable parameters and updates them by adversarial training. The learned adversarial style features are then used to construct adversarial images for robust model training. AdvStyle is easy to implement and can readily be applied to different models. Experiments on two synthetic-to-real semantic segmentation benchmarks demonstrate that AdvStyle can significantly improve model performance on unseen real domains, and show that we achieve state-of-the-art results. Moreover, AdvStyle can be employed for domain-generalized image classification and produces clear improvements on the considered datasets.
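A minimal sketch of the adversarial style update, treating the channel-wise mean and standard deviation as learnable parameters and taking one gradient-ascent step on the task loss; the single-step, pixel-space form here is a simplification of this sketch, not the authors' exact procedure:

```python
import torch
import torch.nn.functional as F

def advstyle_images(images, model, labels, lr=1.0, eps=1e-6):
    """Build harder stylizations of a batch by pushing its style statistics
    in the direction that increases the task loss."""
    mu = images.mean(dim=(2, 3), keepdim=True).detach().requires_grad_(True)
    sigma = (images.std(dim=(2, 3), keepdim=True) + eps).detach().requires_grad_(True)
    content = (images - images.mean(dim=(2, 3), keepdim=True)) / \
              (images.std(dim=(2, 3), keepdim=True) + eps)

    loss = F.cross_entropy(model(content * sigma + mu), labels)
    g_mu, g_sigma = torch.autograd.grad(loss, [mu, sigma])
    adv_mu, adv_sigma = mu + lr * g_mu, sigma + lr * g_sigma   # ascent step
    return (content * adv_sigma + adv_mu).detach()             # hard samples
```

The model is then trained on these adversarially stylized images alongside the originals.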
In this paper, we study the task of synthetic-to-real domain generalized semantic segmentation, which aims to learn a robust model for real-world scenes using only synthetic data. The large domain shift between synthetic and real-world data, including the limited source environmental variations and the large distribution gap between synthetic and real data, greatly hinders model performance on unseen real-world scenes. In this work, we propose the Style-HAllucinated Dual consistEncy learning (SHADE) framework to handle such domain shift. Specifically, SHADE is built on two consistency constraints: Style Consistency (SC) and Retrospection Consistency (RC). SC enriches the source situations and encourages the model to learn consistent representations across style-diversified samples. RC leverages real-world knowledge to prevent the model from overfitting to the synthetic data and thus largely keeps the representations consistent between the synthetic and real-world models. Furthermore, we present a novel style hallucination module (SHM) to generate the style-diversified samples that are essential to consistency learning. SHM selects basis styles from the source distribution, enabling the model to dynamically generate diverse and realistic samples during training. Experiments show that our SHADE outperforms state-of-the-art methods on the average mIoU of three real-world datasets in both the single-source and multi-source settings.
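Retrospection consistency can be pictured as matching features against a frozen ImageNet-pretrained backbone, which embodies the "real-world knowledge" the abstract leans on; a hedged sketch with pooled L2 matching as the assumed loss form:

```python
import torch
import torch.nn.functional as F
import torchvision

# Frozen general-purpose backbone serving as the retrospection anchor.
retro = torchvision.models.resnet50(weights="IMAGENET1K_V2")
retro.fc = torch.nn.Identity()
retro.eval().requires_grad_(False)

def retrospection_consistency(student_feats, images):
    """Penalize drift of the segmentation backbone's pooled features from
    the frozen general model, discouraging overfitting to synthetic data.
    student_feats: (B, C, H, W); assumes C == 2048 to match ResNet-50."""
    with torch.no_grad():
        anchor = retro(images)                  # (B, 2048) general features
    pooled = student_feats.mean(dim=(2, 3))     # (B, C) pooled student feats
    return F.mse_loss(pooled, anchor)
```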