Gaze estimation is fundamental to many visual tasks, yet the high cost of acquiring gaze datasets with 3D annotations hinders the optimization and application of gaze estimation models. In this work, we propose a novel head-eye redirection parametric model based on a neural radiance field, which enables dense gaze data generation with view consistency and accurate gaze directions. Moreover, our head-eye redirection parametric model decouples the face and eyes for separate neural rendering, so it can independently control facial attributes such as identity, illumination, and eye gaze direction. Diverse 3D-aware gaze datasets can thus be obtained by manipulating the latent codes belonging to the different face attributes in an unsupervised manner. Extensive experiments on several benchmarks demonstrate the effectiveness of our method for domain generalization and domain adaptation in gaze estimation tasks.
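The decoupling idea can be made concrete with a minimal sketch: separate latent codes for identity and illumination condition a face branch, while the eye branch is additionally conditioned on a gaze direction, so varying one code while freezing the others yields images with known gaze labels. All module names and dimensions below are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class HeadEyeRenderer(nn.Module):
    """Hypothetical two-branch conditional renderer: the face branch sees
    identity and illumination codes; the eye branch also sees gaze."""
    def __init__(self, dim=64):
        super().__init__()
        self.face_branch = nn.Sequential(nn.Linear(3 + 2 * dim, 128), nn.ReLU(),
                                         nn.Linear(128, 4))  # RGB + density
        self.eye_branch = nn.Sequential(nn.Linear(3 + dim + 3, 128), nn.ReLU(),
                                        nn.Linear(128, 4))

    def forward(self, pts, z_id, z_light, gaze_dir):
        # Face appearance depends only on identity and illumination codes.
        face = self.face_branch(torch.cat([pts, z_id, z_light], dim=-1))
        # Eye appearance is additionally conditioned on the gaze direction,
        # so gaze can be redirected without touching identity or lighting.
        eyes = self.eye_branch(torch.cat([pts, z_id, gaze_dir], dim=-1))
        return face, eyes

renderer = HeadEyeRenderer()
pts = torch.rand(1024, 3)                       # 3D samples along camera rays
z_id, z_light = torch.randn(1024, 64), torch.randn(1024, 64)
gaze = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
face_out, eye_out = renderer(pts, z_id, z_light, gaze)  # each 1024 x 4
```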
This paper presents a 3D generative model that uses diffusion models to automatically generate 3D digital avatars represented as neural radiance fields. A significant challenge in generating such avatars is that the memory and processing costs in 3D are prohibitive for producing the rich details required for high-quality avatars. To tackle this problem, we propose the roll-out diffusion network (Rodin), which represents a neural radiance field as multiple 2D feature maps and rolls out these maps into a single 2D feature plane within which we perform 3D-aware diffusion. The Rodin model brings much-needed computational efficiency while preserving the integrity of diffusion in 3D by using 3D-aware convolution that attends to projected features in the 2D feature plane according to their original relationships in 3D. We also use latent conditioning to orchestrate the feature generation for global coherence, leading to high-fidelity avatars and enabling their semantic editing based on text prompts. Finally, we use hierarchical synthesis to further enhance details. The 3D avatars generated by our model compare favorably with those produced by existing generative techniques. We can generate highly detailed avatars with realistic hairstyles and facial hair such as beards. We also demonstrate 3D avatar generation from image or text, as well as text-guided editability.
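A hedged sketch of the roll-out idea, assuming the common tri-plane representation: three axis-aligned 2D feature maps are concatenated into one 2D canvas so that a single 2D diffusion network can process them, and a 3D-aware step lets each plane see features pooled from the other planes. The shapes and the pooling stand-in below are assumptions for illustration, not the Rodin implementation.

```python
import torch

def roll_out(planes):
    """Concatenate tri-plane features (each C x H x W) side by side into a
    single C x H x 3W feature plane on which 2D diffusion can operate."""
    return torch.cat(planes, dim=-1)

def axis_pool(plane, dim):
    """Crude stand-in for 3D-aware context: pool a plane along one spatial
    axis and broadcast back, mimicking a projection of 3D structure."""
    return plane.mean(dim=dim, keepdim=True).expand_as(plane)

C, H, W = 32, 64, 64
xy, xz, yz = (torch.randn(C, H, W) for _ in range(3))
canvas = roll_out([xy, xz, yz])                 # C x H x 3W
# A 3D-aware convolution would augment each plane with features projected
# from the other two planes along a shared axis, e.g. for the xy plane:
xy_aug = torch.cat([xy, axis_pool(xz, dim=1), axis_pool(yz, dim=2)], dim=0)
print(canvas.shape, xy_aug.shape)               # [32, 64, 192], [96, 64, 64]
```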
Online media data, in the form of images and videos, have become mainstream communication channels. However, recent advances in deep learning, particularly deep generative models, open the door to producing perceptually convincing images and videos at low cost, which not only poses a serious threat to the trustworthiness of digital information but also has severe societal implications. This has motivated growing research interest in media tampering detection, i.e., using deep learning techniques to examine whether media data have been maliciously manipulated. Depending on the content of the targeted images, media forgery can be divided into image tampering and Deepfake techniques. The former typically moves or erases visual elements in ordinary images, while the latter manipulates the expressions and even the identity of human faces. Accordingly, the means of defense include image tampering detection and Deepfake detection, which share a wide variety of properties. In this paper, we provide a comprehensive review of current media tampering detection approaches and discuss the challenges and future research trends in this field.
Depth cues are known to be useful for visual perception. However, direct measurement of depth is often impractical. Fortunately, modern learning-based methods offer promising depth maps by inference in the wild. In this work, we adapt such depth inference models for object segmentation using the objects' ``pop-out'' prior in 3D. The ``pop-out'' is a simple composition prior that assumes objects reside on the background surface. Such a compositional prior allows us to reason about objects in 3D space. More specifically, we adapt the inferred depth maps such that objects can be localized using only 3D information. This separation, however, requires knowledge of the contact surface, which we learn using the weak supervision of the segmentation mask. Our intermediate representation of the contact surface, and thereby the ability to reason about objects purely in 3D, allows us to better transfer depth knowledge into semantics. The proposed adaptation method uses only the depth model, without needing the source data used for training, making the learning process efficient and practical. Our experiments on eight datasets across two challenging tasks, namely camouflaged object detection and salient object detection, consistently demonstrate the benefit of our method in terms of both performance and generalizability.
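A minimal sketch of the ``pop-out'' reasoning, under the assumption that the background (contact) surface is already estimated: pixels whose inferred depth is closer to the camera than the surface by some margin ``pop out'' as object candidates. The toy ground plane and the margin value below are hypothetical placeholders for the learned contact surface.

```python
import numpy as np

def popout_mask(depth, surface_depth, margin=0.05):
    """Pixels whose inferred depth is closer than the estimated background
    (contact) surface by more than `margin` are object candidates."""
    return (surface_depth - depth) > margin

# Toy scene: a planar background whose depth grows along image rows,
# plus one region lifted off the surface to play the object.
h, w = 120, 160
surface = np.tile(np.linspace(2.0, 6.0, h)[:, None], (1, w))
depth = surface.copy()
depth[40:70, 60:100] -= 0.8           # the object sits 0.8 units in front
mask = popout_mask(depth, surface)
print(mask.sum(), "pixels flagged")   # 1200 pixels (the 30 x 40 object)
```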
Visual anomaly detection plays a crucial role not only in manufacturing inspection, which finds product defects during manufacturing processes, but also in maintenance inspection, which keeps equipment in optimal working condition, particularly outdoors. Due to the scarcity of defective samples, unsupervised anomaly detection has attracted great attention in recent years. However, existing datasets for unsupervised anomaly detection are biased towards manufacturing inspection and do not consider maintenance inspection, which is usually conducted in uncontrolled outdoor environments with varying camera viewpoints, cluttered backgrounds, and object surface degradation after long-term use. We focus on outdoor maintenance inspection and contribute a comprehensive Maintenance Inspection Anomaly Detection (MIAD) dataset, which contains more than 100K high-resolution color images covering various outdoor industrial scenarios. The dataset is generated by 3D graphics software and covers both surface and logical anomalies with pixel-precise ground truth. We conduct extensive evaluations of representative unsupervised anomaly detection algorithms, and we expect MIAD and the corresponding experimental results to inspire the research community working on outdoor unsupervised anomaly detection and to spawn worthwhile related future work.
In this paper, we consider the problem of bit allocation in neural video compression (NVC). First, we reveal that a recent bit allocation approach claimed to be optimal is in fact sub-optimal due to its implementation. Specifically, we find that its sub-optimality lies in the incorrect application of semi-amortized variational inference (SAVI) to latents with a non-factorized variational posterior. Then, we show that the corrected version of SAVI on non-factorized latents requires recursively applying back-propagation through gradient ascent, based on which we derive the corrected optimal bit allocation algorithm. As the corrected bit allocation is computationally infeasible, we design an efficient approximation to make it practical. Empirical results show that our proposed correction significantly improves the incorrect bit allocation in terms of R-D performance and bitrate error, and outperforms all other bit allocation methods by a large margin. The source code is provided in the supplementary material.
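A toy sketch of the distinction at stake, under heavy simplification: with two dependent latents, the corrected procedure differentiates through the inner gradient steps of the later latent when updating the earlier one (back-propagation through gradient-based inner optimization via create_graph=True), whereas a naive per-latent update would ignore that path. The quadratic objective is a stand-in for the true rate-distortion loss, and the inner loop uses descent on a loss rather than ascent on an ELBO.

```python
import torch

def rd_loss(y1, y2):
    # Placeholder for the true rate-distortion objective; the optimum of y2
    # depends on y1, which is exactly what naive per-latent SAVI ignores.
    return (y1 - 1.0) ** 2 + (y2 - 2.0 * y1) ** 2

def inner_opt_y2(y1, steps=10, lr=0.2):
    """Inner gradient descent on y2 given y1, kept differentiable w.r.t. y1
    (create_graph=True) so the outer update can back-propagate through it."""
    y2 = torch.zeros((), requires_grad=True)
    for _ in range(steps):
        g = torch.autograd.grad(rd_loss(y1, y2), y2, create_graph=True)[0]
        y2 = y2 - lr * g
    return y2

y1 = torch.tensor(0.0, requires_grad=True)
for _ in range(50):                                   # outer loop over y1
    y2 = inner_opt_y2(y1)
    g1 = torch.autograd.grad(rd_loss(y1, y2), y1)[0]  # includes y2's path
    with torch.no_grad():
        y1 -= 0.1 * g1
# Approaches the joint optimum (y1, y2) = (1, 2).
print(round(y1.item(), 2), round(inner_opt_y2(y1).item(), 2))
```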
Self-training has shown great potential in semi-supervised learning. Its core idea is to use the model learned on labeled data to generate pseudo-labels for unlabeled samples, and in turn teach itself. To obtain valid supervision, current efforts typically adopt a momentum teacher for pseudo-label prediction, yet observe the confirmation bias issue, where incorrect predictions may provide wrong supervision signals and accumulate during training. The primary cause of this drawback is that the prevailing self-training framework acts as guiding the current state with previous knowledge, since the teacher is updated with the past students only. To alleviate this problem, we propose a novel self-training strategy which allows the model to learn from the future. Concretely, at each training step, we first virtually optimize the student (i.e., caching the gradients without applying them to the model weights), then update the teacher with the virtual future student, and finally ask the teacher to produce pseudo-labels for the current student as guidance. In this way, we manage to improve the quality of the pseudo-labels and thus boost performance. We also develop two variants of our future-self-training (FST) framework by peeping at the future both deeply (FST-D) and widely (FST-W). Taking unsupervised domain adaptive semantic segmentation and semi-supervised semantic segmentation as example tasks, we experimentally demonstrate the effectiveness and superiority of our approach under a wide range of settings. Code will be made publicly available.
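The training step is concrete enough to sketch. The version below approximates "caching gradients without applying them" by updating a deep copy of the student; the function names, learning rates, and the single virtual step are assumptions, not the authors' exact FST implementation.

```python
import copy
import torch

@torch.no_grad()
def ema_update(teacher, student, m=0.99):
    """Exponential moving average of the student's weights into the teacher."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)

def fst_step(student, teacher, opt, x_lab, y_lab, x_unlab, loss_fn, lr=0.1):
    """One hypothetical future-self-training step:
    (1) virtually optimize the student without committing the weights,
    (2) update the teacher with that virtual future student,
    (3) let the future-informed teacher pseudo-label for the current student."""
    # (1) virtual student: one supervised update applied to a deep copy
    virtual = copy.deepcopy(student)
    grads = torch.autograd.grad(loss_fn(virtual(x_lab), y_lab),
                                list(virtual.parameters()))
    with torch.no_grad():
        for p, g in zip(virtual.parameters(), grads):
            p -= lr * g
    # (2) the teacher peeks at the future through the virtual student
    ema_update(teacher, virtual)
    # (3) pseudo-labels from the teacher supervise the *current* student
    with torch.no_grad():
        pseudo = teacher(x_unlab).argmax(dim=1)
    opt.zero_grad()
    loss = loss_fn(student(x_lab), y_lab) + loss_fn(student(x_unlab), pseudo)
    loss.backward()
    opt.step()

# Toy usage with a linear classifier and random data (hypothetical shapes).
model = torch.nn.Linear(8, 3)
teacher = copy.deepcopy(model)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
fst_step(model, teacher, opt,
         torch.randn(16, 8), torch.randint(0, 3, (16,)),
         torch.randn(16, 8), torch.nn.functional.cross_entropy)
```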
Zero-shot learning (ZSL) is a learning regime that recognizes unseen classes by generalizing the visual-semantic relationships learned from seen classes. To obtain an effective ZSL model, one may resort to training samples from multiple sources, which inevitably raises privacy concerns regarding data sharing across different organizations. In this paper, we propose a novel Federated Zero-Shot Learning (FedZSL) framework, which learns a central model from decentralized data residing on edge devices. To better generalize to previously unseen classes, FedZSL allows the training data on each device to be sampled from non-overlapping classes, far from the i.i.d. assumption that traditional federated learning commonly makes. We identify two key challenges in the FedZSL protocol: 1) the trained models are prone to be biased towards the locally observed classes, thus failing to generalize to unseen classes and/or the seen classes appearing on other devices; 2) as each category in the training data comes from a single source, the central model is highly vulnerable to model replacement (backdoor) attacks. To address these issues, we propose three local objectives for visual-semantic alignment and cross-device alignment through relation distillation, which leverages the normalized class covariance to regularize the consistency of prediction logits across devices. To defend against backdoor attacks, a feature-magnitude defense technique is proposed: as malicious samples are less correlated with the given semantic attributes, visual features of low magnitude are discarded to stabilize model updates. The effectiveness and robustness of FedZSL are demonstrated by extensive experiments conducted on three zero-shot benchmark datasets.
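The feature-magnitude defense admits a simple sketch: if malicious (trigger) samples correlate weakly with the semantic attributes and hence produce low-magnitude visual features, dropping the lowest-norm features before aggregation stabilizes updates. The keep ratio and shapes below are hypothetical.

```python
import torch

def magnitude_filter(features, keep_ratio=0.8):
    """Hypothetical feature-magnitude defense: drop samples whose visual
    feature norms fall in the lowest (1 - keep_ratio) fraction, assuming
    malicious samples correlate weakly with the semantic attributes and
    therefore yield low-magnitude features."""
    norms = features.norm(dim=1)
    k = max(1, int(keep_ratio * len(features)))
    keep = norms.topk(k).indices
    return features[keep], keep

feats = torch.randn(64, 512)          # per-sample visual features on a device
feats[:8] *= 0.05                     # suspiciously weak (possibly poisoned)
kept, idx = magnitude_filter(feats)
print(kept.shape)                     # torch.Size([51, 512]) at keep_ratio=0.8
```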
Monocular depth estimation (MDE) has attracted intense study due to its low cost and its critical role in robotic tasks such as localization, mapping, and obstacle detection. With the development of deep learning, supervised methods have achieved great success, but they rely on large amounts of ground-truth depth annotations that are expensive to collect. Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, relaxing the constraints of supervised learning. However, existing UDA methods may not completely bridge the domain gap across different datasets because of the domain shift problem. We believe better domain alignment can be achieved via a well-designed feature decomposition. In this paper, we propose a novel UDA method for MDE, termed Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components. LFDA only attempts to align the content component, since it has a smaller domain gap. Meanwhile, it excludes the style component, which is specific to the source domain, from training the primary task. Furthermore, LFDA uses separate feature distribution estimations to further bridge the domain gap. Extensive experiments on three domain-adaptive MDE scenarios show that the proposed method achieves superior accuracy and lower computational cost compared with state-of-the-art approaches.
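A minimal sketch of the decomposition, with assumed shapes and a deliberately crude alignment term: the encoder splits its output into a content code and a style code, only the content code enters the cross-domain alignment loss, and the style code is kept out of the primary depth task. This illustrates the idea, not LFDA's actual network or alignment objective.

```python
import torch
import torch.nn as nn

class DecomposedEncoder(nn.Module):
    """Hypothetical encoder that splits its output into a content code
    (aligned across domains and fed to the depth head) and a style code
    (domain-specific, excluded from the primary task)."""
    def __init__(self, in_dim=256, code=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                      nn.Linear(256, 2 * code))
        self.code = code

    def forward(self, x):
        h = self.backbone(x)
        return h[:, :self.code], h[:, self.code:]   # content, style

def content_alignment(c_src, c_tgt):
    # Crude first-moment matching as a stand-in for LFDA's alignment;
    # note that only the content component enters this term.
    return (c_src.mean(0) - c_tgt.mean(0)).pow(2).mean()

enc = DecomposedEncoder()
src, tgt = torch.randn(32, 256), torch.randn(32, 256)
c_src, style_src = enc(src)
c_tgt, _ = enc(tgt)
align_loss = content_alignment(c_src, c_tgt)
print(align_loss.item())
```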
Real-world data typically follow a long-tailed distribution, where a few majority categories occupy most of the data while most minority categories contain only a limited number of samples. Classification models that minimize cross-entropy struggle to represent and classify the tail classes. Although the problem of learning unbiased classifiers has been well studied, methods for representing imbalanced data remain under-explored. In this paper, we focus on representation learning for imbalanced data. Recently, supervised contrastive learning (SCL) has shown promising performance on balanced data. However, through theoretical analysis, we find that for long-tailed data it fails to form a regular simplex, the ideal geometric configuration for representation learning. To rectify the optimization behavior of SCL and further improve long-tailed visual recognition, we propose a novel loss for balanced contrastive learning (BCL). Compared with SCL, BCL has two improvements: class-averaging, which balances the gradient contributions of negative classes; and class-complement, which allows all classes to appear in every mini-batch. The proposed BCL method satisfies the condition of forming a regular simplex and assists the optimization of cross-entropy. Equipped with BCL, the proposed two-branch framework obtains stronger feature representations and achieves competitive performance on long-tailed benchmark datasets such as CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, and iNaturalist2018. Our code is available at https://github.com/flamiezhu/bcl.
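A simplified sketch of the two BCL ingredients, with hypothetical shapes and temperature: class prototypes are appended to the batch so every class contributes to every anchor's denominator (class-complement), and each class's exponentiated similarities are averaged before summation (class-averaging), so head classes cannot dominate the gradient. This is an illustration under assumptions, not the released BCL loss.

```python
import torch
import torch.nn.functional as F

def bcl_loss(z, labels, prototypes, tau=0.1):
    """Simplified balanced contrastive loss:
    - class-complement: prototypes join the batch so every class is present;
    - class-averaging: per-class exp-similarities are averaged before the
      denominator sum, balancing head- and tail-class gradients."""
    z, protos = F.normalize(z, dim=1), F.normalize(prototypes, dim=1)
    n_cls = protos.shape[0]
    all_z = torch.cat([z, protos])
    all_y = torch.cat([labels, torch.arange(n_cls)])
    sim = torch.exp(z @ all_z.T / tau)              # anchors x candidates
    denom = torch.zeros(len(z))
    for c in range(n_cls):                          # class-averaged denominator
        denom += sim[:, all_y == c].mean(dim=1)
    loss = 0.0
    for i in range(len(z)):
        pos = all_y == labels[i]
        pos[i] = False                              # exclude the self-pair
        loss += -torch.log(sim[i, pos] / denom[i]).mean()
    return loss / len(z)

z = torch.randn(16, 64, requires_grad=True)         # hypothetical embeddings
labels = torch.randint(0, 4, (16,))
prototypes = torch.randn(4, 64, requires_grad=True) # class-complement centers
print(bcl_loss(z, labels, prototypes).item())
```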