Along with the widespread use of face recognition systems, their vulnerability has drawn increasing attention. While existing face anti-spoofing methods can generalize between attack types, generic solutions remain challenging due to the diversity of spoof characteristics. Recently, the spoof trace disentanglement framework has shown great potential for coping with both seen and unseen spoof scenarios, but its performance is largely restricted by the single-modal input. This paper focuses on this issue and presents a multi-modal disentanglement model that learns polysemantic spoof traces for more accurate and robust generic attack detection. In particular, based on an adversarial learning mechanism, a two-stream disentangling network is designed to estimate spoof patterns from the RGB and depth inputs, respectively. In this way, it captures complementary spoofing clues inherent in different attacks. Furthermore, a fusion module is employed, which recalibrates both representations at multiple stages to promote disentanglement in each individual modality, and then performs cross-modality aggregation to deliver a more comprehensive spoof trace representation for prediction. Extensive evaluations on multiple benchmarks demonstrate that learning polysemantic spoof traces favorably contributes to anti-spoofing with more perceptible and interpretable results.
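A minimal sketch of the two-stream-plus-fusion idea described above, assuming simplified per-modality encoders and a channel-gating recalibration; all module names, shapes, and the gating design are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """One disentangling stream (RGB or depth): features plus an estimated spoof-trace map."""
    def __init__(self, in_ch):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.trace_head = nn.Conv2d(64, in_ch, 3, padding=1)  # per-modality spoof trace

    def forward(self, x):
        f = self.features(x)
        return f, self.trace_head(f)

class FusionModule(nn.Module):
    """Recalibrates each stream with a gate from the other, then aggregates for prediction."""
    def __init__(self, ch=64):
        super().__init__()
        self.gate_rgb = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.gate_dep = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.classifier = nn.Linear(2 * ch, 2)  # live vs. spoof

    def forward(self, f_rgb, f_dep):
        g_r, g_d = self.gate_rgb(f_rgb), self.gate_dep(f_dep)
        f_rgb, f_dep = f_rgb * g_d, f_dep * g_r          # cross-modal recalibration
        fused = torch.cat([f_rgb.mean(dim=(2, 3)), f_dep.mean(dim=(2, 3))], dim=1)
        return self.classifier(fused)

rgb, depth = torch.randn(4, 3, 64, 64), torch.randn(4, 1, 64, 64)
f_r, trace_r = StreamEncoder(3)(rgb)
f_d, trace_d = StreamEncoder(1)(depth)
logits = FusionModule(64)(f_r, f_d)
# trace_r / trace_d would feed the adversarial disentanglement objective
```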
Anomaly identification depends heavily on the relationship between objects and scenes, since the same object action in different scenes, or different actions in the same scene, can lead to varying degrees of normality and anomaly. The object-scene relationship thus plays a crucial role in anomaly detection, yet it has been insufficiently explored in previous work. In this paper, we propose a Spatio-Temporal Relation Learning (STRL) framework for the video anomaly detection task. First, considering the dynamic characteristics of objects as well as scene regions, we construct a Spatio-Temporal Auto-Encoder (STAE) to jointly exploit spatial and temporal evolution patterns for representation learning. For better pattern extraction, two decoding branches are designed in the STAE module: an appearance branch that captures spatial cues by directly predicting the next frame, and a motion branch that focuses on modeling the dynamics via optical-flow prediction. Then, to incorporate object-scene relations, a Relation Learning (RL) module is devised to analyze and summarize normal relations by introducing a knowledge-graph embedding method. Specifically, in this process the plausibility of an object-scene relation is measured by jointly modeling object/scene features and an optimized object-scene relation graph. Extensive experiments are conducted on three public datasets, and the superior performance over state-of-the-art methods demonstrates the effectiveness of our method.
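A minimal sketch of a two-branch spatio-temporal autoencoder in the spirit of the STAE module: a shared encoder over a stacked clip, an appearance branch predicting the next frame, and a motion branch predicting optical flow. Layer sizes and loss choices are illustrative assumptions:

```python
import torch
import torch.nn as nn

class STAE(nn.Module):
    def __init__(self, t=4):
        super().__init__()
        self.encoder = nn.Sequential(            # t stacked RGB frames in, shared code out
            nn.Conv2d(3 * t, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.appearance = nn.Sequential(         # decodes the predicted next RGB frame
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )
        self.motion = nn.Sequential(             # decodes a 2-channel flow field
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 2, 4, stride=2, padding=1),
        )

    def forward(self, clip):                     # clip: (B, 3*t, H, W)
        z = self.encoder(clip)
        return self.appearance(z), self.motion(z)

model = STAE(t=4)
next_frame, flow = model(torch.randn(2, 12, 128, 128))
# training would minimize reconstruction errors against the ground-truth next
# frame and flow; at test time a large error flags a potential anomaly
```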
Online knowledge distillation (OKD) improves the involved models by mutually exploiting the differences between teacher and student. Several key bottlenecks concerning the gap between them — e.g., why and when a large gap harms performance, especially for the student, and how to quantify the gap between teacher and student — have received limited formal study. In this paper, we propose Switchable Online Knowledge Distillation (SwitOKD) to answer these questions. Instead of focusing on the accuracy gap at the test phase, the core idea of SwitOKD is to adaptively calibrate the gap at the training phase, namely the distillation gap, via a switching strategy between two modes: expert mode (pause the teacher while keeping the student learning) and learning mode (restart the teacher). To maintain an appropriate distillation gap, we further devise an adaptive switching threshold, which provides a formal criterion for when to switch to learning mode or expert mode, thereby improving the student's performance. Meanwhile, the teacher benefits from our adaptive switching threshold and remains basically on par with other online methods. We further extend SwitOKD to multiple networks with two fundamental topologies. Finally, extensive experiments and analyses validate the merits of SwitOKD for classification over the state of the art. Our code is available at https://github.com/hfutqian/switokd.
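A minimal sketch of the mode-switching idea, assuming the distillation gap is measured as the distance between teacher and student output distributions; the gap proxy and the fixed threshold below are illustrative stand-ins for the paper's adaptive criterion:

```python
import torch
import torch.nn.functional as F

def switokd_step(student, teacher, opt_s, opt_t, x, y, tau=0.1):
    with torch.no_grad():
        p_t = F.softmax(teacher(x), dim=1)
    logits_s = student(x)
    p_s = F.softmax(logits_s, dim=1)
    gap = (p_t - p_s).abs().sum(dim=1).mean().item()   # distillation-gap proxy

    # the student always learns, from both the labels and the teacher
    loss_s = F.cross_entropy(logits_s, y) + F.kl_div(
        F.log_softmax(logits_s, dim=1), p_t, reduction="batchmean")
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()

    if gap > tau:
        return "expert"    # gap too large: pause the teacher so the student can catch up
    # learning mode: the teacher resumes training so the pair stays in sync
    loss_t = F.cross_entropy(teacher(x), y)
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()
    return "learning"
```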
Predicting disruptions across different tokamaks is a great obstacle to overcome, as future tokamaks can hardly tolerate disruptions during high-performance discharges. The few available high-performance disruptive discharges can hardly compose an abundant training set, which makes it difficult for current data-driven methods to obtain acceptable results. A machine learning method capable of transferring a disruption prediction model trained on one tokamak to another is required to solve this problem. The key is a disruption prediction model containing a feature extractor that can extract common disruption precursor traces from tokamak diagnostic data, together with a transferable disruption classifier. Based on these concerns, this paper first presents a deep fusion feature extractor designed specifically to extract disruption precursor features from common diagnostics on tokamaks, informed by currently known disruption precursors, providing a promising foundation for transferable models. The fusion feature extractor is validated by comparison with manual feature extraction on J-TEXT. Based on the feature extractor trained on J-TEXT, the disruption prediction model was transferred to EAST data with only 20 discharges from EAST experiments. Its performance is comparable to that of a model trained on 1896 EAST discharges. From the comparison among other model training scenarios, transfer learning shows its potential for predicting disruptions across different tokamaks.
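A minimal sketch of the transfer scheme, assuming the pretrained feature extractor is frozen and only the disruption classifier is fine-tuned on the small set of EAST discharges; the network shapes and hyperparameters are illustrative assumptions:

```python
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(      # stands in for the deep fusion extractor
    nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
)
classifier = nn.Linear(32, 2)           # disruptive vs. non-disruptive

# pretend the extractor weights were learned on J-TEXT; freeze them for transfer
for p in feature_extractor.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

# x: (batch, diagnostic channels, time) windows from the 20 EAST discharges
x = torch.randn(8, 16, 256)
y = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(classifier(feature_extractor(x)), y)
opt.zero_grad(); loss.backward(); opt.step()
```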
Existing anchor-based oriented object detection methods have achieved impressive results, but they require manually preset boxes, which introduce additional hyperparameters and computation. Existing anchor-free methods usually have complex architectures and are not easy to deploy. Our goal is to propose a simple, easy-to-deploy detection algorithm for aerial images. In this paper, we present a one-stage anchor-free rotated object detector (FCOSR) based on FCOS, which can be deployed on most platforms. FCOSR has a simple architecture consisting only of convolutional layers. Our work focuses on the label assignment strategy in the training phase. We use an ellipse center sampling method to define a suitable sampling region for an oriented bounding box (OBB). A fuzzy sample assignment strategy provides reasonable labels for overlapping objects. To solve the problem of insufficient sampling, a multi-level sampling module is designed. These strategies assign more appropriate labels to training samples. Our algorithm achieves 79.25, 75.41, and 90.15 mAP on the DOTA1.0, DOTA1.5, and HRSC2016 datasets, respectively. FCOSR demonstrates performance superior to other methods in single-scale evaluation. We convert a lightweight FCOSR model to the TensorRT format, which achieves 73.93 mAP on DOTA1.0 at 10.68 FPS on a Jetson Xavier NX. The code is available at: https://github.com/lzh420202/fcosr
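A minimal sketch of ellipse center sampling for an oriented box: a feature-map location is positive if it falls inside a shrunken ellipse inscribed in the rotated box. The shrink factor below is an illustrative assumption:

```python
import numpy as np

def in_ellipse(points, cx, cy, w, h, theta, shrink=0.5):
    """points: (N, 2) xy locations; (cx, cy, w, h, theta) is an OBB with theta in radians."""
    dx, dy = points[:, 0] - cx, points[:, 1] - cy
    # rotate the offsets into the box's local frame
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    a, b = shrink * w / 2, shrink * h / 2          # shrunken semi-axes
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0      # inside-ellipse test

pts = np.stack(np.meshgrid(np.arange(100), np.arange(100)), -1).reshape(-1, 2).astype(float)
mask = in_ellipse(pts, cx=50, cy=50, w=60, h=20, theta=np.pi / 6)
print(mask.sum(), "positive locations")
```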
Predicting pedestrian trajectories in dynamic scenes remains a critical problem in various applications, such as autonomous driving and socially aware robots. Such predictions are challenging due to the future uncertainty caused by human-human and human-object interactions and by human randomness. Generative-model-based methods handle this future uncertainty by sampling a latent variable, yet few studies have explored how the latent variable is generated. In this work, we propose a Trajectory Predictor with Pseudo Oracles (TPPO), a generative-model-based trajectory predictor. The first pseudo oracle is the pedestrians' moving directions, and the second is the latent variable estimated from the ground-truth trajectory. A social attention module aggregates neighbors' interactions based on the correlation between pedestrians' moving directions and their future trajectories; this correlation reflects the fact that a pedestrian's future trajectory tends to be influenced by the pedestrians in front. A latent variable predictor is proposed to estimate latent-variable distributions from the observed and ground-truth trajectories, and the gap between these two distributions is minimized during training. Consequently, the latent variable predictor can estimate, from the observed trajectory alone, a latent variable that approximates the one estimated from the ground-truth trajectory. We compare TPPO with related methods on several public datasets. The results demonstrate that TPPO outperforms state-of-the-art methods, with lower average and final displacement errors. An ablation study shows that prediction performance does not degrade significantly as the number of samples drawn at test time decreases.
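A minimal sketch of the latent-variable predictor idea: one encoder infers a Gaussian latent from the observed trajectory, another from the ground truth, and training minimizes the KL gap between them so the observed-only branch can be used at test time. All dimensions and network shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    def __init__(self, in_dim, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu, self.logvar = nn.Linear(64, z_dim), nn.Linear(64, z_dim)

    def forward(self, traj):                       # traj: (B, T*2) flattened xy coordinates
        h = self.net(traj)
        return self.mu(h), self.logvar(h)

def kl_gap(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ), mean over the batch."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1)
    return kl.sum(dim=1).mean()

enc_obs, enc_gt = GaussianEncoder(8 * 2), GaussianEncoder(12 * 2)
obs, gt = torch.randn(32, 16), torch.randn(32, 24)   # 8 observed / 12 future steps
mu_gt, logvar_gt = enc_gt(gt)
mu_obs, logvar_obs = enc_obs(obs)
loss_gap = kl_gap(mu_gt, logvar_gt, mu_obs, logvar_obs)
# loss_gap would be added to the trajectory reconstruction loss during training
```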
We study a challenging task, conditional human motion generation, which produces plausible human motion sequences according to various conditional inputs, such as action classes or textual descriptors. Since human motions are highly diverse and their distribution differs markedly from that of conditional modalities, such as textual descriptors in natural language, it is hard to learn a probabilistic mapping from the desired conditional modality to the human motion sequences. Besides, the raw motion data from a motion capture system may be redundant across sequences and contain noise; directly modeling the joint distribution over the raw motion sequences and conditional modalities would incur heavy computational overhead and might yield artifacts introduced by the captured noise. To learn a better representation of the various human motion sequences, we first design a powerful Variational AutoEncoder (VAE) that yields a representative, low-dimensional latent code for a human motion sequence. Then, instead of using a diffusion model to establish connections between the raw motion sequences and the conditional inputs, we perform the diffusion process in the motion latent space. Our proposed Motion Latent-based Diffusion model (MLD) produces vivid motion sequences conforming to the given conditional inputs while substantially reducing computational overhead in both the training and inference stages. Extensive experiments demonstrate that our MLD achieves significant improvements over the state-of-the-art methods across a wide range of human motion generation tasks, while being two orders of magnitude faster than previous diffusion models operating on raw motion sequences.
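A minimal sketch of one training step of diffusion in a motion latent space: a frozen encoder maps a motion sequence to a latent code, Gaussian noise is added at a random timestep, and a denoiser conditioned on text features predicts that noise. The noise schedule, module shapes, and the use of fixed text features are all illustrative assumptions:

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

vae_encode = nn.Linear(60 * 22 * 3, 256)           # stands in for the frozen motion VAE
denoiser = nn.Sequential(nn.Linear(256 + 256 + 1, 512), nn.SiLU(),
                         nn.Linear(512, 256))      # predicts the added noise

motion = torch.randn(8, 60 * 22 * 3)               # 60 frames, 22 joints, xyz
text_feat = torch.randn(8, 256)                    # e.g. features from a frozen text encoder

with torch.no_grad():
    z0 = vae_encode(motion)                        # clean latent code
t = torch.randint(0, T, (8,))
noise = torch.randn_like(z0)
a = alphas_bar[t].unsqueeze(1)
zt = a.sqrt() * z0 + (1 - a).sqrt() * noise        # forward diffusion in latent space

pred = denoiser(torch.cat([zt, text_feat, t.float().unsqueeze(1) / T], dim=1))
loss = ((pred - noise) ** 2).mean()                # standard epsilon-prediction loss
```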
In this work, we present a new computer vision task named video object of interest segmentation (VOIS). Given a video and a target image of interest, our objective is to simultaneously segment and track all objects in the video that are relevant to the target image. This problem combines the traditional video object segmentation task with an additional image indicating the content that users are concerned with. Since no existing dataset is perfectly suitable for this new task, we specifically construct a large-scale dataset called LiveVideos, which contains 2418 pairs of target images and live videos with instance-level annotations. In addition, we propose a transformer-based method for this task. We revisit Swin Transformer and design a dual-path structure to fuse video and image features. Then, a transformer decoder is employed to generate object proposals for segmentation and tracking from the fused features. Extensive experiments on the LiveVideos dataset show the superiority of our proposed method.
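A minimal sketch of the dual-path fusion idea: video-frame features attend to target-image features, and learned queries in a transformer decoder produce object proposals. The backbones are stubbed out with random features, and all sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

d = 256
video_feat = torch.randn(2, 196, d)                # (B, video tokens, d), e.g. from Swin
image_feat = torch.randn(2, 49, d)                 # (B, target-image tokens, d)

cross = nn.MultiheadAttention(d, 8, batch_first=True)
fused, _ = cross(video_feat, image_feat, image_feat)   # video path attends to image path
fused = fused + video_feat                             # residual keeps the video content

decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d, 8, batch_first=True), num_layers=2)
queries = nn.Parameter(torch.randn(1, 10, d)).expand(2, -1, -1)  # 10 learned proposals
proposals = decoder(queries, fused)                # (B, 10, d) object embeddings
# each proposal embedding would feed segmentation and tracking heads downstream
```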
Video super-resolution is one of the most popular tasks on mobile devices, widely used for automatically improving low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and task the participants with designing an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with this NPU, demonstrating up to a 500 FPS rate and 0.2 [Watt / 30 FPS] power consumption. A detailed description of all models developed in the challenge is provided in this paper.
Motion transfer aims to transfer the motion of a driving video to a source image. When there are considerable differences between an object in the driving video and the object in the source image, traditional single-domain motion transfer approaches often produce notable artifacts; for example, the synthesized image may fail to preserve the human shape of the source image (see Fig. 1(a)). To address this issue, we propose a Motion and Appearance Adaptation (MAA) approach for cross-domain motion transfer, in which we regularize the object in the synthesized image to capture the motion of the object in the driving frame while still preserving the shape and appearance of the object in the source image. On the one hand, considering that the object shapes in the synthesized image and the driving frame may differ, we design a shape-invariant motion adaptation module that captures the motion information by enforcing consistency of the angles of object parts between the two images. On the other hand, we introduce a structure-guided appearance consistency module designed to regularize the similarity between corresponding patches of the synthesized image and the source image, without affecting the learned motion in the synthesized image. Our proposed MAA model can be trained end-to-end with a cyclic reconstruction loss, and ultimately produces satisfactory motion transfer results (see Fig. 1(b)). We conduct extensive experiments on the human dancing dataset Mixamo-Video to Fashion-Video and the human face dataset Vox-Celeb to Cufs; on both, our MAA model outperforms existing methods, quantitatively and qualitatively.
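A minimal sketch of the shape-invariant angle-consistency idea: for keypoint triplets defining object parts, the angle at the middle joint should match between the synthesized image and the driving frame. The keypoint layout and triplet choice are illustrative assumptions:

```python
import torch

def part_angles(kp, triplets):
    """kp: (B, K, 2) keypoints; triplets: (T, 3) index triples (a, pivot, b)."""
    a, p, b = kp[:, triplets[:, 0]], kp[:, triplets[:, 1]], kp[:, triplets[:, 2]]
    v1, v2 = a - p, b - p
    cos = (v1 * v2).sum(-1) / (v1.norm(dim=-1) * v2.norm(dim=-1) + 1e-8)
    return torch.acos(cos.clamp(-1 + 1e-6, 1 - 1e-6))   # (B, T) part angles

triplets = torch.tensor([[0, 1, 2], [1, 2, 3]])          # e.g. shoulder-elbow-wrist chains
kp_syn, kp_drv = torch.randn(4, 10, 2), torch.randn(4, 10, 2)
loss_motion = (part_angles(kp_syn, triplets) - part_angles(kp_drv, triplets)).abs().mean()
# minimizing loss_motion transfers the driving pose without copying the driving shape
```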