Facial pose estimation refers to the task of predicting face orientation from a single RGB image. It is an important research topic with a wide range of applications in computer vision. Label distribution learning (LDL) based methods have recently been proposed for facial pose estimation and achieve promising results. However, existing LDL methods have two major issues. First, the expectation of the label distribution is biased, leading to biased pose estimates. Second, fixed distribution parameters are applied to all learning samples, severely limiting model capability. In this paper, we propose an Anisotropic Spherical Gaussian (ASG)-based LDL approach for facial pose estimation. In particular, our approach adopts a spherical Gaussian distribution on the unit sphere, which constantly generates an unbiased expectation. Meanwhile, we introduce a new loss function that allows the network to flexibly learn the distribution parameters of each learning sample. Extensive experimental results show that our method sets new state-of-the-art records on the AFLW2000 and BIWI datasets.
translated by Google Translate
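The unbiased-expectation claim above can be sanity-checked numerically. The sketch below (my own illustration, not the authors' code, and isotropic rather than anisotropic for brevity) weights near-uniform directions on the unit sphere by a spherical Gaussian G(v; μ, λ) = exp(λ(μ·v − 1)) and verifies that the expected direction aligns with μ; the concentration λ = 40 is an arbitrary choice.

```python
import numpy as np

def spherical_gaussian(v, mu, lam):
    """Unnormalized spherical Gaussian G(v) = exp(lam * (mu.v - 1)) on the unit sphere."""
    return np.exp(lam * (v @ mu - 1.0))

def fibonacci_sphere(n):
    """Near-uniform sample of n unit directions (golden-angle spiral)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

mu = np.array([0.0, 0.0, 1.0])            # hypothetical ground-truth orientation
v = fibonacci_sphere(20000)
w = spherical_gaussian(v, mu, lam=40.0)   # lam: concentration (assumed value)
mean_dir = (w[:, None] * v).sum(axis=0)
mean_dir /= np.linalg.norm(mean_dir)      # expectation direction, close to mu
```

By symmetry of the distribution about μ, the weighted mean direction matches μ, which is the unbiasedness the abstract contrasts with Euler-angle-space label distributions.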
In this paper, we present a method for unconstrained end-to-end head pose estimation. We address the problem of ambiguous rotation labels by introducing the rotation matrix formalism for our ground truth data and propose a continuous 6D rotation matrix representation for efficient and robust direct regression. This way, our method can learn the full rotation appearance which is contrary to previous approaches that restrict the pose prediction to a narrow-angle for satisfactory results. In addition, we propose a geodesic distance-based loss to penalize our network with respect to the SO(3) manifold geometry. Experiments on the public AFLW2000 and BIWI datasets demonstrate that our proposed method significantly outperforms other state-of-the-art methods by up to 20\%. We open-source our training and testing code along with our pre-trained models: https://github.com/thohemp/6DRepNet.
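The two ingredients named above — the continuous 6D rotation representation and a geodesic loss on SO(3) — can be sketched as follows. This is a generic numpy illustration of the standard constructions (the 6D map is from Zhou et al., CVPR 2019, which this line of work builds on), not the 6DRepNet training code.

```python
import numpy as np

def rotation_from_6d(x):
    """Map a continuous 6D vector to a valid rotation matrix via Gram-Schmidt."""
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)
    a2 = a2 - (b1 @ a2) * b1              # remove the component along b1
    b2 = a2 / np.linalg.norm(a2)
    b3 = np.cross(b1, b2)                 # completes a right-handed basis
    return np.stack([b1, b2, b3], axis=1)

def geodesic_distance(R1, R2):
    """Geodesic distance on SO(3): the angle of the rotation taking R1 to R2."""
    cos = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(cos, -1.0, 1.0))

R = rotation_from_6d(np.array([0.9, 0.1, 0.0, -0.2, 1.1, 0.3]))
```

Because the 6D parameterization is continuous and surjective onto SO(3), regressing it avoids the discontinuities of Euler angles, and the geodesic distance penalizes predictions according to the manifold geometry the abstract mentions.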
Motivated by many practical applications in human life, including surveillance cameras for manufacturing and the analysis and processing of customer behavior, many researchers have studied face detection and head pose estimation on digital images. A large number of proposed deep learning models achieve state-of-the-art accuracy, such as YOLO, SSD, and MTCNN for the face detection problem, and HopeNet and FSA-Net for the head pose estimation problem. In many state-of-the-art methods, the pipeline for this task consists of two parts, from face detection to head pose estimation. These two steps are completely independent and do not share information, which makes the models clean in their setup but fails to exploit most of the features extracted by each model. In this paper, we propose the Multitask-Net model, motivated by leveraging the features extracted from the face detection model and sharing them with the head pose estimation branch to improve accuracy. Moreover, since the data are diverse and the Euler angle domain representing faces is large, our model can predict results over the full 360-degree Euler angle domain. By applying a multi-task learning method, the Multitask-Net model can simultaneously predict the position and orientation of the human head. To improve the model's ability to predict head orientation, we convert the representation of the human face from Euler angles to vectors of the rotation matrix.
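The Euler-angle-to-rotation-matrix conversion mentioned above can be sketched as below. The axis order and signs (intrinsic Z-yaw, Y-pitch, X-roll) are an assumption for illustration; the paper's exact convention may differ.

```python
import numpy as np

def euler_to_rotation(yaw, pitch, roll):
    """Compose a rotation matrix from Euler angles in radians.
    Convention (Rz @ Ry @ Rx) is an illustrative assumption."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    return Rz @ Ry @ Rx

R = euler_to_rotation(0.3, -0.2, 0.1)
```

Training against the matrix entries (or vectors of the matrix, as the abstract says) sidesteps the wrap-around discontinuities of raw Euler angle targets over a full 360-degree domain.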
With the emergence of service robots and surveillance cameras, dynamic face recognition (DFR) in the wild has received much attention in recent years. Face detection and head pose estimation are two important steps for DFR. Often, the pose is estimated after face detection; however, this sequential computation leads to higher latency. In this paper, we propose a low-latency and lightweight network for simultaneous face detection, landmark localization, and head pose estimation. Inspired by the observation that it is more challenging to locate the facial landmarks of faces at large angles, a pose loss is proposed to constrain the learning. Moreover, we also propose an uncertainty multi-task loss to automatically learn the weights of the individual tasks. Another challenge is that robots often use low-computation units such as ARM-based computing cores, so we frequently need to use lightweight networks instead of heavy ones, which leads to performance degradation, especially for small and hard faces. In this paper, we propose online feedback sampling to augment training samples across different scales, which automatically increases the diversity of the training data. Through validation on the commonly used WIDER FACE, AFLW, and AFLW2000 datasets, the results show that the proposed method achieves state-of-the-art performance with low computational resources. Code and data will be available at https://github.com/lyp-deeplearning/mos-multi-task-face-detect.
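The uncertainty multi-task loss mentioned above is commonly formulated with one learnable log-variance per task (Kendall et al., CVPR 2018). The sketch below shows that generic formulation, not the paper's exact loss; the three example loss values are made up.

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with learnable uncertainty weights:
    total = sum_i exp(-s_i) * L_i + s_i, where s_i = log sigma_i^2 is a
    trainable scalar per task. Minimizing over s_i balances the tasks."""
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))

# hypothetical losses for face detection, landmark localization, head pose
total = uncertainty_weighted_loss([0.8, 0.5, 1.2], [0.0, 0.0, 0.0])
```

With all log-variances at zero the loss reduces to a plain sum; during training the s_i terms shift so that noisier tasks receive smaller weights automatically, which is the "learn the weights of the individual tasks" behavior described in the abstract.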
In this paper, we introduce a novel method to estimate the head pose of people in single images starting from a small set of head keypoints. To this end, we propose a regression model that exploits keypoints computed automatically by 2D pose estimation algorithms and outputs the head pose represented by yaw, pitch, and roll. Our model is simple to implement and more efficient with respect to the state of the art — faster in inference and smaller in terms of memory occupancy — with comparable accuracy. Our method also provides a measure of the heteroscedastic uncertainty associated with the three angles through an appropriately designed loss function; we show that there is a correlation between error and uncertainty values, so this additional source of information may be used in subsequent computational steps. As an example application, we address social interaction analysis in images: we propose an algorithm to quantitatively estimate the level of interaction between people, starting from their head poses and reasoning about their mutual positions. The code is available at https://github.com/cantarinigiorgio/hhp-net.
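A loss that yields per-angle heteroscedastic uncertainty is typically a Gaussian negative log-likelihood in which the network predicts both the angle and its log-variance. The sketch below is a generic version of this kind of loss, not HHP-Net's exact formulation; the example angle values are made up.

```python
import numpy as np

def heteroscedastic_nll(y_true, y_pred, log_var):
    """Gaussian NLL up to a constant: 0.5*exp(-s)*(y - y_hat)^2 + 0.5*s,
    with s = log sigma^2 predicted per output. Large predicted variance
    down-weights the squared error but pays a 0.5*s penalty."""
    y_true, y_pred, log_var = map(np.asarray, (y_true, y_pred, log_var))
    return float(np.mean(0.5 * np.exp(-log_var) * (y_true - y_pred) ** 2
                         + 0.5 * log_var))

# hypothetical yaw/pitch/roll targets and predictions (degrees)
loss = heteroscedastic_nll([10.0, -5.0, 2.0], [12.0, -4.0, 2.0], [0.0, 0.0, 0.0])
```

The learned sigma per angle is exactly the uncertainty measure the abstract says correlates with the error, which downstream steps (e.g., the social interaction analysis) can consume.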
The cost of head pose labels is the main obstacle to improving fine-grained head pose estimation algorithms. One solution to the lack of large amounts of labels is using self-supervised learning (SSL). SSL can extract good features from unlabeled data for downstream tasks. Therefore, this paper tries to show the differences between SSL methods for head pose estimation. Typically, there are two main approaches to using SSL: (1) using it to pre-train weights, and (2) using SSL as an auxiliary task alongside supervised learning (SL) during one training round. In this paper, we evaluate both approaches by designing a hybrid multi-task learning (HMTL) architecture and using two SSL pretext tasks, rotation and puzzling. The results show that the combination of the two approaches — using rotation for pre-training and using puzzling as the auxiliary task — works best. Compared to the baseline, the error rate is reduced by 23.1%, which is comparable to current SOTA methods. Finally, we compare the influence of initial weights on HMTL and SL. With HMTL, the error decreases for all initial weights: random, ImageNet, and SSL.
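The rotation pretext task mentioned above is usually implemented by rotating each unlabeled image by a multiple of 90 degrees and asking the network to classify which rotation was applied (Gidaris et al., 2018). The sketch below builds such a self-supervised batch; it illustrates the standard task, not the paper's specific HMTL wiring.

```python
import numpy as np

def rotation_pretext_batch(images):
    """For each image, emit its 0/90/180/270-degree rotations with the
    rotation index (0-3) as the self-supervised label."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):                   # k quarter-turns counter-clockwise
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

imgs = np.random.rand(2, 32, 32, 3)          # dummy square image batch
x, y = rotation_pretext_batch(imgs)
```

No human labels are needed: the supervision signal comes from the transformation itself, which is why it can either pre-train weights or run as an auxiliary head next to the supervised pose loss.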
3D gaze estimation is most often tackled as learning a direct mapping between input images and the gaze vector or its spherical coordinates. Recently, it has been shown that pose estimation of the face, body and hands benefits from revising the learning target from few pose parameters to dense 3D coordinates. In this work, we leverage this observation and propose to tackle 3D gaze estimation as regression of 3D eye meshes. We overcome the absence of compatible ground truth by fitting a rigid 3D eyeball template on existing gaze datasets and propose to improve generalization by making use of widely available in-the-wild face images. To this end, we propose an automatic pipeline to retrieve robust gaze pseudo-labels from arbitrary face images and design a multi-view supervision framework to balance their effect during training. In our experiments, our method achieves improvement of 30% compared to state-of-the-art in cross-dataset gaze estimation, when no ground truth data are available for training, and 7% when they are. We make our project publicly available at https://github.com/Vagver/dense3Deyes.
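The abstract contrasts regressing the gaze vector with regressing its spherical coordinates; the two are interchangeable via the conversion below. The axis convention (camera looking down −z, pitch up, yaw left) is an assumption — gaze datasets differ in their definitions.

```python
import numpy as np

def gaze_angles_to_vector(pitch, yaw):
    """Spherical gaze coordinates (radians) -> unit 3D gaze vector."""
    x = -np.cos(pitch) * np.sin(yaw)
    y = -np.sin(pitch)
    z = -np.cos(pitch) * np.cos(yaw)
    return np.array([x, y, z])

def vector_to_gaze_angles(v):
    """Unit gaze vector -> (pitch, yaw), inverting gaze_angles_to_vector."""
    v = v / np.linalg.norm(v)
    pitch = -np.arcsin(v[1])
    yaw = np.arctan2(-v[0], -v[2])
    return pitch, yaw

g = gaze_angles_to_vector(0.2, -0.4)
```

Regressing dense 3D eye meshes, as the paper proposes, replaces this two-parameter target with thousands of 3D coordinates; the gaze vector can then be derived from the fitted eyeball geometry.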
Head pose estimation is a challenging task that aims to predict a three-dimensional orientation vector, serving many applications in human-robot interaction and customer behavior analysis. Previous studies have proposed precise methods for collecting head pose data, but these methods require expensive devices such as depth cameras or complex laboratory environment setups. In this study, we introduce a new approach with efficient cost and easy setup to collect head pose images, namely the UET-Headpose dataset, which contains top-view head pose data. This method uses an absolute orientation sensor instead of a depth camera, so it can be set up quickly while still ensuring good results. Through experiments, our dataset has been shown to differ in distribution from available datasets such as the CMU Panoptic Dataset \cite{CMU}. Besides using the UET-Headpose dataset and other head pose datasets, we also introduce the full-range model called FSANet, which significantly outperforms existing head pose estimation results on the UET-Headpose dataset, especially on top-view images. Moreover, this model is very lightweight and takes small-size images.
Facial attribute (e.g., age and attractiveness) estimation performance has been greatly improved by using convolutional neural networks. However, existing methods have an inconsistency between the training objectives and the evaluation metric, so they may be suboptimal. In addition, these methods always adopt image classification or face recognition models with a large number of parameters, which carry expensive computation cost and storage overhead. In this paper, we first analyze the essential relationship between two state-of-the-art methods (Ranking-CNN and DLDL) and show that the ranking method is in fact learning a label distribution implicitly. This result thus unifies the two existing state-of-the-art methods into the DLDL framework. Second, in order to alleviate the inconsistency and reduce resource consumption, we design a lightweight network architecture and propose a unified framework that can jointly learn the facial attribute distribution and regress the attribute value. The effectiveness of our approach is demonstrated on both facial age and attractiveness estimation tasks. Our method achieves new state-of-the-art results using a single model with $36\times$ fewer parameters and $3\times$ faster inference speed on facial age/attractiveness estimation. Moreover, our method can achieve results comparable to the state of the art even when the number of parameters is further reduced to 0.9M (3.8MB disk storage).
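The label distribution learning at the heart of DLDL can be sketched in a few lines: a scalar target such as age is softened into a discrete Gaussian distribution over the label set, the network is trained to match it (e.g., with a KL divergence), and the prediction is decoded as the expectation. This is an illustration of the general idea; the spread sigma is a hypothetical choice, and the paper's joint distribution-plus-regression loss is not reproduced.

```python
import numpy as np

def label_distribution(y, labels, sigma=2.0):
    """Scalar target -> discrete Gaussian label distribution over `labels`."""
    p = np.exp(-0.5 * ((labels - y) / sigma) ** 2)
    return p / p.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q), a common training loss for matching label distributions."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

labels = np.arange(0, 101, dtype=float)        # age bins 0..100
p = label_distribution(31.0, labels)
expected_age = float(p @ labels)               # expectation-based decoding
```

Decoding by expectation is what aligns the training objective with the evaluation metric (mean absolute error of the predicted value), which is the inconsistency the abstract sets out to fix.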
In this paper, we consider the challenging task of simultaneously locating and recovering multiple hands from a single 2D image. Previous studies either focus on single-hand reconstruction or solve this problem in a multi-stage way. Moreover, the conventional two-stage pipeline first detects hand regions and then estimates the 3D hand pose from each cropped patch. To reduce the computational redundancy in preprocessing and feature extraction, we propose a concise but efficient single-stage pipeline. Specifically, we design a multi-head auto-encoder structure for multi-hand reconstruction, where each head network shares the same feature map and outputs the hand center, pose, and texture, respectively. Besides, we adopt a weakly-supervised scheme to alleviate the burden of expensive 3D real-world data annotation. To this end, we propose a series of losses optimized by a stage-wise training scheme, where a multi-hand dataset with 2D annotations is generated based on the publicly available single-hand datasets. To further improve the accuracy of the weakly-supervised model, we adopt several feature consistency constraints in both single- and multi-hand settings. Specifically, the keypoints of each hand estimated from local features should be consistent with the re-projected points predicted from global features. Extensive experiments on public benchmarks including FreiHAND, HO3D, InterHand2.6M, and RHD demonstrate that our method outperforms state-of-the-art model-based methods in both the weakly-supervised and fully-supervised manners. Code and models are available at {\url{https://github.com/zijinxuxu/smhr}}.
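The consistency constraint described above — local-feature keypoints should agree with the re-projection of globally predicted 3D joints — can be sketched with a simple camera model. The weak-perspective projection and the 21-joint hand layout are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weak_perspective_project(joints3d, scale, trans):
    """Project 3D joints to the image plane: drop depth, scale, translate."""
    return scale * joints3d[:, :2] + trans

def consistency_loss(local_kp2d, global_joints3d, scale, trans):
    """Mean L2 distance between keypoints from local features and the
    re-projection of 3D joints predicted from global features."""
    proj = weak_perspective_project(global_joints3d, scale, trans)
    return float(np.mean(np.linalg.norm(local_kp2d - proj, axis=1)))

joints = np.random.rand(21, 3)                     # hypothetical 21 hand joints
kp2d = weak_perspective_project(joints, 100.0, np.array([64.0, 64.0]))
loss = consistency_loss(kp2d, joints, 100.0, np.array([64.0, 64.0]))
```

When both branches agree the loss is zero, so minimizing it pushes the two feature pathways toward a shared geometric interpretation without requiring 3D ground truth.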
This paper investigates the task of 2D whole-body human pose estimation, which aims to localize dense landmarks on the entire human body, including the body, feet, face, and hands. We propose a single-network approach, termed ZoomNet, to take into account the hierarchical structure of the full human body and to address the scale variation across different body parts. We further propose a neural architecture search framework, termed ZoomNAS, to promote both the accuracy and efficiency of whole-body pose estimation. ZoomNAS jointly searches the model architecture and the connections between different sub-modules, and automatically allocates computational complexity to the searched sub-modules. To train and evaluate ZoomNAS, we introduce the first large-scale 2D human whole-body dataset, namely COCO-WholeBody V1.0, which annotates 133 keypoints for in-the-wild images. Extensive experiments demonstrate the effectiveness of ZoomNAS and the significance of COCO-WholeBody V1.0.
Current fully-supervised facial landmark detection methods have progressed rapidly and achieved remarkable performance. However, they still suffer on faces with large poses and heavy occlusions, due to inaccurate facial shape constraints and insufficient labeled training samples. In this paper, we propose a semi-supervised framework, namely the Self-Calibrated Pose Attention Network (SCPAN), to achieve more robust and precise facial landmark detection in more challenging scenarios. Specifically, a Boundary-Aware Landmark Intensity (BALI) field is proposed to model more effective facial shape constraints by fusing boundary and landmark intensity field information. Moreover, a Self-Calibrated Pose Attention (SCPA) model is designed to provide a self-learned objective function that enforces intermediate supervision without label information, by introducing a self-calibration mechanism and a pose attention mask. We show that by integrating the BALI fields and the SCPA model into a novel self-calibrated pose attention network, more facial prior knowledge can be learned, and the detection accuracy and robustness of our method are improved for faces with large poses and heavy occlusions. Experimental results on challenging benchmark datasets demonstrate that our approach outperforms state-of-the-art methods in the literature.
Accurate whole-body multi-person pose estimation and tracking is an important yet challenging topic in computer vision. To capture the subtle actions of humans for complex behavior analysis, whole-body pose estimation including the face, body, hand and foot is essential over conventional body-only pose estimation. In this paper, we present AlphaPose, a system that can perform accurate whole-body pose estimation and tracking jointly while running in realtime. To this end, we propose several new techniques: Symmetric Integral Keypoint Regression (SIKR) for fast and fine localization, Parametric Pose Non-Maximum-Suppression (P-NMS) for eliminating redundant human detections and Pose Aware Identity Embedding for joint pose estimation and tracking. During training, we resort to Part-Guided Proposal Generator (PGPG) and multi-domain knowledge distillation to further improve the accuracy. Our method is able to localize whole-body keypoints accurately and tracks humans simultaneously given inaccurate bounding boxes and redundant detections. We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-wholebody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset. Our model, source codes and dataset are made publicly available at https://github.com/MVIG-SJTU/AlphaPose.
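Integral keypoint regression, which SIKR refines, reads the keypoint off a heatmap as a soft-argmax: the expectation of pixel coordinates under the softmaxed heatmap. The sketch below shows the plain integral regression only; the symmetric correction that gives SIKR its name is not reproduced here.

```python
import numpy as np

def soft_argmax_2d(heatmap):
    """Keypoint = expectation of (x, y) under softmax(heatmap).
    Differentiable, unlike a hard argmax, and sub-pixel accurate."""
    h, w = heatmap.shape
    p = np.exp(heatmap - heatmap.max())   # stable softmax
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((p * xs).sum()), float((p * ys).sum())

hm = np.zeros((64, 64))
hm[20, 40] = 20.0                         # sharp synthetic peak at (x=40, y=20)
x, y = soft_argmax_2d(hm)
```

Because the output is a smooth function of the heatmap, the whole localization step can be trained end-to-end with a coordinate loss, which is what makes the regression both fast and fine.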
Deep discriminative models (DDMs), such as deep regression forests, have recently been extensively studied for facial age estimation, head pose estimation, gaze estimation, and similar problems. These problems are challenging in part because a large amount of effective training data without noise and bias is usually unavailable. While some progress has been made by learning more discriminative features or re-weighting samples, we argue that it is more desirable to learn to discriminate gradually, as humans do. We therefore resort to self-paced learning (SPL). But a natural question arises: can the self-paced regime guide DDMs toward more robust and less biased solutions? A serious problem with SPL, first discussed in this work, is its tendency to aggravate the bias of solutions, especially for obviously imbalanced data. To this end, this paper proposes a new self-paced paradigm for deep discriminative models, which distinguishes noisy and underrepresented examples according to the output likelihood and entropy associated with each example, and tackles the fundamental ranking problem in SPL from a new perspective: fairness. This paradigm is fundamental and can easily be combined with a variety of DDMs. Extensive experiments on three computer vision tasks, namely facial age estimation, head pose estimation, and gaze estimation, demonstrate the efficacy of our paradigm. To the best of our knowledge, ours is the first paper in the SPL literature to consider ranking fairness in the construction of the self-paced regime.
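For context, the classic self-paced regime the paper revises can be stated in two lines: keep samples whose loss is below an "age" parameter λ and admit harder samples as λ grows. The sketch below shows this hard-weighting baseline only; the likelihood/entropy-based, fairness-aware ranking proposed in the paper is not reproduced.

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced weights: w_i = 1 if loss_i < lam else 0.
    lam is increased over training so the curriculum goes easy -> hard."""
    losses = np.asarray(losses, dtype=float)
    return (losses < lam).astype(float)

losses = np.array([0.2, 1.5, 0.7, 3.0])          # hypothetical per-sample losses
w_early = self_paced_weights(losses, lam=1.0)    # early training: easy samples only
w_late = self_paced_weights(losses, lam=5.0)     # late training: everyone admitted
```

The bias problem the abstract identifies is visible here: if underrepresented groups systematically have higher losses, they are excluded for longer, which is what the fairness-aware ranking is designed to counteract.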
Deep neural networks are used for a wide range of regression problems. However, there exists a significant gap in accuracy between specialized approaches and generic direct regression in which a network is trained by minimizing the squared or absolute error of output labels. Prior work has shown that solving a regression problem with a set of binary classifiers can improve accuracy by utilizing well-studied binary classification algorithms. We introduce binary-encoded labels (BEL), which generalizes the application of binary classification to regression by providing a framework for considering arbitrary multi-bit values when encoding target values. We identify desirable properties of suitable encoding and decoding functions used for the conversion between real-valued and binary-encoded labels based on theoretical and empirical study. These properties highlight a tradeoff between classification error probability and error-correction capabilities of label encodings. BEL can be combined with off-the-shelf task-specific feature extractors and trained end-to-end. We propose a series of sample encoding, decoding, and training loss functions for BEL and demonstrate they result in lower error than direct regression and specialized approaches while being suitable for a diverse set of regression problems, network architectures, and evaluation metrics. BEL achieves state-of-the-art accuracies for several regression benchmarks. Code is available at https://github.com/ubc-aamodt-group/BEL_regression.
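One concrete instance of a multi-bit label encoding with the error-correction flavor the abstract describes is the Gray code, in which adjacent target values differ in exactly one bit, so a single misclassified bit decodes to a nearby value. The BEL paper studies a family of encodings; this specific choice is illustrative.

```python
def gray_encode(n):
    """Integer target -> 16-bit Gray code, MSB first."""
    g = n ^ (n >> 1)
    return [(g >> i) & 1 for i in range(15, -1, -1)]

def gray_decode(bits):
    """Invert gray_encode by prefix-XOR over the bits."""
    n = 0
    for b in bits:
        n = (n << 1) | (b ^ (n & 1))
    return n

codes = [gray_encode(v) for v in range(300)]     # e.g. quantized pose targets
```

Each bit position becomes one binary classifier; at inference the predicted bits are decoded back to a real value, trading classification error probability against the decoding robustness discussed in the abstract.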
Recently, deep learning based 3D face reconstruction methods have shown promising results in both quality and efficiency. However, training deep neural networks typically requires a large volume of data, whereas face images with ground-truth 3D face shapes are scarce. In this paper, we propose a novel deep 3D face reconstruction approach that 1) leverages a robust, hybrid loss function for weakly-supervised learning which takes into account both low-level and perception-level information for supervision, and 2) performs multi-image face reconstruction by exploiting complementary information from different images for shape aggregation. Our method is fast, accurate, and robust to occlusion and large pose. We provide comprehensive experiments on three datasets, systematically comparing our method with fifteen recent methods and demonstrating its state-of-the-art performance. Code available at https://github.com/Microsoft/Deep3DFaceReconstruction
Face animation, one of the hottest topics in computer vision, has achieved promising performance with the help of generative models. However, it remains a critical challenge to generate identity-preserving and photo-realistic images due to the sophisticated motion deformation and complex facial detail modeling. To address these problems, we propose a Face Neural Volume Rendering (FNeVR) network to fully explore the potential of 2D motion warping and 3D volume rendering in a unified framework. In FNeVR, we design a 3D Face Volume Rendering (FVR) module to enhance the facial details for image rendering. Specifically, we first extract 3D information with a well-designed architecture, and then introduce an orthogonal adaptive ray-sampling module for efficient rendering. We also design a lightweight pose editor, enabling FNeVR to edit the facial pose in a simple and effective way. Extensive experiments show that our FNeVR obtains the best overall quality and performance on widely used talking-head benchmarks.
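The volume rendering that FNeVR builds on composites color along each ray from densities and colors via the standard emission-absorption model. The sketch below shows that generic compositing step only, with a synthetic one-surface ray; it is not the paper's FVR module or its adaptive ray sampling.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Emission-absorption rendering along one ray:
    alpha_i = 1 - exp(-sigma_i * delta_i); weight_i = T_i * alpha_i with
    transmittance T_i = prod_{j<i} (1 - alpha_j); C = sum_i weight_i * c_i."""
    alphas = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return weights @ colors, weights

sigma = np.array([0.0, 50.0, 0.0])                   # one opaque sample mid-ray
rgb = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
color, w = composite_ray(sigma, rgb, deltas=np.ones(3))
```

With a single opaque sample the ray color collapses to that sample's color, which illustrates how density fields concentrate rendering weight on the face surface and recover fine facial detail.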
Fine-grained semantic segmentation of a person's face and head, including facial parts and head components, has progressed a great deal in recent years. However, it remains a challenging task, where ambiguous occlusions and large pose variations are particularly difficult to handle. To overcome these difficulties, we propose a novel framework termed Mask-FPAN. It uses a de-occlusion module that learns to parse occluded faces in a semi-supervised way. In particular, face landmark localization, face occlusion estimations, and detected head poses are taken into account. A 3D morphable face model combined with the UV GAN improves the robustness of 2D face parsing. In addition, we introduce two new datasets named FaceOccMask-HQ and CelebAMaskOcc-HQ for face parsing work. The proposed Mask-FPAN framework addresses the face parsing problem in the wild and shows significant performance improvements, with mIoU increasing from 0.7353 to 0.9013 compared to the state-of-the-art on challenging face datasets.
Most recent head pose estimation (HPE) methods are dominated by the Euler angle representation. To avoid its inherent ambiguity problem of rotation labels, alternative quaternion-based and vector-based representations are introduced. However, they both are not visually intuitive, and often derived from equivocal Euler angle labels. In this paper, we present a novel single-stage keypoint-based method via an {\it intuitive} and {\it unconstrained} 2D cube representation for joint head detection and pose estimation. The 2D cube is an orthogonal projection of the 3D regular hexahedron label roughly surrounding one head, and itself contains the head location. It can reflect the head orientation straightforwardly and unambiguously in any rotation angle. Unlike the general 6-DoF object pose estimation, our 2D cube ignores the 3-DoF of head size but retains the 3-DoF of head pose. Based on the prior of equal side length, we can effortlessly obtain the closed-form solution of Euler angles from predicted 2D head cube instead of applying the error-prone PnP algorithm. In experiments, our proposed method achieves comparable results with other representative methods on the public AFLW2000 and BIWI datasets. Besides, a novel test on the CMU panoptic dataset shows that our method can be seamlessly adapted to the unconstrained full-view HPE task without modification.
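The 2D cube label above is the orthogonal projection of a rotated hexahedron surrounding the head. The sketch below projects the 8 cube vertices for a given rotation matrix; the cube size, centering, and the plain drop-the-depth projection are illustrative assumptions, and the closed-form Euler recovery from the predicted cube is not reproduced.

```python
import numpy as np

def project_head_cube(R, center=(0.0, 0.0), half_side=1.0):
    """Orthographic projection of the 8 vertices of a head-surrounding cube
    rotated by R; returns the 2D cube label as an (8, 2) array."""
    corners = half_side * np.array([[x, y, z] for x in (-1, 1)
                                    for y in (-1, 1) for z in (-1, 1)], float)
    rotated = corners @ R.T            # rotate the cube with the head
    return rotated[:, :2] + np.asarray(center)   # drop depth: orthographic

cube2d = project_head_cube(np.eye(3))  # frontal head: axis-aligned square pairs
```

Because the projection is linear and the side lengths are equal by construction, the inverse mapping from a predicted 2D cube back to the rotation admits the closed-form solution the abstract contrasts with iterative PnP.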
Estimating 6D poses of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using a disentangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.
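The disentangled relative pose update described above separates the orientation change from the translation change. The sketch below shows one simplified parameterization of such an update, in which rotation is refined multiplicatively and translation via an image-plane shift plus a depth scale; it is an assumption for illustration, not DeepIM's exact network output.

```python
import numpy as np

def apply_relative_pose(R, t, dR, dt_img, dz):
    """Refine pose (R, t) with a disentangled relative transform:
    rotation updated by dR; (x, y) shifted proportionally to depth;
    depth updated multiplicatively by dz."""
    R_new = dR @ R
    t_new = np.array([t[0] + dt_img[0] * t[2],
                      t[1] + dt_img[1] * t[2],
                      t[2] * dz])
    return R_new, t_new

R0, t0 = np.eye(3), np.array([0.1, -0.05, 1.0])       # hypothetical initial pose
R1, t1 = apply_relative_pose(R0, t0, np.eye(3), np.zeros(2), 1.0)
```

Applied in a loop — render at the current pose, compare to the observed image, predict a small correction, update — this kind of step converges toward the observed pose, which is the iterative matching procedure the abstract describes.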