Full-body reconstruction is a fundamental but challenging task. Owing to the lack of annotated data, the performance of existing methods is largely limited. In this paper, we propose a novel method named Full-body Reconstruction from Part Experts (FuRPE) to tackle this issue. In FuRPE, the network is trained using pseudo labels and features generated from part experts. A simple yet effective pseudo ground-truth selection scheme is proposed to extract high-quality pseudo labels. In this way, a large number of existing human body reconstruction datasets can be leveraged to contribute to model training. In addition, an exponential moving average training strategy is introduced to train the network in a self-supervised manner, further boosting the performance of the model. Extensive experiments on several widely used datasets demonstrate the effectiveness of our method over the baseline, and our method achieves state-of-the-art performance. Code will be made publicly available for further research.
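A minimal sketch of the exponential-moving-average (EMA) training idea named above, paired with a pseudo-label selection step; the decay value, confidence threshold, and the `teacher`/`student` interfaces are illustrative assumptions rather than the paper's actual implementation.

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, decay: float = 0.999):
    """Update teacher weights as an exponential moving average of the student weights."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

def train_step(teacher, student, optimizer, images, confidence, conf_thresh=0.7):
    """Hypothetical self-supervised step: the teacher produces pseudo labels,
    only confident ones are kept, and the student fits them."""
    with torch.no_grad():
        pseudo_labels = teacher(images)          # teacher / part-expert predictions
    keep = confidence > conf_thresh               # simple pseudo ground-truth selection
    if not keep.any():
        return 0.0
    preds = student(images)
    loss = torch.nn.functional.l1_loss(preds[keep], pseudo_labels[keep])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)                  # teacher tracks the student slowly
    return loss.item()
```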
Figure 1: Given challenging in-the-wild videos, a recent state-of-the-art video pose-estimation approach [31] (top) fails to produce accurate 3D body poses. To address this, we exploit a large-scale motion-capture dataset to train a motion discriminator using an adversarial approach. Our model (VIBE) (bottom) is able to produce realistic and accurate pose and shape, outperforming previous work on standard benchmarks.
Regression-based methods can estimate body, hand, and even full-body models from monocular images by directly mapping raw pixels to model parameters in a feed-forward manner. However, minor deviations in the parameters may lead to noticeable misalignment between the estimated meshes and the input images, especially in the context of full-body mesh recovery. To address this issue, we propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop in our regression network for well-aligned human mesh recovery, and extend it to PyMAF-X for the recovery of expressive full-body models. The core idea of PyMAF is to leverage a feature pyramid and explicitly rectify the predicted parameters according to the mesh-image alignment status. Specifically, given the currently predicted parameters, mesh-aligned evidence is extracted from finer-resolution features accordingly and fed back for parameter rectification. To enhance the alignment perception, auxiliary dense supervision is employed to provide mesh-image correspondence guidance, while spatial alignment attention is introduced to make our network aware of the global context. When extending PyMAF for full-body mesh recovery, an adaptive integration strategy is proposed in PyMAF-X to adjust the elbow-twist rotations, which produces natural wrist poses while retaining the well-aligned performance of the part-specific estimations. The efficacy of our approach is validated on several benchmark datasets for body-only and full-body mesh recovery, where PyMAF and PyMAF-X effectively improve mesh-image alignment and achieve new state-of-the-art results. The project page with code and video results can be found at https://www.liuyebin.com/pymaf-x.
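A hedged sketch of the mesh-alignment feedback idea: sample features at the projected vertex locations of the current estimate and regress a parameter correction, coarse to fine. The `project_fn`, `regressors`, and pyramid inputs are hypothetical placeholders, not the paper's API.

```python
import torch
import torch.nn.functional as F

def mesh_aligned_features(feat_map, verts_2d):
    """Sample per-vertex features at projected vertex locations.

    feat_map : (B, C, H, W) feature map from one pyramid level
    verts_2d : (B, N, 2) vertex projections, normalized to [-1, 1]
    returns  : (B, N * C) flattened mesh-aligned evidence
    """
    grid = verts_2d.unsqueeze(2)                                     # (B, N, 1, 2)
    sampled = F.grid_sample(feat_map, grid, align_corners=False)     # (B, C, N, 1)
    return sampled.squeeze(-1).transpose(1, 2).reshape(feat_map.size(0), -1)

def feedback_loop(pyramid, params, project_fn, regressors):
    """One pass of alignment feedback: coarse-to-fine residual parameter updates."""
    for feat_map, regressor in zip(pyramid, regressors):             # coarse -> fine levels
        verts_2d = project_fn(params)                                # project current mesh estimate
        evidence = mesh_aligned_features(feat_map, verts_2d)
        params = params + regressor(torch.cat([evidence, params], dim=1))
    return params
```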
A key challenge in the task of human pose and shape estimation is occlusion, including self-occlusion, object-human occlusion, and inter-person occlusion. The lack of diverse and accurate pose and shape training data becomes a major bottleneck, especially for in-the-wild scenes with occlusion. In this paper, we focus on estimating human pose and shape under inter-person occlusion, while also handling object-human occlusion and self-occlusion. We propose a novel framework that synthesizes occlusion-aware silhouette and 2D keypoint data and directly regresses the SMPL pose and shape parameters. A neural 3D mesh renderer is exploited to enable silhouette supervision, which contributes to large improvements in shape estimation. In addition, keypoint- and silhouette-driven training data in panoramic viewpoints are synthesized to compensate for the lack of viewpoint diversity in existing datasets. Experimental results show that we achieve state-of-the-art pose estimation accuracy on the 3DPW and 3DPW-Crowd datasets. The proposed method also significantly outperforms the rank-1 method in terms of shape estimation, and achieves top performance on SSP-3D in terms of shape prediction accuracy.
We present a new method, called MEsh TRansfOrmer (METRO), to reconstruct 3D human pose and mesh vertices from a single image. Our method uses a transformer encoder to jointly model vertex-vertex and vertex-joint interactions, and outputs 3D joint coordinates and mesh vertices simultaneously. Compared to existing techniques that regress pose and shape parameters, METRO does not rely on any parametric mesh models like SMPL, thus it can be easily extended to other objects such as hands. We further relax the mesh topology and allow the transformer self-attention mechanism to freely attend between any two vertices, making it possible to learn non-local relationships among mesh vertices and joints. With the proposed masked vertex modeling, our method is more robust and effective in handling challenging situations like partial occlusions. METRO generates new state-of-the-art results for human mesh reconstruction on the public Human3.6M and 3DPW datasets. Moreover, we demonstrate the generalizability of METRO to 3D hand reconstruction in the wild, outperforming existing state-of-the-art methods on the FreiHAND dataset. Code and pre-trained models are available at https://github.com/microsoft/MeshTransformer.
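A hedged sketch of the masked vertex modeling idea: form one query per joint and per (coarse) mesh vertex, randomly zero out some query features during training, and let a transformer encoder reconstruct 3D coordinates for all of them. The dimensions and masking ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskedVertexTransformer(nn.Module):
    """Jointly regress 3D joints and vertices with a transformer encoder plus
    masked query modeling (a sketch, not the released METRO architecture)."""
    def __init__(self, num_joints=14, num_verts=431, img_dim=2048, hidden=512, mask_ratio=0.3):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.num_queries = num_joints + num_verts
        self.input_proj = nn.Linear(img_dim + 3, hidden)   # image feature + template 3D position
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(hidden, 3)                    # one 3D coordinate per query

    def forward(self, img_feat, template_xyz):
        # img_feat: (B, img_dim) global image feature; template_xyz: (B, Q, 3) template positions
        tokens = torch.cat([img_feat.unsqueeze(1).expand(-1, self.num_queries, -1),
                            template_xyz], dim=-1)
        tokens = self.input_proj(tokens)
        if self.training:                                    # masked vertex modeling
            mask = torch.rand(tokens.shape[:2], device=tokens.device) < self.mask_ratio
            tokens = tokens.masked_fill(mask.unsqueeze(-1), 0.0)
        return self.head(self.encoder(tokens))               # (B, Q, 3) joints + vertices
```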
3D human pose and shape recovery from a monocular RGB image is a challenging task. Existing learning-based methods rely heavily on weak supervision signals, e.g., 2D and 3D joint locations, due to the lack of paired in-the-wild 3D supervision. However, because of the 2D-3D ambiguities inherent in such weak labels, the network easily gets stuck in local optima when trained with them. In this paper, we reduce the ambiguity by optimizing multiple initializations. Specifically, we propose a three-stage framework named Multi-Initialization Optimization Network (MION). In the first stage, we strategically select different coarse 3D reconstruction candidates that are compatible with the 2D keypoints of the input sample. Each coarse reconstruction can be regarded as an initialization leading to one optimization branch. In the second stage, we design a Mesh Refinement Transformer (MRT) to refine each coarse reconstruction separately via a self-attention mechanism. Finally, a Consistency Estimation Network (CEN) is proposed to find the best result among the candidates by evaluating whether the visual evidence in the RGB image matches a given 3D reconstruction. Experiments show that our Multi-Initialization Optimization Network outperforms existing 3D-mesh-based methods on multiple public benchmarks.
Reconstructing 3D hand meshes from monocular RGB images has attracted increasing attention due to its enormous potential applications in the AR/VR domain. Most state-of-the-art methods attempt to solve this task in an anonymous manner. Specifically, the identity of the subject is ignored, even though it is practically available in real applications where the user does not change across consecutive recording sessions. In this paper, we propose an identity-aware hand mesh estimation model that can incorporate the identity information represented by the subject's intrinsic shape parameters. We demonstrate the importance of identity information by comparing the proposed identity-aware model with a baseline that treats subjects anonymously. Furthermore, to handle the use case of unseen test subjects, we propose a novel personalization pipeline to calibrate the intrinsic shape parameters using only a few unlabeled RGB images of the subject. Experiments on two large-scale public datasets validate the state-of-the-art performance of our proposed method.
In this paper, we consider the challenging task of simultaneously locating and recovering multiple hands from a single 2D image. Previous studies either focus on single-hand reconstruction or solve this problem in a multi-stage manner. Moreover, the conventional two-stage pipeline first detects hand regions and then estimates the 3D hand pose from each cropped patch. To reduce the computational redundancy in pre-processing and feature extraction, we propose a concise but efficient single-stage pipeline. Specifically, we design a multi-head auto-encoder structure for multi-hand reconstruction, where each head network shares the same feature map and outputs the hand center, pose, and texture, respectively. In addition, we adopt a weakly-supervised scheme to alleviate the burden of expensive 3D real-world data annotation. To this end, we propose a series of losses optimized by a stage-wise training scheme, where a multi-hand dataset with 2D annotations is generated from publicly available single-hand datasets. To further improve the accuracy of the weakly supervised model, we adopt several feature consistency constraints in both single- and multi-hand settings. Specifically, the keypoints of each hand estimated from local features should be consistent with the re-projected points predicted from global features (see the sketch below). Extensive experiments on public benchmarks including FreiHAND, HO3D, InterHand 2.6M, and RHD show that our method outperforms state-of-the-art model-based methods in both the weakly-supervised and fully-supervised settings. Code and models are available at \url{https://github.com/zijinxuxu/smhr}.
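A hedged illustration of the keypoint-consistency constraint mentioned above: joints regressed from global features are projected with the camera intrinsics and compared against the 2D keypoints predicted from local features. Shapes, the pinhole projection, and the L1 penalty are assumptions for illustration only.

```python
import torch

def keypoint_consistency_loss(local_kpts_2d, global_joints_3d, cam_K):
    """Consistency between locally predicted 2D keypoints and the reprojection
    of globally predicted 3D joints (a sketch of the constraint).

    local_kpts_2d    : (B, H, J, 2) 2D keypoints per detected hand from local features
    global_joints_3d : (B, H, J, 3) 3D joints per hand from global features (camera frame)
    cam_K            : (B, 3, 3) camera intrinsics
    """
    proj = torch.einsum('bij,bhkj->bhki', cam_K, global_joints_3d)    # pinhole projection
    proj_2d = proj[..., :2] / proj[..., 2:].clamp(min=1e-6)
    return torch.nn.functional.l1_loss(local_kpts_2d, proj_2d)
```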
We propose a novel optimization-based paradigm for fitting 3D human models to images and scans. In contrast to existing approaches that directly regress the parameters of a low-dimensional statistical body model (e.g., SMPL) from the input image, we train an ensemble of per-vertex neural field networks. The network predicts, in a distributed manner, per-vertex descent directions based on neural features extracted at the current vertex projections. At inference time, we employ this network in a gradient-descent-style optimization pipeline, dubbed LVD, run until convergence, which typically occurs within a fraction of a second even when all vertices are initialized to a single point. An exhaustive evaluation demonstrates that our approach is able to capture the underlying body of clothed people with very different body shapes, achieving a significant improvement over the state of the art. LVD is also applicable to 3D model fitting of humans and hands, where we show significant improvement over SOTA with a much simpler and faster method.
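A hedged sketch of the iterative vertex-descent loop described above; `vertex_field`, the step size, iteration budget, and convergence test are illustrative assumptions rather than the paper's implementation.

```python
import torch

@torch.no_grad()
def lvd_fit(vertex_field, image_feats, num_verts=6890, steps=50, step_size=1.0):
    """Iterative per-vertex descent in the spirit of LVD.

    `vertex_field` is assumed to map (current vertex positions, image features)
    to a per-vertex displacement toward the body surface.
    """
    batch = image_feats.size(0)
    # All vertices may be initialized to a single point; convergence is still fast.
    verts = torch.zeros(batch, num_verts, 3, device=image_feats.device)
    for _ in range(steps):
        delta = vertex_field(verts, image_feats)    # predicted descent direction per vertex
        verts = verts + step_size * delta
        if delta.norm(dim=-1).mean() < 1e-4:        # simple convergence check
            break
    return verts
```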
From an image of a person in action, we can easily guess the 3D motion of the person in the immediate past and future. This is because we have a mental model of 3D human dynamics that we have acquired from observing visual sequences of humans in motion. We present a framework that can similarly learn a representation of 3D dynamics of humans from video via a simple but effective temporal encoding of image features. At test time, from video, the learned temporal representation gives rise to smooth 3D mesh predictions. From a single image, our model can recover the current 3D mesh as well as its 3D past and future motion. Our approach is designed so it can learn from videos with 2D pose annotations in a semi-supervised manner. Though annotated data is always limited, there are millions of videos uploaded daily on the Internet. In this work, we harvest this Internet-scale source of unlabeled data by training our model on unlabeled video with pseudo-ground truth 2D pose obtained from an off-the-shelf 2D pose detector. Our experiments show that adding more videos with pseudo-ground truth 2D pose monotonically improves 3D prediction performance. We evaluate our model, Human Mesh and Motion Recovery (HMMR), on the recent challenging dataset of 3D Poses in the Wild and obtain state-of-the-art performance on the 3D prediction task without any fine-tuning. The project website with video, code, and data can be found at https://akanazawa.github.io/human_dynamics/.
This work addresses the problem of estimating the full body 3D human pose and shape from a single color image. This is a task where iterative optimization-based solutions have typically prevailed, while Convolutional Networks (ConvNets) have suffered because of the lack of training data and their low resolution 3D predictions. Our work aims to bridge this gap and proposes an efficient and effective direct prediction method based on ConvNets. Central to our approach is the incorporation of a parametric statistical body shape model (SMPL) within our end-to-end framework. This allows us to get very detailed 3D mesh results, while requiring estimation of only a small number of parameters, making it friendly for direct network prediction. Interestingly, we demonstrate that these parameters can be predicted reliably from 2D keypoints and masks alone. These are typical outputs of generic 2D human analysis ConvNets, allowing us to relax the massive requirement that images with 3D shape ground truth are available for training. Simultaneously, by maintaining differentiability, at training time we generate the 3D mesh from the estimated parameters and optimize explicitly for the surface using a 3D per-vertex loss. Finally, a differentiable renderer is employed to project the 3D mesh to the image, which enables further refinement of the network by optimizing for the consistency of the projection with 2D annotations (i.e., 2D keypoints or masks). The proposed approach outperforms previous baselines on this task and offers an attractive solution for direct prediction of 3D shape from a single color image.
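A hedged sketch of the two supervision terms described above: an explicit per-vertex loss on the generated mesh plus a 2D reprojection consistency term. The mask rendering term is omitted, and the pinhole projection and loss weights are illustrative assumptions.

```python
import torch

def mesh_losses(pred_verts, gt_verts, pred_joints_3d, gt_kpts_2d, cam_K, w_reproj=1.0):
    """Per-vertex surface loss plus 2D keypoint reprojection consistency (sketch).

    pred_verts, gt_verts : (B, V, 3) predicted and ground-truth mesh vertices
    pred_joints_3d       : (B, J, 3) 3D joints derived from the predicted mesh
    gt_kpts_2d           : (B, J, 2) annotated 2D keypoints
    cam_K                : (B, 3, 3) camera intrinsics
    """
    v2v = torch.nn.functional.l1_loss(pred_verts, gt_verts)           # 3D per-vertex loss
    proj = torch.einsum('bij,bkj->bki', cam_K, pred_joints_3d)         # project 3D joints
    proj_2d = proj[..., :2] / proj[..., 2:].clamp(min=1e-6)
    reproj = torch.nn.functional.l1_loss(proj_2d, gt_kpts_2d)          # 2D consistency
    return v2v + w_reproj * reproj
```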
Reconstructing multi-human body meshes from a monocular image is an important but challenging computer vision problem. In addition to the individual body mesh models, we need to estimate the relative 3D positions among subjects to generate a coherent representation. In this work, through a single graph neural network, named MUG (Multi-hUman Graph network), we construct coherent multi-human meshes using only multi-human 2D poses as input. Existing methods adopt a detection-style pipeline (i.e., extract image features, then locate human instances and recover body meshes from them) and suffer from the significant domain gap between lab-collected training datasets and in-the-wild testing datasets; in contrast, our method benefits from the 2D poses, which have relatively consistent geometric properties across datasets. Our method works as follows. First, to model the multi-human environment, it processes the multi-human 2D poses and builds a novel heterogeneous graph, where nodes from different people and within one person are connected to capture inter-human interactions and encode the body geometry (i.e., skeleton and mesh structure). Second, it employs a dual-branch graph neural network structure -- one branch for predicting inter-human depth relations and the other for predicting root-joint-relative mesh coordinates. Finally, the entire multi-human 3D mesh is constructed by combining the outputs of the two branches. Extensive experiments show that MUG outperforms previous multi-human mesh estimation methods on standard 3D human benchmarks -- Panoptic, MuPoTS-3D, and 3DPW.
To facilitate the analysis of human actions, interactions and emotions, we compute a 3D model of human body pose, hand pose, and facial expression from a single monocular image. To achieve this, we use thousands of 3D scans to train a new, unified, 3D model of the human body, SMPL-X, that extends SMPL with fully articulated hands and an expressive face. Learning to regress the parameters of SMPL-X directly from images is challenging without paired images and 3D ground truth. Consequently, we follow the approach of SMPLify, which estimates 2D features and then optimizes model parameters to fit the features. We improve on SMPLify in several significant ways: (1) we detect 2D features corresponding to the face, hands, and feet and fit the full SMPL-X model to these; (2) we train a new neural network pose prior using a large MoCap dataset; (3) we define a new interpenetration penalty that is both fast and accurate; (4) we automatically detect gender and the appropriate body models (male, female, or neutral); (5) our PyTorch implementation achieves a speedup of more than 8× over Chumpy. We use the new method, SMPLify-X, to fit SMPL-X to both controlled images and images in the wild. We evaluate 3D accuracy on a new curated dataset comprising 100 images with pseudo ground-truth. This is a step towards automatic expressive human capture from monocular RGB data. The models, code, and data are available for research purposes at https://smpl-x.is.tue.mpg.de.
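A hedged sketch of the optimization-based fitting idea (SMPLify-style): optimize pose and shape so that projected model joints match the detected 2D features under a pose prior and a shape regularizer. The interpenetration penalty and the learned prior details are omitted; `model`, `project_fn`, `pose_prior`, and the weights are hypothetical placeholders.

```python
import torch

def fit_body_model(model, pose, betas, gt_kpts_2d, conf, project_fn, pose_prior,
                   iters=100, lr=0.01, w_prior=0.01, w_shape=0.001):
    """Simplified fitting loop: minimize a confidence-weighted 2D reprojection
    error plus prior terms over pose and shape parameters."""
    pose = pose.clone().requires_grad_(True)
    betas = betas.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose, betas], lr=lr)
    for _ in range(iters):
        joints_3d = model(pose, betas)                     # body-model forward pass
        joints_2d = project_fn(joints_3d)                  # camera projection
        data_term = (conf * (joints_2d - gt_kpts_2d).pow(2).sum(-1)).mean()
        loss = data_term + w_prior * pose_prior(pose) + w_shape * betas.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pose.detach(), betas.detach()
```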
Training state-of-the-art models for human pose estimation in videos requires datasets with annotations that are very hard to obtain. Although transformers have recently been used for body pose sequence modeling, related methods rely on pseudo ground truth to augment the currently limited training data available for learning such models. In this paper, we introduce PoseBERT, a transformer module that is fully trained on 3D motion capture (MoCap) data via masked modeling. It is simple, generic, and versatile, as it can be plugged on top of any image-based model to exploit temporal information in a video-based model. We showcase variants of PoseBERT with different inputs, ranging from 3D skeleton keypoints to rotations of a 3D parametric model of the full body or of the hands alone (MANO). Since PoseBERT training is task agnostic, the model can be applied to several tasks such as pose refinement, future pose prediction, or motion completion. Our experimental results show that adding PoseBERT on top of various state-of-the-art pose estimation methods consistently improves their performance, while its low computational cost allows us to use it in a real-time demo for animating a robotic hand via a camera. Test code and models are available at https://github.com/naver/posebert.
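A hedged sketch of masked sequence modeling over per-frame pose tokens, the training scheme named above; layer sizes, the masking ratio, and the 72-dimensional pose input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskedPoseSequenceModel(nn.Module):
    """Transformer encoder trained to reconstruct a full pose sequence from a
    randomly masked one (a sketch of the masked-modeling idea)."""
    def __init__(self, pose_dim=72, hidden=256, layers=4, max_len=64, mask_ratio=0.15):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(pose_dim, hidden)
        self.pos = nn.Parameter(torch.zeros(1, max_len, hidden))    # learned positions
        enc = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.decode = nn.Linear(hidden, pose_dim)

    def forward(self, poses):
        # poses: (B, T, pose_dim) noisy or partial per-frame pose parameters
        x = self.embed(poses) + self.pos[:, :poses.size(1)]
        if self.training:
            mask = torch.rand(x.shape[:2], device=x.device) < self.mask_ratio
            x = x.masked_fill(mask.unsqueeze(-1), 0.0)               # mask random frames
        return self.decode(self.encoder(x))                          # reconstructed sequence
```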
Top-down methods dominate the field of 3D human pose and shape estimation, because they decouple the task from human detection and allow researchers to focus on the core problem. However, cropping, their first step, discards the location information from the very beginning, making it impossible to accurately predict the global rotation in the original camera coordinate system. To address this problem, we propose to Carry Location Information in Full Frames (CLIFF) into this task. Specifically, we feed more holistic features to CLIFF by concatenating the cropped-image features with their bounding-box information. We compute the 2D reprojection loss with a broader view of the full frame, using a projection process similar to the one that projected the person into the image. Fed and supervised with global-location-aware information, CLIFF directly predicts the global rotation along with more accurate articulated poses. In addition, we propose a pseudo-ground-truth annotator based on CLIFF, which provides high-quality 3D annotations for in-the-wild 2D datasets and offers crucial full supervision for regression-based methods. Extensive experiments on popular benchmarks show that CLIFF outperforms prior art by a significant margin and reaches first place on the AGORA leaderboard (SMPL-Algorithms track). Code and data are available at https://github.com/huawei-noah/noah-research/tree/master/cliff.
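A hedged sketch of feeding bounding-box information alongside the cropped-image features; the exact box encoding (normalized center offset and scale relative to the full frame) and the regressor sizes are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class BoxAwareRegressor(nn.Module):
    """Regress body-model parameters from a crop feature concatenated with a
    compact encoding of the crop's location in the full frame (sketch)."""
    def __init__(self, feat_dim=2048, n_params=85):
        super().__init__()
        self.regressor = nn.Sequential(
            nn.Linear(feat_dim + 3, 1024), nn.ReLU(), nn.Linear(1024, n_params))

    def forward(self, crop_feat, bbox_center, bbox_scale, focal_length, img_center):
        # crop_feat: (B, feat_dim); bbox_center, img_center: (B, 2);
        # bbox_scale, focal_length: (B,)
        bbox_info = torch.cat(
            [(bbox_center - img_center) / focal_length.unsqueeze(-1),   # where the crop sits
             bbox_scale.unsqueeze(-1) / focal_length.unsqueeze(-1)],    # how large it is
            dim=-1)
        return self.regressor(torch.cat([crop_feat, bbox_info], dim=-1))
```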
Fully-supervised human mesh recovery methods are data-hungry and generalize poorly due to the limited availability and diversity of 3D-annotated benchmark datasets. Recent progress has been made with synthetic data-driven training paradigms, where the model is trained on synthetically paired 2D representations (e.g., 2D keypoints and segmentation masks) and 3D meshes. However, synthetic dense correspondence maps (i.e., IUV) are rarely explored, since the domain gap between synthetic training data and real testing data is hard to address for 2D dense representations. To alleviate this domain gap on IUV, we propose cross-representation alignment, which exploits the complementary information from the robust but sparse representation (2D keypoints). Specifically, the alignment errors between the initial mesh estimate and the two 2D representations are forwarded to the regressor and dynamically corrected in the subsequent mesh regression. This adaptive cross-representation alignment explicitly learns from the deviations and captures complementary information: robustness from the sparse representation and richness from the dense representation. We conduct extensive experiments on multiple standard benchmark datasets and demonstrate competitive results, helping reduce the annotation effort needed to produce state-of-the-art models for human mesh estimation.
We present an end-to-end unified 3D mesh recovery method for humans and quadruped animals trained in a weakly-supervised manner. Unlike recent work that focuses on a single target class only, we aim to recover 3D meshes for a broader class of objects with a single multi-task model. However, no existing dataset can directly enable multi-task learning, because there are no images with both human and animal annotations for a single subject; for example, a human image carries no animal pose annotations. We therefore have to devise a new way to exploit heterogeneous datasets. To make the otherwise unstable disjoint multi-task learning jointly trainable, we propose to exploit the morphological similarity between humans and animals, motivated by animal exercises in which humans imitate animal poses. We realize this morphological similarity through semantic correspondences, called sub-keypoints, which enable joint training of the human and animal mesh regression branches. In addition, we propose class-sensitive regularization methods to avoid a mean-shape bias and to improve the distinctiveness across multiple classes. Our method performs favorably against recent uni-modal models on various human and animal datasets while being much more compact.
Although the performance of 3D human pose and shape estimation methods has improved significantly in recent years, existing methods typically estimate 3D poses defined in camera or human-centered coordinate systems. This makes it difficult to estimate a person's pure pose and motion in the world coordinate system from a video captured with a moving camera. To address this issue, this paper presents a camera-motion-agnostic approach for predicting 3D human poses and meshes defined in the world coordinate system. The core idea of the proposed approach is to estimate the difference between two adjacent global poses (i.e., the global motion), which is invariant to the choice of coordinate system, instead of the global pose coupled to the camera motion. To this end, we propose a network based on bidirectional gated recurrent units (GRUs), called the global motion regressor (GMR), that predicts the global motion sequence from a local pose sequence consisting of relative joint rotations. We use 3DPW and synthetic datasets, constructed in a moving-camera environment, for evaluation. We conduct extensive experiments and empirically demonstrate the effectiveness of the proposed method. The code and datasets are available at https://github.com/seonghyunkim1212/gmr
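A hedged sketch of a bidirectional-GRU regressor mapping a local (joint-relative) pose sequence to a per-frame global motion; the layer sizes and the 6D output parameterization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GlobalMotionRegressor(nn.Module):
    """Predict a per-frame global motion (pose difference between adjacent frames)
    from the local pose sequence with a bidirectional GRU (sketch)."""
    def __init__(self, in_dim=72, hidden=512, out_dim=6):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, out_dim)    # e.g., a 6D rotation difference per frame

    def forward(self, local_pose_seq):
        # local_pose_seq: (B, T, in_dim) relative joint rotations per frame
        feats, _ = self.gru(local_pose_seq)           # (B, T, 2 * hidden)
        return self.head(feats)                       # (B, T, out_dim) global motion sequence
```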
This paper addresses the problem of 3D human pose and shape estimation from a single image. Previous approaches consider a parametric model of the human body, SMPL, and attempt to regress the model parameters that give rise to a mesh consistent with image evidence. This parameter regression has been a very challenging task, with model-based approaches underperforming compared to nonparametric solutions in terms of pose estimation. In our work, we propose to relax this heavy reliance on the model's parameter space. We still retain the topology of the SMPL template mesh, but instead of predicting model parameters, we directly regress the 3D location of the mesh vertices. This is a heavy task for a typical network, but our key insight is that the regression becomes significantly easier using a Graph-CNN. This architecture allows us to explicitly encode the template mesh structure within the network and leverage the spatial locality the mesh has to offer. Image-based features are attached to the mesh vertices and the Graph-CNN is responsible for processing them on the mesh structure, while the regression target for each vertex is its 3D location. Having recovered the complete 3D geometry of the mesh, if we still require a specific model parametrization, this can be reliably regressed from the vertex locations. We demonstrate the flexibility and the effectiveness of our proposed graph-based mesh regression by attaching different types of features to the mesh vertices. In all cases, we outperform the comparable baselines relying on model parameter regression, while we also achieve state-of-the-art results among model-based pose estimation approaches.
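A hedged sketch of the graph-based vertex regression idea: attach an image feature to every template vertex, propagate it over the template-mesh adjacency with graph convolutions, and regress each vertex's 3D location. The plain adjacency-based convolution and the layer sizes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """A plain graph convolution over a row-normalized mesh adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (B, V, in_dim) per-vertex features; adj: (V, V) normalized adjacency
        return torch.relu(self.lin(torch.einsum('vw,bwc->bvc', adj, x)))

class GraphMeshRegressor(nn.Module):
    """Regress per-vertex 3D locations from image features on the mesh graph (sketch)."""
    def __init__(self, img_dim=2048, hidden=128):
        super().__init__()
        self.layers = nn.ModuleList([GraphConv(img_dim + 3, hidden),
                                     GraphConv(hidden, hidden)])
        self.out = nn.Linear(hidden, 3)

    def forward(self, img_feat, template_verts, adj):
        # img_feat: (B, img_dim); template_verts: (V, 3); adj: (V, V)
        B, V = img_feat.size(0), template_verts.size(0)
        x = torch.cat([img_feat.unsqueeze(1).expand(B, V, -1),
                       template_verts.unsqueeze(0).expand(B, V, -1)], dim=-1)
        for layer in self.layers:
            x = layer(x, adj)
        return self.out(x)     # (B, V, 3) regressed vertex locations
```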
Occlusion poses a great challenge to monocular multi-person 3D human pose estimation due to the large variability in the shape, appearance, and position of occluders. While existing methods try to handle occlusion with pose priors/constraints, data augmentation, or implicit reasoning, they still fail to generalize to unseen poses or occlusion cases and may make large mistakes when multiple people are present. Inspired by the remarkable ability of humans to infer occluded joints from visible cues, we develop a method that explicitly models this process, which significantly improves bottom-up multi-person pose estimation with or without occlusions. First, we split the task into two subtasks: visible keypoint detection and occluded keypoint reasoning, and propose a Deeply Supervised Encoder Distillation (DSED) network to solve the second one. To train our model, we propose a Skeleton-guided human Shape Fitting (SSF) approach to generate pseudo occlusion labels on existing datasets, enabling explicit occlusion reasoning. Experiments show that explicitly learning from occlusions improves human pose estimation. In addition, exploiting feature-level information of visible joints allows us to reason about occluded joints more accurately. Our method outperforms both state-of-the-art top-down and bottom-up methods on several benchmarks.