StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing. While studies on extending 2D StyleGAN to 3D faces have emerged, a corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing. In this paper, we study the challenging problem of 3D GAN inversion, where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures. The problem is ill-posed: innumerable compositions of shape and texture could be rendered to match the given image. Furthermore, with the limited capacity of a global latent code, 2D inversion methods cannot preserve faithful shape and texture at the same time when applied to 3D models. To solve this problem, we devise an effective self-training scheme to constrain the learning of inversion. The learning is done efficiently without any real-world 2D-3D training pairs, but with proxy samples generated from a 3D GAN. In addition, apart from a global latent code that captures the coarse shape and texture information, we augment the generation network with a local branch, where pixel-aligned features are added to faithfully reconstruct face details. We further consider a new pipeline to perform 3D view-consistent editing. Extensive experiments show that our method outperforms state-of-the-art inversion methods in both shape and texture reconstruction quality. Code and data will be released.
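As a hedged illustration of the local branch described above, the sketch below shows how pixel-aligned features can be attached to a volume-rendered generator: 3D sample points are projected into the input view, an encoder feature map is sampled at the projected locations, and the per-point features are concatenated with the global-latent-conditioned features before decoding. All names (pixel_aligned_features, global_branch, enc) are hypothetical; this is a generic mechanism, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def pixel_aligned_features(points_3d, feat_map, K, R, t):
    """Sample per-point features from a 2D feature map (hypothetical helper).

    points_3d: (B, N, 3) sample points along camera rays (world space)
    feat_map:  (B, C, H, W) encoder features of the input image
    K, R, t:   camera intrinsics (B, 3, 3), rotation (B, 3, 3), translation (B, 3)
    """
    # Project points into the input view: x_cam = R x + t, then apply intrinsics.
    cam = torch.einsum('bij,bnj->bni', R, points_3d) + t[:, None]
    uv = torch.einsum('bij,bnj->bni', K, cam)
    uv = uv[..., :2] / uv[..., 2:].clamp(min=1e-6)              # perspective divide

    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    H, W = feat_map.shape[-2:]
    grid = torch.stack([uv[..., 0] / (W - 1), uv[..., 1] / (H - 1)], dim=-1) * 2 - 1
    grid = grid.unsqueeze(2)                                     # (B, N, 1, 2)

    sampled = F.grid_sample(feat_map, grid, align_corners=True)  # (B, C, N, 1)
    return sampled.squeeze(-1).permute(0, 2, 1)                  # (B, N, C)

# Usage: concatenate with global-code-conditioned point features before the decoder, e.g.
# point_feat = torch.cat([global_branch(points_3d, w_latent),
#                         pixel_aligned_features(points_3d, enc(img), K, R, t)], dim=-1)
```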
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views while preserving specific details of the input image. High-fidelity 3D GAN inversion is inherently challenging due to the geometry-texture trade-off in 3D inversion, where overfitting to a single view input image often damages the estimated geometry during the latent optimization. To solve this challenge, we propose a novel pipeline that builds on the pseudo-multi-view estimation with visibility analysis. We keep the original textures for the visible parts and utilize generative priors for the occluded parts. Extensive experiments show that our approach achieves advantageous reconstruction and novel view synthesis quality over state-of-the-art methods, even for images with out-of-distribution textures. The proposed pipeline also enables image attribute editing with the inverted latent code and 3D-aware texture modification. Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
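The visible/occluded split above can be made concrete with a small sketch: surface points of the novel view are reprojected into the input view, a z-buffer test against the input-view depth decides visibility, and the original texture is kept where visible while the generative prior fills the rest. The helper below and its depth threshold are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def blend_with_visibility(novel_rgb, novel_pts, input_rgb, input_depth, K, R, t,
                          depth_eps=0.01):
    """Blend generator output with input-view texture (illustrative sketch).

    novel_rgb:   (B, 3, H, W) image rendered by the 3D GAN at a novel view
    novel_pts:   (B, H, W, 3) surface points of the novel view (from its depth)
    input_rgb:   (B, 3, H, W) original input image
    input_depth: (B, 1, H, W) depth rendered at the input view
    K, R, t:     input-view intrinsics/extrinsics
    """
    B, _, H, W = novel_rgb.shape
    pts = novel_pts.reshape(B, -1, 3)
    cam = torch.einsum('bij,bnj->bni', R, pts) + t[:, None]      # to input camera
    uv = torch.einsum('bij,bnj->bni', K, cam)
    z = uv[..., 2:].clamp(min=1e-6)
    uv = uv[..., :2] / z

    grid = torch.stack([uv[..., 0] / (W - 1), uv[..., 1] / (H - 1)], -1) * 2 - 1
    grid = grid.reshape(B, H, W, 2)

    # A point is "visible" in the input view if its depth matches the input z-buffer.
    sampled_depth = F.grid_sample(input_depth, grid, align_corners=True)
    visible = (z.reshape(B, 1, H, W) - sampled_depth).abs() < depth_eps
    in_frame = (grid.abs() <= 1).all(-1, keepdim=True).permute(0, 3, 1, 2)
    mask = (visible & in_frame).float()

    # Keep original texture where visible, fall back to the generative prior elsewhere.
    warped_rgb = F.grid_sample(input_rgb, grid, align_corners=True)
    return mask * warped_rgb + (1 - mask) * novel_rgb
```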
The neural radiance field (NeRF) has shown promising results in preserving the fine details of objects and scenes. However, unlike mesh-based representations, it remains an open problem to build dense correspondences across different NeRFs of the same category, which is essential in many downstream tasks. The main difficulties of this problem lie in the implicit nature of NeRF and the lack of ground-truth correspondence annotations. In this paper, we show it is possible to bypass these challenges by leveraging the rich semantics and structural priors encapsulated in a pre-trained NeRF-based GAN. Specifically, we exploit such priors from three aspects, namely 1) a dual deformation field that takes latent codes as global structural indicators, 2) a learning objective that regards generator features as geometric-aware local descriptors, and 3) a source of infinite object-specific NeRF samples. Our experiments demonstrate that such priors lead to 3D dense correspondence that is accurate, smooth, and robust. We also show that established dense correspondence across NeRFs can effectively enable many NeRF-based downstream applications such as texture transfer.
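One way to read the "generator features as geometry-aware local descriptors" objective is nearest-neighbor matching in feature space: a surface point on one NeRF instance is matched to the point on another instance whose generator feature is closest. The brute-force matcher below only illustrates that idea (the paper instead learns a dual deformation field); the feature inputs are assumed to be pre-queried.

```python
import torch

def match_points(feat_a, feat_b, pts_a, pts_b):
    """Nearest-neighbor correspondence in generator-feature space (illustration).

    feat_a: (Na, C) generator features queried at candidate surface points of NeRF A
    feat_b: (Nb, C) features at candidate points of NeRF B
    pts_a:  (Na, 3), pts_b: (Nb, 3) the 3D locations of those points
    Returns, for every point of A, its matched 3D location on B.
    """
    d = torch.cdist(feat_a, feat_b)          # (Na, Nb) pairwise feature distances
    idx = d.argmin(dim=1)
    return pts_b[idx]
```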
We introduce a high-resolution, 3D-consistent image and shape generation technique which we call StyleSDF. Our method is trained on single-view RGB data only and stands on the shoulders of StyleGAN2 for image generation, while solving two main challenges in 3D-aware GANs: 1) high-resolution, view-consistent generation of RGB images, and 2) detailed 3D shape. We achieve this by merging an SDF-based 3D representation with a style-based 2D generator. Our 3D implicit network renders low-resolution feature maps, from which the style-based network generates view-consistent 1024x1024 images. Notably, the SDF-based 3D modeling defines detailed 3D surfaces, leading to consistent volume rendering. Our method shows higher-quality results in terms of both visual and geometric quality.
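A common way to plug an SDF into volume rendering, in the spirit of the method above, is to convert signed distance into density with a sharpness-controlled sigmoid before standard alpha compositing. The sketch below shows that conversion and a minimal compositing step; the exact transform and parameters used by StyleSDF may differ, so treat it as a generic illustration.

```python
import torch

def sdf_to_density(sdf, alpha):
    """Convert signed distance to volume density (generic formulation).

    Density rises smoothly as the SDF crosses zero; `alpha` controls how
    sharply the surface is localized (smaller alpha -> sharper surface).
    """
    return (1.0 / alpha) * torch.sigmoid(-sdf / alpha)

def composite(rgb, sdf, deltas, alpha=0.05):
    """Alpha-composite per-sample colors along each ray.

    rgb:    (B, R, S, 3) colors at S samples on R rays
    sdf:    (B, R, S)    signed distances at those samples
    deltas: (B, R, S)    distances between consecutive samples
    """
    sigma = sdf_to_density(sdf, alpha)
    a = 1.0 - torch.exp(-sigma * deltas)                       # per-sample opacity
    T = torch.cumprod(torch.cat([torch.ones_like(a[..., :1]),
                                 1.0 - a + 1e-10], dim=-1), dim=-1)[..., :-1]
    weights = a * T                                            # (B, R, S)
    return (weights.unsqueeze(-1) * rgb).sum(dim=-2)           # (B, R, 3)
```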
Previous portrait image generation methods roughly fall into two categories: 2D GANs and 3D-aware GANs. 2D GANs can generate high-fidelity portraits but with low view consistency. 3D-aware GAN methods can maintain view consistency, but their generated images are not locally editable. To overcome these limitations, we propose FENeRF, a 3D-aware generator that can produce view-consistent and locally editable portrait images. Our method uses two decoupled latent codes to generate corresponding facial semantics and texture in a spatially aligned 3D volume with shared geometry. Benefiting from such an underlying 3D representation, FENeRF can jointly render boundary-aligned images and semantic masks, and use the semantic masks to edit the 3D volume via GAN inversion. We further show that such a 3D representation can be learned from widely available monocular image and semantic mask pairs. Moreover, we reveal that joint learning of semantics and texture helps to generate finer geometry. Our experiments demonstrate that FENeRF outperforms state-of-the-art methods in various face editing tasks.
Recent years have witnessed the tremendous progress of 3D GANs for generating view-consistent radiance fields with photo-realism. Yet, high-quality generation of human radiance fields remains challenging, partially due to the limited human-related priors adopted in existing methods. We present HumanGen, a novel 3D human generation scheme with detailed geometry and $\text{360}^{\circ}$ realistic free-view rendering. It explicitly marries the 3D human generation with various priors from the 2D generator and 3D reconstructor of humans through the design of "anchor image". We introduce a hybrid feature representation using the anchor image to bridge the latent space of HumanGen with the existing 2D generator. We then adopt a pronged design to disentangle the generation of geometry and appearance. With the aid of the anchor image, we adapt a 3D reconstructor for fine-grained details synthesis and propose a two-stage blending scheme to boost appearance generation. Extensive experiments demonstrate our effectiveness for state-of-the-art 3D human generation regarding geometry details, texture quality, and free-view performance. Notably, HumanGen can also incorporate various off-the-shelf 2D latent editing methods, seamlessly lifting them into 3D.
3D-aware GANs based on generative neural radiance fields (GNeRF) have achieved impressive high-quality image generation while maintaining strong 3D consistency. The most notable achievements have been made in the face generation domain. However, most of these models focus on improving view consistency while neglecting disentanglement, and therefore cannot provide high-quality semantic/attribute control over the generation. To this end, we introduce a conditional GNeRF model that uses specific attribute labels as input to improve the controllability and disentanglement of 3D-aware generative models. We utilize a pre-trained 3D-aware model as the basis and integrate a dual-branch attribute editing module (DAEM) that leverages attribute labels to provide control over the generation. Moreover, we propose TRIOT (TRaining as Init, Optimizing for Tuning) to optimize the latent vector and further improve the precision of attribute editing. Extensive experiments on the widely used FFHQ dataset show that our model yields high-quality edits with better view consistency while preserving non-target regions. The code is available at https://github.com/zhangqianhui/tt-gnerf.
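The "training as init, optimizing for tuning" idea can be sketched as follows: the trained editing module provides the initial latent, which is then refined by gradient descent against an attribute-matching objective while non-target regions are kept close to the original render. The loop below is a hedged sketch; the loss terms, mask, and hyperparameters are placeholders rather than the paper's exact recipe.

```python
import torch

def tune_latent(G, w_init, target_img, edit_mask, cam, steps=200, lr=0.01,
                lambda_preserve=1.0):
    """Refine an initial latent so the edit matches the target while
    non-target regions stay close to the original render (illustrative).

    G:          pre-trained 3D-aware generator, G(w, cam) -> image
    w_init:     latent produced by the trained editing module (the "init")
    target_img: image carrying the desired attribute
    edit_mask:  (1, 1, H, W) soft mask of the region the attribute affects
    """
    w = w_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    with torch.no_grad():
        original = G(w_init, cam)                       # reference for preservation

    for _ in range(steps):
        img = G(w, cam)
        loss_edit = ((img - target_img) * edit_mask).pow(2).mean()
        loss_keep = ((img - original) * (1 - edit_mask)).pow(2).mean()
        loss = loss_edit + lambda_preserve * loss_keep
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```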
Unsupervised generation of high-quality, multi-view-consistent images and 3D shapes using only collections of single-view 2D photographs has been a long-standing challenge. Existing 3D GANs are either compute-intensive or make approximations that are not 3D-consistent; the former limits the quality and resolution of the generated images, and the latter adversely affects multi-view consistency and shape quality. In this work, we improve the computational efficiency and image quality of 3D GANs without relying on these approximations. To this end, we introduce an expressive hybrid explicit-implicit network architecture that, together with other design choices, synthesizes not only high-resolution multi-view-consistent images in real time but also high-quality 3D geometry. By decoupling feature generation and neural rendering, our framework is able to leverage state-of-the-art 2D CNN generators, such as StyleGAN2, and inherit their efficiency and expressiveness. Among other experiments, we demonstrate state-of-the-art 3D-aware synthesis on FFHQ and AFHQ Cats.
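The decoupling of feature generation from neural rendering can be sketched generically: a generator backbone produces an explicit feature structure, 3D sample points fetch features from it by interpolation, and a lightweight MLP decodes color and density for volume rendering. The toy module below uses a plain 3D feature grid as the explicit structure for brevity; the paper's actual hybrid representation and design choices are more elaborate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridDecoder(nn.Module):
    """Toy hybrid explicit-implicit representation: an explicit feature grid
    produced by a generator backbone, queried by a tiny MLP (illustrative only)."""

    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, 4),                       # RGB + density
        )

    def forward(self, feat_grid, points):
        """
        feat_grid: (B, C, D, H, W) explicit features from the generator backbone
        points:    (B, N, 3) query points in [-1, 1]^3
        """
        grid = points.view(points.shape[0], -1, 1, 1, 3)         # (B, N, 1, 1, 3)
        feats = F.grid_sample(feat_grid, grid, align_corners=True)
        feats = feats.view(feat_grid.shape[0], feat_grid.shape[1], -1).permute(0, 2, 1)
        out = self.mlp(feats)                                    # (B, N, 4)
        rgb, sigma = out[..., :3].sigmoid(), F.softplus(out[..., 3])
        return rgb, sigma
```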
In contrast to the traditional avatar creation pipeline, which is an expensive process, contemporary generative approaches directly learn the data distribution from photographs, and the state of the art can now yield highly photo-realistic images. Although a plethora of works attempt to extend unconditional generative models to achieve some level of controllability, it is still challenging to ensure multi-view consistency, especially at large poses. In this work, we propose a 3D portrait generation network that produces 3D-consistent portraits while being controllable through semantic parameters for pose, identity, expression, and illumination. The generation network uses a neural scene representation to model portraits in 3D, whose generation is guided by a parametric face model that supports explicit control. While latent disentanglement can be further enhanced by contrasting images with partially different attributes, noticeable inconsistencies remain in non-face regions (e.g., when animating expressions). We address this by proposing a volume blending strategy in which we form a composite output by blending dynamic and static radiance fields, with the two parts segmented by a jointly learned semantic field. Our method outperforms prior art in extensive experiments, producing realistic portraits under natural illumination when viewed from free viewpoints. The proposed method also generalizes to real images as well as out-of-domain cartoon faces, showing great promise for real applications. Additional video results and code will be available on the project webpage.
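The volume blending strategy lends itself to a short sketch: at each 3D sample, a jointly learned semantic field provides a face/non-face probability that mixes the densities and colors of the dynamic (expression-driven) and static fields before standard compositing. The formulation below is a plausible reading of that description, with all symbols hypothetical.

```python
import torch

def blend_radiance(dyn_rgb, dyn_sigma, stat_rgb, stat_sigma, face_prob):
    """Mix dynamic and static radiance fields with a semantic field (sketch).

    dyn_*, stat_*: (N, 3) colors and (N,) densities at the same sample points
    face_prob:     (N,) probability from the jointly learned semantic field
    """
    p = face_prob.unsqueeze(-1)
    rgb = p * dyn_rgb + (1 - p) * stat_rgb
    sigma = face_prob * dyn_sigma + (1 - face_prob) * stat_sigma
    return rgb, sigma   # composite along rays as usual afterwards
```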
Advances in generative radiance fields have pushed the boundary of 3D-aware image synthesis. Motivated by the observation that a 3D object should look realistic from multiple viewpoints, these methods introduce a multi-view constraint as regularization to learn valid 3D radiance fields from 2D images. Despite the progress, they often fall short of capturing accurate 3D shapes due to the shape-color ambiguity, limiting their applicability in downstream tasks. In this work, we address this ambiguity by proposing a novel shading-guided generative implicit model that is able to learn a continuously improving shape representation. Our key insight is that an accurate 3D shape should also yield a realistic rendering under different lighting conditions. This multi-lighting constraint is realized by explicitly modeling illumination and performing shading under various lighting conditions, with gradients derived by feeding the synthesized images to a discriminator. To compensate for the additional computational burden of calculating surface normals, we further devise an efficient volume rendering strategy via surface tracking, reducing the training and inference time by 24% and 48%, respectively. Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis while capturing accurate underlying 3D shapes. We demonstrate improved performance of our approach on 3D shape reconstruction against existing methods, and show its applicability to image relighting. Our code will be released at https://github.com/xingangpan/shadegan.
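The multi-lighting constraint can be made concrete with a short sketch: surface normals are approximated by the negative normalized gradient of density with respect to position, and the rendered albedo is shaded under a randomly sampled light before being passed to the discriminator. The simple Lambertian model and function names below are assumptions for illustration; the paper's shading model and surface-tracking acceleration are not reproduced here.

```python
import torch

def density_normals(density_fn, x):
    """Approximate surface normals as the negative normalized density gradient."""
    x = x.detach().requires_grad_(True)
    sigma = density_fn(x)                                   # (N,) densities
    grad = torch.autograd.grad(sigma.sum(), x, create_graph=True)[0]
    return -torch.nn.functional.normalize(grad, dim=-1)

def lambertian_shade(albedo, normals, light_dir, ambient=0.3, diffuse=0.7):
    """Shade albedo with a directional light (simple Lambertian model)."""
    light_dir = torch.nn.functional.normalize(light_dir, dim=-1)
    cos = (normals * light_dir).sum(dim=-1, keepdim=True).clamp(min=0.0)
    return albedo * (ambient + diffuse * cos)

# Training-time usage (sketch): render albedo and normals along rays, shade with a
# randomly sampled light each iteration, and feed the shaded image to the discriminator
# so that only geometry yielding realistic renderings under varied lighting survives.
```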
Recently, a surge of high-quality 3D-aware GANs have been proposed, which leverage the generative power of neural rendering. It is natural to combine 3D GANs with GAN inversion methods to project a real image into the generator's latent space, allowing free-view consistent synthesis and editing, referred to as 3D GAN inversion. Although a facial prior is preserved in pre-trained 3D GANs, reconstructing a 3D portrait from only one monocular image is still an ill-posed problem. The straightforward application of 2D GAN inversion methods focuses on texture similarity only while ignoring the correctness of the 3D geometry. This may cause geometry collapse, especially when reconstructing a side face under an extreme pose. Besides, the synthesized results in novel views are prone to be blurry. In this work, we propose a novel method to improve 3D GAN inversion by introducing a facial symmetry prior. We design a pipeline and constraints to make full use of the pseudo auxiliary view obtained via image flipping, which helps obtain a robust and reasonable geometry shape during the inversion process. To enhance texture fidelity in unobserved viewpoints, pseudo labels from depth-guided 3D warping provide extra supervision. We design constraints aimed at filtering out conflict areas for optimization in asymmetric situations. Comprehensive quantitative and qualitative evaluations on image reconstruction and editing demonstrate the superiority of our method.
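The symmetry prior is straightforward to construct in code: mirror the image left-right, mirror the estimated camera pose about the face's symmetry plane, and use the pair as a down-weighted auxiliary reconstruction target during inversion. The sketch below assumes a yaw/pitch camera parameterization and a generator callable as G(w, yaw, pitch); the paper's conflict-area filtering and depth-guided warping are not shown.

```python
import torch

def make_symmetric_pseudo_view(image, yaw, pitch):
    """Build the flipped pseudo view and its mirrored camera (illustrative).

    image: (B, 3, H, W) input portrait
    yaw, pitch: camera angles (radians) estimated for the input view;
                mirroring about the face's symmetry plane negates the yaw.
    """
    flipped = torch.flip(image, dims=[-1])       # horizontal flip
    return flipped, -yaw, pitch

def inversion_loss(G, w, image, yaw, pitch, lambda_sym=0.5):
    """Reconstruction loss on the real view plus a down-weighted loss on the
    pseudo view, which regularizes geometry during 3D GAN inversion."""
    flipped, yaw_m, pitch_m = make_symmetric_pseudo_view(image, yaw, pitch)
    loss_main = (G(w, yaw, pitch) - image).abs().mean()
    loss_sym = (G(w, yaw_m, pitch_m) - flipped).abs().mean()
    return loss_main + lambda_sym * loss_sym
```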
Making generative models 3D-aware, bridging the 2D image space and the 3D physical world, remains challenging. Recent attempts equip generative adversarial networks (GANs) with a neural radiance field (NeRF), which maps 3D coordinates to pixel values, as a 3D prior. However, the implicit function in NeRF has a very local receptive field, making it hard for the generator to become aware of the global structure. Meanwhile, NeRF is built on volume rendering, which can be too costly to produce high-resolution results, increasing the optimization difficulty. To alleviate these two problems, we propose a novel framework, termed VolumeGAN, for high-fidelity 3D-aware image synthesis by explicitly learning a structural representation and a textural representation. We first learn a feature volume to represent the underlying structure, which is then converted to a feature field using a NeRF-like model. The feature field is further accumulated into a 2D feature map as the textural representation, followed by a neural renderer for appearance synthesis. Such a design enables independent control of shape and appearance. Extensive experiments on a wide range of datasets show that our approach achieves sufficiently higher image quality and better 3D control than previous methods.
Image translation and manipulation have gained increasing attention along with the rapid development of deep generative models. Although existing approaches have brought impressive results, they mainly operate in 2D space. In light of recent advances in NeRF-based 3D-aware generative models, we introduce a new task, semantic-to-NeRF translation, which aims to reconstruct a 3D scene modeled by NeRF, conditioned on a single-view semantic mask as input. To kick off this novel task, we propose the Sem2NeRF framework. In particular, Sem2NeRF addresses this highly challenging task by encoding the semantic mask into a 3D scene representation that controls a pre-trained decoder. To further improve the accuracy of the mapping, we integrate a new region-aware learning strategy into the design of both the encoder and the decoder. We verify the efficacy of the proposed Sem2NeRF and demonstrate that it outperforms several strong baselines on two benchmark datasets. Code and video are available at https://donydchen.github.io/sem2nerf/
Over the years, 2D GANs have achieved great success in photorealistic portrait generation. However, they lack 3D understanding in the generation process and thus suffer from multi-view inconsistency. To alleviate this issue, many 3D-aware GANs have been proposed and shown notable results, but 3D GANs struggle with editing semantic attributes; the controllability and interpretability of 3D GANs have not been much explored. In this work, we propose two solutions to overcome these weaknesses of 2D GANs and 3D-aware GANs. We first introduce a novel 3D-aware GAN, SURF-GAN, which is capable of discovering semantic attributes during training and controlling them in an unsupervised manner. After that, we inject the prior of SURF-GAN into StyleGAN to obtain a high-fidelity 3D-controllable generator. Unlike existing latent-based methods that allow only implicit pose control, the proposed 3D-controllable StyleGAN enables explicit pose control over portrait generation. This distillation allows direct compatibility between 3D control and many style-based techniques (e.g., inversion and stylization), and also brings an advantage in terms of computational resources. Our code is available at https://github.com/jgkwak95/surf-gan.
Generative neural radiance field (GNeRF) models, which extract implicit 3D representations from 2D images, have recently been shown to produce realistic images representing rigid objects such as human faces or cars. However, they usually struggle to generate high-quality images representing non-rigid objects such as the human body, which is of great interest for many computer graphics applications. This paper proposes a 3D-aware semantic-guided generative model (3D-SGAN) for human image synthesis, which integrates a GNeRF with a texture generator. The former learns an implicit 3D representation of the human body and outputs a set of 2D semantic segmentation masks. The latter translates these semantic masks into a realistic image, adding a photorealistic texture to the human appearance. Without requiring additional 3D information, our model can learn 3D human representations with photo-realistic, controllable generation. Our experiments on the DeepFashion dataset show that 3D-SGAN significantly outperforms the most recent baselines.
Single-image 3D human reconstruction aims to reconstruct the 3D textured surface of the human body given a single image. While implicit function-based methods recently achieved reasonable reconstruction performance, they still bear limitations showing degraded quality in both surface geometry and texture from an unobserved view. In response, to generate a realistic textured surface, we propose ReFu, a coarse-to-fine approach that refines the projected backside view image and fuses the refined image to predict the final human body. To suppress the diffused occupancy that causes noise in projection images and reconstructed meshes, we propose to train occupancy probability by simultaneously utilizing 2D and 3D supervisions with occupancy-based volume rendering. We also introduce a refinement architecture that generates detail-preserving backside-view images with front-to-back warping. Extensive experiments demonstrate that our method achieves state-of-the-art performance in 3D human reconstruction from a single image, showing enhanced geometry and texture quality from an unobserved view.
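One plausible reading of occupancy-based volume rendering, which the combined 2D/3D supervision above relies on, is to treat each sample's predicted occupancy probability as its opacity and composite along the ray; the rendered color and silhouette can then be compared with 2D targets while raw occupancies receive 3D supervision. The snippet below is a generic sketch under that reading, not the authors' code.

```python
import torch

def render_occupancy(occ, rgb):
    """Composite per-sample occupancy probabilities along each ray.

    occ: (R, S)    occupancy probabilities in [0, 1] for S samples on R rays
    rgb: (R, S, 3) per-sample colors (e.g., projected texture)
    Returns rendered colors (R, 3) and silhouette values (R,).
    """
    trans = torch.cumprod(torch.cat([torch.ones_like(occ[:, :1]),
                                     1.0 - occ[:, :-1] + 1e-10], dim=1), dim=1)
    weights = occ * trans                        # probability the ray stops here
    color = (weights.unsqueeze(-1) * rgb).sum(dim=1)
    silhouette = weights.sum(dim=1)
    return color, silhouette

# 2D supervision: compare `color`/`silhouette` with the image and mask;
# 3D supervision: penalize occupancy values directly with ground-truth occupancy.
```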
Recent research has shown that controllable image generation based on pre-trained GANs can benefit a wide range of computer vision tasks. However, less attention has been devoted to 3D vision tasks. In light of this, we propose a novel image-conditioned neural implicit field, which can leverage 2D supervision from GAN-generated multi-view images and perform single-view reconstruction of generic objects. First, an offline generation scheme is proposed to produce plausible pseudo images with full control over the viewpoint. Then, we propose to utilize a neural implicit function, together with a differentiable renderer, to learn 3D geometry from the pseudo images given object masks and rough pose initialization. To further detect unreliable supervision, we introduce a novel uncertainty module that predicts uncertainty maps, which remedies the negative effect of uncertain regions in the pseudo images and leads to better reconstruction performance. The effectiveness of our approach is demonstrated by superior single-view 3D reconstruction results on generic objects.
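A standard way to let a predicted uncertainty map down-weight unreliable pseudo-image regions is a heteroscedastic reconstruction loss; a minimal form is shown below as an assumption about how such a module could be wired in, not as the paper's exact objective.

```python
import torch

def uncertainty_weighted_l1(pred, target, log_var):
    """Heteroscedastic reconstruction loss (common formulation, not the paper's).

    pred, target: (B, 3, H, W) rendered and pseudo ground-truth images
    log_var:      (B, 1, H, W) log-variance map from the uncertainty module
    """
    # Pixels flagged as uncertain contribute less to the photometric term,
    # while the log-variance term keeps uncertainty from growing without bound.
    return (torch.exp(-log_var) * (pred - target).abs() + log_var).mean()
```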
Unsupervised generation of virtual humans with various appearances and animatable poses is important for creating 3D human avatars and other AR/VR applications. Existing methods are either limited to rigid object modeling, or are not generative and thus unable to synthesize high-quality virtual humans and animate them. In this work, we propose AvatarGen, the first method that can not only generate non-rigid humans with diverse appearances but also provide full control over poses and viewpoints, while requiring only 2D images for training. Specifically, it extends recent 3D GANs to clothed human generation by leveraging a coarse human body model as a proxy to warp the observation space into a standard avatar under a canonical space. To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space. To improve the geometry quality of the generated human avatars, it leverages signed distance fields as the geometric representation, which allows more direct regularization from the body model on geometry learning. Benefiting from these designs, our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs. Furthermore, it is capable of many applications, such as single-view reconstruction, reanimation, and text-guided synthesis. Code and pre-trained models will be available.
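The "more direct regularization from the body model" afforded by an SDF representation can be illustrated with a short loss sketch: sample points in canonical space, query both the generated SDF and the signed distance to the coarse proxy body, and penalize their difference, optionally with an eikonal term. The function names below are hypothetical stand-ins.

```python
import torch

def sdf_body_prior_loss(pred_sdf_fn, body_sdf_fn, points, lambda_eik=0.1):
    """Regularize a generated SDF toward a coarse body-model SDF (sketch).

    pred_sdf_fn: generator's SDF network, maps (N, 3) points -> (N,) distances
    body_sdf_fn: signed distance to the proxy body mesh (e.g., precomputed)
    points:      (N, 3) points sampled around the canonical body
    """
    points = points.detach().requires_grad_(True)
    pred = pred_sdf_fn(points)
    prior = (pred - body_sdf_fn(points).detach()).abs().mean()

    # Eikonal term keeps the predicted field a valid signed distance (|grad| = 1).
    grad = torch.autograd.grad(pred.sum(), points, create_graph=True)[0]
    eikonal = ((grad.norm(dim=-1) - 1.0) ** 2).mean()
    return prior + lambda_eik * eikonal
```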
We propose a novel neural rendering pipeline, Hybrid Volumetric-Textural Rendering (HVTR), which synthesizes virtual human avatars from arbitrary poses efficiently and at high quality. First, we learn to encode articulated human motions on a dense UV manifold of the human body surface. To handle complicated motions (e.g., self-occlusions), we then leverage the encoded information on the UV manifold to construct a 3D volumetric representation based on a dynamic pose-conditioned neural radiance field. While this allows us to represent 3D geometry with changing topology, volumetric rendering is computationally heavy. Hence we employ only a rough volumetric representation using a pose-conditioned downsampled neural radiance field (PD-NeRF), which we can render efficiently at low resolution. In addition, we learn 2D textural features that are fused with the rendered volumetric features in image space. The key advantage of our approach is that we can then convert the fused features into a high-resolution, high-quality avatar with a fast GAN-based textural renderer. We demonstrate that hybrid rendering enables HVTR to handle complicated motions, render high-quality avatars under user-controlled poses/shapes and even loose clothing, and, most importantly, be fast at inference time. Our experimental results also demonstrate state-of-the-art quantitative results.
Recent 3D-aware GANs rely on volumetric rendering techniques to disentangle the pose and appearance of objects, de facto generating entire 3D volumes rather than single-view 2D images from a latent code. Complex image editing tasks can be performed in standard 2D-based GANs (e.g., StyleGAN models) as manipulation of latent dimensions. However, to the best of our knowledge, similar properties have only been partially explored for 3D-aware GAN models. This work aims to fill this gap by showing the limitations of existing methods and proposing LatentSwap3D, a model-agnostic approach designed to enable attribute editing in the latent space of pre-trained 3D-aware GANs. We first identify the most relevant dimensions in the latent space of the model controlling the targeted attribute by relying on the feature importance ranking of a random forest classifier. Then, to apply the transformation, we swap the top-K most relevant latent dimensions of the image being edited with those of an image exhibiting the desired attribute. Despite its simplicity, LatentSwap3D provides remarkable semantic edits in a disentangled manner and outperforms alternative approaches both qualitatively and quantitatively. We demonstrate our semantic editing approach on various 3D-aware generative models such as pi-GAN, GIRAFFE, StyleSDF, MVCGAN, EG3D, and VolumeGAN, and on diverse datasets such as FFHQ, AFHQ, Cats, MetFaces, and CompCars. The project page can be found at \url{https://enisimsar.github.io/latentswap3d/}.
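The two-step procedure above is concrete enough to sketch with scikit-learn: fit a random forest classifier that predicts the attribute from latent codes, rank dimensions by feature importance, and swap the top-K dimensions of the code being edited with those of a code exhibiting the attribute. The snippet follows the description above, but the exact API usage is a minimal assumption rather than the released implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def top_k_attribute_dims(latents, labels, k=20, seed=0):
    """Rank latent dimensions by how much they explain the attribute.

    latents: (N, D) latent codes sampled from the 3D-aware GAN
    labels:  (N,)   attribute labels for the rendered images (e.g., smiling)
    """
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(latents, labels)
    return np.argsort(clf.feature_importances_)[::-1][:k]

def latent_swap(w_src, w_ref, dims):
    """Copy the most attribute-relevant dimensions from a reference code
    that exhibits the desired attribute into the code being edited."""
    w_edit = w_src.copy()
    w_edit[dims] = w_ref[dims]
    return w_edit

# Usage: dims = top_k_attribute_dims(W, y, k=20); w_new = latent_swap(w, w_smile, dims)
```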