Methods that allow the synthesis of realistic cell shapes could help generate training data sets to improve cell tracking and segmentation in biomedical images. Deep generative models for cell shape synthesis require a lightweight and flexible representation of the cell shape. However, commonly used voxel-based representations are unsuitable for high-resolution shape synthesis, and polygon meshes have limitations when modeling topology changes such as cell growth or mitosis. In this work, we propose to use level sets of signed distance functions (SDFs) to represent cell shapes. We optimize a neural network as an implicit neural representation of the SDF value at any point in a 3D+time domain. The model is conditioned on a latent code, allowing the synthesis of new and unseen shape sequences. We validate our approach quantitatively and qualitatively on growing and dividing C. elegans cells and on lung cancer cells growing complex filopodial protrusions. Our results show that the shape descriptors of synthetic cells resemble those of real cells, and that our model is able to generate topologically plausible sequences of complex cell shapes in 3D+time.
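As a rough illustration of the idea (not the authors' code; all layer sizes and names below are invented), a latent-code-conditioned implicit representation of the SDF over 3D+time can be a small MLP that takes a 4D query point together with a shape latent:

```python
import torch
import torch.nn as nn

class SpatioTemporalSDF(nn.Module):
    """Minimal sketch: map a (t, x, y, z) query point plus a shape latent
    code to a signed distance value. Hypothetical architecture."""
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # signed distance at (t, x, y, z)
        )

    def forward(self, points_txyz, latent):
        # points_txyz: (N, 4); latent: (latent_dim,) shared by all queries
        z = latent.expand(points_txyz.shape[0], -1)
        return self.net(torch.cat([points_txyz, z], dim=-1))
```

Sampling a new latent code and sweeping t then yields a novel shape sequence; a mesh can be extracted per time step from the zero level set.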
Personalized 3D vascular models are valuable for diagnosis, prognosis, and treatment planning in patients with cardiovascular disease. Traditionally, such models have been constructed with explicit representations such as meshes and voxel masks, or with implicit representations such as radial basis functions or atomic (tubular) shapes. Here, we propose to represent surfaces by the zero level set of their signed distance function (SDF) in a differentiable implicit neural representation (INR). This allows us to model complex vascular structures with a representation that is implicit, continuous, lightweight, and easy to integrate with deep learning algorithms. We demonstrate the potential of this approach on three practical examples. First, we obtain accurate and watertight surfaces of abdominal aortic aneurysms (AAAs) from CT images and show robust fitting from as few as 200 points on the surface. Second, we simultaneously fit nested vessel walls in a single INR without intersections. Third, we show how 3D models of individual arteries can be smoothly blended into a single watertight surface. Our results show that INRs are a flexible representation with the potential for minimally interactive annotation and manipulation of complex vascular structures.
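A standard recipe for fitting such an INR to sparse surface samples (a sketch under common assumptions, e.g. the eikonal regularizer of Gropp et al.; the paper's exact loss may differ) drives the network to zero on the samples while keeping unit-norm spatial gradients:

```python
import torch

def fit_sdf_step(model, surface_pts, space_pts, optimizer, lam=0.1):
    """One optimization step fitting an SDF INR (model: (N, 3) -> (N, 1)).
    surface_pts: points that should lie on the zero level set.
    space_pts:   off-surface samples for the eikonal regularizer."""
    optimizer.zero_grad()
    loss_surf = model(surface_pts).abs().mean()  # surface points: SDF = 0
    space_pts = space_pts.requires_grad_(True)
    sdf = model(space_pts)
    # A true SDF has unit-norm spatial gradient almost everywhere.
    (grad,) = torch.autograd.grad(sdf.sum(), space_pts, create_graph=True)
    loss_eik = ((grad.norm(dim=-1) - 1.0) ** 2).mean()
    loss = loss_surf + lam * loss_eik
    loss.backward()
    optimizer.step()
    return loss.item()
```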
Figure 1: DeepSDF represents signed distance functions (SDFs) of shapes via latent code-conditioned feed-forward decoder networks. The images above are raycast renderings of DeepSDF interpolating between two shapes in the learned shape latent space. Best viewed digitally.
We introduce DMTet, a deep 3D conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels. It marries the merits of implicit and explicit 3D representations by leveraging a novel hybrid 3D representation. Compared to current implicit approaches, which are trained to regress signed distance values, DMTet directly optimizes for the reconstructed surface, which enables us to synthesize finer geometric details with fewer artifacts. Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology. The core of DMTet consists of a deformable tetrahedral grid that encodes a discretized signed distance function and a differentiable marching tetrahedra layer that converts the implicit signed distance representation into an explicit surface mesh representation. This combination allows the joint optimization of surface geometry and topology, as well as the generation of a hierarchy of subdivisions, using reconstruction and adversarial losses defined explicitly on the surface mesh. Our approach significantly outperforms existing work on conditional shape synthesis from coarse voxel inputs, trained on a dataset of complex 3D animal shapes. Project page: https://nv-tlabs.github.io/dmtet/
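The core conversion step, marching tetrahedra, is easy to state for a single tetrahedron: the surface crosses every edge whose endpoint SDF values differ in sign, and the crossing point follows by linear interpolation. A minimal non-differentiable sketch (DMTet additionally deforms the grid vertices and keeps this step differentiable):

```python
import numpy as np

TET_EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def tet_zero_crossings(verts, sdf):
    """verts: (4, 3) tetrahedron corners; sdf: (4,) signed distances.
    Returns the interpolated zero crossings on all sign-change edges:
    3 or 4 points, triangulated into one or two surface triangles."""
    crossings = []
    for i, j in TET_EDGES:
        if sdf[i] * sdf[j] < 0:  # surface passes through this edge
            t = sdf[i] / (sdf[i] - sdf[j])  # where the SDF hits zero
            crossings.append(verts[i] + t * (verts[j] - verts[i]))
    return np.array(crossings)
```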
Recent advances in deep generative models have led to immense progress in 3D shape synthesis. While existing models are able to synthesize shapes represented as voxels, point clouds, or implicit functions, these methods only indirectly enforce the plausibility of the final 3D shape surface. Here we present a 3D shape synthesis framework (SurfGen) that applies adversarial training directly to the object surface. Our approach uses a differentiable spherical projection layer to capture and represent the explicit zero isosurface of an implicit 3D generator as a function defined on the unit sphere. By processing the spherical representation of 3D object surfaces with a spherical CNN in an adversarial setting, our generator can better learn the statistics of natural shape surfaces. We evaluate our model on large-scale shape datasets and demonstrate that the end-to-end trained model is capable of producing high-fidelity 3D shapes with diverse topology.
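For a star-shaped object, the spherical representation amounts to a radius function r(d) over unit directions d, obtainable by root-finding along rays from the center. A simplified sketch (SurfGen's projection layer is differentiable and more general than this bisection):

```python
import numpy as np

def spherical_projection(f, directions, t_max=2.0, iters=32):
    """f: implicit function, negative inside the shape.
    For each unit direction d, bisect for the radius r with f(r * d) = 0,
    assuming a single crossing per ray (star-shaped assumption)."""
    radii = []
    for d in directions:
        lo, hi = 0.0, t_max  # f(0) < 0 inside; f(t_max * d) > 0 outside
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid * d) < 0 else (lo, mid)
        radii.append(0.5 * (lo + hi))
    return np.array(radii)  # the function on the sphere fed to the CNN
```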
Our method studies the complex task of object-centric 3D understanding from a single RGB-D observation. As this is an ill-posed problem, existing methods suffer in both 3D shape and 6D pose and size estimation in complex multi-object scenarios with occlusions. We present ShAPO, a method for joint multi-object detection, 3D textured reconstruction, and 6D object pose and size estimation. Key to ShAPO is a single-shot pipeline that regresses shape, appearance, and pose latent codes along with the masks of each object instance, which are then further refined in a sparse-to-dense fashion. A novel disentangled database of shape and appearance priors is first learned to embed objects in their respective shape and appearance spaces. We also propose a novel, octree-based differentiable optimization step that allows us to further improve object shape, pose, and appearance in an analysis-by-synthesis fashion. Our novel joint implicit textured object representation allows us to accurately identify and reconstruct novel unseen objects without access to their 3D meshes. Through extensive experiments, we show that our method, trained on simulated indoor scenes, accurately regresses the shape, appearance, and pose of novel objects in the real world with minimal fine-tuning. Our method significantly outperforms all baselines on the NOCS dataset, with an 8% absolute improvement in mAP for 6D pose estimation. Project page: https://zubair-irshad.github.io/projects/shapo.html
Recent advances in machine learning have created increasing interest in solving visual computing problems with a class of coordinate-based neural networks that parametrize physical properties of scenes or objects across space and time. These methods, which we call neural fields, have seen successful application in the synthesis of 3D shapes and images, the animation of human bodies, 3D reconstruction, and pose estimation. However, due to rapid progress in a short time, many papers exist, but a comprehensive review and formulation of the problem has not yet emerged. In this report, we address this limitation by providing context, mathematical grounding, and an extensive review of the literature on neural fields. This report covers research along two dimensions. In Part I, we focus on neural field techniques by identifying common components of neural field methods, including different representations, architectures, forward mappings, and generalization methods. In Part II, we focus on applications of neural fields to different problems in visual computing and beyond (e.g., robotics, audio). Our review shows the breadth of topics already covered in visual computing, both historically and in their current incarnations, and highlights the improved quality, flexibility, and capability brought by neural field methods. Finally, we present a companion website contributing a living version of this review that can be continually updated by the community.
Figure 1: We learn an embedding of parts from objects in ShapeNet [3] using a part autoencoder with an implicit decoder. We show that this representation of parts is generalizable across object categories, and easily scalable to large scenes. By localizing implicit functions in a grid, we are able to reconstruct entire scenes from points via optimization of the latent grid. (a) Training parts from ShapeNet. (b) t-SNE plot of part embeddings. (c) Reconstructing entire scenes with Local Implicit Grids.
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in the wavelet domain. Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets, and we formulate a pair of neural networks: a diffusion-model-based generator to produce diverse shapes in the form of coarse coefficient volumes, and a detail predictor to further generate compatible detail coefficient volumes that enrich the generated shapes with fine structures and details. Both quantitative and qualitative experimental results demonstrate the superiority of our approach in generating diverse and high-quality shapes with complex topology and structure, clean surfaces, and fine details, exceeding the 3D generation capability of state-of-the-art models.
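The coarse/detail split itself is an off-the-shelf wavelet decomposition of the truncated SDF volume. A sketch with PyWavelets (the truncation threshold, wavelet choice, and decomposition level here are guesses, not the paper's settings):

```python
import numpy as np
import pywt  # PyWavelets

sdf = np.random.randn(64, 64, 64).astype(np.float32)  # stand-in SDF grid
tsdf = np.clip(sdf, -0.1, 0.1)  # truncated signed distance function

coeffs = pywt.wavedecn(tsdf, wavelet='bior6.8', level=2)
coarse = coeffs[0]    # low-frequency volume: the diffusion generator's target
details = coeffs[1:]  # detail coefficients: the detail predictor's target
# The transform is invertible (up to boundary padding):
recon = pywt.waverecn(coeffs, wavelet='bior6.8')
```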
Compact and accurate representations of 3D shapes are central to many perception and robotics tasks. State-of-the-art learning-based methods can reconstruct single objects but scale poorly to large datasets. We present a novel recursive implicit representation to efficiently and accurately encode large datasets of complex 3D shapes by recursively traversing an implicit octree in latent space. Our implicit Recursive Octree Auto-Decoder (ROAD) learns a hierarchically structured latent space enabling state-of-the-art reconstruction results at a compression ratio above 99%. We also propose an efficient curriculum learning scheme that naturally exploits the coarse-to-fine properties of the underlying octree spatial representation. We explore the scaling law relating latent space dimension, dataset size, and reconstruction accuracy, showing that increasing the latent space dimension is enough to scale to large shape datasets. Finally, we show that our learned latent space encodes a coarse-to-fine hierarchical structure yielding reusable latents across different levels of details, and we provide qualitative evidence of generalization to novel shapes outside the training set.
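A hypothetical sketch of the recursive-traversal idea (the interface below is invented for illustration; ROAD's actual decoder and pruning criteria differ): a decoder maps a parent latent to eight child latents plus occupancy flags, and empty subtrees are pruned, which is where the compression comes from.

```python
import numpy as np

def child_offset(i):
    """Corner offset of octree child i within the unit cube, in {0, 1}^3."""
    return np.array([(i >> 2) & 1, (i >> 1) & 1, i & 1], dtype=float)

def decode_octree(decoder, latent, corner, size, depth, max_depth):
    """decoder(latent) -> (child_latents: (8, D), occupied: (8,) bools).
    Returns (leaf_latent, corner, size) tuples for occupied leaf cells."""
    child_latents, occupied = decoder(latent)
    leaves = []
    for i in range(8):
        if not occupied[i]:
            continue  # prune empty space
        c = corner + 0.5 * size * child_offset(i)
        if depth + 1 == max_depth:
            leaves.append((child_latents[i], c, 0.5 * size))
        else:
            leaves += decode_octree(decoder, child_latents[i], c,
                                    0.5 * size, depth + 1, max_depth)
    return leaves
```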
Shape priors have long been known to be effective when reconstructing 3D shapes from noisy or incomplete data. When a deep-learning-based shape representation is used, this usually involves learning a latent representation, which can take the form of a single global vector or of multiple local vectors. The latter allows more flexibility but is prone to overfitting. In this paper, we advocate a hybrid approach that combines a triangle mesh with a separate latent vector at each vertex. During training, the latent vectors are constrained to have the same value, which avoids overfitting. For inference, the latent vectors are updated independently while spatial regularization constraints are imposed. We show that this gives us both flexibility and generalization capabilities, which we demonstrate on several medical image processing tasks.
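A plausible form of the inference-time spatial regularization (our guess at the shape of the term, not the paper's exact formula) penalizes latent differences across mesh edges:

```python
import torch

def latent_smoothness(vertex_latents, edges):
    """vertex_latents: (V, D) per-vertex latent codes;
    edges: (E, 2) long tensor of vertex index pairs of the template mesh.
    Encourages neighboring vertices to carry similar latents."""
    zi = vertex_latents[edges[:, 0]]
    zj = vertex_latents[edges[:, 1]]
    return ((zi - zj) ** 2).sum(dim=-1).mean()
```

During training, the same effect is pushed to its limit by tying all vertex latents to a single shared value.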
Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to represent and generate 3D shapes, as well as a vast number of use cases. However, single-view reconstruction remains a challenging topic that can unlock various interesting use cases such as interactive design. In this work, we propose a novel framework that leverages the intermediate latent spaces of Vision Transformer (ViT) and a joint image-text representational model, CLIP, for fast and efficient Single View Reconstruction (SVR). More specifically, we propose a novel mapping network architecture that learns a mapping between deep features extracted from ViT and CLIP, and the latent space of a base 3D generative model. Unlike previous work, our method enables view-agnostic reconstruction of 3D shapes, even in the presence of large occlusions. We use the ShapeNetV2 dataset and perform extensive experiments with comparisons to SOTA methods to demonstrate our method's effectiveness.
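In outline, the mapping network is a learned bridge between frozen 2D encoders and a frozen 3D generator. A minimal sketch (feature and latent dimensions are assumptions, and the real architecture is more elaborate):

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Map concatenated ViT and CLIP image features to the latent space
    of a pretrained 3D generative model. Dimensions are illustrative."""
    def __init__(self, vit_dim=768, clip_dim=512, shape_latent_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vit_dim + clip_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, shape_latent_dim),
        )

    def forward(self, vit_feat, clip_feat):
        return self.mlp(torch.cat([vit_feat, clip_feat], dim=-1))
```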
Implicit fields have been very effective to represent and learn 3D shapes accurately. Signed distance fields and occupancy fields are the preferred representations, both with well-studied properties, despite their restriction to closed surfaces. Several other variations and training principles have been proposed with the goal to represent all classes of shapes. In this paper, we develop a novel and yet fundamental representation by considering the unit vector field defined on 3D space: at each point in $\mathbb{R}^3$ the vector points to the closest point on the surface. We theoretically demonstrate that this vector field can be easily transformed to surface density by applying the vector field divergence. Unlike other standard representations, it directly encodes an important physical property of the surface, which is the surface normal. We further show the advantages of our vector field representation, specifically in learning general (open, closed, or multi-layered) surfaces as well as piecewise planar surfaces. We compare our method on several datasets including ShapeNet where the proposed new neural implicit field shows superior accuracy in representing any type of shape, outperforming other standard methods. The code will be released at https://github.com/edomel/ImplicitVF
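In discrete form, the field-to-density conversion is a divergence computation: the unit vectors flip sign across the surface, so the divergence spikes in magnitude there. A finite-difference sketch on a sampled grid:

```python
import numpy as np

def divergence(vf, h):
    """vf: (X, Y, Z, 3) grid of unit vectors pointing to the closest
    surface point; h: grid spacing. Central differences per component."""
    div = np.zeros(vf.shape[:3])
    for axis in range(3):
        div += np.gradient(vf[..., axis], h, axis=axis)
    return div  # large-magnitude values concentrate near the surface
```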
Implicit shape representations, such as Level Sets, provide a very elegant formulation for performing computations involving curves and surfaces. However, including implicit representations in canonical Neural Network formulations is far from straightforward. This has consequently restricted existing approaches to shape inference to significantly less effective representations, perhaps most commonly voxel occupancy maps or sparse point clouds. To overcome this limitation, we propose a novel formulation that permits the use of implicit representations of curves and surfaces, of arbitrary topology, as individual layers in Neural Network architectures with end-to-end trainability. Specifically, we propose to represent the output as an oriented level set of a continuous and discretised embedding function. We investigate the benefits of our approach on the task of 3D shape prediction from a single image and demonstrate its ability to produce a more accurate reconstruction compared to voxel-based representations. We further show that our model is flexible and can be applied to a variety of shape inference problems.
Neural signed distance functions (SDFs) are emerging as an effective representation for 3D shapes. State-of-the-art methods typically encode the SDF with a large, fixed-size neural network to approximate complex shapes with implicit surfaces. Rendering with these large networks is, however, computationally expensive since it requires many forward passes through the network for every pixel, making these representations impractical for real-time graphics. We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs, while achieving state-of-the-art geometry reconstruction quality. We represent implicit surfaces using an octree-based feature volume which adaptively fits shapes with multiple discrete levels of detail (LODs), and enables continuous LOD with SDF interpolation. We further develop an efficient algorithm to directly render our novel neural SDF representation in real-time by querying only the necessary LODs with sparse octree traversal. We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works. Furthermore, it produces state-of-the-art reconstruction quality for complex shapes under both 3D geometric and 2D image-space metrics.
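Each octree level stores features at voxel corners, and a query point is decoded by interpolating those features and feeding the result to a small MLP. A dense-grid stand-in for the per-level lookup (a real implementation indexes only allocated sparse voxels):

```python
import numpy as np

def trilerp(feature_grid, p):
    """feature_grid: (X, Y, Z, C); p: (3,) point in voxel coordinates,
    assumed strictly inside the grid. Returns the (C,) interpolated feature."""
    p0 = np.floor(p).astype(int)
    w = p - p0
    out = np.zeros(feature_grid.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                weight = ((w[0] if dx else 1 - w[0]) *
                          (w[1] if dy else 1 - w[1]) *
                          (w[2] if dz else 1 - w[2]))
                out += weight * feature_grid[p0[0] + dx, p0[1] + dy, p0[2] + dz]
    return out  # a small MLP then maps this feature to an SDF value
```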
Coordinate-based networks have emerged as powerful tools for 3D representation and scene reconstruction. These networks are trained to map continuous input coordinates to the value of a signal at each point. Still, current architectures are black boxes: their spectral characteristics cannot be easily analyzed, and their behavior at unsupervised points is difficult to predict. Moreover, these networks are typically trained to represent a signal at a single scale, so naive downsampling or upsampling causes artifacts. We introduce band-limited coordinate networks (BACON), a network architecture with an analytical Fourier spectrum. BACON has predictable behavior at unsupervised points, can be designed based on the spectral characteristics of the represented signal, and can represent signals at multiple scales without explicit supervision. We demonstrate BACON for multi-scale neural representations of images, radiance fields, and 3D scenes using signed distance functions, and show that it outperforms conventional single-scale coordinate networks in terms of interpretability and quality.
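BACON builds on multiplicative filter networks: each layer multiplies a sine filter of the raw coordinates into a linear map of the hidden state, so frequencies add layer by layer and the maximum output frequency is known in closed form. A simplified layer sketch (BACON's initialization and frequency schedule are more specific than this):

```python
import torch
import torch.nn as nn

class MFNLayer(nn.Module):
    """One multiplicative filter layer. Bounding |omega| bounds the
    bandwidth this layer can add to the accumulated spectrum."""
    def __init__(self, in_dim, hidden, max_freq):
        super().__init__()
        self.omega = nn.Parameter(
            torch.empty(hidden, in_dim).uniform_(-max_freq, max_freq))
        self.phase = nn.Parameter(torch.zeros(hidden))
        self.linear = nn.Linear(hidden, hidden)

    def forward(self, x, z=None):
        g = torch.sin(x @ self.omega.T + self.phase)  # band-limited filter
        return g if z is None else g * self.linear(z)
```

Because products of sinusoids only produce sum and difference frequencies, a linear readout after layer i has a spectrum bounded by the sum of the first i frequency bounds, which is what makes intermediate outputs usable as coarser scales.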
Neural networks that map 3D coordinates to signed distance function (SDF) or occupancy values have enabled high-fidelity implicit representations of object shape. This paper develops a new shape model that allows synthesizing novel distance views by optimizing a continuous signed directional distance function (SDDF). Similar to deep SDF models, our SDDF formulation can represent whole categories of shapes and complete or interpolate shapes from partial input data. Unlike an SDF, which measures the distance to the nearest surface in any direction, an SDDF measures the distance in a given direction. This allows training an SDDF model without 3D shape supervision, using only distance measurements, which are readily available from depth cameras or LiDAR sensors. Our model also removes post-processing steps such as surface extraction or rendering by directly predicting the distance at arbitrary locations and viewing directions. Unlike deep view-synthesis techniques such as neural radiance fields, which train high-capacity black-box models, our model enforces, by construction, the property that SDDF values decrease linearly along the viewing direction. This structural constraint not only leads to dimensionality reduction but also provides analytical confidence in the accuracy of SDDF predictions, regardless of the distance to the object surface.
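The structural constraint can be written compactly (our notation, with position p and unit viewing direction d; the factorization on the second line is one way to realize it):

```latex
% Moving a step t along the viewing direction shortens the directed
% distance by exactly t:
f(\mathbf{p} + t\,\mathbf{d},\ \mathbf{d}) \;=\; f(\mathbf{p},\ \mathbf{d}) - t
% Hence f is fixed by its values on the plane through the origin that is
% orthogonal to d -- the dimensionality reduction mentioned above:
f(\mathbf{p},\ \mathbf{d})
  \;=\; h\big(\mathbf{p} - (\mathbf{p}\cdot\mathbf{d})\,\mathbf{d},\ \mathbf{d}\big)
        \;-\; \mathbf{p}\cdot\mathbf{d}
```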
We present a StyleGAN2-based deep learning approach for 3D shape generation, called SDF-StyleGAN, with the aim of reducing the visual and geometric discrepancies between generated shapes and a shape collection. We extend StyleGAN2 to 3D generation and utilize the implicit signed distance function (SDF) as the 3D shape representation, and we introduce two novel global and local shape discriminators that distinguish real and fake SDF values and gradients to significantly improve the geometric and visual quality of the generated shapes. We further complement the evaluation metrics for 3D generative models with a shading-image-based Fréchet Inception Distance (FID) score to better assess the visual quality and shape distribution of the generated shapes. Experiments on shape generation demonstrate the superior performance of SDF-StyleGAN over the state of the art. We further demonstrate the efficacy of SDF-StyleGAN in various tasks based on GAN inversion, including shape reconstruction, shape completion from partial point clouds, single-image-based shape generation, and shape style editing. Extensive ablation studies demonstrate the efficacy of our framework design. Our code and trained models are available at https://github.com/zhengxinyang/sdf-stylegan.
Diffusion models have shown great promise for image generation, beating GANs in terms of generation diversity, with comparable image quality. However, their application to 3D shapes has been limited to point or voxel representations that, in practice, cannot accurately represent a 3D surface. We propose a diffusion model for neural implicit representations of 3D shapes that operates in the latent space of an auto-decoder. This allows us to generate diverse and high quality 3D surfaces. We additionally show that we can condition our model on images or text to enable image-to-3D generation and text-to-3D generation using CLIP embeddings. Furthermore, adding noise to the latent codes of existing shapes allows us to explore shape variations.
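The latent-space exploration described at the end is the standard DDPM forward process applied to a shape code. A sketch (the generic formulation; the paper's noise schedule may differ):

```python
import torch

def noise_latent(z0, t, alphas_cumprod):
    """Diffuse an auto-decoder latent z0 to timestep t:
    z_t = sqrt(abar_t) * z0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I).
    Small t gives nearby shape variations; large t gives novel samples."""
    abar = alphas_cumprod[t]
    eps = torch.randn_like(z0)
    return abar.sqrt() * z0 + (1.0 - abar).sqrt() * eps, eps
```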