Convolutional neural networks (CNNs) have made major breakthroughs in 2D computer vision. However, the irregular structure of meshes makes it hard to directly harness the power of CNNs on them. Subdivision surfaces provide a hierarchical multi-resolution structure in which each face of a closed 2-manifold triangle mesh is adjacent to exactly three faces. Motivated by these two observations, this paper introduces a novel and flexible CNN framework for 3D triangle meshes with Loop subdivision sequence connectivity. Making an analogy between mesh faces and pixels in a 2D image allows us to present a mesh convolution operator that aggregates local features from nearby faces. By exploiting face neighborhoods, this convolution can support standard 2D convolutional network concepts such as variable kernel size, stride, and dilation. Based on the multi-resolution hierarchy, we make use of pooling layers that uniformly merge four faces into one, and an upsampling method that splits one face into four. Many popular 2D CNN architectures can therefore be readily adapted to processing 3D meshes. Meshes with arbitrary connectivity can be remeshed to have Loop subdivision sequence connectivity via self-parameterization, making SubdivNet a general approach. Extensive evaluation and various applications demonstrate the effectiveness and efficiency of SubdivNet.
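A minimal sketch, assuming PyTorch and illustrative names (FaceConv, neighbor_idx), of how a face-based mesh convolution and a 4-to-1 pooling could look given the face adjacency described above; it is not the released SubdivNet code.

```python
# Each face of a closed 2-manifold triangle mesh has exactly three neighbors,
# so a "kernel" can mix a face's feature with order-invariant combinations of
# its neighbors' features.
import torch
import torch.nn as nn

class FaceConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # four terms: self, neighbor sum, and two symmetric difference terms,
        # so the output does not depend on the order of the three neighbors
        self.weight = nn.Linear(4 * in_ch, out_ch)

    def forward(self, x, neighbor_idx):
        # x: (F, C) per-face features; neighbor_idx: (F, 3) adjacent-face indices
        n = x[neighbor_idx]                        # (F, 3, C)
        agg = torch.cat([
            x,                                     # the face itself
            n.sum(dim=1),                          # symmetric sum of neighbors
            (n - x.unsqueeze(1)).abs().sum(1),     # |neighbor - center| term
            (n - n.roll(1, dims=1)).abs().sum(1),  # |neighbor - neighbor| term
        ], dim=-1)
        return self.weight(agg)

def pool_faces(x):
    # Loop-subdivision hierarchy: four child faces merge into one parent face.
    # Assumes faces are ordered so consecutive groups of four share a parent.
    return x.view(-1, 4, x.shape[-1]).mean(dim=1)
```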
Building on the insight that a simple diffusion layer is highly effective for spatial communication, we introduce a new general-purpose approach to deep learning on 3D surfaces. The resulting networks are automatically robust to changes in the resolution and sampling of a surface, a basic property that is crucial for practical applications. Our networks can be discretized on various geometric representations such as triangle meshes or point clouds, and can even be trained on one representation and then applied to another. We optimize the spatial support of diffusion as a continuous network parameter ranging from purely local to totally global, removing the burden of manually choosing neighborhood sizes. The only other ingredients in the method are a multi-layer perceptron applied independently at each point, and spatial gradient features to support directional filters. The resulting networks are simple, robust, and efficient. Here we focus primarily on triangle mesh surfaces and demonstrate state-of-the-art results for a variety of tasks, including surface classification, segmentation, and non-rigid correspondence.
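A minimal sketch of the learned-diffusion idea, assuming PyTorch and a precomputed Laplacian eigenbasis (evecs, evals) with vertex masses; the names and the spectral discretization are illustrative assumptions, not the authors' implementation.

```python
# The diffusion time t is a per-channel learnable parameter, so the spatial
# support of each channel ranges from nearly local (t -> 0) to global (large t).
import torch
import torch.nn as nn

class LearnedDiffusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.log_t = nn.Parameter(torch.zeros(channels))   # learned diffusion times

    def forward(self, x, evals, evecs, mass):
        # x: (V, C) per-vertex features; evals: (K,); evecs: (V, K); mass: (V,)
        t = self.log_t.exp()                                # (C,) positive times
        coeffs = evecs.t() @ (mass.unsqueeze(-1) * x)       # project to spectral basis, (K, C)
        decay = torch.exp(-evals.unsqueeze(-1) * t)         # heat-kernel decay, (K, C)
        return evecs @ (decay * coeffs)                     # back to vertices, (V, C)
```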
Convolution on 3D point clouds has been widely studied, yet it is far from perfect in geometric deep learning. The traditional wisdom of convolution characterizes feature correspondences indistinguishably among 3D points, which is an intrinsic limitation leading to poor distinctive feature learning. In this paper, we propose Adaptive Graph Convolution (AGConv) for a wide range of point cloud analysis applications. AGConv generates adaptive kernels for points according to their dynamically learned features. Compared with solutions using fixed/isotropic kernels, AGConv improves the flexibility of point cloud convolutions, effectively and precisely capturing the diverse relations between points from different semantic parts. Unlike popular attention-weight schemes, AGConv implements adaptiveness inside the convolution operation instead of simply assigning different weights to neighboring points. Extensive evaluations clearly show that our method outperforms state-of-the-art methods of point cloud classification and segmentation on various benchmark datasets. Meanwhile, AGConv can flexibly serve more point cloud analysis approaches to boost their performance. To verify its flexibility and effectiveness, we explore AGConv-based paradigms for completion, denoising, upsampling, registration, and circle extraction, which are comparable or even superior to their competitors. Our code is available at https://github.com/hrzhou2/adaptconv-master.
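A minimal sketch, under assumed names (AdaptiveEdgeConv, kernel_gen, knn_idx), of the adaptive-kernel idea: a small network produces a kernel per point pair from their learned features, and that kernel is applied to the spatial relation. It illustrates the mechanism only and is not the released AGConv code.

```python
import torch
import torch.nn as nn

class AdaptiveEdgeConv(nn.Module):
    def __init__(self, feat_ch, out_ch):
        super().__init__()
        # generates one kernel per edge from the pair's (dynamically learned) features
        self.kernel_gen = nn.Sequential(
            nn.Linear(2 * feat_ch, out_ch * 3), nn.ReLU(),
            nn.Linear(out_ch * 3, out_ch * 3))
        self.out_ch = out_ch

    def forward(self, xyz, feat, knn_idx):
        # xyz: (N, 3) positions, feat: (N, C) features, knn_idx: (N, K) neighbor indices
        N, K = knn_idx.shape
        center = feat.unsqueeze(1).expand(N, K, -1)
        neighbor = feat[knn_idx]                               # (N, K, C)
        kernels = self.kernel_gen(torch.cat([center, neighbor - center], -1))
        kernels = kernels.view(N, K, self.out_ch, 3)           # adaptive per-edge kernel
        rel = (xyz[knn_idx] - xyz.unsqueeze(1)).unsqueeze(-1)  # (N, K, 3, 1) spatial relation
        edge_out = torch.matmul(kernels, rel).squeeze(-1)      # (N, K, out_ch)
        return edge_out.max(dim=1).values                      # aggregate over neighbors
```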
Geometric feature learning for 3D meshes is central to computer graphics and highly important for many vision applications. However, deep learning currently lags behind in hierarchical modeling of heterogeneous 3D meshes due to the lack of required operations and/or their efficient implementations. In this paper, we propose a series of modular operations for effective geometric deep learning over heterogeneous 3D meshes. These operations include mesh convolutions, (un)pooling, and efficient mesh decimation. We provide open-source implementations of these operations, collectively termed Picasso. The mesh decimation module of Picasso is GPU-accelerated and can process a batch of meshes on-the-fly for deep learning. Our (un)pooling operations compute features for newly created neurons across network layers of varying resolution. Our mesh convolutions include facet2vertex, vertex2facet, and facet2facet convolutions, which exploit vMF mixtures and barycentric interpolation to incorporate fuzzy modeling. Leveraging the modular operations of Picasso, we further contribute a novel hierarchical neural network, PicassoNet-II, to learn highly discriminative features from 3D meshes. PicassoNet-II accepts the primitive geometry and fine textures of mesh facets as input features, while processing full scene meshes. Our network achieves competitive performance for shape analysis and scene parsing on a variety of benchmarks. We release Picasso and PicassoNet-II on GitHub at https://github.com/enyahermite/picasso.
Intelligent mesh generation (IMG) refers to a technique to generate mesh by machine learning, which is a relatively new and promising research field. Within its short life span, IMG has greatly expanded the generalizability and practicality of mesh generation techniques and brought many breakthroughs and potential possibilities for mesh generation. However, there is a lack of surveys focusing on IMG methods covering recent works. In this paper, we are committed to a systematic and comprehensive survey describing the contemporary IMG landscape. Focusing on 110 preliminary IMG methods, we conducted an in-depth analysis and evaluation from multiple perspectives, including the core technique and application scope of the algorithm, agent learning goals, data types, targeting challenges, advantages and limitations. With the aim of literature collection and classification based on content extraction, we propose three different taxonomies from three views of key technique, output mesh unit element, and applicable input data types. Finally, we highlight some promising future research directions and challenges in IMG. To maximize the convenience of readers, a project page of IMG is provided at \url{https://github.com/xzb030/IMG_Survey}.
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has become even thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
Point clouds are characterized by irregularity and unstructuredness, which pose challenges in efficient data exploitation and discriminative feature extraction. In this paper, we present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology as a completely regular 2D point geometry image (PGI) structure, in which coordinates of spatial points are captured in colors of image pixels. Intuitively, Flattening-Net implicitly approximates a locally smooth 3D-to-2D surface flattening process while effectively preserving neighborhood consistency. As a generic representation modality, PGI inherently encodes the intrinsic property of the underlying manifold structure and facilitates surface-style point feature aggregation. To demonstrate its potential, we construct a unified learning framework directly operating on PGIs to achieve diverse types of high-level and low-level downstream applications driven by specific task networks, including classification, segmentation, reconstruction, and upsampling. Extensive experiments demonstrate that our methods perform favorably against the current state-of-the-art competitors. We will make the code and data publicly available at https://github.com/keeganhk/Flattening-Net.
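A toy illustration of the point geometry image (PGI) idea: xyz coordinates live in the channels of a regular 2D grid, so recovering a point cloud is just a reshape plus denormalization. The [-1, 1] normalization convention below is an assumption for illustration, not the paper's exact convention.

```python
import numpy as np

def pgi_to_points(pgi, center, scale):
    # pgi: (H, W, 3) image whose "colors" encode normalized coordinates in [0, 1]
    pts = pgi.reshape(-1, 3)
    return (pts - 0.5) * 2.0 * scale + center   # undo an assumed [-1, 1] normalization

pgi = np.random.rand(32, 32, 3).astype(np.float32)        # placeholder PGI
points = pgi_to_points(pgi, center=np.zeros(3), scale=1.0)
print(points.shape)                                        # (1024, 3): one 3D point per pixel
```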
We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image. Limited by the nature of deep neural networks, previous methods usually represent a 3D shape as a volume or point cloud, and it is non-trivial to convert these to the more ready-to-use mesh model. Unlike the existing methods, our network represents the 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various mesh-related losses to capture properties at different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh models with better details, but also achieves higher 3D shape estimation accuracy compared to the state of the art.
In recent years, thanks to powerful 3D CNNs, voxel-based methods have become the state of the art for 3D semantic segmentation of indoor scenes. However, voxel-based methods ignore the underlying geometry, suffer from ambiguous features on spatially close objects due to the lack of geodesic information, and struggle to handle complex and irregular geometries. In view of this, we propose Voxel-Mesh Network (VMNet), a novel 3D deep architecture that operates on both voxel and mesh representations and leverages Euclidean and geodesic information. Intuitively, Euclidean information extracted from voxels can offer contextual cues representing interactions between nearby objects, while geodesic information extracted from meshes can help separate objects that are spatially close but disconnected on the surface. To merge such information from the two domains, we design an intra-domain attentive module for effective feature aggregation and an inter-domain attentive module for adaptive feature fusion. Experimental results validate the effectiveness of VMNet: specifically, on the challenging ScanNet dataset for large-scale indoor scene segmentation, it outperforms the state-of-the-art SparseConvNet and MinkowskiNet (74.6% vs 72.5% and 73.6%) with a simpler network structure (17M vs 30M and 38M parameters). Code released at: https://github.com/hzykent/vmnet
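A minimal sketch of attentive fusion between two feature domains (e.g., voxel-derived Euclidean features and mesh-derived geodesic features aligned at the same vertices). Module and tensor names are assumptions for illustration; this is not the released VMNet module.

```python
import torch
import torch.nn as nn

class CrossDomainFusion(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Linear(ch, ch)   # queries from the mesh (geodesic) domain
        self.k = nn.Linear(ch, ch)   # keys from the voxel (Euclidean) domain
        self.v = nn.Linear(ch, ch)

    def forward(self, mesh_feat, voxel_feat):
        # mesh_feat, voxel_feat: (V, C), aligned per vertex
        score = (self.q(mesh_feat) * self.k(voxel_feat)).sum(-1, keepdim=True)
        gate = torch.sigmoid(score / mesh_feat.shape[-1] ** 0.5)
        return mesh_feat + gate * self.v(voxel_feat)   # adaptively inject voxel context
```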
Point cloud completion is a generation and estimation problem for partial point clouds, and it plays a vital role in applications of 3D computer vision. The progress of deep learning (DL) has impressively improved the capability and robustness of point cloud completion. However, the quality of completed point clouds still needs to be further enhanced to meet practical requirements. Therefore, this work conducts a comprehensive survey of the various methods, including point-based, convolution-based, graph-based, and generative-model-based approaches. The survey summarizes comparisons among these methods to provoke further research insights. Besides, it sums up the commonly used datasets and illustrates the applications of point cloud completion. Finally, we also discuss possible research trends in this rapidly expanding field.
Modeling local surface geometry is challenging for 3D point cloud understanding due to the lack of connectivity information. Most prior works model local geometry using various convolution operations. We observe that a convolution can be equivalently decomposed into a weighted combination of a local component and a global component. With this observation, we explicitly decouple these two components so that the local one can be enhanced to facilitate the learning of local surface geometry. Specifically, we propose the Laplacian Unit (LU), a simple yet effective architectural unit that can enhance the learning of local geometry. Extensive experiments show that networks equipped with LUs achieve competitive or superior performance on typical point cloud understanding tasks. Moreover, by establishing a connection with mean curvature flow, we provide a further curvature-based investigation of LU to interpret its adaptive smoothing and sharpening effect. The code will be made available.
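A minimal sketch of the local/global decoupling idea under the assumption of a k-NN neighborhood: the difference between the neighborhood mean and the center feature is a graph-Laplacian (local) term, and a small MLP decides how much to smooth or sharpen. Names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class LaplacianUnit(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(ch, ch), nn.ReLU(), nn.Linear(ch, ch))

    def forward(self, feat, knn_idx):
        # feat: (N, C) per-point features, knn_idx: (N, K) neighbor indices
        lap = feat[knn_idx].mean(dim=1) - feat    # graph-Laplacian (local) component
        return feat + self.mlp(lap)               # adaptive smoothing / sharpening
```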
Recently, self-supervised pre-training has advanced Vision Transformers on various tasks with respect to different data modalities, such as images and 3D point cloud data. In this paper, we explore this learning paradigm for Transformer-based 3D mesh data analysis. Since applying a Transformer architecture to a new modality is usually non-trivial, we first adapt the Vision Transformer to 3D mesh data processing, i.e., a Mesh Transformer. Specifically, we divide a mesh into several non-overlapping local patches, each containing the same number of faces, and use the 3D position of each patch's center point to form positional embeddings. Inspired by MAE, we explore how pre-training on 3D mesh data with a Transformer-based structure benefits downstream 3D mesh analysis tasks. We first randomly mask some patches of the mesh and feed the corrupted mesh into the Mesh Transformer. Then, by reconstructing the information of the masked patches, the network is able to learn discriminative representations of mesh data. We therefore name our method MeshMAE, which can yield state-of-the-art or comparable performance on mesh analysis tasks, i.e., classification and segmentation. In addition, we conduct comprehensive ablation studies to show the effectiveness of the key designs in our method.
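A minimal sketch of the masking step described above, assuming per-patch tokens and patch centers are already computed; the mask ratio, the zero-padded positional embedding, and all names are assumptions, not the MeshMAE implementation.

```python
import torch

def mask_patches(patch_tokens, patch_centers, mask_ratio=0.5):
    # patch_tokens: (P, D) per-patch embeddings, patch_centers: (P, 3) patch center points
    P, D = patch_tokens.shape
    num_keep = int(P * (1.0 - mask_ratio))
    perm = torch.randperm(P)
    keep, masked = perm[:num_keep], perm[num_keep:]
    # crude positional embedding: the 3D center zero-padded to the token width
    pos_embed = torch.nn.functional.pad(patch_centers, (0, D - 3))
    visible = patch_tokens[keep] + pos_embed[keep]   # only visible patches go to the encoder
    return visible, keep, masked                     # masked indices become reconstruction targets
```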
The success of Transformers in natural language processing has recently attracted attention in the computer vision field. Owing to their ability to learn long-range dependencies, Transformers have been used as a replacement for the widely used convolution operators. This replacement has proven successful in numerous tasks, where several state-of-the-art methods rely on Transformers for better learning. In computer vision, the 3D field has also witnessed an increase in employing Transformers alongside 3D convolutional neural networks and multi-layer perceptron networks. Although many surveys focus on Transformers in vision in general, 3D vision requires special attention due to the differences in data representation and processing compared to 2D vision. In this work, we present a systematic and thorough review of more than 100 Transformer-based methods for different 3D vision tasks, including classification, segmentation, detection, completion, pose estimation, and others. We discuss Transformer design in 3D vision, which allows it to process data with various 3D representations. For each application, we highlight key properties and contributions of the Transformer-based methods. To assess the competitiveness of these methods, we compare their performance to common non-Transformer methods on 12 3D benchmarks. We conclude the survey by discussing different open directions and challenges for Transformers in 3D vision. In addition to the presented papers, we aim to frequently update the latest relevant papers along with their corresponding implementations at: https://github.com/lahoud/3d-vision-transformers.
Existing generative models for 3D shapes are typically trained on large 3D datasets of specific object categories. In this paper, we investigate deep generative models that learn from only a single reference 3D shape. Specifically, we propose a multi-scale GAN-based model designed to capture the geometric features of the input shape at a range of spatial scales. To avoid the large memory and computational cost of operating on 3D volumes, we build our generator on a tri-plane hybrid representation, which requires only 2D convolutions. We train our generative model on a voxel pyramid of the reference shape, without any external supervision or manual annotation. Once trained, our model can generate diverse and high-quality 3D shapes of different sizes and aspect ratios. The resulting shapes present variations across different scales while retaining the global structure of the reference shape. Through extensive evaluation, both qualitative and quantitative, we demonstrate that our model can generate 3D shapes of various types.
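A toy sketch of the tri-plane idea mentioned above: features live on three axis-aligned 2D feature planes, and a 3D point's feature is the sum of its bilinear samples from the XY, XZ, and YZ planes, so the planes themselves can be produced with 2D convolutions only. Function names and the summation rule are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes, pts):
    # planes: (3, C, H, W) feature planes for XY, XZ, YZ; pts: (N, 3) in [-1, 1]
    coords = torch.stack([pts[:, [0, 1]], pts[:, [0, 2]], pts[:, [1, 2]]])  # (3, N, 2)
    grids = coords.view(3, 1, -1, 2)                                        # grid_sample layout
    feats = F.grid_sample(planes, grids, align_corners=True)                # (3, C, 1, N)
    return feats.squeeze(2).sum(dim=0).t()                                  # (N, C) per-point feature

planes = torch.randn(3, 16, 64, 64)
print(sample_triplane(planes, torch.rand(100, 3) * 2 - 1).shape)            # torch.Size([100, 16])
```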
Mesh denoising is a fundamental problem in digital geometry processing. It seeks to remove surface noise while preserving the intrinsic surface signal as accurately as possible. While the traditional wisdom has been to smooth surfaces based on specialized priors, learning-based methods have achieved great success in generalization and automation. In this work, we provide a comprehensive review of the advances in mesh denoising, covering both traditional geometric approaches and recent learning-based methods. First, to familiarize readers with the denoising task, we summarize four common issues in mesh denoising. Then, we provide two categorizations of the existing denoising methods. Furthermore, three important categories, including optimization-based, filter-based, and data-driven techniques, are detailed and analyzed respectively. Qualitative and quantitative comparisons are illustrated to demonstrate the effectiveness of state-of-the-art denoising methods. Finally, potential directions of future work are pointed out to address the common problems of these approaches. This work also builds a mesh denoising benchmark, with which future researchers can easily and conveniently evaluate their methods against the state-of-the-art approaches.
Point clouds are gaining prominence as a way to represent 3D shapes, but their irregular structure poses a challenge for deep learning methods. In this paper, we propose a new approach for learning 3D shapes using random walks. Previous works attempted to adapt convolutional neural networks (CNNs) or to impose a grid or mesh structure on 3D point clouds. This work presents a different approach for representing and learning the shape of a given point set. The key idea is to impose structure on the points through multiple random walks across the cloud, which explore different regions of the 3D object. We then learn a per-point and per-walk representation and aggregate multiple walk predictions at inference time. Our approach achieves state-of-the-art results for two 3D shape analysis tasks: classification and retrieval. Furthermore, we propose a shape complexity indicator function that uses cross-walk and inter-walk variance measures to subdivide the shape space.
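A minimal sketch of imposing structure via random walks: build a k-NN graph over the point set and repeatedly step to a random unvisited neighbor, yielding ordered sequences that a sequence model can consume. This illustrates the idea under assumed parameters (walk length, k) and is not the authors' released code.

```python
import numpy as np
from scipy.spatial import cKDTree

def random_walk(points, length=64, k=8, rng=np.random.default_rng()):
    tree = cKDTree(points)
    walk = [rng.integers(len(points))]            # random starting point
    visited = {walk[0]}
    for _ in range(length - 1):
        _, nbrs = tree.query(points[walk[-1]], k=k + 1)
        options = [j for j in nbrs[1:] if j not in visited]   # skip self and visited points
        nxt = rng.choice(options) if options else rng.integers(len(points))
        walk.append(int(nxt))
        visited.add(int(nxt))
    return points[walk]                           # (length, 3) ordered walk through the cloud

pts = np.random.rand(1024, 3)
print(random_walk(pts).shape)                     # (64, 3)
```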
A number of problems can be formulated as prediction on graph-structured data. In this work, we generalize the convolution operator from regular grids to arbitrary graphs while avoiding the spectral domain, which allows us to handle graphs of varying size and connectivity. To move beyond a simple diffusion, filter weights are conditioned on the specific edge labels in the neighborhood of a vertex. Together with the proper choice of graph coarsening, we explore constructing deep neural networks for graph classification. In particular, we demonstrate the generality of our formulation in point cloud classification, where we set the new state of the art, and on a graph classification dataset, where we outperform other deep learning approaches. The source code is available at https://github.com/mys007/ecc.
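A minimal sketch of the edge-conditioned idea: a filter-generating network maps each edge label (here assumed to be a 3D relative offset between endpoints) to a weight matrix applied to the neighbor's feature. The mean aggregation and all names are illustrative choices, not the reference ECC implementation.

```python
import torch
import torch.nn as nn

class EdgeConditionedConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        self.filter_net = nn.Sequential(                  # maps edge label -> weight matrix
            nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, in_ch * out_ch))
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x, edge_index, edge_label):
        # x: (N, in_ch), edge_index: (E, 2) long tensor of [target, source], edge_label: (E, 3)
        W = self.filter_net(edge_label).view(-1, self.out_ch, self.in_ch)
        msg = torch.bmm(W, x[edge_index[:, 1]].unsqueeze(-1)).squeeze(-1)   # (E, out_ch)
        out = torch.zeros(x.shape[0], self.out_ch, dtype=x.dtype)
        out.index_add_(0, edge_index[:, 0], msg)                            # sum per target vertex
        deg = torch.zeros(x.shape[0]).index_add_(
            0, edge_index[:, 0], torch.ones(edge_index.shape[0]))
        return out / deg.clamp(min=1).unsqueeze(-1) + self.bias             # mean aggregation
```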
Figure 1. Given input as either a 2D image or a 3D point cloud (a), we automatically generate a corresponding 3D mesh (b) and its atlas parameterization (c). We can use the recovered mesh and atlas to apply texture to the output shape (d) as well as 3D print the results (e).