Masked auto-encoding is a popular and effective self-supervised learning approach for point cloud learning. However, most existing methods reconstruct only the masked points and ignore the local geometry information, which is also important for understanding point cloud data. In this work, we make, to the best of our knowledge, the first attempt to explicitly incorporate local geometry information into masked auto-encoding, and propose a novel Masked Surfel Prediction (MaskSurf) method. Specifically, given an input point cloud masked at a high ratio, we learn a transformer-based encoder-decoder network to estimate the underlying masked surfels by simultaneously predicting the surfel positions (i.e., points) and the per-surfel orientations (i.e., normals). The predicted points and normals are supervised in a set-to-set manner by the Chamfer distance and a newly introduced position-indexed normal distance, respectively. Our MaskSurf is validated on six downstream tasks under three fine-tuning strategies. In particular, MaskSurf outperforms its closest competitor, Point-MAE, on the real-world ScanObjectNN dataset under the OBJ-BG setting, demonstrating the advantage of masked surfel prediction over masked point prediction. Code will be available at https://github.com/ybzh/masksurf.
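To make the supervision concrete, here is a minimal PyTorch-style sketch (not the authors' code; tensor shapes, the sign-invariant cosine term, and the equal loss weighting are assumptions) of a set-to-set Chamfer distance on the predicted points combined with a position-indexed normal distance that compares each predicted normal against the normal of its nearest ground-truth point:

```python
# Illustrative sketch of the two MaskSurf-style losses described in the abstract.
import torch

def chamfer_and_normal_loss(pred_pts, pred_nrm, gt_pts, gt_nrm):
    # pred_pts, gt_pts: (B, N, 3); pred_nrm, gt_nrm: (B, N, 3), unit normals
    dist = torch.cdist(pred_pts, gt_pts)          # (B, N, N) pairwise distances
    d_pred2gt, idx = dist.min(dim=2)              # nearest GT point for each prediction
    d_gt2pred, _ = dist.min(dim=1)                # nearest prediction for each GT point
    chamfer = d_pred2gt.pow(2).mean() + d_gt2pred.pow(2).mean()

    # Position-indexed normal distance: compare normals through the same nearest-
    # neighbour index used for the point term (sign-invariant via absolute cosine).
    nn_gt_nrm = torch.gather(gt_nrm, 1, idx.unsqueeze(-1).expand(-1, -1, 3))
    normal_dist = (1.0 - (pred_nrm * nn_gt_nrm).sum(-1).abs()).mean()
    return chamfer + normal_dist
```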
Learning representations for point clouds is an important task in 3D computer vision, especially without manually annotated supervision. Previous methods usually rely on auto-encoders to establish self-supervision by reconstructing the input itself. However, existing self-reconstruction-based auto-encoders focus only on the global shape and ignore the hierarchical context between local and global geometry, which is an important source of supervision for 3D representation learning. To resolve this issue, we propose a novel self-supervised point cloud representation learning framework, named 3D Occlusion Auto-Encoder (3D-OAE). Our key idea is to randomly occlude some local patches of the input point cloud and establish supervision by recovering the occluded patches from the remaining visible ones. Specifically, we design an encoder to learn the features of the visible local patches and a decoder that leverages these features to predict the occluded patches. In contrast to previous methods, our 3D-OAE can remove a large proportion of patches and predict them from only a small number of visible patches, which allows us to significantly accelerate training and yields non-trivial self-supervised performance. The trained encoder can further be transferred to various downstream tasks. We demonstrate that our method outperforms state-of-the-art approaches on different discriminative and generative applications under widely used benchmarks.
Transformer-based self-supervised representation learning methods learn generic features from unlabeled datasets and provide useful network initialization parameters for downstream tasks. Self-supervised learning based on masking local surface patches of 3D point cloud data has so far been under-explored. In this paper, we propose Masked Autoencoders for 3D point cloud representation learning (abbreviated as MAE3D), a novel autoencoding paradigm for self-supervised learning. We first split the input point cloud into patches and mask a portion of them, then use our Patch Embedding Module to extract features of the unmasked patches. Second, we employ patch-wise MAE3D Transformers to learn both the local features of point cloud patches and the high-level contextual relationships between patches, and to complete the latent representations of the masked patches. We use our Point Cloud Reconstruction Module with a multi-task loss to complete the incomplete point cloud. We perform self-supervised pre-training on ShapeNet55 with a point cloud completion pretext task and fine-tune the pre-trained model on ModelNet40 and ScanObjectNN (PB_T50_RS, the hardest variant). Comprehensive experiments show that the local features extracted by MAE3D from point cloud patches are beneficial for downstream classification tasks, outperforming state-of-the-art methods (93.4% and 86.2% classification accuracy, respectively).
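As an illustration of the patch-split-and-mask step described above, the following is a hedged sketch under assumed shapes and hyperparameters (farthest point sampling for the patch centers, kNN grouping, and a fixed mask ratio are common choices for this family of methods, not a statement of MAE3D's exact implementation):

```python
# Group a point cloud into local patches and hide a ratio of them; only the visible
# patches are passed to the encoder.
import torch

def patchify_and_mask(points, num_patches=64, patch_size=32, mask_ratio=0.6):
    # points: (N, 3) single cloud for brevity
    N = points.shape[0]
    # Greedy farthest point sampling for patch centers.
    centers = [torch.randint(N, (1,)).item()]
    d = torch.full((N,), float("inf"))
    for _ in range(num_patches - 1):
        d = torch.minimum(d, (points - points[centers[-1]]).norm(dim=1))
        centers.append(int(d.argmax()))
    center_xyz = points[centers]                               # (P, 3)
    knn = torch.cdist(center_xyz, points).topk(patch_size, largest=False).indices
    patches = points[knn]                                      # (P, K, 3)
    # Randomly mask a portion of the patches.
    perm = torch.randperm(num_patches)
    num_masked = int(mask_ratio * num_patches)
    masked_idx, visible_idx = perm[:num_masked], perm[num_masked:]
    return patches[visible_idx], patches[masked_idx], center_xyz
```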
We present Point-BERT, a new paradigm for learning Transformers that generalizes the concept of BERT to 3D point clouds. Inspired by BERT, we devise a Masked Point Modeling (MPM) task to pre-train point cloud Transformers. Specifically, we first divide a point cloud into several local point patches, and a point cloud tokenizer based on a discrete Variational AutoEncoder (dVAE) is designed to generate discrete point tokens containing meaningful local information. Then, we randomly mask some patches of the input point cloud and feed them into the backbone Transformer. The pre-training objective is to recover the original point tokens at the masked locations under the supervision of the point tokens produced by the tokenizer. Extensive experiments demonstrate that the proposed BERT-style pre-training strategy significantly improves the performance of standard point cloud Transformers. Equipped with our pre-training strategy, we show that a pure Transformer architecture attains 93.8% accuracy on ModelNet40 and 83.1% accuracy on the hardest setting of ScanObjectNN, surpassing carefully designed point cloud models with far fewer hand-crafted designs. We also demonstrate that the representations learned by Point-BERT transfer well to new tasks and domains, and our models largely advance the state of the art on few-shot point cloud classification. Code and pre-trained models are available at https://github.com/lulutang0608/Point-BERT.
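A compact sketch of the MPM objective as described (module names, shapes, and the use of `argmax` over the tokenizer output are assumptions; the dVAE tokenizer is treated as a frozen black box):

```python
# Masked Point Modeling: predict the discrete token id of every masked patch.
import torch
import torch.nn.functional as F

def mpm_loss(tokenizer, transformer, patch_feats, mask):
    # patch_feats: (B, P, C) embedded patches; mask: (B, P) bool, True = masked
    with torch.no_grad():
        target_ids = tokenizer(patch_feats).argmax(dim=-1)   # (B, P) discrete tokens
    logits = transformer(patch_feats, mask)                  # (B, P, vocab_size)
    return F.cross_entropy(logits[mask], target_ids[mask])   # only masked positions
```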
Masked autoencoding has achieved great success in self-supervised learning for the image and language domains. However, mask-based pre-training has yet to show benefits for point cloud understanding, likely because standard backbones such as PointNet cannot properly handle the train/test distribution mismatch introduced by masking during training. In this paper, we bridge this gap by proposing a discriminative masked pre-training Transformer framework, MaskPoint, for point clouds. Our key idea is to represent the point cloud as discrete occupancy values (1 if part of the point cloud; 0 if not) and to perform simple binary classification between masked object points and sampled noise points as the proxy task. In this way, our approach is robust to the point sampling variance in the point cloud and facilitates learning rich representations. We evaluate our pre-trained models on several downstream tasks, including 3D shape classification, segmentation, and real-world object detection, and demonstrate state-of-the-art results while achieving a significant pre-training speedup (e.g., 4.1x on ScanNet) over the prior state-of-the-art Transformer baseline. Code is available at https://github.com/haotian-liu/maskpoint.
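The proxy task can be sketched as follows (shapes, the noise range, and the decoder interface are assumptions): sample "real" queries from the masked object points and "fake" queries from uniform noise, then train a small decoder on the visible-token features to tell them apart:

```python
# Discriminative occupancy proxy task: binary classification of query points.
import torch
import torch.nn.functional as F

def occupancy_proxy_loss(decoder, visible_feats, masked_points, num_queries=256):
    # visible_feats: (B, V, C) encoder output; masked_points: (B, M, 3)
    B = masked_points.shape[0]
    real = masked_points[:, torch.randperm(masked_points.shape[1])[:num_queries]]
    fake = torch.rand(B, num_queries, 3, device=masked_points.device) * 2 - 1  # in [-1, 1]
    queries = torch.cat([real, fake], dim=1)                    # (B, 2Q, 3)
    labels = torch.cat([torch.ones(B, num_queries), torch.zeros(B, num_queries)], dim=1)
    logits = decoder(queries, visible_feats).squeeze(-1)        # (B, 2Q) occupancy logits
    return F.binary_cross_entropy_with_logits(logits, labels.to(logits.device))
```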
Pre-training by numerous image data has become de-facto for robust 2D representations. In contrast, due to the expensive data acquisition and annotation, a paucity of large-scale 3D datasets severely hinders the learning for high-quality 3D features. In this paper, we propose an alternative to obtain superior 3D representations from 2D pre-trained models via Image-to-Point Masked Autoencoders, named as I2P-MAE. By self-supervised pre-training, we leverage the well learned 2D knowledge to guide 3D masked autoencoding, which reconstructs the masked point tokens with an encoder-decoder architecture. Specifically, we first utilize off-the-shelf 2D models to extract the multi-view visual features of the input point cloud, and then conduct two types of image-to-point learning schemes on top. For one, we introduce a 2D-guided masking strategy that maintains semantically important point tokens to be visible for the encoder. Compared to random masking, the network can better concentrate on significant 3D structures and recover the masked tokens from key spatial cues. For another, we enforce these visible tokens to reconstruct the corresponding multi-view 2D features after the decoder. This enables the network to effectively inherit high-level 2D semantics learned from rich image data for discriminative 3D modeling. Aided by our image-to-point pre-training, the frozen I2P-MAE, without any fine-tuning, achieves 93.4% accuracy for linear SVM on ModelNet40, competitive to the fully trained results of existing methods. By further fine-tuning on ScanObjectNN's hardest split, I2P-MAE attains the state-of-the-art 90.11% accuracy, +3.68% to the second-best, demonstrating superior transferable capacity. Code will be available at https://github.com/ZrrSkywalker/I2P-MAE.
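A rough sketch of the 2D-guided masking step (the saliency source, mask ratio, and shapes are assumptions; the point of the example is only that high-saliency tokens stay visible while the rest are masked):

```python
# Keep the most semantically important point tokens visible, mask the rest.
import torch

def saliency_guided_mask(token_saliency, mask_ratio=0.8):
    # token_saliency: (B, P) per-token importance aggregated from multi-view 2D features
    B, P = token_saliency.shape
    num_visible = P - int(mask_ratio * P)
    visible_idx = token_saliency.topk(num_visible, dim=1).indices   # most salient tokens
    mask = torch.ones(B, P, dtype=torch.bool, device=token_saliency.device)
    batch_idx = torch.arange(B, device=token_saliency.device).unsqueeze(1)
    mask[batch_idx, visible_idx] = False                            # False = visible to encoder
    return mask, visible_idx
```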
Recently, self-supervised pre-training has advanced Vision Transformers on a variety of tasks w.r.t. different data modalities, e.g., image and 3D point cloud data. In this paper, we explore this learning paradigm for Transformer-based 3D mesh data analysis. Since applying a Transformer architecture to a new modality is usually non-trivial, we first adapt the Vision Transformer to 3D mesh data processing, i.e., a Mesh Transformer. Specifically, we divide a mesh into several non-overlapping local patches, each containing the same number of faces, and use the 3D position of each patch's center point to form positional embeddings. Inspired by MAE, we explore how pre-training on 3D mesh data with this Transformer-based structure benefits downstream 3D mesh analysis tasks. We first randomly mask some patches of the mesh and feed the corrupted mesh into the Mesh Transformer. Then, by reconstructing the information of the masked patches, the network learns discriminative representations of mesh data. We therefore name our method MeshMAE, which yields state-of-the-art or comparable performance on mesh analysis tasks, i.e., classification and segmentation. In addition, we conduct comprehensive ablation studies to show the effectiveness of the key designs in our method.
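A minimal sketch of the positional-embedding step mentioned above, under the assumption that each patch stores its faces as vertex coordinates and that a small MLP lifts the patch center into the embedding dimension:

```python
# Positional embedding from the 3D center of each fixed-size mesh patch.
import torch
import torch.nn as nn

class PatchPositionEmbedding(nn.Module):
    def __init__(self, embed_dim=384):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.GELU(), nn.Linear(128, embed_dim))

    def forward(self, patch_faces):
        # patch_faces: (B, P, F, 3, 3) -- P patches, F faces, 3 vertices of 3 coords each
        patch_center = patch_faces.mean(dim=(2, 3))     # (B, P, 3) center per patch
        return self.mlp(patch_center)                   # (B, P, embed_dim)
```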
The recent success of pre-trained 2D vision models is mostly attributable to learning from large-scale datasets. However, compared with 2D image datasets, the current pre-training data of 3D point cloud is limited. To overcome this limitation, we propose a knowledge distillation method for 3D point cloud pre-trained models to acquire knowledge directly from the 2D representation learning model, particularly the image encoder of CLIP, through concept alignment. Specifically, we introduce a cross-attention mechanism to extract concept features from 3D point cloud and compare them with the semantic information from 2D images. In this scheme, the point cloud pre-trained models learn directly from rich information contained in 2D teacher models. Extensive experiments demonstrate that the proposed knowledge distillation scheme achieves higher accuracy than the state-of-the-art 3D pre-training methods for synthetic and real-world datasets on downstream tasks, including object classification, object detection, semantic segmentation, and part segmentation.
The success of deep learning heavily relies on large-scale data with comprehensive labels, which is more expensive and time-consuming to fetch in 3D compared to 2D images or natural languages. This promotes the potential of utilizing models pretrained with data more than 3D as teachers for cross-modal knowledge transferring. In this paper, we revisit masked modeling in a unified fashion of knowledge distillation, and we show that foundational Transformers pretrained with 2D images or natural languages can help self-supervised 3D representation learning through training Autoencoders as Cross-Modal Teachers (ACT). The pretrained Transformers are transferred as cross-modal 3D teachers using discrete variational autoencoding self-supervision, during which the Transformers are frozen with prompt tuning for better knowledge inheritance. The latent features encoded by the 3D teachers are used as the target of masked point modeling, wherein the dark knowledge is distilled to the 3D Transformer students as foundational geometry understanding. Our ACT pretrained 3D learner achieves state-of-the-art generalization capacity across various downstream benchmarks, e.g., 88.21% overall accuracy on ScanObjectNN. Codes will be released at https://github.com/RunpeiDong/ACT.
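A hedged sketch of masked modeling as cross-modal distillation in the spirit of the above (the teacher/student interfaces and the smooth-L1 regression target are assumptions, not the paper's exact losses): a frozen cross-modal teacher encodes all patches, and the 3D student regresses the teacher's latent features at the masked positions.

```python
# Masked point modeling with a frozen cross-modal teacher providing the targets.
import torch
import torch.nn.functional as F

def cross_modal_distill_loss(teacher, student, patch_tokens, mask):
    # patch_tokens: (B, P, C); mask: (B, P) bool, True = masked for the student
    with torch.no_grad():
        target = teacher(patch_tokens)                 # (B, P, D) frozen teacher latents
    pred = student(patch_tokens, mask)                 # (B, P, D) student predictions
    # Cosine or smooth-L1 regression are both plausible; smooth L1 shown as one choice.
    return F.smooth_l1_loss(pred[mask], target[mask])
```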
Masked Modeling (MM) has demonstrated widespread success in various vision challenges, by reconstructing masked visual patches. Yet, applying MM for large-scale 3D scenes remains an open problem due to the data sparsity and scene complexity. The conventional random masking paradigm used in 2D images often causes a high risk of ambiguity when recovering the masked region of 3D scenes. To this end, we propose a novel informative-preserved reconstruction, which explores local statistics to discover and preserve the representative structured points, effectively enhancing the pretext masking task for 3D scene understanding. Integrated with a progressive reconstruction manner, our method can concentrate on modeling regional geometry and enjoy less ambiguity for masked reconstruction. Besides, such scenes with progressive masking ratios can also serve to self-distill their intrinsic spatial consistency, requiring to learn the consistent representations from unmasked areas. By elegantly combining informative-preserved reconstruction on masked areas and consistency self-distillation from unmasked areas, a unified framework called MM-3DScene is yielded. We conduct comprehensive experiments on a host of downstream tasks. The consistent improvement (e.g., +6.1 mAP@0.5 on object detection and +2.2% mIoU on semantic segmentation) demonstrates the superiority of our approach.
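One way to realize an informative-preserved masking rule, sketched here purely for illustration (the particular local statistic, the ratios, and the kNN size are assumptions): score each point by the spread of its neighbourhood and exempt the most structured points from masking.

```python
# Preserve "informative" points (high local variance) and mask only among the rest.
import torch

def informative_preserved_mask(points, keep_ratio=0.2, mask_ratio=0.7, k=16):
    # points: (N, 3)
    N = points.shape[0]
    knn = torch.cdist(points, points).topk(k, largest=False).indices   # (N, k) neighbours
    local_var = points[knn].var(dim=1).sum(-1)                          # (N,) local spread
    preserved = set(local_var.topk(int(keep_ratio * N)).indices.tolist())  # always visible
    candidates = torch.tensor([i for i in range(N) if i not in preserved])
    masked = candidates[torch.randperm(len(candidates))[:int(mask_ratio * N)]]
    mask = torch.zeros(N, dtype=torch.bool)
    mask[masked] = True                                                 # True = masked
    return mask
```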
We propose a novel approach to self-supervised learning of point cloud representations by differentiable neural rendering. Motivated by the fact that informative point cloud features should be able to encode rich geometry and appearance cues and render realistic images, we train a point-cloud encoder within a devised point-based neural renderer by comparing the rendered images with real images on massive RGB-D data. The learned point-cloud encoder can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but low-level tasks like 3D reconstruction and image synthesis. Extensive experiments on various tasks demonstrate the superiority of our approach compared to existing pre-training methods.
We present SeRP, a framework for self-supervised learning on 3D point clouds. SeRP consists of an encoder-decoder architecture that takes a perturbed or corrupted point cloud as input and aims to reconstruct the original point cloud without the corruption. The encoder learns high-level latent representations of the point cloud in a low-dimensional subspace and recovers the original structure. In this work, we use Transformer-based and PointNet-based autoencoders. The proposed framework also addresses some limitations of Transformer-based masked autoencoders, which are prone to leaking positional information and to uneven information density. We trained our models on the full ShapeNet dataset and evaluated them on a downstream classification task. We show that the pre-trained models achieve 0.5-1% higher classification accuracy than networks trained from scratch. Furthermore, we also propose VASP: Vector-Quantized Autoencoders for Self-supervised Representation Learning of Point Clouds, which employs vector quantization for discrete representation learning in Transformer-based autoencoders.
The pretraining-finetuning paradigm has demonstrated great success in NLP and 2D image fields because of the high-quality representation ability and transferability of their pretrained models. However, pretraining such a strong model is difficult in the 3D point cloud field since the training data is limited and point cloud collection is expensive. This paper introduces Efficient Point Cloud Learning (EPCL), an effective and efficient point cloud learner for directly training high-quality point cloud models with a frozen CLIP model. Our EPCL connects the 2D and 3D modalities by semantically aligning the 2D features and point cloud features without paired 2D-3D data. Specifically, the input point cloud is divided into a sequence of tokens and directly fed into the frozen CLIP model to learn point cloud representation. Furthermore, we design a task token to narrow the gap between 2D images and 3D point clouds. Comprehensive experiments on 3D detection, semantic segmentation, classification and few-shot learning demonstrate that the 2D CLIP model can be an efficient point cloud backbone and our method achieves state-of-the-art accuracy on both real-world and synthetic downstream tasks. Code will be available.
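A hedged sketch of the overall pipeline (the tokenizer design, patch size, and the way the frozen encoder is called are assumptions; it only illustrates that the 2D backbone stays frozen while a point tokenizer, a task token, and a head are trained):

```python
# Point tokens plus a learnable task token fed through a frozen 2D Transformer backbone.
import torch
import torch.nn as nn

class FrozenBackbonePointLearner(nn.Module):
    def __init__(self, frozen_transformer, embed_dim=768, num_classes=40):
        super().__init__()
        self.backbone = frozen_transformer.eval()
        for p in self.backbone.parameters():
            p.requires_grad = False                       # 2D backbone stays frozen
        self.tokenizer = nn.Linear(32 * 3, embed_dim)     # embed flattened local patches
        self.task_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, patches):
        # patches: (B, P, 32, 3) local point patches
        tokens = self.tokenizer(patches.flatten(2))       # (B, P, D)
        tokens = torch.cat([self.task_token.expand(len(tokens), -1, -1), tokens], dim=1)
        feats = self.backbone(tokens)                     # frozen Transformer blocks
        return self.head(feats[:, 0])                     # predict from the task token
```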
Current outdoor LiDAR-based 3D object detection methods mainly adopt the training-from-scratch paradigm. Unfortunately, this paradigm heavily relies on large-scale labeled data, whose collection can be expensive and time-consuming. Self-supervised pre-training is an effective and desirable way to alleviate this dependence on extensive annotated data. Recently, masked modeling has become a successful self-supervised learning approach for point clouds. However, current works mainly focus on synthetic or indoor datasets. When applied to large-scale and sparse outdoor point clouds, they fail to yield satisfactory results. In this work, we present BEV-MAE, a simple masked autoencoder pre-training framework for 3D object detection on outdoor point clouds. Specifically, we first propose a bird's eye view (BEV) guided masking strategy to guide the 3D encoder learning feature representation in a BEV perspective and avoid complex decoder design during pre-training. Besides, we introduce a learnable point token to maintain a consistent receptive field size of the 3D encoder with fine-tuning for masked point cloud inputs. Finally, based on the property of outdoor point clouds, i.e., the point clouds of distant objects are more sparse, we propose point density prediction to enable the 3D encoder to learn location information, which is essential for object detection. Experimental results show that BEV-MAE achieves new state-of-the-art self-supervised results on both Waymo and nuScenes with diverse 3D object detectors. Furthermore, with only 20% data and 7% training cost during pre-training, BEV-MAE achieves comparable performance with the state-of-the-art method ProposalContrast. The source code and pre-trained models will be made publicly available.
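A simplified sketch of a bird's-eye-view guided mask plus the point-density target mentioned above (grid size, normalization, and mask ratio are assumptions):

```python
# Bin points into BEV pillars, mask whole pillars, and expose a per-pillar density target.
import torch

def bev_guided_mask(points, grid=(50, 50), mask_ratio=0.7):
    # points: (N, 3) with x/y assumed normalized to [0, 1) for brevity
    gx, gy = grid
    ix = (points[:, 0] * gx).long().clamp(0, gx - 1)
    iy = (points[:, 1] * gy).long().clamp(0, gy - 1)
    pillar_id = ix * gy + iy                                # (N,) BEV cell of each point
    occupied = pillar_id.unique()
    num_masked = int(mask_ratio * occupied.numel())
    masked_pillars = occupied[torch.randperm(occupied.numel())[:num_masked]]
    point_mask = torch.isin(pillar_id, masked_pillars)      # True = point is masked
    density = torch.bincount(pillar_id, minlength=gx * gy).float()  # regression target
    return point_mask, density
```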
Nowadays, pre-training big models on large-scale datasets has become a crucial topic in deep learning. Pre-trained models with high representation ability and transferability have achieved great success and dominate many downstream tasks in natural language processing and 2D vision. However, promoting such a pre-training and tuning paradigm to 3D vision is non-trivial, given the limited training data, which is comparatively inconvenient to collect. In this paper, we provide a new perspective on leveraging pre-trained 2D knowledge in the 3D domain to tackle this problem, tuning pre-trained image models with a novel Point-to-Pixel prompting for point cloud analysis at a minor parameter cost. Following the principles of prompt engineering, we transform point clouds into colorful images with geometry-preserving projection and geometry-aware coloring to adapt pre-trained image models, whose weights are kept frozen during the end-to-end optimization of the point cloud analysis task. We conduct extensive experiments demonstrating that, in cooperation with the proposed Point-to-Pixel prompting, a better pre-trained image model leads to consistently better performance in 3D vision. Benefiting from the prosperous development of image pre-training, our method attains 89.3% accuracy on the hardest setting of ScanObjectNN, surpassing conventional point cloud models with far fewer trainable parameters. Our framework also exhibits very competitive performance on ModelNet classification and ShapeNet part segmentation. Code is available at https://github.com/wangzy22/p2p.
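A much-simplified sketch of the point-to-pixel idea (a single orthographic view, depth used as the geometry-aware color, and duplicate pixels resolved arbitrarily are all assumptions made for illustration):

```python
# Project a normalized point cloud onto an image plane a frozen 2D model can consume.
import torch

def point_to_pixel(points, image_size=224):
    # points: (N, 3), assumed normalized to [-1, 1] in every axis
    uv = ((points[:, :2] + 1) / 2 * (image_size - 1)).long().clamp(0, image_size - 1)
    depth = (points[:, 2] + 1) / 2                     # geometry-aware "color" channel
    img = torch.zeros(image_size, image_size)
    # Keep the larger depth value when several points fall on the same pixel
    # (duplicate indices are resolved arbitrarily by the in-place assignment).
    img[uv[:, 1], uv[:, 0]] = torch.maximum(img[uv[:, 1], uv[:, 0]], depth)
    return img.unsqueeze(0).repeat(3, 1, 1)            # (3, H, W) input for a 2D model
```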
Many 3D representations (e.g., point clouds) are discrete samples of an underlying continuous 3D surface. This process inevitably introduces sampling variations on top of the underlying 3D shape. In learning 3D representations, these variations should be ignored, while transferable knowledge of the underlying 3D shape should be captured. This is a major challenge for existing representation learning paradigms. This paper studies autoencoding on point clouds. The standard autoencoding paradigm forces the encoder to capture such sampling variations, because the decoder must reconstruct the original point cloud, which carries those variations. We introduce the Implicit AutoEncoder (IAE), a simple yet effective method that addresses this challenge by replacing the point cloud decoder with an implicit decoder. The implicit decoder outputs a continuous representation that is shared among different point cloud samplings of the same model. Reconstructing under the implicit representation encourages the encoder to discard sampling variations, leaving more capacity to learn useful features. We theoretically justify this claim under a simple linear autoencoder. Moreover, the implicit decoder offers a rich space for designing suitable implicit representations for different tasks. We demonstrate the usefulness of IAE on various self-supervised learning tasks for 3D objects and 3D scenes. Experimental results show that IAE consistently outperforms the state of the art in each task.
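A minimal sketch of an implicit decoder of the kind the abstract describes (occupancy is chosen here as the implicit value; the architecture and dimensions are assumptions):

```python
# Map (latent code, query coordinate) to an implicit value, so the reconstruction
# target no longer depends on one particular sampling of the surface.
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    def __init__(self, latent_dim=256, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                       # occupancy (or SDF) logit
        )

    def forward(self, latent, queries):
        # latent: (B, latent_dim); queries: (B, Q, 3) arbitrary 3D query points
        latent = latent.unsqueeze(1).expand(-1, queries.shape[1], -1)
        return self.net(torch.cat([latent, queries], dim=-1)).squeeze(-1)   # (B, Q)
```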
The past few years have witnessed the prevalence of self-supervised representation learning within the language and 2D vision communities. However, such advancements have not been fully migrated to the community of 3D point cloud learning. Different from previous pre-training pipelines for 3D point clouds that generally fall into the scope of either generative modeling or contrastive learning, in this paper, we investigate a translative pre-training paradigm, namely PointVST, driven by a novel self-supervised pretext task of cross-modal translation from an input 3D object point cloud to its diverse forms of 2D rendered images (e.g., silhouette, depth, contour). Specifically, we begin with deducing view-conditioned point-wise embeddings via the insertion of the viewpoint indicator, and then adaptively aggregate a view-specific global codeword, which is further fed into the subsequent 2D convolutional translation heads for image generation. We conduct extensive experiments on common task scenarios of 3D shape analysis, where our PointVST shows consistent and prominent performance superiority over current state-of-the-art methods under diverse evaluation protocols. Our code will be made publicly available.
Self-supervised learning on point clouds has recently gained much attention, since it addresses the label-efficiency and domain-gap problems of point cloud tasks. In this paper, we propose a novel self-supervised framework for learning informative representations from partial point clouds. We leverage partial point clouds from LiDAR scans that contain both content and pose attributes, and we show that disentangling these two factors from partial point clouds enhances feature representation learning. To this end, our framework consists of three main parts: 1) a completion network to capture the holistic semantics of point clouds; 2) a pose regression network to learn the viewing angle from which the partial data was scanned; and 3) a partial reconstruction network to encourage the model to learn content and pose features. To demonstrate the robustness of the learned feature representations, we conduct several downstream tasks, including classification, part segmentation, and registration, and compare against state-of-the-art methods. Our method not only outperforms existing self-supervised methods, but also exhibits better generalizability across synthetic and real-world datasets.
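A hedged sketch of how the three branches could be combined into one self-supervised objective (module interfaces and the plain L2 surrogates are assumptions; the actual losses may differ):

```python
# Completion + pose regression + partial reconstruction in one combined loss.
import torch
import torch.nn.functional as F

def disentangle_loss(completion_net, pose_net, recon_net, partial, full, view_angle):
    # partial: (B, N, 3) scanned partial cloud; full: (B, M, 3); view_angle: (B, A)
    content_feat, pose_feat = recon_net.encode(partial)     # disentangled features
    completed = completion_net(content_feat)                # (B, M, 3) holistic shape
    pose_pred = pose_net(pose_feat)                         # (B, A) scanning viewpoint
    rebuilt = recon_net.decode(content_feat, pose_feat)     # (B, N, 3) partial cloud
    return (F.mse_loss(completed, full)
            + F.mse_loss(pose_pred, view_angle)
            + F.mse_loss(rebuilt, partial))
```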
Masked language modeling (MLM) has become one of the most successful self-supervised pre-training tasks. Inspired by its success, Point-BERT, as a pioneering work on point clouds, proposed Masked Point Modeling (MPM) to pre-train point Transformers on large-scale unlabeled datasets. Despite its strong performance, we find that the inherent difference between language and point clouds tends to cause ambiguous tokenization of point clouds, for which there is no gold standard. Although Point-BERT introduces a discrete Variational AutoEncoder (dVAE) as a tokenizer to assign token ids to local patches, it tends to produce ambiguous token ids: such an imperfect tokenizer may generate different token ids for semantically similar patches and identical token ids for semantically dissimilar patches. To tackle this problem, we propose Point-McBERT, a pre-training framework with eased and refined supervision signals. Specifically, we relax the previous single-choice constraint on patches and provide multi-choice token ids for each patch as supervision. Moreover, we exploit the high-level semantics learned by the Transformer to further refine our supervision signals. Extensive experiments on point cloud classification, few-shot classification, and part segmentation demonstrate the superiority of our method; e.g., the pre-trained Transformer achieves 94.1% accuracy on ModelNet40, 84.28% accuracy on ScanObjectNN, and new state-of-the-art performance on few-shot learning. We also show that our method not only improves Point-BERT's performance on all downstream tasks, but also incurs almost no extra computational overhead.
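A small sketch of the eased, multi-choice supervision (the softmax temperature and the use of the tokenizer's full distribution as a soft target are assumptions):

```python
# Soft, multi-choice token supervision instead of a single hard token id per patch.
import torch
import torch.nn.functional as F

def multi_choice_mpm_loss(tokenizer_logits, student_logits, mask, temperature=2.0):
    # tokenizer_logits, student_logits: (B, P, V); mask: (B, P) bool, True = masked
    with torch.no_grad():
        soft_targets = F.softmax(tokenizer_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits, dim=-1)
    return -(soft_targets[mask] * log_probs[mask]).sum(-1).mean()
```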
The annotation of large-scale point clouds remains time-consuming and is unavailable for many real-world tasks. Point cloud pre-training is a potential solution for obtaining a scalable model for fast adaptation. In this paper, we therefore investigate a new self-supervised learning approach for point cloud pre-training, called Mixing and Disentangling (MD). As the name implies, we explore how to separate the original point clouds from a mixed point cloud, and leverage this challenging task as a pretext optimization objective for model training. Considering the limited training data in the original dataset, which is far less than is commonly assumed, the mixing process can effectively generate more high-quality samples. We build a baseline network to verify our intuition, consisting of only two modules: an encoder and a decoder. Given a mixed point cloud, the encoder is first pre-trained to extract a semantic embedding. An instance-adaptive decoder is then used to disentangle the point clouds based on this embedding. Although simple, the encoder inherently learns to capture point cloud keypoints after training and can be quickly adapted to downstream tasks, including classification and segmentation, under the pre-training and fine-tuning paradigm. Extensive experiments on two datasets show that the encoder with our MD pre-training significantly outperforms an encoder trained from scratch and converges quickly. In the ablation study, we further examine the effect of each component and discuss the advantages of the proposed self-supervised learning strategy. We hope this self-supervised learning attempt on point clouds can pave the way toward reducing the dependence of deep learning models on large-scale labeled data and saving substantial annotation costs in the future.
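The mixing step itself is simple; a minimal sketch under assumed sizes follows (the instance-adaptive decoder and its disentangling loss are omitted):

```python
# Subsample two clouds and concatenate them into one mixed pre-training sample;
# the originals pa and pb serve as the separation targets for the decoder.
import torch

def mix_point_clouds(cloud_a, cloud_b, keep=512):
    # cloud_a, cloud_b: (N, 3) input point clouds
    pa = cloud_a[torch.randperm(cloud_a.shape[0])[:keep]]
    pb = cloud_b[torch.randperm(cloud_b.shape[0])[:keep]]
    mixed = torch.cat([pa, pb], dim=0)      # (2*keep, 3) mixed input
    return mixed, pa, pb
```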