The understanding capabilities of current state-of-the-art 3D models are limited by datasets with a small amount of annotated data and a pre-defined set of categories. In the 2D domain, recent advances have shown that similar problems can be significantly alleviated by employing knowledge from other modalities, such as language. Inspired by this, leveraging multimodal information for the 3D modality could be promising for improving 3D understanding under a restricted data regime, but this line of research is not well studied. Therefore, we introduce ULIP to learn a unified representation of images, text, and 3D point clouds by pre-training with object triplets from the three modalities. To overcome the shortage of training triplets, ULIP leverages a pre-trained vision-language model that has already learned a common visual and textual space by training on massive image-text pairs. Then, ULIP learns a 3D representation space aligned with the common image-text space, using a small number of automatically synthesized triplets. ULIP is agnostic to the 3D backbone network and can easily be integrated into any 3D architecture. Experiments show that ULIP effectively improves the performance of multiple recent 3D backbones by simply pre-training them on ShapeNet55 using our framework, achieving state-of-the-art performance in both standard 3D classification and zero-shot 3D classification on ModelNet40 and ScanObjectNN. ULIP also improves the performance of PointMLP by around 3% in 3D classification on ScanObjectNN, and outperforms PointCLIP by 28.8% in top-1 accuracy for zero-shot 3D classification on ModelNet40. Our code and pre-trained models will be released.
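The following is a minimal sketch of the kind of cross-modal contrastive alignment the abstract describes: a trainable point-cloud backbone is pulled toward a frozen image-text embedding space via symmetric InfoNCE losses. The function names, temperature, and loss weighting are illustrative assumptions, not ULIP's actual implementation.

```python
# Hypothetical sketch of tri-modal contrastive pre-training in the spirit of ULIP.
# Encoders, temperature, and loss weighting are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def contrastive_loss(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of embeddings (B, D)."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)   # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def ulip_style_step(point_encoder, frozen_image_feats, frozen_text_feats, points):
    """One pre-training step: align 3D features to a frozen image-text space."""
    pc_feats = point_encoder(points)  # (B, D) features from any 3D backbone
    return (contrastive_loss(pc_feats, frozen_image_feats) +
            contrastive_loss(pc_feats, frozen_text_feats))
```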
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
The ability to predict future visual observations conditioned on past observations and motor commands can enable embodied agents to plan solutions to a variety of tasks in complex environments. This work shows that we can create good video prediction models by pre-training transformers via masked visual modeling. Our approach, named MaskViT, is based on two simple design decisions. First, for memory and training efficiency, we use two types of window attention: spatial and spatiotemporal. Second, during training, we mask a variable percentage of tokens instead of using a fixed mask ratio. For inference, MaskViT generates all tokens via iterative refinement, in which we incrementally decrease the masking ratio following a mask scheduling function. On several datasets, we demonstrate that MaskViT outperforms prior work in video prediction, is parameter efficient, and can generate high-resolution videos (256x256). Further, we demonstrate the benefits of the inference speedup (up to 512x) by using MaskViT for planning on a real robot. Our work suggests that we can endow embodied agents with powerful predictive models by leveraging the general framework of masked visual modeling with minimal domain knowledge.
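Below is an illustrative sketch of the two masking ideas mentioned above: sampling a variable mask ratio during training, and a decaying mask schedule for iterative refinement at inference. The function names, the uniform sampling range, and the cosine schedule are assumptions for illustration, not MaskViT's exact code.

```python
# Sketch of variable-ratio training masks and an inference-time mask schedule.
# Ranges and the cosine decay are illustrative assumptions.
import math
import torch

def sample_training_mask(num_tokens, min_ratio=0.5, max_ratio=1.0):
    """Mask a variable fraction of tokens instead of a fixed ratio."""
    ratio = torch.empty(1).uniform_(min_ratio, max_ratio).item()
    num_masked = int(num_tokens * ratio)
    perm = torch.randperm(num_tokens)
    mask = torch.zeros(num_tokens, dtype=torch.bool)
    mask[perm[:num_masked]] = True    # True marks tokens hidden from the model
    return mask

def mask_schedule(step, total_steps):
    """Fraction of tokens still masked at a given refinement step (cosine decay)."""
    return math.cos(0.5 * math.pi * step / total_steps)
```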
Robots excel at performing repetitive and precision-sensitive tasks in controlled environments such as warehouses and factories, but they have yet to be extended to embodied AI agents that provide assistance with household tasks. Inspired by the catalyzing effect that benchmarks have had in AI fields such as computer vision and natural language processing, the community is looking for new benchmarks for embodied AI. Prior work on embodied AI benchmarks defines tasks using different formalisms, often specific to one environment, simulator, or domain, making it difficult to develop general and comparable solutions. In this work, we bring a subset of BEHAVIOR activities into Habitat 2.0 to benefit from its fast simulation speed, as a first step toward demonstrating that activities defined in logical space can be adapted to different simulators.
In mobile manipulation (MM), robots can both navigate within and interact with their environment, and are thus able to complete many more tasks than robots capable only of navigation or manipulation. In this work, we explore how to apply imitation learning (IL) to learn continuous visuo-motor policies for MM tasks. Much prior work has shown that IL can train visuo-motor policies for manipulation or navigation domains, but few works have applied IL to the MM domain. Doing so is challenging for two reasons: on the data side, current interfaces make collecting high-quality human demonstrations difficult, and on the learning side, policies trained on limited data can suffer from covariate shift when deployed. To address these problems, we first propose Mobile Manipulation RoboTurk (MoMaRT), a novel teleoperation framework that allows simultaneous navigation and manipulation with a mobile manipulator, and collect a first-of-its-kind large-scale dataset in a realistic simulated kitchen setting. We then propose a learned error-detection system to address covariate shift by detecting when the agent is in a potential failure state. We train performant IL policies and error detectors from this data, achieving over 45% task success rate and 85% error detection success rate across multiple multi-stage tasks when trained on expert data. Codebase, datasets, visualizations, and more are available at https://sites.google.com/view/il-for-mm/home.
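As a rough illustration of the error-detection idea described above, one common design is to flag states whose reconstruction error under an autoencoder trained only on expert observations exceeds a calibrated threshold. This is a generic sketch under that assumption, not necessarily the paper's exact detector.

```python
# Generic out-of-distribution-style error detector for imitation learning
# deployments: high reconstruction error on a state suggests covariate shift.
# The autoencoder-based design is an assumption for illustration.
import torch

class ReconstructionErrorDetector:
    def __init__(self, autoencoder, threshold):
        self.autoencoder = autoencoder  # trained only on expert observations
        self.threshold = threshold      # e.g. a high percentile of expert reconstruction errors

    @torch.no_grad()
    def is_failure_state(self, obs):
        """Return True if the observation looks unlike the expert data."""
        recon = self.autoencoder(obs)
        error = torch.mean((recon - obs) ** 2).item()
        return error > self.threshold
```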
Recent research in embodied AI has been boosted by the use of simulation environments to develop and train robot learning methods. However, the use of simulation has narrowed attention to tasks that robot simulators can readily simulate: tasks involving motion and physical contact. We present iGibson 2.0, an open-source simulation environment that supports the simulation of a more diverse set of household tasks through three key innovations. First, iGibson 2.0 supports object states, including temperature, wetness level, cleanliness, and toggled and sliced states, to cover a wider range of tasks. Second, iGibson 2.0 implements a set of predicate logic functions that map the simulator state to logical states such as Cooked or Soaked. In addition, given a logical state, iGibson 2.0 can sample valid physical states that satisfy it. This functionality can generate potentially infinite task instances with minimal effort from users. The sampling mechanism also allows our scenes to be more densely populated with small objects in semantically meaningful locations. Third, iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations. As a result, we can collect human demonstrations of these novel tasks and use them for imitation learning. We evaluate the new capabilities of iGibson 2.0 for robot learning of novel tasks, in the hope of demonstrating the potential of this new simulator to support new research in embodied AI. iGibson 2.0 and its new dataset are publicly available at http://svl.stanford.edu/igibson/.
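The sketch below illustrates the two-way mapping described above: an extended object state (here, temperature) is mapped to a logical predicate, and a logical goal is sampled back into a concrete physical state. The threshold, field names, and functions are assumptions for illustration, not iGibson 2.0's API.

```python
# Illustrative predicate mapping and goal sampling; names and thresholds are
# hypothetical, not iGibson 2.0's actual implementation.
import random

COOKED_TEMPERATURE = 70.0  # assumed threshold, in degrees Celsius

def is_cooked(obj_state):
    """Predicate: map a simulated object state to a logical value."""
    return obj_state["temperature"] >= COOKED_TEMPERATURE

def sample_cooked_state(obj_state):
    """Sample one physical state that satisfies the Cooked predicate."""
    new_state = dict(obj_state)
    new_state["temperature"] = random.uniform(COOKED_TEMPERATURE, COOKED_TEMPERATURE + 30.0)
    return new_state
```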
A key technical challenge in performing 6D object pose estimation from an RGB-D image is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and the depth separately or use costly post-processing steps, limiting their performance in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating the 6D pose of a set of known objects from RGB-D images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embeddings, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimate while achieving near real-time inference. Our experiments show that our method outperforms state-of-the-art approaches on two datasets, YCB-Video and LineMOD. We also deploy the proposed method on a real robot to grasp and manipulate objects based on the estimated pose. Our code and video are available at https://sites.google.com/view/densefusion/.
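The following is a simplified sketch of the pixel-wise dense fusion idea: per-pixel color features are concatenated with per-point geometric features and a pooled global feature before per-pixel pose prediction. It assumes the per-pixel and per-point features are already extracted; layer sizes and the quaternion-plus-translation output are illustrative, not DenseFusion's exact architecture.

```python
# Simplified dense-fusion head; dimensions and layers are assumptions.
import torch
import torch.nn as nn

class DenseFusionSketch(nn.Module):
    def __init__(self, color_dim=64, geo_dim=64, global_dim=256):
        super().__init__()
        self.global_mlp = nn.Sequential(
            nn.Linear(color_dim + geo_dim, global_dim), nn.ReLU())
        # 7 outputs per pixel: quaternion (4) + translation (3)
        self.pose_head = nn.Linear(color_dim + geo_dim + global_dim, 7)

    def forward(self, color_feats, geo_feats):
        # color_feats: (N, color_dim), geo_feats: (N, geo_dim) for N sampled pixels/points
        dense = torch.cat([color_feats, geo_feats], dim=-1)
        global_feat = self.global_mlp(dense).max(dim=0, keepdim=True)[0]   # pooled scene context
        fused = torch.cat([dense, global_feat.expand(dense.size(0), -1)], dim=-1)
        return self.pose_head(fused)   # per-pixel pose hypotheses
```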
This paper proposes a question-answering system that can answer questions whose supporting evidence is spread over multiple (potentially long) documents. The system, called Visconde, uses a three-step pipeline to perform the task: decompose, retrieve, and aggregate. The first step decomposes the question into simpler questions using a few-shot large language model (LLM). Then, a state-of-the-art search engine is used to retrieve candidate passages from a large collection for each decomposed question. In the final step, we use the LLM in a few-shot setting to aggregate the contents of the passages into the final answer. The system is evaluated on three datasets: IIRC, Qasper, and StrategyQA. Results suggest that current retrievers are the main bottleneck and that readers are already performing at the human level as long as relevant passages are provided. The system is also shown to be more effective when the model is induced to give explanations before answering a question. Code is available at \url{https://github.com/neuralmind-ai/visconde}.
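Here is a schematic sketch of the three-step decompose-retrieve-aggregate pipeline described above. The `llm` and `search` callables stand in for a few-shot LLM and a retrieval engine; the prompt wording and the explanation-before-answer step are paraphrased assumptions, not Visconde's actual prompts.

```python
# Schematic decompose-retrieve-aggregate pipeline; prompts are illustrative.
def visconde_style_answer(question, llm, search, k=5):
    # Step 1: decompose the question into simpler sub-questions with a few-shot LLM.
    sub_questions = llm(f"Decompose into simpler questions:\n{question}").splitlines()

    # Step 2: retrieve candidate passages from the collection for each sub-question.
    passages = []
    for sq in sub_questions:
        passages.extend(search(sq, top_k=k))

    # Step 3: aggregate the passages, inducing an explanation before the final answer.
    context = "\n".join(passages)
    return llm(f"Context:\n{context}\n\nExplain your reasoning, then answer: {question}")
```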
A systematic review of machine-learning strategies for improving the generalizability (cross-subject and cross-session) of electroencephalography (EEG)-based emotion classification was conducted. In this context, the non-stationarity of EEG signals is a critical issue and can lead to the dataset shift problem. Several architectures and methods have been proposed to address this issue, mainly based on transfer learning. A total of 418 papers were retrieved from the Scopus, IEEE Xplore, and PubMed databases through a search query focusing on modern machine learning techniques for generalization in EEG-based emotion assessment. Of these, 75 were found eligible based on their relevance to the problem. Studies that lacked a specific cross-subject and cross-session validation strategy or that used other biosignals as support were excluded. Based on the analysis of the selected papers, a taxonomy of the studies employing machine learning (ML) methods was proposed, together with a brief discussion of the different ML approaches involved. The studies with the best results in terms of average classification accuracy were identified, supporting the observation that transfer learning methods tend to perform better than other approaches. Finally, we discuss the impact of (i) the theoretical models of emotion and (ii) the psychological screening of the experimental sample on classifier performance.
Plastic shopping bags that get carried away from the side of roads and tangled on cotton plants can end up at cotton gins if not removed before the harvest. Such bags may not only cause problems in the ginning process but may also become embedded in the cotton fibers, reducing their quality and marketable value. It is therefore necessary to detect, locate, and remove the bags before the cotton is harvested. Manually detecting and locating these bags in cotton fields is a labor-intensive, time-consuming, and costly process. To address these challenges, we present the application of four variants of YOLOv5 (YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x) for detecting plastic shopping bags in RGB (Red, Green, and Blue) images acquired by Unmanned Aircraft Systems (UAS). We also report fixed-effect model tests of the effects of plastic bag color and YOLOv5 variant on average precision (AP), mean average precision (mAP@50), and accuracy. In addition, we demonstrate the effect of plastic bag height on detection accuracy. We found that bag color had a significant effect (p < 0.001) on accuracy across all four variants, while it showed no significant effect on AP with YOLOv5m (p = 0.10) and YOLOv5x (p = 0.35) at the 95% confidence level. Similarly, the YOLOv5 variant showed no significant effect on the AP (p = 0.11) and accuracy (p = 0.73) for white bags, but it had significant effects on the AP (p = 0.03) and accuracy (p = 0.02) for brown bags, as well as on mAP@50 (p = 0.01) and inference speed (p < 0.0001). Additionally, the height of the plastic bags had a significant effect (p < 0.0001) on overall detection accuracy. The findings reported in this paper can help speed up the removal of plastic bags from cotton fields before harvest and thereby reduce the amount of contaminants that end up at cotton gins.
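For readers wanting to reproduce a detection run of this kind, the sketch below loads a fine-tuned YOLOv5 model through the public torch.hub interface and runs it on one UAS frame. The custom weight file, image name, and confidence threshold are hypothetical placeholders, not artifacts released by the authors.

```python
# Hedged sketch of YOLOv5 inference on a UAS image via torch.hub; the weight
# path and threshold are illustrative assumptions.
import torch

# Load a YOLOv5 model fine-tuned on plastic-bag imagery (path is hypothetical).
model = torch.hub.load("ultralytics/yolov5", "custom", path="bag_yolov5s.pt")
model.conf = 0.25  # confidence threshold, an assumed default

results = model("uas_frame.jpg")          # run detection on one UAS RGB frame
detections = results.pandas().xyxy[0]     # bounding boxes with class and confidence
print(detections[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```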