Semantic navigation is necessary to deploy mobile robots in uncontrolled environments like our homes, schools, and hospitals. Many learning-based approaches have been proposed in response to the lack of semantic understanding of the classical pipeline for spatial navigation, which builds a geometric map using depth sensors and plans to reach point goals. Broadly, end-to-end learning approaches reactively map sensor inputs to actions with deep neural networks, while modular learning approaches enrich the classical pipeline with learning-based semantic sensing and exploration. But learned visual navigation policies have predominantly been evaluated in simulation. How well do different classes of methods work on a robot? We present a large-scale empirical study of semantic visual navigation, comparing representative classical, modular, and end-to-end learning methods across six homes with no prior experience, maps, or instrumentation. We find that modular learning works well in the real world, attaining a 90% success rate. In contrast, end-to-end learning does not, dropping from a 77% success rate in simulation to 23% in the real world due to a large image domain gap between simulation and reality. For practitioners, we show that modular learning is a reliable approach to navigate to objects: modularity and abstraction in policy design enable Sim-to-Real transfer. For researchers, we identify two key issues that prevent today's simulators from serving as reliable evaluation benchmarks: (A) a large Sim-to-Real gap in images and (B) a disconnect between simulation and real-world error modes. We propose concrete steps forward.
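To make the distinction concrete, here is a minimal runnable sketch of the two learned policy classes the study compares. All component names (segmenter, net) and the toy map logic are illustrative assumptions, not the authors' implementation; the point is only that the modular policy's planner never touches raw pixels, so the image-domain gap is confined to the semantic sensing module.

```python
# Illustrative sketch (not the authors' code) of the two learned policy classes:
# an end-to-end policy consumes raw images directly, while a modular policy
# confines learning to semantic sensing and plans on an abstract top-down map.
import numpy as np


def end_to_end_policy(rgb, goal_id, net):
    """Reactive mapping from pixels (plus goal) straight to action logits."""
    return net(np.concatenate([rgb.ravel(), [goal_id]]))


def modular_policy(rgb, depth, agent_cell, semantic_map, segmenter):
    """Learned semantic sensing -> top-down map -> classical planning."""
    mask = segmenter(rgb)                                    # learned, image-facing module
    semantic_map[: mask.shape[0], : mask.shape[1]] |= mask   # toy stand-in for depth-based projection
    goals = np.argwhere(semantic_map > 0)
    if len(goals) == 0:
        return "explore"                                     # an exploration module would pick a frontier here
    target = goals[np.argmin(np.abs(goals - agent_cell).sum(axis=1))]
    return ("move", tuple(np.sign(target - agent_cell)))     # classical local planner step


# Toy usage with stand-in components.
rgb, depth = np.random.rand(32, 32, 3), np.random.rand(32, 32)
segmenter = lambda img: (img.mean(axis=-1) > 0.5).astype(np.uint8)  # fake goal-object detector
net = lambda obs: np.zeros(4)                                       # fake end-to-end network
semantic_map = np.zeros((64, 64), dtype=np.uint8)
print(end_to_end_policy(rgb, 2, net))
print(modular_policy(rgb, depth, np.array([40, 40]), semantic_map, segmenter))
```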
We consider the problem of embodied visual navigation given an image-goal (ImageNav), where an agent is initialized in an unfamiliar environment and tasked with navigating to a location 'described' by an image. Unlike related navigation tasks, ImageNav does not have a standardized task definition, which makes comparison across methods difficult. Further, existing formulations have two problematic properties: (1) image-goals are sampled from random locations, which can lead to ambiguity (e.g., looking at walls), and (2) image-goals match the camera specification and embodiment of the agent; this rigidity is limiting when considering user-driven downstream applications. We present the Instance-specific ImageNav task (InstanceImageNav) to address these limitations. Specifically, the goal image is 'focused' on some particular object instance in the scene and is taken with camera parameters independent of the agent. We instantiate InstanceImageNav in the Habitat Simulator using scenes from the Habitat-Matterport3D dataset (HM3D) and release a standardized benchmark to measure community progress.
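A minimal sketch of what an episode specification could look like under this task definition, where the goal image is focused on a specific object instance and its camera is decoupled from the agent's sensor. The field names below are hypothetical, not the schema of the released benchmark.

```python
# Hypothetical episode record for InstanceImageNav; field names are
# illustrative, not the schema shipped with the benchmark.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class GoalCamera:
    """Goal-image camera, specified independently of the agent's sensor."""
    position: Tuple[float, float, float]
    rotation: Tuple[float, float, float, float]  # quaternion
    hfov_deg: float                              # need not match the agent's HFOV
    resolution: Tuple[int, int]                  # need not match the agent's frames


@dataclass
class InstanceImageNavEpisode:
    scene_id: str                                # e.g. an HM3D scene
    start_position: Tuple[float, float, float]
    start_rotation: Tuple[float, float, float, float]
    goal_object_instance_id: str                 # the instance the goal image is focused on
    goal_image_path: str                         # rendered with GoalCamera, not the agent sensor
    goal_camera: GoalCamera


episode = InstanceImageNavEpisode(
    scene_id="hm3d/example-scene",
    start_position=(1.0, 0.0, 2.5),
    start_rotation=(0.0, 0.0, 0.0, 1.0),
    goal_object_instance_id="chair_0042",
    goal_image_path="goals/chair_0042.png",
    goal_camera=GoalCamera(
        position=(3.2, 1.4, 0.8),
        rotation=(0.0, 0.707, 0.0, 0.707),
        hfov_deg=58.0,
        resolution=(512, 512),
    ),
)
print(episode.goal_object_instance_id)
```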
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
We present the Habitat-Matterport 3D Semantics (HM3DSEM) dataset. HM3DSEM is the largest dataset of 3D real-world spaces with densely annotated semantics that is currently available to the academic community. It consists of 142,646 object instance annotations across 216 3D spaces and 3,100 rooms within those spaces. The scale, quality, and diversity of object annotations far exceed those of prior datasets. A key difference setting HM3DSEM apart from other datasets is the use of texture information to annotate pixel-accurate object boundaries. We demonstrate the effectiveness of the HM3DSEM dataset for the Object Goal Navigation task using different methods. Policies trained using HM3DSEM outperform those trained on prior datasets. The introduction of HM3DSEM in the Habitat ObjectNav Challenge led to an increase in participation from 400 submissions in 2021 to 1022 submissions in 2022.
We study a modular approach to tackling long-horizon mobile manipulation tasks for object rearrangement, which decomposes a full task into a sequence of subtasks. To tackle the entire task, prior work chains multiple stationary manipulation skills with a point-goal navigation skill, each learned individually on its subtask. Although more effective than a monolithic end-to-end RL policy, this framework suffers from compounding errors in skill chaining, e.g., navigating to a bad location from which a stationary manipulation skill cannot reach its target to manipulate. To this end, we propose that manipulation skills should include mobility, so that the agent can interact with the target object from multiple locations, and at the same time that the navigation skill should allow multiple endpoints that lead to successful manipulation. We operationalize these ideas by implementing mobile manipulation skills rather than stationary ones and by training a navigation skill that accepts region goals instead of point goals. We evaluate our multi-skill mobile manipulation method M3 on three challenging long-horizon mobile manipulation tasks in the Home Assistant Benchmark (HAB) and show superior performance compared to the baselines.
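The shift from point-goal chaining with stationary skills to region goals with mobile skills can be illustrated with a toy 1-D simulation. The class names and distances below are assumptions for illustration, not the paper's implementation; the sketch only shows why a small amount of base mobility in the manipulation skill absorbs navigation error that would otherwise break the chain.

```python
# Toy 1-D sketch of skill chaining for mobile manipulation. Class names,
# distances, and thresholds are illustrative assumptions, not the paper's code.
import random


class NavigationSkill:
    """Region-goal navigation: any endpoint within `region_radius` of the
    object counts as success, rather than a single point goal."""

    def __init__(self, region_radius=1.2, noise=1.5):
        self.region_radius = region_radius
        self.noise = noise  # navigation error (meters) in this toy model

    def run(self, object_pos):
        endpoint = object_pos + random.uniform(-self.noise, self.noise)
        reached = abs(endpoint - object_pos) <= self.region_radius
        return endpoint, reached


class MobileManipulationSkill:
    """Manipulation with mobility: the base may shift by up to `base_adjust`
    meters to bring the object into the arm's workspace before grasping."""

    def __init__(self, reach=0.6, base_adjust=0.5):
        self.reach = reach
        self.base_adjust = base_adjust

    def run(self, base_pos, object_pos):
        error = object_pos - base_pos
        base_pos += max(-self.base_adjust, min(self.base_adjust, error))
        return abs(object_pos - base_pos) <= self.reach


random.seed(0)
nav, manip = NavigationSkill(), MobileManipulationSkill()
mobile_ok = stationary_ok = 0
trials = 1000
for _ in range(trials):
    object_pos = random.uniform(2.0, 5.0)
    endpoint, reached = nav.run(object_pos)
    if not reached:
        continue  # the skill chain already failed at navigation
    stationary_ok += abs(object_pos - endpoint) <= manip.reach  # fixed-base skill
    mobile_ok += manip.run(endpoint, object_pos)                # mobile skill
print(f"stationary chaining: {stationary_ok / trials:.2f}, "
      f"mobile chaining: {mobile_ok / trials:.2f}")
```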
Recent approaches for object goal navigation rely on reinforcement learning and typically require significant computational resources and time for learning. We propose Potential functions for ObjectGoal Navigation with Interaction-free learning (PONI), a modular approach that disentangles the skills of 'where to look?' for an object and 'how to navigate to (x, y)?'. Our key insight is that 'where to look?' can be treated purely as a perception problem and learned without any environment interaction. To solve it, we propose a network that predicts two complementary potential functions conditioned on a top-down semantic map and uses them to decide where to look for an unseen object. We train the potential function network with supervised learning on a passive dataset of top-down semantic maps and integrate it into a modular framework to perform object goal navigation. Experiments on Gibson and Matterport3D show that our method achieves the state of the art for object goal navigation while reducing training computational cost by up to 1,600x. Code and pre-trained models are available at: https://vision.cs.utexas.edu/projects/poni/
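A minimal sketch of the modular idea, assuming a small convolutional network that takes a top-down semantic map and outputs two potential maps which are summed to choose a long-term goal cell; the architecture and names are illustrative, not the released PONI model.

```python
# Illustrative sketch of predicting complementary potential functions over a
# top-down semantic map; architecture and names are assumptions, not PONI's code.
import torch
import torch.nn as nn


class PotentialFunctionNet(nn.Module):
    """Maps a semantic map (C channels) to two potentials per cell: an 'area'
    potential (unexplored area beyond a frontier) and an 'object' potential
    (how near the goal category is likely to be). Trained with supervised
    learning on a passive dataset of top-down maps, with no interaction."""

    def __init__(self, num_channels: int, num_categories: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.area_head = nn.Conv2d(32, 1, 1)                  # one potential map
        self.object_head = nn.Conv2d(32, num_categories, 1)   # one map per goal category

    def forward(self, semantic_map: torch.Tensor):
        feat = self.encoder(semantic_map)
        return torch.sigmoid(self.area_head(feat)), torch.sigmoid(self.object_head(feat))


# Toy usage: pick a long-term goal cell for a given goal category.
num_channels, num_categories = 16, 6
net = PotentialFunctionNet(num_channels, num_categories)
semantic_map = torch.rand(1, num_channels, 64, 64)       # stand-in top-down map
area_pot, obj_pot = net(semantic_map)
goal_category = 3
combined = area_pot[0, 0] + obj_pot[0, goal_category]    # equal weights in this toy
goal_cell = torch.nonzero(combined == combined.max())[0]
print("long-term goal cell:", goal_cell.tolist())
```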
Humans can perceive scenes in 3D from just a handful of 2D views. For AI agents, the ability to recognize a scene from any viewpoint given only a few images enables them to interact efficiently with the scene and its objects. In this work, we attempt to endow machines with this ability. We propose a model that takes as input a few RGB images of a new scene and recognizes the scene from novel viewpoints by segmenting it into semantic categories, all without access to the RGB images from those views. We pair 2D scene recognition with an implicit 3D representation and learn from multi-view 2D annotations of hundreds of scenes, without any 3D supervision beyond camera poses. We experiment on challenging datasets and demonstrate our model's ability to jointly capture the semantics and geometry of novel scenes with varying layouts, object types, and shapes.
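A heavily simplified sketch of the underlying idea: an implicit field, here a plain MLP over 3D coordinates, trained only from posed 2D semantic annotations (represented below as already-lifted labelled 3D points) and then queried at novel viewpoints. The architecture and training setup are assumptions for illustration, not the paper's model.

```python
# Minimal sketch of an implicit semantic field: an MLP that maps a 3D point to
# class logits, supervised only with posed 2D annotations lifted to 3D.
# Architecture and training setup are illustrative, not the paper's model.
import torch
import torch.nn as nn


class SemanticField(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.mlp(xyz)  # per-point semantic logits


num_classes = 10
field = SemanticField(num_classes)
optim = torch.optim.Adam(field.parameters(), lr=1e-3)

# Stand-in supervision: 3D points obtained by back-projecting annotated 2D
# pixels with known camera poses, paired with their semantic labels.
points = torch.rand(1024, 3)
labels = torch.randint(0, num_classes, (1024,))

for _ in range(100):
    loss = nn.functional.cross_entropy(field(points), labels)
    optim.zero_grad()
    loss.backward()
    optim.step()

# Query the field from a novel viewpoint: label arbitrary 3D points.
novel_points = torch.rand(5, 3)
print(field(novel_points).argmax(dim=-1))
```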
We consider the problem of spatial path planning. In contrast to classical solutions, which optimize a new plan from scratch and assume access to a complete map with ground-truth obstacle locations, we learn a planner from data in a differentiable fashion, allowing us to leverage statistical regularities from past data. We propose Spatial Planning Transformers (SPT), which, given an obstacle map, learn to generate actions by planning over long-range spatial dependencies, unlike prior data-driven planners that propagate information locally via a convolutional structure in an iterative fashion. In the setting where the ground-truth map is unknown to the agent, we leverage pre-trained SPTs in an end-to-end framework that has the structure of a mapper and a planner built into it, allowing seamless generalization to out-of-distribution maps and goals. SPTs outperform state-of-the-art differentiable planners across all settings for both manipulation and navigation tasks, yielding an absolute improvement of 7-19%.
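The long-range-dependency idea can be sketched with a standard transformer encoder over per-cell map tokens that predicts per-cell action scores; the architecture, token features, and dimensions below are illustrative assumptions, not the SPT implementation.

```python
# Illustrative sketch of transformer-based spatial planning: every map cell
# attends to every other cell in one layer, unlike convolutional planners that
# propagate values locally over many iterations. Not the SPT implementation.
import torch
import torch.nn as nn


class MapPlannerTransformer(nn.Module):
    def __init__(self, map_size: int = 16, d_model: int = 64, num_actions: int = 4):
        super().__init__()
        self.embed = nn.Linear(2, d_model)  # per-cell features: (obstacle, is_goal)
        self.pos = nn.Parameter(torch.randn(map_size * map_size, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, num_actions)  # per-cell action scores

    def forward(self, obstacle_map: torch.Tensor, goal_map: torch.Tensor):
        b, h, w = obstacle_map.shape
        tokens = torch.stack([obstacle_map, goal_map], dim=-1).view(b, h * w, 2)
        x = self.embed(tokens) + self.pos
        x = self.encoder(x)                    # global, long-range spatial attention
        return self.action_head(x).view(b, h, w, -1)


planner = MapPlannerTransformer()
obstacles = (torch.rand(1, 16, 16) > 0.8).float()
goal = torch.zeros(1, 16, 16)
goal[0, 12, 3] = 1.0
action_scores = planner(obstacles, goal)       # would be trained against optimal plans
print(action_scores[0, 2, 2].argmax().item())  # greedy action at cell (2, 2)
```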
In this paper, we explore how to build upon the data and models of internet images and use them to adapt to robot vision without requiring any extra labels. We present a framework called Self-supervised Embodied Active Learning (SEAL). It utilizes perception models trained on internet images to learn an active exploration policy. The observations gathered by this exploration policy are labelled using 3D consistency and used to improve the perception model. We build and utilize 3D semantic maps to learn both action and perception in a completely self-supervised manner. The semantic map is used to compute an intrinsic motivation reward for training the exploration policy and to label the agent's observations using spatio-temporal 3D consistency and label propagation. We demonstrate that the SEAL framework can be used to close the action-perception loop: it improves the object detection and instance segmentation performance of a pretrained perception model simply by moving around in the training environments, and the improved perception model can in turn be used to improve object goal navigation.
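The 3D-consistency labelling step can be illustrated with a small toy: noisy per-view predictions are lifted to 3D, votes are accumulated per voxel, and the majority label is propagated back to every view as a pseudo-label for fine-tuning. Voxel size, noise rates, and names are assumptions; this is not SEAL's code.

```python
# Toy sketch of labelling observations via 3D consistency: per-view predicted
# labels are accumulated into a voxel map, and the majority label per voxel is
# propagated back to every view that saw it. Illustrative, not SEAL's code.
import numpy as np
from collections import Counter, defaultdict

VOXEL = 0.25  # meters; hypothetical voxel size


def voxel_key(point):
    return tuple(np.floor(point / VOXEL).astype(int))


# Each observation: a 3D point (from depth + pose) and a possibly-noisy label
# predicted by a perception model pretrained on internet images.
rng = np.random.default_rng(0)
true_points = rng.uniform(0, 2, size=(50, 3))
observations = []
for view in range(5):                                  # the agent sees each point from 5 poses
    for p in true_points:
        noisy_label = 1 if rng.random() > 0.3 else 0   # 70% correct 'chair' detections
        observations.append((view, p, noisy_label))

# 1) Accumulate label votes per voxel across all views (spatio-temporal consistency).
votes = defaultdict(Counter)
for _, point, label in observations:
    votes[voxel_key(point)][label] += 1

# 2) Propagate the majority label back to every single-view observation,
#    producing pseudo-labels that could fine-tune the perception model.
pseudo_labels = [votes[voxel_key(p)].most_common(1)[0][0] for _, p, _ in observations]
noisy_acc = np.mean([lab == 1 for _, _, lab in observations])
clean_acc = np.mean([lab == 1 for lab in pseudo_labels])
print(f"per-view labels correct: {noisy_acc:.2f} -> after 3D consistency: {clean_acc:.2f}")
```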
We introduce Habitat 2.0 (H2.0), a simulation platform for training virtual robots in interactive 3D environments with complex physics. We make comprehensive contributions across the embodied AI stack: data, simulation, and benchmark tasks. Specifically, we present: (i) ReplicaCAD: an artist-authored, annotated, reconfigurable 3D dataset of apartments (matching real spaces) with articulated objects (e.g., cabinets and drawers that can open/close); (ii) H2.0: a high-performance physics-enabled 3D simulator with speeds exceeding 25,000 simulation steps per second (850x real-time) on an 8-GPU node, representing a 100x speed-up over prior work; and (iii) the Home Assistant Benchmark (HAB): a suite of common tasks for assistive robots (tidying the house, preparing groceries, setting the table) that tests a range of mobile manipulation capabilities. These large-scale engineering contributions allow us to systematically compare large-scale reinforcement learning (RL) and classical sense-plan-act (SPA) pipelines on long-horizon structured tasks, with a focus on generalization to new objects, receptacles, and layouts. We find that (1) flat RL policies struggle on HAB compared to hierarchical ones; (2) a hierarchy with independent skills suffers from 'hand-off problems'; and (3) SPA pipelines are more brittle than RL policies.