This report presents the method that won the RxR-Habitat Competition at CVPR 2022. The competition addresses Vision-and-Language Navigation in Continuous Environments (VLN-CE), which requires an agent to follow step-by-step natural language instructions to reach a goal. We present a modular plan-and-control approach for the task. Our model consists of three modules: the Candidate Waypoints Predictor (CWP), the History Enhanced Planner, and the Tryout Controller. In each decision loop, CWP first predicts a set of candidate waypoints based on depth observations from multiple views. It reduces the complexity of the action space and facilitates planning. Then the History Enhanced Planner is adopted to select one of the candidate waypoints as the subgoal. The planner also encodes historical memory to track navigation progress, which is especially effective for long-horizon navigation. Finally, we propose a non-parametric heuristic controller named Tryout to execute low-level actions to reach the planned subgoal. It is based on a trial-and-error mechanism that helps the agent avoid obstacles and escape from getting stuck. All three modules work hierarchically until the agent stops. We further adopt recent advances in vision-and-language navigation (VLN) to improve performance, such as pre-training on large-scale synthetic in-domain datasets, environment-level data augmentation, and snapshot model ensembling. Our model won the RxR-Habitat Competition 2022, with relative improvements of 48% and 90% over existing methods on the NDTW and SR metrics, respectively.
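The decision loop above is described only at the module level; the following minimal Python sketch shows how such a hierarchy could be wired together. Every component here (the environment interface, the waypoint predictor, the planner, and the controller internals) is a hypothetical placeholder for illustration, not the authors' implementation.

```python
import random

class DummyEnv:
    """Hypothetical stand-in for a VLN-CE environment; only the interface matters."""
    def __init__(self):
        self.progress = 0.0

    def step(self, action):
        """Execute a low-level action; return True if the agent actually moved."""
        if action == "move_forward":
            blocked = random.random() < 0.3  # pretend obstacles appear sometimes
            if not blocked:
                self.progress += 0.25
            return not blocked
        return True  # rotations always succeed

    def reached(self, subgoal):
        return self.progress >= subgoal

def predict_candidate_waypoints(depth_views):
    """Placeholder CWP: propose a small set of subgoals from depth observations."""
    return [0.5, 1.0, 1.5]

def plan_subgoal(instruction, history, candidates):
    """Placeholder history-enhanced planner: select one candidate waypoint."""
    return candidates[min(len(history), len(candidates) - 1)]

def tryout(env, subgoal, max_steps=20):
    """Trial-and-error control: when moving forward makes no progress
    (e.g. an obstacle), try a rotation instead of repeating the action."""
    for _ in range(max_steps):
        if env.reached(subgoal):
            return True
        if not env.step("move_forward"):
            env.step(random.choice(["turn_left", "turn_right"]))
    return False

env, history = DummyEnv(), []
for _ in range(3):  # decision loops; a real agent runs until it predicts STOP
    candidates = predict_candidate_waypoints(depth_views=None)
    subgoal = plan_subgoal("walk to the stairs", history, candidates)
    tryout(env, subgoal)
    history.append(subgoal)
```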
Existing approaches for vision-and-language navigation (VLN) are mainly based on cross-modal reasoning over discrete views. However, this scheme may hamper an agent's spatial and numerical reasoning because of incomplete objects within a single view and duplicate observations across views. A potential solution is mapping discrete views into a unified bird's-eye view, which can aggregate partial and duplicate observations. Existing metric maps could achieve this goal, but they suffer from less expressive semantics (e.g. usually predefined labels) and limited map size, which weakens an agent's language grounding and long-term planning ability. Inspired by the robotics community, we introduce hybrid topo-metric maps into VLN, where a topological map is used for long-term planning and a metric map for short-term reasoning. Beyond mapping with more expressive deep features, we further design a pre-training framework via the hybrid map to learn language-informed map representations, which enhances cross-modal grounding and facilitates the final language-guided navigation goal. Extensive experiments demonstrate the effectiveness of the map-based route for VLN, and the proposed method sets the new state-of-the-art on three VLN benchmarks.
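As a rough, non-authoritative illustration of the hybrid idea, the sketch below pairs a topological graph used for long-term route planning with a small local metric grid for short-term reasoning; the rooms, the BFS planner, and the binary grid are invented stand-ins, whereas the paper stores learned deep features at nodes and map cells.

```python
from collections import deque

# Topological map: one node per visited place, edges where the agent moved.
topo = {
    "hall": ["kitchen", "stairs"],
    "kitchen": ["hall"],
    "stairs": ["hall", "bedroom"],
    "bedroom": ["stairs"],
}

def plan_route(graph, start, goal):
    """Long-term plan over the topological map via breadth-first search."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

# Metric map: a small egocentric grid for short-term reasoning
# (0 = free, 1 = obstacle); a learned map would hold feature vectors.
local_grid = [[0, 1, 0],
              [0, 0, 0],
              [0, 1, 0]]

print(plan_route(topo, "kitchen", "bedroom"))  # ['kitchen', 'hall', 'stairs', 'bedroom']
```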
Recent studies in Vision-and-Language Navigation (VLN) train RL agents to execute natural-language navigation instructions in photorealistic environments, as a step towards robots that can follow human instructions. However, given the scarcity of human instruction data and limited diversity in the training environments, these agents still struggle with complex language grounding and spatial language understanding. Pretraining on large text and image-text datasets from the web has been extensively explored but the improvements are limited. We investigate large-scale augmentation with synthetic instructions. We take 500+ indoor environments captured in densely-sampled 360 degree panoramas, construct navigation trajectories through these panoramas, and generate a visually-grounded instruction for each trajectory using Marky, a high-quality multilingual navigation instruction generator. We also synthesize image observations from novel viewpoints using an image-to-image GAN. The resulting dataset of 4.2M instruction-trajectory pairs is two orders of magnitude larger than existing human-annotated datasets, and contains a wider variety of environments and viewpoints. To efficiently leverage data at this scale, we train a simple transformer agent with imitation learning. On the challenging RxR dataset, our approach outperforms all existing RL agents, improving the state-of-the-art NDTW from 71.1 to 79.1 in seen environments, and from 64.6 to 66.8 in unseen test environments. Our work points to a new path to improving instruction-following agents, emphasizing large-scale imitation learning and the development of synthetic instruction generation capabilities.
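NDTW, quoted here and in the RxR-Habitat report above, is the normalized Dynamic Time Warping fidelity between the agent's path and the reference path (Ilharco et al., 2019). A minimal sketch follows; paths are simplified to 2D points, and the 3 m threshold is the standard R2R/RxR success distance.

```python
import math

def ndtw(pred, ref, d_th=3.0):
    """exp(-DTW(pred, ref) / (|ref| * d_th)), where DTW accumulates the
    Euclidean distance between aligned points of the two paths."""
    n, m = len(pred), len(ref)
    inf = float("inf")
    dtw = [[inf] * (m + 1) for _ in range(n + 1)]
    dtw[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(pred[i - 1], ref[j - 1])
            dtw[i][j] = cost + min(dtw[i - 1][j], dtw[i][j - 1], dtw[i - 1][j - 1])
    return math.exp(-dtw[n][m] / (m * d_th))

print(ndtw([(0, 0), (1, 0), (2, 0)], [(0, 0), (1, 0), (2, 1)]))  # close paths -> ~0.9
```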
We study the problem of developing autonomous agents that can follow human instructions to infer and perform a sequence of actions to complete the underlying task. Significant progress has been made in recent years, especially for tasks with short horizons. However, when it comes to long-horizon tasks with extended sequences of actions, an agent can easily ignore some instructions or get stuck in the middle of a long instruction, and eventually fail the task. To address this challenge, we propose a model-agnostic milestone-based task tracker (M-Track) to guide the agent and monitor its progress. Specifically, we propose a milestone builder that tags the instructions with navigation and interaction milestones that the agent needs to complete step by step, and a milestone checker that systematically checks the agent's progress on its current milestone and determines when to proceed to the next one. On the challenging ALFRED dataset, our M-Track leads to notable 33% and 52% relative improvements in unseen success rate over two competitive base models.
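A toy rendition of the milestone loop follows. The real milestone builder and checker are learned models; the keyword heuristics and textual observations below are invented purely for illustration.

```python
def build_milestones(instruction):
    """Placeholder builder: one navigation or interaction milestone per clause."""
    milestones = []
    for clause in instruction.split(","):
        kind = "interaction" if any(v in clause for v in ("pick", "put", "open")) else "navigation"
        milestones.append({"kind": kind, "text": clause.strip(), "done": False})
    return milestones

def check_milestone(milestone, observation):
    """Placeholder checker: done when the milestone's last word is observed."""
    return milestone["text"].split()[-1] in observation

def track(instruction, observations):
    """Advance through milestones as observations satisfy them, in order."""
    milestones, idx = build_milestones(instruction), 0
    for obs in observations:
        if idx < len(milestones) and check_milestone(milestones[idx], obs):
            milestones[idx]["done"] = True
            idx += 1
    return milestones

print(track("walk to the fridge, open the fridge, pick up the apple",
            ["hallway", "I see the fridge", "the fridge is open", "holding the apple"]))
```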
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
This study focuses on embodied agents that can follow natural language instructions to complete complex tasks in a visually-perceived environment. Existing methods rely on a large amount of (instruction, gold trajectory) pairs to learn a good policy. The high data cost and poor sample efficiency prevent the development of versatile agents that are capable of many tasks and can learn new tasks quickly. In this work, we propose a novel method, LLM-Planner, that harnesses the power of large language models (LLMs) such as GPT-3 to do few-shot planning for embodied agents. We further propose a simple but effective way to enhance LLMs with physical grounding to generate plans that are grounded in the current environment. Experiments on the ALFRED dataset show that our method can achieve very competitive few-shot performance, even outperforming several recent baselines that are trained using the full training data despite using less than 0.5% of paired training data. Existing methods can barely complete any task successfully under the same few-shot setting. Our work opens the door for developing versatile and sample-efficient embodied agents that can quickly learn many tasks.
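The grounding idea lends itself to a short sketch: in-context examples plus a list of currently visible objects constrain the plan to the actual scene. The prompt layout, example task, and `call_llm` stub below are assumptions for illustration, not the paper's exact prompt or an actual GPT-3 call.

```python
FEW_SHOT_EXAMPLES = """\
Task: put a washed apple in the fridge
Visible objects: apple, sink, fridge
Plan: 1. go to apple 2. pick up apple 3. go to sink 4. wash apple 5. go to fridge 6. put apple in fridge
"""

def build_prompt(task, visible_objects):
    """Physical grounding: list the objects the agent can currently see,
    so the generated plan refers only to things present in the scene."""
    return (FEW_SHOT_EXAMPLES
            + f"Task: {task}\n"
            + f"Visible objects: {', '.join(visible_objects)}\n"
            + "Plan:")

def call_llm(prompt):
    """Stub standing in for a large language model; returns a canned plan."""
    return " 1. go to mug 2. pick up mug 3. go to coffee maker 4. put mug in coffee maker"

def plan(task, visible_objects):
    return call_llm(build_prompt(task, visible_objects)).strip()

print(plan("put a mug in the coffee maker", ["mug", "table", "coffee maker"]))
```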
In vision-and-language navigation (VLN), an embodied agent is required to navigate in realistic 3D environments following natural language instructions. One major bottleneck for existing VLN approaches is the lack of sufficient training data, resulting in unsatisfactory generalization to unseen environments. While VLN data is typically collected manually, such an approach is expensive and prevents scalability. In this work, we address the data scarcity issue by proposing to automatically create a large-scale VLN dataset from 900 unlabeled 3D buildings from HM3D. We generate a navigation graph for each building and transfer object predictions from 2D to pseudo 3D object labels via cross-view consistency. We then use the pseudo object labels as prompts to fine-tune a pretrained language model, alleviating the cross-modal gap in instruction generation. Our resulting HM3D-AutoVLN dataset is an order of magnitude larger than existing VLN datasets in terms of navigation environments and instructions. We experimentally demonstrate that HM3D-AutoVLN significantly increases the generalization ability of resulting VLN models. On the SPL metric, our approach improves over the state of the art by 7.1% and 8.1% on the unseen validation splits of the REVERIE and SOON datasets, respectively.
Understanding spatial and visual information is essential for a navigation agent that follows natural language instructions. Current transformer-based VLN agents entangle orientation and vision information, which limits the gain from learning each information source. In this paper, we design a neural agent with explicit orientation and vision modules. Those modules learn to ground spatial information and landmark mentions in the instructions to the visual environment. To strengthen the spatial reasoning and visual perception of the agent, we design specific pre-training tasks to feed and better utilize the corresponding modules in our final navigation model. We evaluate our approach on both the Room-to-Room (R2R) and Room-for-Room (R4R) datasets and achieve state-of-the-art results on both benchmarks.
Vision-and-Language Navigation (VLN) is a task in which an agent follows language instructions to navigate to a target location, relying on continuous interaction with the environment while moving. Recent transformer-based VLN methods have made great progress by building direct connections between visual observations and language instructions via multimodal cross-attention mechanisms. However, these methods usually represent temporal context as a fixed-length vector, either by using an LSTM decoder or by building a recurrent transformer with manually designed hidden states. Considering that a single fixed-length vector is often insufficient to capture long-term temporal context, in this paper we introduce a Multimodal Transformer with Variable-length Memory (MTVM) that models temporal context explicitly. Specifically, MTVM enables the agent to keep track of the navigation trajectory by directly storing previous activations in a memory bank. To further boost performance, we propose a memory-aware consistency loss that helps learn a better joint representation of temporal context together with randomly masked instructions. We evaluate MTVM on the popular R2R and CVDN datasets; our model improves the success rate by 2% on the R2R unseen validation and test sets, and improves Goal Progress by 1.6 m on the CVDN test set.
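A bare-bones sketch of the memory-bank idea follows: past step activations are stored unchanged and attended over with the current state as the query. The single-head dot-product attention and the dimensions are illustrative assumptions, and the memory-aware consistency loss is omitted.

```python
import torch
import torch.nn.functional as F

class MemoryBank:
    """Variable-length memory: one stored activation per past decision step."""
    def __init__(self):
        self.slots = []

    def write(self, activation):
        self.slots.append(activation.detach())

    def read(self, query):
        """Attend over all stored activations with the current query."""
        if not self.slots:
            return torch.zeros_like(query)
        mem = torch.stack(self.slots)                                # (T, d)
        attn = F.softmax(mem @ query / mem.shape[-1] ** 0.5, dim=0)  # (T,)
        return attn @ mem                                            # (d,)

bank, d = MemoryBank(), 8
for step in range(3):
    state = torch.randn(d)      # stand-in for this step's transformer activation
    context = bank.read(state)  # temporal context grows with the trajectory
    bank.write(state)
```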
Vision-and-Language Navigation (VLN) tasks require an agent to navigate through an environment based on language instructions. In this paper, we aim to tackle two key challenges in this task: utilizing multilingual instructions for improved instruction-path grounding, and navigating in new environments unseen during training. To address these challenges, we propose CLEAR: Cross-Lingual and Environment-Agnostic Representations. First, our agent learns a shared and visually-aligned cross-lingual language representation for the three languages (English, Hindi and Telugu) in the Room-Across-Room dataset. Our language representation learning is guided by text pairs that describe the same visual information. Second, our agent learns an environment-agnostic visual representation by maximizing the similarity between semantically-aligned image pairs (with object-matching constraints) across different environments. Our environment-agnostic visual representation can mitigate the environment bias induced by low-level visual information. Empirically, on the Room-Across-Room dataset, we show that our multilingual agent makes large improvements over strong baseline models on all metrics when generalizing to unseen environments with the cross-lingual language representation and the environment-agnostic visual representation. Furthermore, we show that our learned language and visual representations can be successfully transferred to the Room-to-Room and Cooperative Vision-and-Dialogue Navigation tasks, and present detailed qualitative and quantitative generalization and grounding analysis. Our code is available at https://github.com/jialuli-luka/CLEAR
Building a conversational embodied agent to execute real-life tasks has been a long-standing yet quite challenging research goal, as it requires effective human-agent communication, multimodal understanding, long-range sequential decision making, and more. Traditional symbolic methods have scaling and generalization issues, while end-to-end deep learning models suffer from data scarcity and high task complexity, and are often hard to explain. To benefit from both worlds, we propose JARVIS, a neuro-symbolic commonsense reasoning framework for modular, generalizable, and interpretable conversational embodied agents. First, it acquires symbolic representations by prompting large language models (LLMs) for language understanding and sub-goal planning, and by constructing semantic maps from visual observations. Then the symbolic module generates sub-goal plans and action sequences based on task- and action-level common sense. Extensive experiments on the TEACh dataset validate the efficacy and efficiency of our JARVIS framework, which achieves state-of-the-art (SOTA) results on all three dialog-based embodied tasks, including Execution from Dialog History (EDH), Trajectory from Dialogue (TfD), and Two-Agent Task Completion (TATC) (e.g., our method boosts the unseen success rate on EDH from 6.1% to 15.8%). Moreover, we systematically analyze the essential factors that affect task performance and demonstrate the superiority of our method in few-shot settings. Our JARVIS model ranks first in the Alexa Prize SimBot Public Benchmark Challenge.
Natural language provides an accessible and expressive interface for specifying long-term tasks for robotic agents. However, non-experts are likely to specify such tasks with high-level instructions that abstract over specific robot actions through several layers of abstraction. We propose that the key to bridging this gap between language and robot actions over long execution horizons is persistent representations. We propose a persistent spatial semantic representation method and demonstrate how it enables building an agent that performs hierarchical reasoning to effectively execute long-term tasks. We evaluate our approach on the ALFRED benchmark and achieve state-of-the-art results, despite completely avoiding the commonly used step-by-step instructions.
A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator, a large-scale reinforcement learning environment based on real imagery [11]. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings: the Room-to-Room (R2R) dataset (https://bringmeaspoon.org). Example instruction: "Head upstairs and walk past the piano through an archway directly in front. Turn right when the hallway ends at pictures and table. Wait by the moose antlers hanging on the wall."
In this paper, we introduce the map-language navigation task, in which an agent executes natural language instructions and moves to the target position based only on a given 3D semantic map. To tackle the task, we design an instruction-aware path proposal and discrimination model (IPPD). Our approach leverages map information to provide instruction-aware path proposals, i.e., it selects all potential instruction-consistent candidate paths to reduce the solution space. Next, to represent the map observations along a path for better modality alignment, a novel path feature encoding scheme tailored to semantic maps is proposed. An attention-based language-driven discriminator is designed to evaluate the candidate paths and determine the best path as the final result. Compared with single-step greedy decision-making methods, our approach naturally avoids error accumulation. Compared with a single-step imitation learning method, IPPD achieves performance gains of over 17% in navigation success and over 0.18 on the path-matching metric nDTW in challenging unseen environments.
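The propose-then-discriminate pattern contrasts with greedy stepwise decoding and fits in a few lines. The landmark-counting scorer below is a deliberately crude, hypothetical stand-in for the paper's attention-based language-driven discriminator.

```python
def score_path(path_landmarks, instruction_tokens):
    """Placeholder discriminator: count instruction words that name a
    landmark observed anywhere along the candidate path."""
    landmarks = {lm for step in path_landmarks for lm in step}
    return sum(tok in landmarks for tok in instruction_tokens)

def select_path(candidates, instruction):
    """Scoring whole paths at once avoids the error accumulation of
    greedy single-step decisions."""
    tokens = instruction.lower().split()
    return max(candidates, key=lambda p: score_path(p["landmarks"], tokens))

candidates = [
    {"name": "A", "landmarks": [["door"], ["piano"], ["bedroom"]]},
    {"name": "B", "landmarks": [["door"], ["kitchen"], ["sink"]]},
]
print(select_path(candidates, "walk past the piano into the bedroom")["name"])  # A
```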
There has been an emerging paradigm shift in AI algorithms and agents from the era of "Internet AI" to "Embodied AI", where learning no longer draws on datasets of images, videos, or text curated primarily from the Internet. Instead, agents learn through interactions with their environments from an egocentric perception similar to that of humans. Consequently, there has been substantial growth in demand for embodied AI simulators to support a diversity of embodied AI research tasks. This growing interest in embodied AI benefits the greater pursuit of artificial general intelligence (AGI), but there has been no contemporary and comprehensive survey of this field. This paper aims to provide an encyclopedic survey of the field of embodied AI, from its simulators to its research. By evaluating nine current embodied AI simulators against seven features we propose, this paper aims to understand the simulators in their provision for embodied AI research, along with their limitations. Finally, this paper surveys the three main research tasks in embodied AI, namely visual exploration, visual navigation, and embodied question answering (QA), covering the state-of-the-art approaches, evaluation metrics, and datasets. With the new insights gained from surveying the field, the paper also provides simulator-to-task selection recommendations and suggestions on future directions of the field.
This work studies the problem of image-goal navigation, which entails guiding a robot with noisy sensors and controls through real crowded environments. Recent fruitful approaches rely on deep reinforcement learning and learn navigation policies in simulation environments that are much simpler than the real world. Directly transferring these trained policies to the real environment can be extremely challenging or even dangerous. We tackle this problem with a hierarchical navigation method composed of four decoupled modules. The first module maintains an obstacle map during robot navigation. The second one periodically predicts a long-term goal on the real-time map. The third one plans a collision-free command set for navigating to long-term goals, while the final module steers the robot properly close to the goal image. The four modules are developed separately to suit image-goal navigation in real crowded scenarios. In addition, the hierarchical decomposition decouples the learning of navigation goal planning, collision avoidance, and navigation-ending prediction, which reduces the search space during navigation training and helps improve generalization to previously unseen real scenes. We evaluate the method in both a simulator and the real world with a mobile robot. The results show that our method outperforms several navigation baselines and can successfully achieve navigation tasks in these scenarios.
In human environments, robots are expected to accomplish a variety of manipulation tasks given simple natural language instructions. Yet robotic manipulation is extremely challenging, as it requires fine-grained motor control, long-term memory, and generalization to previously unseen tasks and environments. To address these challenges, we propose a unified transformer-based approach that takes into account multiple inputs. In particular, our transformer architecture integrates (i) natural language instructions and (ii) multi-view scene observations, while (iii) keeping track of the full history of observations and actions. Such an approach enables learning dependencies between history and instructions and improves manipulation precision using multiple views. We evaluate our method on the challenging RLBench benchmark and on a real-world robot. Notably, our approach scales to 74 diverse RLBench tasks and outperforms the state of the art. We also address instruction-conditioned tasks and demonstrate excellent generalization to previously unseen variations.
In this work, we study the problem of Embodied Referring Expression Grounding, where an agent needs to navigate in a previously unseen environment and localize a remote object described by a concise high-level natural language instruction. When facing such a situation, a human tends to imagine what the destination may look like and to explore the environment based on prior knowledge of the environmental layout, such as the fact that a bathroom is more likely to be found near a bedroom than a kitchen. We have designed an autonomous agent called Layout-aware Dreamer (LAD), including two novel modules, that is, the Layout Learner and the Goal Dreamer to mimic this cognitive decision process. The Layout Learner learns to infer the room category distribution of neighboring unexplored areas along the path for coarse layout estimation, which effectively introduces layout common sense of room-to-room transitions to our agent. To learn an effective exploration of the environment, the Goal Dreamer imagines the destination beforehand. Our agent achieves new state-of-the-art performance on the public leaderboard of the REVERIE dataset in challenging unseen test environments with improvement in navigation success (SR) by 4.02% and remote grounding success (RGS) by 3.43% compared to the previous state-of-the-art. The code is released at https://github.com/zehao-wang/LAD
We present ALFRED (Action Learning From Realistic Environments and Directives), a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks. ALFRED includes long, compositional tasks with non-reversible state changes to shrink the gap between research benchmarks and real-world applications. ALFRED consists of expert demonstrations in interactive visual environments for 25k natural language directives. These directives contain both high-level goals like "Rinse off a mug and place it in the coffee maker." and low-level language instructions like "Walk to the coffee maker on the right." ALFRED tasks are more complex in terms of sequence length, action space, and language than existing vision-and-language task datasets. We show that a baseline model based on recent embodied vision-and-language tasks performs poorly on ALFRED, suggesting that there is significant room for developing innovative grounded visual language understanding models with this benchmark.
As a frontier research direction aiming to pave the way for general-purpose robots, Vision-and-Language Navigation (VLN) has been a hot topic in the computer vision and natural language processing communities. The VLN task requires an agent to navigate to a target location in unfamiliar environments following natural language instructions. Recently, transformer-based models have gained significant improvements on VLN tasks, since the attention mechanism in the transformer architecture can better integrate intra- and inter-modal information of vision and language. However, there are two problems in current transformer-based models. 1) The models process each view independently without taking the integrity of objects into account. 2) During the self-attention operation in the visual modality, views that are spatially distant can be interwoven with each other without explicit restriction. This kind of mixing may introduce extra noise instead of useful information. To address these problems, we propose 1) a slot-attention-based module to incorporate information from segmentations of the same object, and 2) a local attention mask mechanism to limit the visual attention span. The proposed modules can be easily plugged into any VLN architecture, and we use Recurrent VLN-BERT as our base model. Experiments on the R2R dataset show that our model achieves state-of-the-art results.
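The local attention mask reduces to a few tensor operations. The 12-view panorama layout and the one-neighbor window below are assumptions chosen for illustration, not the paper's configuration.

```python
import torch

def local_attention_mask(num_views=12, window=1):
    """mask[i, j] is True when view j lies within `window` steps of view i
    around the panorama circle (distances wrap at the ends)."""
    idx = torch.arange(num_views)
    dist = (idx[None, :] - idx[:, None]).abs()
    dist = torch.minimum(dist, num_views - dist)  # circular distance
    return dist <= window

mask = local_attention_mask()
# Disallowed pairs get -inf scores, so softmax gives them zero weight:
scores = torch.randn(12, 12).masked_fill(~mask, float("-inf"))
attn = scores.softmax(dim=-1)
```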