We propose a motion forecasting model that exploits a novel structured map representation as well as actor-map interactions. Instead of encoding vectorized maps as raster images, we construct a lane graph from raw map data to explicitly preserve the map structure. To capture the complex topology and long-range dependencies of the lane graph, we propose LaneGCN, which extends graph convolutions with multiple adjacency matrices and along-lane dilation. To capture the complex interactions between actors and maps, we exploit a fusion network consisting of four types of interactions: actor-to-lane, lane-to-lane, lane-to-actor and actor-to-actor. Powered by LaneGCN and actor-map interactions, our model is able to predict accurate and realistic multi-modal trajectories. Our approach significantly outperforms the state-of-the-art on the large-scale Argoverse motion forecasting benchmark.
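As a rough illustration of the multi-relation graph convolution idea described above (separate weights per connection type plus along-lane dilation), here is a minimal PyTorch sketch. The layer sizes, normalization, and exact update rule are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LaneConv(nn.Module):
    """Sketch of a lane-graph convolution with multiple adjacency matrices."""
    def __init__(self, dim, dilations=(1, 2, 4)):
        super().__init__()
        self.self_fc = nn.Linear(dim, dim)
        # one linear map per relation type (predecessor, successor, left, right)
        self.rel_fc = nn.ModuleDict({
            rel: nn.Linear(dim, dim) for rel in ("pre", "suc", "left", "right")
        })
        # extra maps for dilated along-lane (k-hop successor) connections
        self.dil_fc = nn.ModuleList([nn.Linear(dim, dim) for _ in dilations])
        self.dilations = dilations
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, adj):
        # x:   [N, dim] lane-node features
        # adj: dict of [N, N] adjacency matrices, one per relation type
        out = self.self_fc(x)
        for rel, fc in self.rel_fc.items():
            out = out + adj[rel] @ fc(x)
        for k, fc in zip(self.dilations, self.dil_fc):
            # along-lane dilation: propagate along k-hop successors
            out = out + torch.linalg.matrix_power(adj["suc"], k) @ fc(x)
        return torch.relu(self.norm(out))
```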
Forecasting the future motion of road participants is crucial for autonomous driving but remains highly challenging due to the daunting uncertainty of motion. Recently, most motion forecasting methods have resorted to goal-based strategies, i.e., predicting endpoints of motion trajectories as conditions for regressing the full trajectories, so that the search space of solutions can be reduced. However, accurate goal coordinates are hard to predict and evaluate. In addition, the point representation of a destination limits the use of the rich road context, leading to inaccurate predictions. A goal area, i.e., a region of possible destinations, rather than a goal coordinate, can provide a softer constraint by offering more tolerance and guidance when searching for potential trajectories. With this in mind, we propose a new goal-area-based framework, named Goal Area Network (GANet), for motion forecasting, which models goal areas rather than exact goal coordinates as the preconditions for trajectory prediction and performs more reliably and accurately. Specifically, we propose a GoICrop (goal area of interest crop) operator to effectively extract semantic lane features within goal areas and to model actors' future interactions in those areas, which benefits future trajectory estimation considerably. GANet ranks first among all public works (up to the time of paper submission), and its source code has been released.
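The goal-area feature extraction can be pictured as pooling lane features that fall inside a region around a predicted goal. The sketch below (radius threshold, mean pooling) is an illustrative assumption, not the paper's GoICrop operator.

```python
import torch

def goal_area_crop(goal_center, lane_xy, lane_feat, radius=10.0):
    """goal_center: [B, 2], lane_xy: [N, 2], lane_feat: [N, D] -> [B, D]."""
    dist = torch.cdist(goal_center, lane_xy)                 # [B, N] distances to lane nodes
    mask = (dist < radius).float()                           # lane nodes inside the goal area
    weights = mask / mask.sum(dim=1, keepdim=True).clamp(min=1.0)
    return weights @ lane_feat                               # mean-pooled lane context per goal
```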
Predicting the future locations of agents in a scene is an important problem in autonomous driving. In recent years, significant progress has been made in representing the scene and its agents. The interactions of agents with the scene and with each other are typically modeled by graph neural networks. However, the graph structure is mostly static and fails to represent the temporal changes of highly dynamic scenes. In this work, we propose a temporal graph representation to better capture the dynamics of traffic scenes. We complement the representation with two types of memory modules: one focusing on the agent of interest and one focusing on the entire scene. This allows us to learn temporally aware representations that achieve good results even with a simple regression over multiple futures. When combined with goal-conditioned prediction, we show even better results, reaching state-of-the-art performance on the Argoverse benchmark.
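A minimal sketch of the two-memory idea: one recurrent memory tracks the agent of interest, another tracks a pooled summary of the whole scene. The use of GRU cells and mean pooling is an assumption made here for illustration.

```python
import torch
import torch.nn as nn

class DualMemory(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.agent_mem = nn.GRUCell(dim, dim)   # memory for the agent of interest
        self.scene_mem = nn.GRUCell(dim, dim)   # memory for the whole scene

    def forward(self, agent_feats, scene_feats):
        # agent_feats: [T, dim] per-timestep agent features
        # scene_feats: [T, N, dim] per-timestep features of all scene nodes
        dim = agent_feats.size(-1)
        h_a = torch.zeros(1, dim, device=agent_feats.device)
        h_s = torch.zeros(1, dim, device=agent_feats.device)
        for t in range(agent_feats.size(0)):
            h_a = self.agent_mem(agent_feats[t:t + 1], h_a)
            h_s = self.scene_mem(scene_feats[t].mean(dim=0, keepdim=True), h_s)
        return h_a.squeeze(0), h_s.squeeze(0)    # temporally aware summaries
```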
Motion prediction of traffic participants is essential for safe and robust automated driving systems, especially in cluttered urban environments. However, it is highly challenging due to complex road topologies and the uncertain intentions of other agents. In this paper, we present a graph-based trajectory prediction network named the dual-scale predictor (DSP), which encodes both static and dynamic driving context in a hierarchical manner. Different from methods based on rasterized maps or sparse lane graphs, we consider the driving context as a graph with two layers, focusing on geometric and topological features respectively. Graph neural networks (GNNs) are applied to extract features at different levels of granularity, which are subsequently aggregated through an attention-based inter-layer network, achieving better local-global feature fusion. Following the recent goal-driven trajectory prediction pipeline, high-likelihood goal candidates for the target agent are extracted, and predicted trajectories are generated conditioned on these goals. Thanks to the proposed dual-scale context fusion network, our DSP is able to generate accurate and human-like multi-modal trajectories. We evaluate the proposed method on the large-scale Argoverse motion forecasting benchmark, achieving promising results and outperforming recent state-of-the-art methods.
Self-supervised learning (SSL) is an emerging technique that has been successfully employed to train convolutional neural networks (CNNs) and graph neural networks (GNNs) for more transferable, generalizable, and robust representation learning. However, it has rarely been explored for motion forecasting in autonomous driving. In this study, we report the first systematic exploration and assessment of incorporating self-supervision into motion forecasting. We first propose to investigate four novel self-supervised learning tasks for motion forecasting, with theoretical rationale and quantitative and qualitative comparisons on the challenging large-scale Argoverse dataset. Second, we point out that our auxiliary SSL-based learning setup not only outperforms forecasting methods that use transformers, complicated fusion mechanisms, and sophisticated online dense goal candidate optimization algorithms in terms of accuracy, but also has lower inference time and architectural complexity. Finally, we conduct several experiments to understand why SSL improves motion forecasting. Code is open-sourced at \url{https://github.com/autovision-cloud/ssl-lanes}.
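One plausible self-supervised pretext task in the spirit described above is to randomly mask lane-node features and train the encoder to regress the masked positions from context. This specific task, loss, and interface are assumptions, not necessarily one of the paper's four tasks.

```python
import torch
import torch.nn.functional as F

def masked_lane_loss(encoder, head, lane_feat, lane_xy, mask_ratio=0.15):
    """lane_feat: [N, D] node features, lane_xy: [N, 2] node positions."""
    mask = torch.rand(lane_feat.size(0)) < mask_ratio
    corrupted = lane_feat.clone()
    corrupted[mask] = 0.0                         # drop the masked nodes' features
    ctx = encoder(corrupted)                      # [N, D] context-aware embeddings
    pred_xy = head(ctx[mask])                     # regress positions of masked nodes
    return F.mse_loss(pred_xy, lane_xy[mask])     # auxiliary SSL loss added to the main objective
```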
Predicting plausible future trajectories of nearby agents is a core challenge for the safety of autonomous vehicles, and it mainly depends on two external cues: the dynamic neighboring agents and the static scene context. Recent approaches have made great progress in characterizing the two cues separately. However, they ignore the correlation between the two cues, and most of them struggle to achieve map-adaptive prediction. In this paper, we use lanes as the scene data and propose a staged network that jointly learns agent and lane information for multimodal trajectory prediction (JAL-MTP). JAL-MTP uses a Social-to-Lane (S2L) module to jointly represent the static lanes and the dynamic motion of neighboring agents as instance-level lanes, a Recurrent Lane Attention (RLA) mechanism that exploits the instance-level lanes to predict map-adaptive future trajectories, and two selectors that identify typical and plausible trajectories. Experiments on the public Argoverse dataset demonstrate that JAL-MTP significantly outperforms existing models both quantitatively and qualitatively.
The task of motion forecasting is critical for self-driving vehicles (SDVs) to be able to plan a safe maneuver. Towards this goal, modern approaches reason about the map, the agents' past trajectories and their interactions in order to produce accurate forecasts. The predominant approach has been to encode the map and other agents in the reference frame of each target agent. However, this approach is computationally expensive for multi-agent prediction as inference needs to be run for each agent. To tackle the scaling challenge, the solution thus far has been to encode all agents and the map in a shared coordinate frame (e.g., the SDV frame). However, this is sample inefficient and vulnerable to domain shift (e.g., when the SDV visits uncommon states). In contrast, in this paper, we propose an efficient shared encoding for all agents and the map without sacrificing accuracy or generalization. Towards this goal, we leverage pair-wise relative positional encodings to represent geometric relationships between the agents and the map elements in a heterogeneous spatial graph. This parameterization allows us to be invariant to scene viewpoint, and save online computation by re-using map embeddings computed offline. Our decoder is also viewpoint agnostic, predicting agent goals on the lane graph to enable diverse and context-aware multimodal prediction. We demonstrate the effectiveness of our approach on the urban Argoverse 2 benchmark as well as a novel highway dataset.
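A simplified sketch of pair-wise relative positional encoding: for each edge (i → j) of the spatial graph, the relative position and heading of j are expressed in i's frame and embedded with a small MLP, which makes the edge features invariant to the scene viewpoint. The encoding details below are assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class RelPosEncoder(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, pos, yaw, src, dst):
        # pos: [N, 2] element positions, yaw: [N] headings, src/dst: [E] edge index tensors
        d = pos[dst] - pos[src]                                   # world-frame offset
        c, s = torch.cos(yaw[src]), torch.sin(yaw[src])
        local = torch.stack([c * d[:, 0] + s * d[:, 1],
                             -s * d[:, 0] + c * d[:, 1]], dim=-1)  # offset rotated into i's frame
        dyaw = yaw[dst] - yaw[src]
        feat = torch.cat([local,
                          torch.cos(dyaw)[:, None],
                          torch.sin(dyaw)[:, None]], dim=-1)       # [E, 4] relative pose features
        return self.mlp(feat)                                      # [E, dim] viewpoint-invariant edge embeddings
```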
Motion prediction systems aim to capture the future behavior of traffic scenarios, enabling autonomous vehicles to perform safe and efficient planning. The evolution of these scenarios is highly uncertain and depends on the interactions of agents with static and dynamic objects in the scene. GNN-based approaches have recently gained attention as they are well suited to naturally model these interactions. However, one of the main challenges that remains unexplored is how to address the complexity and opacity of these models in order to deal with the transparency requirements for autonomous driving systems, which include aspects such as interpretability and explainability. In this work, we aim to improve the explainability of motion prediction systems using several approaches. First, we propose a new Explainable Heterogeneous Graph-based Policy (XHGP) model based on a heterograph representation of the traffic scene and lane-graph traversals, which learns interaction behaviors using object-level and type-level attention. This learned attention provides information about the most important agents and interactions in the scene. Second, we explore the same idea with the explanations provided by GNNExplainer. Third, we apply counterfactual reasoning to provide explanations of selected individual scenarios by exploring the sensitivity of the trained model to changes made to the input data, i.e., masking some elements of the scene, modifying trajectories, and adding or removing dynamic agents. The explainability analysis provided in this paper is a first step towards more transparent and reliable motion prediction systems, important from the perspective of users, developers, and regulatory agencies. The code to reproduce this work is publicly available at https://github.com/sancarlim/Explainable-MP/tree/v1.1.
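A rough sketch of the counterfactual probing described above: remove one scene element, re-run the trained predictor, and measure how much the forecast changes. The model and scene interfaces (in particular the `without` helper) are hypothetical, named here only for illustration.

```python
import torch

def element_sensitivity(model, scene, element_id):
    """Average displacement between predictions with and without one scene element."""
    with torch.no_grad():
        base = model(scene)                              # [K, T, 2] predicted trajectory modes
        masked_scene = scene.without(element_id)         # hypothetical helper dropping one element
        counterfactual = model(masked_scene)
    # large values indicate the element strongly influences the forecast
    return (base - counterfactual).norm(dim=-1).mean().item()
```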
Behavior prediction in dynamic, multi-agent systems is an important problem in the context of self-driving cars, due to the complex representations and interactions of road components, including moving agents (e.g. pedestrians and vehicles) and road context information (e.g. lanes, traffic lights). This paper introduces VectorNet, a hierarchical graph neural network that first exploits the spatial locality of individual road components represented by vectors and then models the high-order interactions among all components. In contrast to most recent approaches, which render trajectories of moving agents and road context information as bird-eye images and encode them with convolutional neural networks (ConvNets), our approach operates on a vector representation. By operating on the vectorized high definition (HD) maps and agent trajectories, we avoid lossy rendering and computationally intensive ConvNet encoding steps. To further boost VectorNet's capability in learning context features, we propose a novel auxiliary task to recover the randomly masked out map entities and agent trajectories based on their context. We evaluate VectorNet on our in-house behavior prediction benchmark and the recently released Argoverse forecasting dataset. Our method achieves on par or better performance than the competitive rendering approach on both benchmarks while saving over 70% of the model parameters with an order of magnitude reduction in FLOPs. It also outperforms the state of the art on the Argoverse dataset.
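A minimal sketch of the vectorized representation: a polyline (map element or past trajectory) is turned into consecutive start/end vectors, each vector is embedded with an MLP, and a permutation-invariant max-pool gives the polyline feature. Layer sizes are assumptions, and the paper's subgraph additionally re-concatenates the pooled feature to each node over several layers.

```python
import torch
import torch.nn as nn

class PolylineEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4, dim), nn.LayerNorm(dim), nn.ReLU())

    def forward(self, points):
        # points: [P, 2] ordered polyline points
        vectors = torch.cat([points[:-1], points[1:]], dim=-1)  # [P-1, 4] start/end vector pairs
        nodes = self.mlp(vectors)                                # per-vector embeddings
        return nodes.max(dim=0).values                           # [dim] polyline feature
```

The resulting per-polyline features would then interact in a global graph over all components, in line with the hierarchical design described above.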
Prior art in motion forecasting for autonomous driving tends to focus on trajectories close to the ground-truth trajectory. However, such problem formulations and approaches often lead to a loss of diversity and to biased trajectory predictions. They are therefore unsuitable for real-world autonomous driving, where diverse and road-dependent multimodal trajectory prediction is critical for safety. To this end, this study proposes a novel loss function, \textit{Lane Loss}, that ensures map-adaptive diversity and accommodates geometric constraints. A two-stage trajectory prediction architecture with a novel trajectory candidate proposal module, \textit{Trajectory Prediction Attention (TPA)}, is trained with Lane Loss, which encourages multiple predicted trajectory distributions to be diverse enough to cover feasible maneuvers in a map-aware manner. Furthermore, considering that existing trajectory performance metrics focus on evaluating accuracy with respect to the ground-truth future trajectory, a quantitative evaluation metric is also proposed to assess the diversity of the predicted multiple trajectories. Experiments on the Argoverse dataset show that the proposed method significantly improves the diversity of the predicted trajectories without sacrificing prediction accuracy.
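For intuition, a generic way to trade accuracy against diversity is a winner-takes-all regression term plus a penalty when mode endpoints collapse onto each other. The sketch below is a simplified stand-in with assumed margins and weights, not the paper's map-adaptive Lane Loss.

```python
import torch

def diverse_multimodal_loss(pred, gt, margin=2.0, alpha=0.1):
    """pred: [K, T, 2] predicted modes, gt: [T, 2] ground-truth trajectory."""
    per_mode = (pred - gt).norm(dim=-1).mean(dim=-1)        # [K] mean displacement per mode
    wta = per_mode.min()                                     # winner-takes-all regression term
    ends = pred[:, -1]                                       # [K, 2] mode endpoints
    d = torch.cdist(ends, ends)                              # pairwise endpoint distances
    k = pred.size(0)
    off_diag = d[~torch.eye(k, dtype=torch.bool)]
    spread = torch.relu(margin - off_diag).mean()            # penalize collapsed endpoints
    return wta + alpha * spread
```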
Motion prediction is highly relevant to the perception of dynamic objects and static map elements in the scenarios of autonomous driving. In this work, we propose PIP, the first end-to-end Transformer-based framework which jointly and interactively performs online mapping, object detection and motion prediction. PIP leverages map queries, agent queries and mode queries to encode the instance-wise information of map elements, agents and motion intentions, respectively. Based on the unified query representation, a differentiable multi-task interaction scheme is proposed to exploit the correlation between perception and prediction. Even without human-annotated HD map or agent's historical tracking trajectory as guidance information, PIP realizes end-to-end multi-agent motion prediction and achieves better performance than tracking-based and HD-map-based methods. PIP provides comprehensive high-level information of the driving scene (vectorized static map and dynamic objects with motion information), and contributes to the downstream planning and control. Code and models will be released for facilitating further research.
In many different fields, interactions between objects play a critical role in determining their behavior. Graph neural networks (GNNs) have emerged as a powerful tool for modeling interactions, although often at the cost of considerable complexity and latency. In this paper, we consider the problem of spatial interaction modeling in the context of predicting the motion of actors around autonomous vehicles, and investigate alternatives to GNNs. We revisit 2D convolutions and show that they can achieve performance comparable to graph networks in modeling spatial interactions with lower latency, thus providing an effective and efficient alternative in time-critical systems. Moreover, we propose a novel interaction loss to further improve the interaction modeling of the considered methods.
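A rough sketch of modeling spatial interactions with 2D convolutions instead of a GNN: per-actor feature vectors are scattered onto a bird's-eye-view grid at the actors' cells, and a small convolutional stack mixes information between nearby cells through its receptive field. Grid size and kernel choices are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConvInteraction(nn.Module):
    def __init__(self, dim, grid=128):
        super().__init__()
        self.grid = grid
        self.conv = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )

    def forward(self, actor_feat, cell_ij):
        # actor_feat: [N, dim], cell_ij: [N, 2] integer grid cells of each actor
        bev = torch.zeros(1, actor_feat.size(1), self.grid, self.grid)
        # scatter actors onto the grid (actors sharing a cell overwrite each other in this sketch)
        bev[0, :, cell_ij[:, 0], cell_ij[:, 1]] = actor_feat.t()
        bev = self.conv(bev)                                       # nearby actors mix via convolution
        return bev[0, :, cell_ij[:, 0], cell_ij[:, 1]].t()         # gather updated actor features
```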
Autonomous driving systems require a good understanding of the surrounding environment, including both moving obstacles and the static high-definition (HD) semantic map. Existing methods approach the semantic map problem with offline manual annotation, which suffers from serious scalability issues. More recent learning-based methods produce dense rasterized segmentation predictions that contain no instance information for individual map elements and require heuristic post-processing, involving many hand-designed components, to obtain vectorized maps. To this end, we introduce an end-to-end vectorized HD map learning pipeline, termed VectorMapNet. VectorMapNet takes onboard sensor observations and predicts a sparse set of polylines in the bird's-eye view to model the geometry of the HD map. Based on this pipeline, our method can explicitly model the spatial relations between map elements and generate vectorized maps that are friendly to downstream autonomous driving tasks without post-processing. In our experiments, VectorMapNet achieves strong HD map learning performance on the nuScenes dataset, surpassing previous state-of-the-art methods by 14.2 mAP. Qualitatively, we also show that VectorMapNet is capable of generating comprehensive maps and capturing finer details of road geometry. To the best of our knowledge, VectorMapNet is the first work designed for the end-to-end vectorized HD map learning problem.
Predicting the future motion of road agents is a critical task in an autonomous driving pipeline. In this work, we address the problem of generating a set of scene-level, or joint, future trajectory predictions in multi-agent driving scenarios. To this end, we propose FJMP, a Factorized Joint Motion Prediction framework for multi-agent interactive driving scenarios. FJMP models the future scene interaction dynamics as a sparse directed interaction graph, where edges denote explicit interactions between agents. We then prune the graph into a directed acyclic graph (DAG) and decompose the joint prediction task into a sequence of marginal and conditional predictions according to the partial ordering of the DAG, where joint future trajectories are decoded using a directed acyclic graph neural network (DAGNN). We conduct experiments on the INTERACTION and Argoverse 2 datasets and demonstrate that FJMP produces more accurate and scene-consistent joint trajectory predictions than non-factorized approaches, especially on the most interactive and kinematically interesting agents. FJMP ranks 1st on the multi-agent test leaderboard of the INTERACTION dataset.
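A simplified sketch of DAG-factorized decoding: agents are decoded in topological order, and each agent's decoder is conditioned on the already-decoded futures of its parents in the interaction DAG (root agents receive a marginal prediction). The decoder interface and the mean aggregation over parents are assumptions, not the paper's DAGNN.

```python
import networkx as nx
import torch

def decode_in_dag_order(dag, agent_feat, decoder):
    """dag: networkx.DiGraph over agent ids; agent_feat: dict id -> [D] feature tensor."""
    futures = {}
    for a in nx.topological_sort(dag):
        parents = list(dag.predecessors(a))
        if parents:
            # conditional prediction: aggregate the decoded futures of influencing agents
            cond = torch.stack([futures[p] for p in parents]).mean(dim=0)
        else:
            # marginal prediction for root (non-influenced) agents
            cond = torch.zeros_like(agent_feat[a])
        futures[a] = decoder(agent_feat[a], cond)    # e.g. a [T, 2] future trajectory
    return futures
```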
Trajectory prediction and behavioral decision-making are two important tasks for autonomous vehicles, and both require a good understanding of the environmental context; behavioral decisions can be made better by referring to the outputs of trajectory prediction. However, most current solutions perform these two tasks separately. Therefore, a joint neural network that combines multiple cues, named the holistic transformer, is proposed to predict trajectories and make behavioral decisions simultaneously. To better explore the intrinsic relationships between cues, the network uses existing knowledge and adopts three kinds of attention mechanisms: a sparse multi-head type for reducing the impact of noise, a feature-selection sparse type for making optimal use of partial prior knowledge, and a multi-head type with sigmoid activation for making optimal use of posterior knowledge. Compared with other trajectory prediction models, the proposed model has better overall performance and good interpretability. Perception-noise robustness experiments show that the proposed model is robust to noise. Thus, simultaneous trajectory prediction and behavioral decision-making with multiple combined cues can reduce computational cost and enhance the semantic relationship between the scene and the agents.
Predicting the future motion of dynamic agents is of paramount importance to ensure safety or assess risks in motion planning for autonomous robots. In this paper, we propose a two-stage motion prediction method, referred to as R-Pred, that effectively utilizes both the scene and interaction context using a cascade of the initial trajectory proposal network and the trajectory refinement network. The initial trajectory proposal network produces M trajectory proposals corresponding to M modes of a future trajectory distribution. The trajectory refinement network enhances each of M proposals using 1) the tube-query scene attention (TQSA) and 2) the proposal-level interaction attention (PIA). TQSA uses tube-queries to aggregate the local scene context features pooled from proximity around the trajectory proposals of interest. PIA further enhances the trajectory proposals by modeling inter-agent interactions using a group of trajectory proposals selected based on their distances from neighboring agents. Our experiments conducted on the Argoverse and nuScenes datasets demonstrate that the proposed refinement network provides significant performance improvements compared to the single-stage baseline and that R-Pred achieves state-of-the-art performance in some categories of the benchmark.
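A simplified sketch of the tube-query idea: for each trajectory proposal, scene features lying within a radius of any of the proposal's waypoints (i.e., inside a tube around the proposal) are pooled into a refinement context. The radius threshold and mean pooling are assumptions, not the paper's attention formulation.

```python
import torch

def tube_pool(proposal, scene_xy, scene_feat, radius=5.0):
    """proposal: [T, 2] waypoints, scene_xy: [N, 2], scene_feat: [N, D] -> [D]."""
    dist = torch.cdist(proposal, scene_xy)            # [T, N] waypoint-to-feature distances
    in_tube = dist.min(dim=0).values < radius         # features inside the tube around the proposal
    if not in_tube.any():
        return torch.zeros(scene_feat.size(1))
    return scene_feat[in_tube].mean(dim=0)            # pooled local scene context for refinement
```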
Level 5 autonomous driving, a technology in which a fully automated vehicle (AV) requires no human intervention, has raised serious concerns about safety and stability before widespread use. The capability of understanding and predicting the future motion trajectories of road objects can help the AV plan a path that is safe and easy to control. In this paper, we propose a network architecture that parallelizes multiple convolutional neural network backbones and fuses their features to make multi-mode trajectory predictions. In the 2020 ICRA nuScenes prediction challenge, our model ranks 15th on the leaderboard across all teams.
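A simple sketch of parallelizing multiple CNN backbones over a rasterized scene and fusing their features for multi-mode trajectory regression; the choice of backbones, feature dimensions, and head layout are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ParallelBackbonePredictor(nn.Module):
    def __init__(self, modes=5, horizon=12):
        super().__init__()
        # two off-the-shelf backbones run in parallel on the same raster input
        self.backbones = nn.ModuleList([
            models.resnet18(weights=None),
            models.mobilenet_v3_small(weights=None),
        ])
        feat_dim = 1000 + 1000          # default classifier outputs reused as features here
        self.head = nn.Linear(feat_dim, modes * horizon * 2 + modes)
        self.modes, self.horizon = modes, horizon

    def forward(self, raster):
        # raster: [B, 3, H, W] rasterized scene image
        feats = torch.cat([b(raster) for b in self.backbones], dim=-1)
        out = self.head(feats)
        trajs = out[:, : self.modes * self.horizon * 2]
        trajs = trajs.view(-1, self.modes, self.horizon, 2)   # multi-mode trajectories
        conf = out[:, self.modes * self.horizon * 2:]          # per-mode confidence logits
        return trajs, conf
```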
Figure 1: We introduce datasets for 3D tracking and motion forecasting with rich maps for autonomous driving. Our 3D tracking dataset contains sequences of LiDAR measurements, 360° RGB video, front-facing stereo (middle-right), and 6-DoF localization. All sequences are aligned with maps containing lane center lines (magenta), driveable region (orange), and ground height. Sequences are annotated with 3D cuboid tracks (green). A wider map view is shown in the bottom-right.
In this work, we present a new multimodal multi-agent trajectory prediction architecture that focuses on map and interaction modeling using graph representations. For the purpose of map modeling, we capture the rich topological structure in a vector-based star graph, which enables an agent to directly attend to relevant regions of the polylines used to represent the map. We denote this architecture StarNet and integrate it in a single-agent prediction setting. As the main contribution, we extend this architecture to joint scene-level prediction, which produces predictions for multiple agents simultaneously. The key idea of joint StarNet is to make one agent aware of the other agents' perspectives, as perceived in their own reference frames. We achieve this via masked self-attention. Both proposed architectures are built on top of the action-space prediction framework introduced in our previous work, which ensures kinematically feasible trajectory predictions. We evaluate the methods on the interaction-rich inD and INTERACTION datasets, where StarNet and joint StarNet achieve state-of-the-art performance.
Predicting the future behavior of road users is one of the most challenging and important problems in autonomous driving. Applying deep learning to this problem requires fusing heterogeneous world state in the form of rich perception signals and map information, and inferring highly multimodal distributions over possible futures. In this paper, we present MultiPath++, a future prediction model that achieves state-of-the-art performance on popular benchmarks. MultiPath++ improves the MultiPath architecture by revisiting many design choices. The first key design difference is a departure from dense image-based encoding of the input world state in favor of a sparse encoding of heterogeneous scene elements: MultiPath++ consumes compact and efficient polylines that directly describe road features, along with raw agent state information (e.g., position, velocity, acceleration). We propose a context-aware fusion of these elements and develop a reusable multi-context gating fusion component. Second, we reconsider the choice of predefined, static anchors and develop a way to learn latent anchor embeddings end-to-end in the model. Finally, we explore ensembling and output aggregation techniques, common in other ML domains, and find effective variants for our probabilistic multimodal output representation. We perform an extensive ablation of these design choices and show that our proposed model achieves state-of-the-art performance on the Argoverse Motion Forecasting Competition and the Waymo Open Dataset Motion Prediction Challenge.
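A rough sketch of a context-gating fusion block in the spirit described above: element features are gated by a running context vector and pooled into an updated context, so the block can be reused across heterogeneous scene elements. The exact formulation in MultiPath++ differs; the layer layout here is an assumption.

```python
import torch
import torch.nn as nn

class ContextGating(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc_s = nn.Linear(dim, dim)   # transforms per-element features
        self.fc_c = nn.Linear(dim, dim)   # transforms the running context

    def forward(self, elems, ctx):
        # elems: [N, dim] per-element features (e.g., polylines, agent states)
        # ctx:   [dim] running context vector
        gate = torch.sigmoid(self.fc_c(ctx))          # context-dependent gate
        gated = gate * self.fc_s(elems)               # gate each element's features
        new_ctx = gated.max(dim=0).values             # pool gated elements into a new context
        return gated, new_ctx
```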