Explainable AI, in the context of autonomous systems like self-driving cars, has drawn broad interest from researchers. Recent studies have found that providing explanations for an autonomous vehicle's actions has many benefits (e.g., increased trust and acceptance), but they place little emphasis on when an explanation is needed and how the content of an explanation changes with driving context. In this work, we investigate in which scenarios people need explanations and how the critical degree of explanation shifts with situations and driver types. Through a user experiment, we ask participants to evaluate how necessary an explanation is and measure its impact on their trust in self-driving cars in different contexts. Moreover, we present a self-driving explanation dataset with first-person explanations and associated measures of necessity for 1103 video clips, augmenting the Berkeley Deep Drive Attention dataset. Our research reveals that driver types and driving scenarios dictate whether an explanation is necessary. In particular, people tend to agree on the necessity of explanations for near-crash events but hold different opinions on ordinary or anomalous driving situations.
Advances in perception for self-driving cars have accelerated in recent years due to the availability of large-scale datasets, which are typically collected at specific locations and under good weather conditions. Yet, to meet high safety requirements, these perception systems must operate robustly under a wide variety of weather conditions, including snow and rain. In this paper, we present a new dataset to enable robust autonomous driving via a novel data collection process: data is repeatedly recorded along a 15 km route under diverse scenes (urban, highway, rural, campus), weather (snow, rain, sun), times of day (day/night), and traffic conditions (pedestrians, cyclists, and cars). The dataset includes images and point clouds from camera and LiDAR sensors, along with high-precision GPS/INS to establish correspondences across routes. The dataset also includes road and object annotations, using amodal masks to capture partial occlusions, as well as 3D bounding boxes. We demonstrate the uniqueness of this dataset by analyzing baseline performance on amodal road and object segmentation, depth estimation, and 3D object detection. The repeated routes open new research directions in object discovery, continual learning, and anomaly detection. Link to Ithaca365: https://ithaca365.mae.cornell.edu/
This paper presents a novel framework for planning in unknown and occluded urban spaces. We specifically focus on turns and intersections where occlusions significantly impact navigability. Our approach uses an inpainting model to fill in a sparse, occluded, semantic lidar point cloud and plans dynamically feasible paths for a vehicle to traverse through the open and inpainted spaces. We demonstrate our approach using a car's lidar data with real-time occlusions, and show that by inpainting occluded areas, we can plan longer paths with more turn options than without inpainting; in addition, our approach more closely follows paths derived from a planner with no occlusions (called the ground truth) compared to other state-of-the-art approaches.
Intelligent agents have great potential as facilitators of group conversation among older adults. However, little is known about how to design agents for this purpose and user group, especially in terms of agent embodiment. To this end, we conducted a mixed methods study of older adults' reactions to voice and body in a group conversation facilitation agent. Two agent forms with the same underlying artificial intelligence (AI) and voice system were compared: a humanoid robot and a voice assistant. One preliminary study (total n=24) and one experimental study comparing voice and body morphologies (n=36) were conducted with older adults and an experienced human facilitator. Findings revealed that the artificiality of the agent, regardless of its form, was beneficial for the socially uncomfortable task of conversation facilitation. Even so, talkative personality types had a poorer experience with the "bodied" robot version. Design implications and supplementary reactions, especially to agent voice, are also discussed.
Gender/ing guides how we view ourselves, the world around us, and each other--including non-humans. Critical voices have raised the alarm about stereotyped gendering in the design of socially embodied artificial agents like voice assistants, conversational agents, and robots. Yet, little is known about how this plays out in research and to what extent. As a first step, we critically reviewed the case of Pepper, a gender-ambiguous humanoid robot. We conducted a systematic review (n=75) involving meta-synthesis and content analysis, examining how participants and researchers gendered Pepper through stated and unstated signifiers and pronoun usage. We found that ascriptions of Pepper's gender were inconsistent, limited, and at times discordant, with little evidence of conscious gendering and some indication of researcher influence on participant gendering. We offer six challenges driving the state of affairs and a practical framework coupled with a critical checklist for centering gender in research on artificial agents.
We introduce an imitation learning-based physical human-robot interaction algorithm capable of predicting appropriate robot responses in complex interactions involving a superposition of multiple interactions. Our proposed algorithm, Blending Bayesian Interaction Primitives (B-BIP), allows us to achieve responsive interactions in complex hugging scenarios, capable of reciprocating and adapting to a hug's motion and timing. We show that this algorithm is a generalization of prior work, for which the original formulation reduces to the particular case of a single interaction, and evaluate our method through both an extensive user study and empirical experiments. Our algorithm yields significantly better quantitative prediction error and more favorable participant responses with respect to accuracy, responsiveness, and timing, when compared to existing state-of-the-art methods.
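The core mechanism behind interaction primitives can be illustrated with a toy sketch: model the basis weights of the human's and robot's trajectories jointly, then condition on the observed human weights to predict the robot's response. The sketch below is a plain Gaussian-conditioning illustration of that idea only, not the B-BIP formulation (which performs Bayesian filtering and blends multiple interactions); all dimensions and data here are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for demonstration data: each row is the concatenated basis
# weights of one human-robot interaction demonstration (e.g., a hug).
n_h, n_r, n_demos = 8, 8, 50                       # weights per agent, demos
mix = rng.standard_normal((n_h + n_r, n_h + n_r))  # induce cross-correlations
W = rng.standard_normal((n_demos, n_h + n_r)) @ mix

mu = W.mean(axis=0)                  # joint mean over human+robot weights
Sigma = np.cov(W, rowvar=False)      # joint covariance

def condition_on_human(w_h):
    """Mean of robot weights given observed human weights (Gaussian conditioning)."""
    S_hh = Sigma[:n_h, :n_h]
    S_rh = Sigma[n_h:, :n_h]
    return mu[n_h:] + S_rh @ np.linalg.solve(S_hh, w_h - mu[:n_h])

w_robot = condition_on_human(W[0, :n_h])
print(w_robot.shape)                 # (8,) predicted robot basis weights
```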
We study representation learning for efficient imitation learning over linear systems. In particular, we consider a setting where learning is split into two phases: (a) a pre-training step where a shared $k$-dimensional representation is learned from $H$ source policies, and (b) a target policy fine-tuning step where the learned representation is used to parameterize the policy class. We find that the imitation gap over trajectories generated by the learned target policy is bounded by $\tilde{O}\left( \frac{k n_x}{HN_{\mathrm{shared}}} + \frac{k n_u}{N_{\mathrm{target}}}\right)$, where $n_x > k$ is the state dimension, $n_u$ is the input dimension, $N_{\mathrm{shared}}$ denotes the total amount of data collected for each policy during representation learning, and $N_{\mathrm{target}}$ is the amount of target task data. This result formalizes the intuition that aggregating data across related tasks to learn a representation can significantly improve the sample efficiency of learning a target task. The trends suggested by this bound are corroborated in simulation.
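The two-phase setup can be made concrete with a small simulation. The sketch below is a minimal illustration, not the paper's algorithm: it assumes behavior cloning of linear state-feedback experts whose gain matrices share a k-dimensional row space, estimates that subspace from the stacked source-policy estimates via an SVD, and then fits only the low-dimensional head on target data. All dimensions and the noise level are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, k, H, N_shared, N_target = 10, 3, 2, 20, 200, 50

# Ground truth: every expert policy K_h = F_h @ Phi shares a k-dim representation.
Phi_true = np.linalg.qr(rng.standard_normal((n_x, k)))[0].T      # (k, n_x)
source_K = [rng.standard_normal((n_u, k)) @ Phi_true for _ in range(H)]
target_K = rng.standard_normal((n_u, k)) @ Phi_true

def collect(K, N, noise=0.01):
    """Sample (state, expert action) pairs for behavior cloning."""
    X = rng.standard_normal((N, n_x))
    U = X @ K.T + noise * rng.standard_normal((N, n_u))
    return X, U

# (a) Pre-training: estimate each source policy by least squares, then take the
#     top-k right singular subspace of the stacked estimates as Phi_hat.
K_hats = []
for K in source_K:
    X, U = collect(K, N_shared)
    K_hat, *_ = np.linalg.lstsq(X, U, rcond=None)    # (n_x, n_u)
    K_hats.append(K_hat.T)                           # (n_u, n_x)
_, _, Vt = np.linalg.svd(np.vstack(K_hats), full_matrices=False)
Phi_hat = Vt[:k]                                     # (k, n_x)

# (b) Fine-tuning: fit only the k x n_u head on the target task's data.
X_t, U_t = collect(target_K, N_target)
F_hat, *_ = np.linalg.lstsq(X_t @ Phi_hat.T, U_t, rcond=None)    # (k, n_u)
K_target_hat = F_hat.T @ Phi_hat

print("target policy parameter error:", np.linalg.norm(K_target_hat - target_K))
```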
We introduce a new benchmark dataset, Placenta, for node classification in an underexplored domain: predicting microanatomical tissue structures from cell graphs in placenta histology whole slide images. This problem is uniquely challenging for graph learning for a few reasons. Cell graphs are large (>1 million nodes per image), node features are varied (64-dimensions of 11 types of cells), class labels are imbalanced (9 classes ranging from 0.21% of the data to 40.0%), and cellular communities cluster into heterogeneously distributed tissues of widely varying sizes (from 11 nodes to 44,671 nodes for a single structure). Here, we release a dataset consisting of two cell graphs from two placenta histology images totalling 2,395,747 nodes, 799,745 of which have ground truth labels. We present inductive benchmark results for 7 scalable models and show how the unique qualities of cell graphs can help drive the development of novel graph neural network architectures.
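For readers unfamiliar with inductive node classification on graphs of this scale, the following sketch shows the shape of a typical baseline: a two-layer GraphSAGE-style mean-aggregation network trained on node features and an edge list. It is written from scratch in PyTorch so it is self-contained; the random tensors stand in for a cell graph and are not the Placenta data, and the layer is a generic baseline rather than any model benchmarked in the paper.

```python
import torch
import torch.nn as nn

class MeanSAGELayer(nn.Module):
    """GraphSAGE-style update: h_v' = ReLU(W1 h_v + W2 * mean_{u in N(v)} h_u)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin_self = nn.Linear(d_in, d_out)
        self.lin_neigh = nn.Linear(d_in, d_out)

    def forward(self, x, edge_index):
        src, dst = edge_index                        # directed edges src -> dst
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, x[src])               # sum neighbor features
        deg = torch.zeros(x.size(0), device=x.device)
        deg.index_add_(0, dst, torch.ones_like(dst, dtype=x.dtype))
        agg = agg / deg.clamp(min=1).unsqueeze(-1)   # mean over neighbors
        return torch.relu(self.lin_self(x) + self.lin_neigh(agg))

class CellGraphClassifier(nn.Module):
    def __init__(self, d_in=64, d_hidden=128, n_classes=9):
        super().__init__()
        self.l1 = MeanSAGELayer(d_in, d_hidden)
        self.l2 = MeanSAGELayer(d_hidden, d_hidden)
        self.out = nn.Linear(d_hidden, n_classes)

    def forward(self, x, edge_index):
        return self.out(self.l2(self.l1(x, edge_index), edge_index))

# Toy usage with random tensors standing in for a cell graph.
x = torch.randn(1000, 64)                            # 64-dim node features
edge_index = torch.randint(0, 1000, (2, 5000))       # random edge list
labels = torch.randint(0, 9, (1000,))                # 9 tissue classes
model = CellGraphClassifier()
# A class-weighted loss would be one way to handle the label imbalance noted above.
loss = nn.CrossEntropyLoss()(model(x, edge_index), labels)
loss.backward()
```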
This paper focuses on the emerging paradigm shift toward collision-inclusive motion planning and control for impact-resilient mobile robots, and develops a unified hierarchical framework for navigation in unknown and partially observable cluttered spaces. At the lower level, we develop a deformation recovery control and trajectory replanning strategy that handles collisions that may occur locally at run-time. The low-level system actively detects collisions (via embedded Hall effect sensors on a mobile robot built in-house), enables the robot to recover from them, and locally adjusts the post-impact trajectory. At the higher level, we propose a search-based planning algorithm that determines how to best exploit potential collisions to improve certain metrics, such as control energy and computation time. Our method builds upon A* with jump points. We introduce a novel heuristic function and a collision checking and adjustment technique that allow the A* algorithm to converge to the goal faster by exploiting possible collisions. The overall hierarchical framework, obtained by combining the global A* algorithm with the local deformation recovery and replanning strategy, as well as its individual components, is extensively tested in both simulation and experiments. An ablation study draws comparisons to related state-of-the-art search-based collision-avoidance planners (for the overall framework), and to search-based collision-avoidance and sampling-based collision-inclusive global planners (for the higher level). The results demonstrate the efficacy of our method for motion planning and control with collisions in unknown environments, for a class of impact-resilient robots operating among isolated obstacles in 2D.
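To make the high-level search concrete, the sketch below is a plain 4-connected grid A* in Python with a single hypothetical `collision_cost` knob showing where a collision-aware cost term could plug in. It is not the paper's jump-point planner or its heuristic; the penalty on cells adjacent to obstacles, the grid, and all names are illustrative assumptions.

```python
import heapq

def a_star(grid, start, goal, collision_cost=0.0):
    """Minimal 4-connected grid A*. grid[r][c] == 1 marks an obstacle.
    collision_cost is a hypothetical knob standing in for a collision-aware
    cost adjustment; here it simply penalizes cells adjacent to obstacles."""
    rows, cols = len(grid), len(grid[0])

    def h(p):                                    # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    def near_obstacle(p):
        r, c = p
        return any(0 <= r + dr < rows and 0 <= c + dc < cols and grid[r + dr][c + dc]
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    open_set = [(h(start), start)]
    came_from, g_best, closed = {start: None}, {start: 0.0}, set()
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                          # reconstruct path back to start
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                step = 1.0 + (collision_cost if near_obstacle(nxt) else 0.0)
                ng = g_best[cur] + step
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt], came_from[nxt] = ng, cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None                                  # no path found

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 3), collision_cost=0.5))
```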
Functional connectivity (FC) between brain regions is usually estimated by applying statistical dependence measures to functional magnetic resonance imaging (fMRI) data. The resulting functional connectivity matrix (FCM) is often taken as the adjacency matrix of a brain graph. Recently, graph neural networks (GNNs) have been successfully applied to FCMs to learn brain graph representations. A common limitation of existing GNN methods, however, is that they require the graph adjacency matrix to be known before model training, implicitly assuming that the underlying dependency structure of the data is known. Unfortunately, this is not the case for fMRI, since the choice of which statistical measure best represents the dependency structure of the data is non-trivial. Moreover, most GNN applications to fMRI assume FC is static over time, which contradicts neuroscientific evidence that functional brain networks are time-varying and dynamic. These compounded issues can adversely affect the ability of GNNs to learn brain graph representations. As a solution, we propose Dynamic Brain Graph Structure Learning (DBGSL), a supervised method for learning the optimal time-varying dependency structure of fMRI data. Specifically, DBGSL learns dynamic graphs from fMRI time series via spatio-temporal attention applied to brain region embeddings. The resulting graphs are then fed to a spatial GNN to learn a graph representation for classification. Experiments on large resting-state and task fMRI datasets for gender classification show that DBGSL achieves state-of-the-art performance. Furthermore, analysis of the learnt dynamic graphs highlights prediction-relevant brain regions that align with findings in the existing neuroscience literature.
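One simple way to picture "learning a time-varying dependency structure via attention" is the sketch below: window the regional time series, embed each region's window, and let scaled dot-product self-attention produce one adjacency matrix per window. This is a minimal illustration under assumed shapes, not the DBGSL architecture; the windowing scheme, dimensions, and class names are all placeholders.

```python
import torch
import torch.nn as nn

class DynamicGraphLearner(nn.Module):
    """Minimal sketch: one learned adjacency per time window, via self-attention
    over per-region window embeddings. Shapes and names are illustrative."""
    def __init__(self, window_len, d_embed=32):
        super().__init__()
        self.embed = nn.Linear(window_len, d_embed)   # per-region window embedding
        self.query = nn.Linear(d_embed, d_embed)
        self.key = nn.Linear(d_embed, d_embed)

    def forward(self, ts):                            # ts: (T_windows, n_regions, window_len)
        z = self.embed(ts)                            # (T, R, d)
        q, k = self.query(z), self.key(z)
        scores = q @ k.transpose(-1, -2) / z.size(-1) ** 0.5
        adj = torch.softmax(scores, dim=-1)           # (T, R, R) dynamic graphs
        return adj, z                                 # graphs + node embeddings for a GNN

# Toy usage: 200 timepoints over 100 regions, split into 10 windows of length 20.
bold = torch.randn(100, 200)                          # (regions, timepoints)
windows = bold.reshape(100, 10, 20).permute(1, 0, 2)  # (T=10, R=100, L=20)
adj_t, node_emb = DynamicGraphLearner(window_len=20)(windows)
print(adj_t.shape)                                    # torch.Size([10, 100, 100])
```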