We introduce a new type of autonomous vehicle: an autonomous bulldozer that promises to complete construction-site tasks in an efficient, robust, and safe manner. To support the bulldozer's path planning and to ensure safety on the construction site, object detection is one of the most critical components of the perception stack. In this work, we first collect construction-site data by driving the bulldozer around. We then analyze the data thoroughly to understand its distribution. Finally, we train two well-known object detection models and benchmark their performance across a wide range of training strategies and hyperparameters.
Generating realistic 3D worlds occupied by moving humans has many applications in games, architecture, and synthetic data creation. But generating such scenes is expensive and labor intensive. Recent work generates human poses and motions given a 3D scene. Here, we take the opposite approach and generate 3D indoor scenes given 3D human motion. Such motions can come from archival motion capture or from IMU sensors worn on the body, effectively turning human movement into a "scanner" of the 3D world. Intuitively, human movement indicates the free space in a room and human contact indicates surfaces or objects that support activities such as sitting, lying, or touching. We propose MIME (Mining Interaction and Movement to infer 3D Environments), a generative model of indoor scenes that produces furniture layouts consistent with the human movement. MIME uses an auto-regressive transformer architecture that takes the already generated objects in the scene as well as the human motion as input, and outputs the next plausible object. To train MIME, we build a dataset by populating the 3D FRONT scene dataset with 3D humans. Our experiments show that MIME produces more diverse and plausible 3D scenes than a recent generative scene method that does not know about human movement. Code and data will be available for research at https://mime.is.tue.mpg.de.
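The abstract's intuition — human movement carves out free space where furniture cannot go — can be sketched on a toy floor grid. This is an illustrative assumption-laden example, not the authors' code; the grid resolution, trajectory, and cell size are invented for the demo.

```python
import numpy as np

def free_space_mask(trajectory_xy, grid_size=8, cell=1.0):
    """Mark floor cells visited by a walking human as free space."""
    mask = np.zeros((grid_size, grid_size), dtype=bool)
    for x, y in trajectory_xy:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            mask[i, j] = True
    return mask

def valid_furniture_cells(free_space):
    """Furniture may only occupy cells the human never walked through."""
    return ~free_space

# A short diagonal walk across the room.
walk = [(0.5, 0.5), (1.5, 1.5), (2.5, 2.5), (3.5, 3.5)]
free = free_space_mask(walk)
placeable = valid_furniture_cells(free)
```

A generative model like MIME would consume such occupancy constraints (plus contact cues) rather than hard-masking cells, but the constraint being encoded is the same.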
Markerless human motion capture (mocap) from multiple RGB cameras is a widely studied problem. Existing methods either require calibrated cameras or calibrate them relative to a static camera, which serves as the reference frame of the mocap system. The calibration step must be done a priori for every capture session, which is a tedious process, and recalibration is required whenever a camera is intentionally or accidentally moved. In this paper, we propose a mocap method that works with multiple static and moving extrinsically uncalibrated RGB cameras. The key components of our method are as follows. First, since the cameras and the subject can move freely, we select the ground plane as the common reference in which to represent both the body and the camera motions, unlike existing methods that represent the body in camera coordinates. Second, we learn a probability distribution over short (~1 sec) human motion sequences relative to the ground plane and use it to disambiguate between camera and human motion. Third, we use this distribution as a motion prior in a novel multi-stage optimization that fits the SMPL human body model and the camera poses to the 2D human keypoints in the images. Finally, we show that our method works on a variety of datasets, ranging from aerial cameras to smartphones. It also gives more accurate results than monocular human mocap with a static camera. Our code is available for research at https://github.com/robot-perception-group/smartmocap.
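The first key component above — expressing body and camera poses in a shared ground-plane frame instead of camera coordinates — is just a rigid transform per camera. A minimal sketch, with toy extrinsics invented for illustration:

```python
import numpy as np

def to_ground_frame(points_cam, R_wc, t_wc):
    """Map camera-frame points into the world (ground-plane) frame:
    p_world = R_wc @ p_cam + t_wc."""
    return points_cam @ R_wc.T + t_wc

# A camera 1.5 m above the ground, rotated 90 degrees about the y-axis
# (toy extrinsics, not from any real capture session).
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])
t = np.array([0.0, 0.0, 1.5])

p_cam = np.array([[0.0, 0.0, 2.0]])   # a joint 2 m in front of the camera
p_world = to_ground_frame(p_cam, R, t)
```

The paper's contribution is estimating these unknown, time-varying extrinsics jointly with the body, using the learned motion prior to decide whether the camera or the human moved.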
Inferring human-scene contact (HSC) is the first step toward understanding how humans interact with their surroundings. While detecting 2D human-object interaction (HOI) and reconstructing 3D human pose and shape (HPS) have seen significant progress, reasoning about 3D human-scene contact from a single image remains challenging. Existing HSC detection methods consider only a few predefined types of contact, often reduce the body and scene to a small number of primitives, and even ignore image evidence. To predict human-scene contact from a single image, we address these limitations from both the data and algorithm perspectives. We capture a new dataset called RICH, for "Real scenes, Interaction, Contact and Humans". RICH contains multi-view outdoor/indoor video sequences at 4K resolution, ground-truth 3D human bodies captured with markerless motion capture, 3D body scans, and high-resolution 3D scene scans. A key feature of RICH is that it also contains accurate vertex-level contact labels on the body. Using RICH, we train a network that predicts dense body-scene contact from a single RGB image. Our key insight is that regions in contact are always occluded, so the network needs the ability to explore the whole image for evidence. We use a transformer to learn such non-local relationships and propose the new Body-Scene contact TRansfOrmer (BSTRO). Few methods explore 3D contact; those that do focus only on the feet, estimate foot contact as a post-processing step, or infer contact from body pose without looking at the scene. To our knowledge, BSTRO is the first method to directly estimate 3D body-scene contact from a single image. We demonstrate that BSTRO significantly outperforms the prior art. Code and dataset are available at https://rich.is.tue.mpg.de.
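Vertex-level contact labels of the kind RICH provides can, in principle, be derived by thresholding the distance from each body vertex to the scene surface. The 5 cm threshold and the toy geometry below are assumptions for illustration, not the dataset's actual annotation pipeline:

```python
import numpy as np

def contact_labels(body_verts, scene_points, thresh=0.05):
    """Per-vertex binary contact: min distance to scene < thresh (metres)."""
    d = np.linalg.norm(body_verts[:, None, :] - scene_points[None, :, :],
                       axis=-1)
    return d.min(axis=1) < thresh

scene = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # floor samples
body = np.array([[0.0, 0.0, 0.02],    # foot vertex, 2 cm above the floor
                 [0.0, 0.0, 0.90]])   # hip vertex, well above the floor
labels = contact_labels(body, scene)
```

BSTRO's job is harder: it must predict such per-vertex labels from a single RGB image, where the contacting region itself is occluded.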
While methods that regress 3D human bodies from images have progressed rapidly, the estimated body shapes often do not capture the true human shape. This is problematic since, for many applications, accurate body shape is as important as pose. The key reason that body shape accuracy lags pose accuracy is the lack of data. While humans can label 2D joints, and these constrain 3D pose, it is not easy to "label" 3D body shape. Since paired data with images and 3D body shape is rare, we exploit two sources of information: (1) we collect internet images of diverse "fashion" models together with a small set of anthropometric measurements; (2) we collect linguistic shape attributes for a wide range of 3D body meshes and for the model images. Taken together, these datasets provide sufficient constraints to infer dense 3D shape. We exploit the anthropometric measurements and linguistic shape attributes in several novel ways to train a neural network, called SHAPY, that regresses 3D human pose and shape from an RGB image. We evaluate SHAPY on public benchmarks, but note that they either lack significant body shape variation, ground-truth shape, or clothing variation. Thus, we collect a new dataset for evaluating 3D human shape estimation, called HBW, containing photos of "Human Bodies in the Wild" for which we have ground-truth 3D body scans. On this new benchmark, SHAPY significantly outperforms state-of-the-art methods on the task of 3D body shape estimation. This is the first demonstration that 3D body shape regression from images can be trained from easy-to-obtain anthropometric measurements and linguistic shape attributes. Our model and data are available at: shapy.is.tue.mpg.de
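To see why a handful of anthropometric measurements can constrain shape coefficients at all, consider a toy linear measurement model m = A·β + m₀ and recover β by least squares. This linear model, the matrix A, and the mean measurements are illustrative assumptions; the real mapping from SMPL-X shape space to measurements is more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# 4 measurements (e.g. height/chest/waist/hips), 2 shape coefficients.
# A and m0 are invented for the demo.
A = rng.normal(size=(4, 2))
m0 = np.array([1.70, 0.95, 0.80, 0.40])   # "mean body" measurements (m)

beta_true = np.array([0.5, -1.0])          # ground-truth shape coefficients
measurements = A @ beta_true + m0          # what a tape measure would give

# With more measurements than coefficients, least squares recovers beta.
beta_hat, *_ = np.linalg.lstsq(A, measurements - m0, rcond=None)
```

SHAPY combines such sparse measurement constraints with linguistic attribute ratings inside a neural regressor rather than solving this system directly.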
While reinforcement learning (RL) has achieved great success in many domains, applying RL to real-world settings such as healthcare is challenging when it is hard to specify the reward and exploration is not allowed. In this work, we focus on recovering clinicians' rewards in treating patients. We incorporate reasoning that explains clinicians' treatments in terms of their potential future outcomes. We use generalized additive models (GAMs), a class of accurate, interpretable models, to recover the reward. On both simulated and real-world hospital datasets, we show that our model outperforms the baselines. Finally, our model's explanations of patient treatment are consistent with several clinical guidelines, while we find that the commonly used linear models often contradict them.
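A GAM predicts via a sum of independent per-feature shape functions, so each feature's contribution can be read off directly — this is what makes the recovered reward inspectable against clinical guidelines. A minimal backfitting sketch with binned (piecewise-constant) shape functions; the toy data and binning scheme are illustrative assumptions, not the paper's model:

```python
import numpy as np

def fit_shape_function(x, target, bins=8):
    """One shape function: mean residual within each quantile bin of x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    table = np.array([target[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(bins)])
    return edges, table

def predict_shape(x, edges, table):
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1,
                  0, len(table) - 1)
    return table[idx]

def backfit_gam(X, y, bins=8, iters=10):
    """Classic backfitting: cycle over features, fitting each shape
    function to the partial residual of the others."""
    n, d = X.shape
    intercept = y.mean()
    f = np.zeros((d, n))
    shapes = [None] * d
    for _ in range(iters):
        for j in range(d):
            resid = y - intercept - f.sum(axis=0) + f[j]
            shapes[j] = fit_shape_function(X[:, j], resid, bins)
            f[j] = predict_shape(X[:, j], *shapes[j])
            f[j] -= f[j].mean()   # keep shape functions centered
    return intercept, shapes, f

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))     # two toy "clinical features"
y = 2.0 * X[:, 0] + X[:, 1] ** 2          # additive ground-truth reward

intercept, shapes, f = backfit_gam(X, y)
mse = np.mean((y - (intercept + f.sum(axis=0))) ** 2)
```

Plotting each fitted `table` against its bin edges shows the per-feature effect — e.g. whether the model's recovered reward rises or falls with a vital sign — which is exactly the kind of explanation compared against guidelines.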
Deploying machine learning models in real-world high-stakes settings such as healthcare often depends not only on the model's accuracy but also on its fairness, robustness, and interpretability. Generalized additive models (GAMs) are a class of interpretable models with a long history of use in these high-stakes domains, but they lack the desirable features of deep learning such as differentiability and scalability. In this work, we propose a neural GAM (NODE-GAM) and a neural GA^2M (NODE-GA^2M) that scale well and perform better than other GAMs on large datasets, while remaining interpretable compared to other ensemble and deep learning models. We demonstrate that our models find interesting patterns in the data. Finally, we show that self-supervised pre-training improves model accuracy, an improvement that is not possible for non-differentiable GAMs.
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
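The NAIVEATTACK variant described above amounts to stamping a fixed trigger onto raw images and relabeling them before distillation begins. A minimal sketch in that spirit; the patch size, position, value, and target class are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def add_trigger(images, patch=3, value=1.0):
    """Stamp a small bright patch in the bottom-right corner of each
    image -- a classic fixed backdoor trigger."""
    poisoned = images.copy()
    poisoned[:, -patch:, -patch:] = value
    return poisoned

# Toy grayscale batch standing in for the raw training set.
images = np.zeros((4, 32, 32))
labels = np.array([0, 1, 2, 3])

poisoned_images = add_trigger(images)
poisoned_labels = np.full_like(labels, 7)   # attacker's target class
```

DOORPING differs in that the trigger is not fixed: it is re-optimized at every distillation iteration so that it survives compression into the tiny synthetic set, which is why it reaches near-1.0 ASR where the naive stamp sometimes fails.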
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. Benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
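The first insight — support masks yield class centers that re-weight query features — is essentially masked average pooling followed by a similarity-based gating. A hedged sketch; the shapes and the cosine re-weighting rule are assumptions for illustration, not RefT's actual modules:

```python
import numpy as np

def masked_center(feat, mask):
    """Masked average pooling: average C-dim features over the support
    mask region. feat: (H, W, C), mask: (H, W) boolean."""
    m = mask.astype(float)[..., None]
    return (feat * m).sum(axis=(0, 1)) / m.sum()

def reweight(query_feat, center):
    """Scale each query location by cosine similarity to the class
    center, emphasising locations that look like the support object."""
    qn = query_feat / (np.linalg.norm(query_feat, axis=-1,
                                      keepdims=True) + 1e-8)
    cn = center / (np.linalg.norm(center) + 1e-8)
    sim = qn @ cn                       # (H, W) similarity map
    return query_feat * sim[..., None]

support = np.ones((4, 4, 8))            # toy support feature map
mask = np.zeros((4, 4), dtype=bool)     # support instance mask
mask[:2, :2] = True

center = masked_center(support, mask)
query = np.ones((4, 4, 8))              # toy query feature map
out = reweight(query, center)
```

The second "reference" then operates at the instance level, linking support object queries to query-image object queries via cross-attention rather than per-pixel similarity.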