In this paper, we present an evolved version of the Situational Graphs, which jointly models in a single optimizable factor graph: a SLAM graph, as a set of robot keyframes containing their associated measurements and robot poses; and a 3D scene graph, as a high-level representation of the environment that encodes its different geometric elements with semantic attributes and the relational information between them. Our proposed S-Graphs+ is a novel four-layered factor graph that includes: (1) a keyframes layer with robot pose estimates, (2) a walls layer representing wall surfaces, (3) a rooms layer encompassing sets of wall planes, and (4) a floors layer gathering the rooms within a given floor level. The graph is optimized in real time to obtain a robust and accurate estimate of the robot's pose and its map, simultaneously constructing and leveraging high-level information about the environment. To extract this high-level information, we present novel room and floor segmentation algorithms utilizing the mapped wall planes and free-space clusters. We tested S-Graphs+ on multiple datasets, including simulations of distinct indoor environments, real datasets captured over several construction sites and office environments, and a real public dataset of indoor office environments. S-Graphs+ outperforms relevant baselines in the majority of the datasets while extending the robot's situational awareness with a four-layered scene model. Moreover, we make the algorithm available as a Docker file.
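The four-layered structure described above can be sketched as a graph of typed nodes connected by factors. This is a minimal illustrative sketch only; the class and factor names are hypothetical and do not reflect the actual S-Graphs+ implementation or its optimization back-end.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    layer: str   # "keyframe", "wall", "room", or "floor"
    node_id: int

@dataclass
class Factor:
    src: int     # ids of the two nodes this constraint connects
    dst: int
    kind: str    # e.g. "odometry", "keyframe-wall", "wall-room", "room-floor"

@dataclass
class SituationalGraph:
    nodes: dict = field(default_factory=dict)
    factors: list = field(default_factory=list)

    def add_node(self, layer: str) -> int:
        nid = len(self.nodes)
        self.nodes[nid] = Node(layer, nid)
        return nid

    def connect(self, src: int, dst: int, kind: str) -> None:
        self.factors.append(Factor(src, dst, kind))

# Build a toy fragment spanning all four layers.
g = SituationalGraph()
k0 = g.add_node("keyframe")
k1 = g.add_node("keyframe")
w0 = g.add_node("wall")
r0 = g.add_node("room")
f0 = g.add_node("floor")
g.connect(k0, k1, "odometry")       # keyframes layer: pose-to-pose constraint
g.connect(k1, w0, "keyframe-wall")  # walls observed from a keyframe
g.connect(w0, r0, "wall-room")      # rooms constrain sets of wall planes
g.connect(r0, f0, "room-floor")     # floors gather rooms
```

In the paper's formulation such a graph would be handed to a nonlinear least-squares optimizer; here only the layered bookkeeping is shown.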
Efficient localization plays a vital role in many modern applications of unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs), contributing to improved control, safety, and power economy. The ubiquitous 5G NR (New Radio) cellular network will provide new opportunities for enhancing the localization of UAVs and UGVs. In this paper, we review radio frequency (RF)-based approaches to localization. We review the RF features that can be utilized for localization and investigate the current methods suitable for unmanned vehicles under two general categories: range-based and fingerprinting. The existing state-of-the-art literature on RF-based localization for both UAVs and UGVs is examined, and the envisioned 5G NR for localization enhancement and future research directions are explored.
Robots in the construction industry can reduce costs by continuously monitoring work progress using high-precision data capture. Accurate data capture requires precise localization of the mobile robot in the environment. In this paper, we present novel work on robot localization that extracts geometric, semantic, and topological information, in the form of walls and rooms, from building architectural plans, and creates the topological and metric-semantic layers of a Situational Graph (S-Graph) before the robot navigates in the environment. When the robot navigates the construction environment, it uses its odometry and sensory observations in the form of planar walls extracted from 3D lidar measurements to estimate its pose, relying on a particle filter method that exploits the previously built situational graph and the geometric, semantic, and topological information available in it. We validate our approach on simulated and real datasets captured on real ongoing construction sites, comparing it against traditional geometry-based localization techniques.
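The particle filter mentioned above follows the usual predict/update pattern: propagate particles with odometry, then reweight them by how well a predicted wall-plane measurement matches the observation. The sketch below is a 1-D toy with made-up names and geometry, not the paper's actual formulation, which operates on full poses and plane parameters.

```python
import math
import random

def predict(particles, odom_delta, noise_std=0.05):
    """Propagate each 1-D particle by the odometry increment plus noise."""
    return [p + odom_delta + random.gauss(0.0, noise_std) for p in particles]

def update_weights(particles, measured_dist, wall_pos, sigma=0.1):
    """Weight particles by a Gaussian likelihood of the wall-distance measurement."""
    weights = [math.exp(-0.5 * ((wall_pos - p) - measured_dist) ** 2 / sigma ** 2)
               for p in particles]
    total = sum(weights)
    return [w / total for w in weights]

random.seed(0)
# Wall from the prior map at x = 3.0; robot measures a distance of 1.0 to it,
# so its true position is near x = 2.0.
particles = [random.uniform(0.0, 4.0) for _ in range(500)]
particles = predict(particles, odom_delta=0.5)
weights = update_weights(particles, measured_dist=1.0, wall_pos=3.0)
estimate = sum(p * w for p, w in zip(particles, weights))
```

A full localizer would also resample particles by weight; that step is omitted for brevity.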
Soft robotic grippers have numerous advantages for addressing the challenges of dynamic aerial grasping. The typical multi-fingered soft grippers recently demonstrated for aerial grasping depend strongly on the orientation of the target object for a successful grasp. This work pushes the boundaries of dynamic aerial grasping by developing an omnidirectional system for autonomous aerial manipulation. In particular, the paper investigates the design, fabrication, and experimental verification of a novel, highly integrated, modular, sensor-rich universal gripper designed specifically for aerial applications. The proposed gripper exploits recent developments in particle jamming and soft granular materials to generate large holding forces while being very lightweight and energy efficient, and requiring only low activation forces. We show that the holding force can be increased by up to 50% by adding additives to the silicone mixture of the membrane. Experiments show that, even without geometric interlocking, our lightweight gripper can develop holding forces of up to 15 N with activation forces as low as 2.5 N. Finally, a pick-and-release task was performed under real conditions by mounting the gripper to a multicopter. The developed aerial grasping system exhibits many useful properties, such as resilience and robustness to collisions, as well as an inherent passive compliance that decouples the drone from the environment.
Aerial manipulators (AMs) exhibit particularly challenging nonlinear dynamics; the UAV and the manipulator it carries form a tightly coupled dynamic system that mutually influences both. Mathematical models describing these dynamics form the core of many solutions in nonlinear control and deep reinforcement learning. Traditionally, the formulation of the dynamics involves either an Euler-angle parameterization in the Lagrangian framework or a quaternion parameterization in the Newton-Euler framework. The former has the drawback of giving rise to singularities, while the latter is algorithmically complex. This work presents a hybrid solution combining the benefits of both, namely a quaternion-based approach within the Lagrangian framework, linking the singularity-free parameterization with the algorithmic simplicity of the Lagrangian method. We provide detailed insights into the kinematic modeling process as well as the formulation of the dynamics of a general aerial manipulator. The obtained dynamics model is experimentally verified against a real-time physics engine. A practical application of the obtained dynamics model is shown in the context of a computed torque feedback controller (feedback linearization), where we analyze its real-time capability with increasingly complex models.
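The singularity-free parameterization referred to above rests on the standard unit-quaternion attitude kinematics; the textbook relation below is what replaces the Euler-angle rate equations, and is shown for orientation only, not as the paper's complete dynamics derivation.

```latex
% Unit-quaternion attitude kinematics: the quaternion q evolves with the
% body angular velocity \boldsymbol{\omega} (\otimes is quaternion product),
% subject to the unit-norm constraint.
\dot{q} \;=\; \tfrac{1}{2}\, q \otimes
\begin{bmatrix} 0 \\ \boldsymbol{\omega} \end{bmatrix},
\qquad \|q\| = 1 .
```

Unlike Euler-angle rates, this mapping is defined for every attitude, which is the singularity-freeness the hybrid Lagrangian formulation inherits.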
Mobile robots should be aware of their situation, comprising a deep understanding of their surrounding environment along with an estimate of their own state, to successfully make intelligent decisions and execute tasks autonomously in real environments. 3D scene graphs are an emerging research field that proposes to represent the environment in a joint model comprising geometric, semantic, and relational/topological dimensions. Although 3D scene graphs have already been combined with SLAM techniques to provide robots with situational understanding, further research is still required to effectively deploy them on board mobile robots. To this end, we present in this paper a novel, real-time, online-built Situational Graph (S-Graph), which combines in a single optimizable graph the representation of the environment along the three aforementioned dimensions together with the robot poses. Our method utilizes odometry readings and planar surfaces extracted from 3D laser scans to construct and optimize in real time a three-layered S-Graph that includes: (1) a robot tracking layer, where the robot poses are registered; (2) a metric-semantic layer, with features such as planar walls; and (3) our novel topological layer, constraining the planar walls using higher-level features such as corridors and rooms. Our proposal not only demonstrates state-of-the-art results for robot pose estimation, but also contributes a metric-semantic model of the environment.
Machine learning methods like neural networks are extremely successful and popular in a variety of applications; however, they come at substantial computational costs, accompanied by high energy demands. In contrast, hardware capabilities are limited, and there is evidence that technology scaling is stuttering; therefore, new approaches are required to meet the performance demands of increasingly complex model architectures. As an unsafe optimization, noisy computations are more energy efficient and, given a fixed power budget, also more time efficient. However, any kind of unsafe optimization requires countermeasures to ensure functionally correct results. This work considers noisy computations in an abstract form and aims to understand the implications of such noise on the accuracy of neural-network-based classifiers as an exemplary workload. We propose a methodology called "Walking Noise" that allows assessing the robustness of different layers of deep architectures by means of a so-called "midpoint noise level" metric. We then investigate the implications of additive and multiplicative noise for different classification tasks and model architectures, with and without batch normalization. While noisy training significantly increases robustness for both noise types, we observe a clear trend toward increasing weights, and thus increasing the signal-to-noise ratio, under additive noise injection. For the multiplicative case, we find that some networks, on suitably simple tasks, automatically learn an internal binary representation and hence become extremely robust. Overall, this work proposes a method to measure layer-specific robustness and shares first insights into how networks learn to compensate for injected noise, thus contributing to the understanding of robustness against noisy computations.
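The two injection modes discussed above can be written down in a few lines. This is a minimal sketch of the injection step only (applied here to a dummy activation tensor); the Walking Noise methodology additionally sweeps the noise level per layer during training, which is not shown, and the function name is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_noise(x, noise_level, mode="additive"):
    """Perturb activations x with Gaussian noise of the given level.

    additive:       x + N(0, noise_level)   -- larger weights raise the SNR
    multiplicative: x * N(1, noise_level)   -- scales with the signal itself
    """
    if mode == "additive":
        return x + rng.normal(0.0, noise_level, size=x.shape)
    if mode == "multiplicative":
        return x * rng.normal(1.0, noise_level, size=x.shape)
    raise ValueError(f"unknown mode: {mode}")

x = np.ones((4, 8))  # stand-in for a layer's pre-activations
x_add = inject_noise(x, 0.1, "additive")
x_mul = inject_noise(x, 0.1, "multiplicative")
```

The comment on the additive mode mirrors the abstract's observation: since additive noise is independent of the signal, networks can counter it by growing their weights, whereas multiplicative noise scales with the signal and demands a different compensation strategy.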
End-to-end speech-to-speech translation (S2ST) is generally evaluated with text-based metrics. This means that the generated speech has to be automatically transcribed, making the evaluation dependent on the availability and quality of automatic speech recognition (ASR) systems. In this paper, we propose a text-free evaluation metric for end-to-end S2ST, named BLASER, to avoid the dependency on ASR systems. BLASER leverages a multilingual multimodal encoder to directly encode the speech segments for the source input, the translation output, and the reference into a shared embedding space, and computes a score of the translation quality that can be used as a proxy for human evaluation. To evaluate our approach, we construct training and evaluation sets from more than 40k human annotations covering seven language directions. The best results of BLASER are achieved by training with supervision from human rating scores. We show that, when evaluated at the sentence level, BLASER correlates significantly better with human judgment than ASR-dependent metrics, including ASR-SENTBLEU in all translation directions and ASR-COMET in five of them. Our analysis shows that combining speech and text as inputs to BLASER does not increase the correlation with human scores; rather, the best correlations are achieved when using speech, which motivates the goal of our research. Moreover, we show that using ASR for the references is detrimental to text-based metrics.
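The scoring idea above, embedding source, translation, and reference into one space and deriving a quality score from their proximity, can be sketched with a simple cosine-similarity combination. Note this is only an illustration of the unsupervised variant of the idea: the actual BLASER score is produced by a learned model on top of the embeddings, and all vectors and function names here are made up.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def proxy_score(src_emb, mt_emb, ref_emb):
    """Score a translation by averaging its similarity to source and reference."""
    return 0.5 * (cosine(mt_emb, src_emb) + cosine(mt_emb, ref_emb))

# Toy embeddings standing in for encoder outputs of speech segments.
rng = np.random.default_rng(0)
ref = rng.normal(size=16)
src = ref + 0.2 * rng.normal(size=16)   # source: semantically close to ref
good = ref + 0.1 * rng.normal(size=16)  # a translation close to the reference
bad = rng.normal(size=16)               # an unrelated output

good_score = proxy_score(src, good, ref)
bad_score = proxy_score(src, bad, ref)
```

A faithful translation lands near both the source and the reference in the shared space and therefore scores higher than an unrelated output.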
Compressing neural network architectures is important for deploying models to embedded or mobile devices, and pruning and quantization are the major approaches to compressing neural networks nowadays. Both methods benefit when compression parameters are selected specifically for each layer. Finding good combinations of compression parameters, so-called compression policies, is hard, as the problem spans an exponentially large search space. Effective compression policies consider the influence of the specific hardware architecture on the compression methods used. We propose an algorithmic framework called Galen that searches for such policies using reinforcement learning with pruning and quantization, thus providing automatic compression for neural networks. Contrary to other approaches, we use inference latency measured on the target hardware device as an optimization goal. With that, the framework supports compressing models for a specific hardware target. We validate our approach using three different reinforcement learning agents for pruning, quantization, and joint pruning and quantization. Besides proving the functionality of our approach, we were able to compress a ResNet18 for CIFAR-10, on an embedded ARM processor, to 20% of the original inference latency without significant loss of accuracy. Moreover, we demonstrate that a joint search and compression using pruning and quantization is superior to an individual policy search using a single compression method.
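To make the notion of a per-layer compression policy concrete, the toy below enumerates per-layer (bitwidth, sparsity) choices and picks the policy with the lowest latency subject to an accuracy budget. Everything here is a stand-in: the real framework uses reinforcement learning agents instead of exhaustive search, latency measured on the target device instead of a cost model, and real accuracy instead of an invented penalty.

```python
import itertools

BITWIDTHS = (8, 4, 2)       # quantization choices per layer
SPARSITIES = (0.0, 0.5)     # pruning choices per layer (fraction removed)

def latency(policy):
    """Toy latency model: cost scales with the bits kept after pruning."""
    return sum(bits * (1.0 - sparsity) for bits, sparsity in policy)

def accuracy_penalty(policy):
    """Toy accuracy proxy: harsher compression costs more (integer units)."""
    return sum((8 - bits) + round(sparsity * 4) for bits, sparsity in policy)

def reward(policy, max_penalty=10):
    if accuracy_penalty(policy) > max_penalty:  # reject too-lossy policies
        return float("-inf")
    return -latency(policy)                     # otherwise: faster is better

layer_choices = list(itertools.product(BITWIDTHS, SPARSITIES))
policies = itertools.product(layer_choices, repeat=2)  # a 2-layer toy network
best = max(policies, key=reward)
```

Even this tiny example shows why joint search helps: the best policy mixes aggressive compression on one layer with a gentler setting on the other, a combination a single-method, uniform search would miss.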
With the rise of AI in recent years and the increasing complexity of its models, the growing demand for computational resources is starting to pose a significant challenge. The need for higher compute power is being met with increasingly potent accelerators and the use of large compute clusters. However, the gain in prediction accuracy from large models trained on distributed and accelerated systems comes at the price of a substantial increase in energy demand, and researchers have started questioning the environmental friendliness of such AI methods at scale. Consequently, energy efficiency plays an important role for AI model developers and infrastructure operators alike. The energy consumption of AI workloads depends on the model implementation and the utilized hardware. Therefore, accurate measurements of the power draw of AI workflows on different types of compute nodes are key to algorithmic improvements and the design of future compute clusters and hardware. To this end, we present measurements of the energy consumption of two typical applications of deep learning models on different types of compute nodes. Our results indicate that (1) deriving energy consumption directly from runtime is not accurate; rather, the consumption of the compute node needs to be considered with regard to its composition; (2) neglecting accelerator hardware on mixed nodes results in disproportionate inefficiency in energy consumption; and (3) the energy consumption of model training and inference should be considered separately: while training on GPUs outperforms all other node types in both runtime and energy consumption, inference on CPU nodes can be comparably efficient. One advantage of our approach is that the information on energy consumption is available to all users of the supercomputer, enabling an easy transfer to other workloads alongside raising user awareness of energy consumption.
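Point (1) above, that runtime alone is a poor proxy for energy, comes down to integrating measured node power over time rather than multiplying runtime by a fixed figure. The sketch below illustrates this with trapezoidal integration over invented power samples; the numbers and names are not from the paper's measurements.

```python
def energy_from_samples(times_s, power_w):
    """Trapezoidal integration of power samples -> energy in joules."""
    return sum(0.5 * (power_w[i] + power_w[i + 1]) * (times_s[i + 1] - times_s[i])
               for i in range(len(times_s) - 1))

# Hypothetical power trace of a node: ramps up under load, drops afterwards.
times = [0.0, 1.0, 2.0, 3.0, 4.0]            # seconds
power = [200.0, 350.0, 360.0, 340.0, 210.0]  # watts

measured = energy_from_samples(times, power)  # accounts for node composition
runtime_only = 4.0 * 200.0                    # naive: runtime x idle power
```

The two estimates differ substantially, which is exactly why per-node power measurement, rather than runtime scaling, is needed for meaningful energy comparisons.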