IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the Antarctic ice sheet. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is challenging due to the detector geometry, the inhomogeneous scattering and absorption of light in the ice and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point cloud graphs, with a graph neural network (GNN) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the current state-of-the-art maximum likelihood techniques used in IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR), compared to the current IceCube method. Alternatively, the GNN reduces the FPR by more than a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction and interaction vertex, the resolution improves on average by 13%-20% compared with current maximum likelihood techniques. When running on a GPU, the GNN is capable of processing IceCube events at a rate close to the 2.7 kHz median IceCube trigger rate, which opens the possibility of using low-energy neutrinos in online searches for transient events.
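As a hedged illustration of the general idea of treating a detector event as a point-cloud graph and classifying it with a GNN, the following plain-PyTorch sketch builds a k-nearest-neighbour graph over hit positions and applies one hand-written message-passing layer. The feature layout, neighbourhood size, layer widths and class labels are assumptions chosen for the example; this is not the architecture used in the paper.

```python
# Minimal sketch: an IceCube-style event as a point-cloud graph with one
# message-passing layer. All sizes and labels are illustrative assumptions.
import torch
import torch.nn as nn


def knn_edges(pos: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Return (2, N*k) edge indices linking each hit to its k nearest neighbours."""
    dist = torch.cdist(pos, pos)                           # (N, N) pairwise distances
    nbrs = dist.topk(k + 1, largest=False).indices[:, 1:]  # drop the self-match
    src = torch.arange(pos.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])            # edge i -> neighbour j


class SimpleEventGNN(nn.Module):
    def __init__(self, in_dim=5, hidden=64, n_classes=3):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * in_dim, hidden), nn.ReLU())
        self.upd = nn.Sequential(nn.Linear(in_dim + hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index):
        src, dst = edge_index
        m = self.msg(torch.cat([x[src], x[dst]], dim=-1))                 # edge messages
        agg = torch.zeros(x.size(0), m.size(-1)).index_add_(0, dst, m)    # sum at receiver
        h = self.upd(torch.cat([x, agg], dim=-1))                         # node update
        return self.head(h.mean(dim=0))                                   # event-level logits


# One toy "event": 40 hits with (x, y, z, time, charge) features.
hits = torch.randn(40, 5)
edges = knn_edges(hits[:, :3], k=8)
logits = SimpleEventGNN()(hits, edges)
print(logits.shape)  # torch.Size([3]) -- e.g. scores for a few event classes
```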
While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature, which can mainly be split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques, as well as the potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
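To make the first two of the surveyed categories concrete, the sketch below applies post-training uniform quantization and global magnitude pruning to a toy weight matrix. It is a minimal NumPy illustration of the principles, not a reproduction of the experiments in the article; the bit-width and sparsity level are arbitrary choices.

```python
# Hedged sketch of two compression ideas on a toy weight matrix:
# (i) 8-bit post-training quantization, (ii) global magnitude pruning.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)    # stand-in for layer weights

# (i) 8-bit uniform quantization: store int8 values plus one scale per tensor.
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_deq = W_q.astype(np.float32) * scale                # dequantized for inference

# (ii) magnitude pruning: zero out the 90% smallest-magnitude weights.
threshold = np.quantile(np.abs(W), 0.9)
mask = np.abs(W) >= threshold
W_pruned = W * mask

print("max quantization error:", np.abs(W - W_deq).max())
print("fraction of weights kept:", mask.mean())       # ~0.10
```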
Accurate activity location prediction is a crucial component of many mobility applications and is particularly required to develop personalized, sustainable transportation systems. Despite the widespread adoption of deep learning models, next location prediction models lack a comprehensive discussion and integration of mobility-related spatio-temporal contexts. Here, we utilize a multi-head self-attentional (MHSA) neural network that learns location transition patterns from historical location visits, their visit time and activity duration, as well as their surrounding land use functions, to infer an individual's next location. Specifically, we adopt point-of-interest data and latent Dirichlet allocation for representing locations' land use contexts at multiple spatial scales, generate embedding vectors of the spatio-temporal features, and learn to predict the next location with an MHSA network. Through experiments on two large-scale GNSS tracking datasets, we demonstrate that the proposed model outperforms other state-of-the-art prediction models, and reveal the contribution of various spatio-temporal contexts to the model's performance. Moreover, we find that the model trained on population data achieves higher prediction performance with fewer parameters than individual-level models due to learning from collective movement patterns. We also reveal that mobility conducted in the recent past and one week before has the largest influence on the current prediction, showing that learning from a subset of the historical mobility is sufficient to obtain an accurate location prediction result. We believe that the proposed model is vital for context-aware mobility prediction. The gained insights will help to understand location prediction models and promote their implementation for mobility applications.
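A minimal sketch of the core mechanism is given below, assuming discretized locations, time-of-day bins and normalized activity durations as inputs: the features are embedded, passed through multi-head self-attention over the visit history, and the last position is classified into a next-location vocabulary. Vocabulary sizes, dimensions and the omitted land-use/POI context are assumptions; this is not the paper's model.

```python
# Illustrative next-location model with multi-head self-attention (plain PyTorch).
import torch
import torch.nn as nn


class NextLocationMHSA(nn.Module):
    def __init__(self, n_locations=500, n_time_bins=48, d_model=64, n_heads=4):
        super().__init__()
        self.loc_emb = nn.Embedding(n_locations, d_model)
        self.time_emb = nn.Embedding(n_time_bins, d_model)
        self.dur_proj = nn.Linear(1, d_model)          # stand-in for activity duration
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, n_locations)

    def forward(self, locs, time_bins, durations):
        h = (self.loc_emb(locs) + self.time_emb(time_bins)
             + self.dur_proj(durations.unsqueeze(-1)))
        h, _ = self.attn(h, h, h)                      # self-attention over the history
        return self.out(h[:, -1])                      # logits for the next location


model = NextLocationMHSA()
locs = torch.randint(0, 500, (2, 10))        # 2 users, 10 past visits each
time_bins = torch.randint(0, 48, (2, 10))    # half-hour-of-day bins
durations = torch.rand(2, 10)                # normalized activity durations
print(model(locs, time_bins, durations).shape)   # torch.Size([2, 500])
```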
Curiosity for machine agents has been a focus of lively research activity. The study of human and animal curiosity, particularly specific curiosity, has unearthed several properties that would offer important benefits for machine learners, but that have not yet been well-explored in machine intelligence. In this work, we conduct a comprehensive, multidisciplinary survey of the field of animal and machine curiosity. As a principal contribution of this work, we use this survey as a foundation to introduce and define what we consider to be five of the most important properties of specific curiosity: 1) directedness towards inostensible referents, 2) cessation when satisfied, 3) voluntary exposure, 4) transience, and 5) coherent long-term learning. As a second main contribution of this work, we show how these properties may be implemented together in a proof-of-concept reinforcement learning agent: we demonstrate how the properties manifest in the behaviour of this agent in a simple non-episodic grid-world environment that includes curiosity-inducing locations and induced targets of curiosity. As we would hope, our example of a computational specific curiosity agent exhibits short-term directed behaviour while updating long-term preferences to adaptively seek out curiosity-inducing situations. This work, therefore, presents a landmark synthesis and translation of specific curiosity to the domain of machine learning and reinforcement learning and provides a novel view into how specific curiosity operates and in the future might be integrated into the behaviour of goal-seeking, decision-making computational agents in complex environments.
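The toy script below is only a loose sketch of how two of the listed properties (cessation when satisfied, transience) could surface in a tabular agent on a one-dimensional corridor: visiting a curiosity-inducing cell creates an intrinsic bonus tied to an induced target, the bonus vanishes once the target is reached and otherwise decays. The environment, constants and update rule are assumptions for illustration and do not reproduce the authors' agent.

```python
# Toy 1-D corridor agent with a decaying, satisfiable curiosity bonus.
import random

N_CELLS, TRIGGER, TARGET = 10, 2, 8
q = [[0.0, 0.0] for _ in range(N_CELLS)]        # Q-values for actions left/right
alpha, gamma, eps = 0.2, 0.95, 0.1

pos, bonus = 0, 0.0
for step in range(5000):
    a = random.randint(0, 1) if random.random() < eps else max((0, 1), key=lambda i: q[pos][i])
    nxt = max(0, min(N_CELLS - 1, pos + (1 if a == 1 else -1)))

    r = 0.0
    if nxt == TRIGGER and bonus == 0.0:
        bonus = 1.0                              # curiosity induced about TARGET
    if nxt == TARGET and bonus > 0.0:
        r += bonus                               # satisfied: curiosity ceases
        bonus = 0.0
    bonus *= 0.999                               # transience: unsatisfied curiosity fades

    q[pos][a] += alpha * (r + gamma * max(q[nxt]) - q[pos][a])
    pos = nxt

print("learned Q-values at the trigger cell:", [round(v, 2) for v in q[TRIGGER]])
```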
We address 2D floorplan reconstruction from 3D scans. Existing approaches typically employ heuristically designed multi-stage pipelines. Instead, we formulate floorplan reconstruction as a single-stage structured prediction task: find a variable-size set of polygons, which in turn are variable-length sequences of ordered vertices. To solve it we develop a novel Transformer architecture that generates polygons of multiple rooms in parallel, in a holistic manner without hand-crafted intermediate stages. The model features two-level queries for polygons and corners, and includes polygon matching to make the network end-to-end trainable. Our method achieves a new state-of-the-art for two challenging datasets, Structured3D and SceneCAD, along with significantly faster inference than previous methods. Moreover, it can readily be extended to predict additional information, i.e., semantic room types and architectural elements like doors and windows. Our code and models will be available at: https://github.com/ywyue/RoomFormer.
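As a rough, hedged sketch of the two-level query idea (not the released RoomFormer code), the snippet below decodes R*C learnable queries against scene features with a standard Transformer decoder; each query predicts one corner coordinate plus a validity logit, and queries are grouped into R candidate polygons. The query counts, feature source and heads are assumptions for illustration.

```python
# Illustrative two-level (polygon x corner) query decoding in plain PyTorch.
import torch
import torch.nn as nn

R, C, D = 8, 16, 128                     # max rooms, max corners per room, model dim
queries = nn.Parameter(torch.randn(R * C, D))
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(D, nhead=8, batch_first=True), num_layers=2
)
corner_head = nn.Linear(D, 2)            # normalized (x, y) corner coordinates
valid_head = nn.Linear(D, 1)             # is this corner slot used?

scene_feats = torch.randn(1, 1024, D)    # e.g. flattened features of a density map
h = decoder(queries.unsqueeze(0), scene_feats)          # (1, R*C, D)
corners = corner_head(h).sigmoid().view(1, R, C, 2)     # per-room corner sequences
validity = valid_head(h).view(1, R, C)                  # mask for unused corners
print(corners.shape, validity.shape)
```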
The well-documented presence of texture bias in modern convolutional neural networks has led to a plethora of algorithms that promote an emphasis on shape cues, often to support generalization to new domains. Yet, common datasets, benchmarks and general model selection strategies are missing, and there is no agreed, rigorous evaluation protocol. In this paper, we investigate difficulties and limitations when training networks with reduced texture bias. In particular, we also show that proper evaluation and meaningful comparisons between methods are not trivial. We introduce BiasBed, a testbed for texture- and style-biased training, including multiple datasets and a range of existing algorithms. It comes with an extensive evaluation protocol that includes rigorous hypothesis testing to gauge the significance of the results, despite the considerable training instability of some style bias methods. Our extensive experiments shed new light on the need for careful, statistically founded evaluation protocols for style bias (and beyond). For example, we find that some algorithms proposed in the literature do not significantly mitigate the impact of style bias at all. With the release of BiasBed, we hope to foster a common understanding of consistent and meaningful comparisons, and consequently faster progress towards learning methods free of texture bias. Code is available at https://github.com/D1noFuzi/BiasBed
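The snippet below sketches the kind of significance testing the abstract argues for: comparing two debiasing methods over repeated training runs rather than single numbers. The accuracy values and the choice of Welch's t-test are assumptions for illustration; BiasBed's actual protocol may use different tests and aggregation.

```python
# Sketch: test whether an apparent improvement across seeds is significant.
from scipy import stats

method_a = [61.2, 59.8, 63.1, 60.5, 62.0]   # e.g. OOD accuracy over 5 training seeds
method_b = [62.0, 60.1, 61.8, 61.2, 62.5]

t, p = stats.ttest_ind(method_a, method_b, equal_var=False)  # Welch's t-test
print(f"p-value = {p:.3f}")
if p >= 0.05:
    print("No significant difference: the apparent improvement may be noise.")
```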
Fine-grained population maps are needed in several domains, like urban planning, environmental monitoring, public health, and humanitarian operations. Unfortunately, in many countries only aggregate census counts over large spatial units are collected; moreover, these are not always up-to-date. We present POMELO, a deep learning model that employs coarse census counts and open geodata to estimate fine-grained population maps with 100 m ground sampling distance. Moreover, the model can also estimate population numbers when no census counts at all are available, by generalizing across countries. In a series of experiments for several countries in sub-Saharan Africa, the maps produced with POMELO are in good agreement with the most detailed available reference counts: disaggregation of coarse census counts reaches R2 values of 85-89%; unconstrained prediction in the absence of any counts reaches 48-69%.
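To clarify the disaggregation step in isolation, the sketch below rescales per-pixel density scores so that each coarse census region sums exactly to its reported count (dasymetric redistribution). The scores, region layout and counts are synthetic stand-ins; POMELO's network that produces the scores is not reproduced here.

```python
# Hedged sketch of census disaggregation onto a fine grid (NumPy only).
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random((100, 100))                 # predicted relative density per pixel
region_id = (np.arange(100)[:, None] // 50) * 2 + (np.arange(100)[None, :] // 50)  # 4 regions
census = {0: 1200.0, 1: 800.0, 2: 300.0, 3: 2500.0}   # coarse counts per region

population = np.zeros_like(scores)
for rid, total in census.items():
    mask = region_id == rid
    population[mask] = scores[mask] / scores[mask].sum() * total

print(population.sum(), sum(census.values()))   # totals match by construction
```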
Automatic face recognition is a well-known research field. Over the last three decades of intensive research in this field, many different face recognition algorithms have been proposed. With the popularity of deep learning and its capability to solve a wide variety of problems, face recognition researchers have concentrated their efforts on creating better models under this paradigm. Since 2015, state-of-the-art face recognition has been rooted in deep learning models. Despite the availability of large-scale and diverse datasets for evaluating the performance of face recognition algorithms, many modern datasets simply combine different factors that influence face recognition, such as face pose, occlusion, illumination, facial expression and image quality. When algorithms produce errors on these datasets, it is unclear which of the factors caused the errors, and hence there is no guidance on which directions require more research. This work is a follow-up to our previous work developed in 2014 and finally published in 2016, which showed the impact of various facial aspects on face recognition algorithms. By comparing the current state of the art with the best systems of the past, we demonstrate that faces under strong occlusion, some types of illumination, and strong expressions are problems mastered by deep learning algorithms, whereas recognition with low-resolution images, extreme pose variations, and open-set recognition remain open problems. To show this, we run a sequence of experiments using six different datasets and five different face recognition algorithms in an open-source and reproducible manner. We provide the source code to run all experiments, which is easily extensible, so that utilizing your own deep network in our evaluation is just a few minutes away.
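As a minimal sketch of the style of verification experiment described, the snippet below scores face pairs with the cosine similarity of deep embeddings and reports error rates at a threshold. The embeddings are random stand-ins; a real run would produce them with one of the evaluated networks, and the threshold is arbitrary.

```python
# Toy verification protocol: cosine similarity of embeddings, FMR/FNMR at a threshold.
import numpy as np

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(1000, 512))            # embeddings of first image in each pair
emb_b = rng.normal(size=(1000, 512))            # embeddings of second image in each pair
same_person = rng.integers(0, 2, size=1000).astype(bool)

cos = np.sum(emb_a * emb_b, axis=1) / (
    np.linalg.norm(emb_a, axis=1) * np.linalg.norm(emb_b, axis=1)
)
threshold = 0.3
accept = cos >= threshold
fmr = np.mean(accept[~same_person])             # false match rate (impostors accepted)
fnmr = np.mean(~accept[same_person])            # false non-match rate (genuine rejected)
print(f"FMR={fmr:.3f}  FNMR={fnmr:.3f}")
```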
The execution of processes leaves traces of event data in information systems. These event data can be analyzed with process mining techniques. For traditional process mining techniques, each event must be associated with exactly one object, e.g., a customer of the company. The events associated with one object form an event sequence called a case. A case describes an end-to-end run through a process. The cases contained in event data can be used to discover process models, detect frequent bottlenecks, or learn predictive models. However, events encountered in real life, e.g., in ERP systems, can often be associated with multiple objects. The traditional sequential case concept falls short for such object-centric event data, since these data exhibit a graph structure. One may force object-centric event data into the traditional case concept by flattening it. However, flattening manipulates the data and removes information. Therefore, a concept analogous to the case concept of traditional event logs is necessary to enable the application of different process mining tasks to object-centric event data. In this paper, we introduce a case concept for object-centric process mining: process executions. These are graph-based generalizations of cases as considered in traditional process mining. Furthermore, we provide techniques to extract process executions. Based on these executions, we determine equivalent process behavior with respect to an attribute using graph isomorphism. Process executions that are equivalent with respect to the event activity are object-centric variants, i.e., a generalization of variants in traditional process mining. We provide a visualization technique for object-centric variants. The scalability and efficiency of the contribution are extensively evaluated. Furthermore, we provide a case study showing the most frequent object-centric variants of a real-life event log.
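A hedged sketch of the variant idea is given below using networkx: each process execution is modeled as a directed graph of events labelled with activities, and two executions belong to the same object-centric variant when their graphs are isomorphic under matching activity labels. The toy order/item example and helper function are assumptions for illustration, not the authors' extraction code.

```python
# Toy object-centric variants via activity-labelled graph isomorphism.
import networkx as nx
from networkx.algorithms.isomorphism import categorical_node_match


def execution(events, edges):
    """events: {event_id: activity}, edges: follows-relations between events."""
    g = nx.DiGraph()
    for eid, act in events.items():
        g.add_node(eid, activity=act)
    g.add_edges_from(edges)
    return g


# Two executions of an order process touching one order object and two item objects.
e1 = execution({1: "create order", 2: "pick item", 3: "pick item", 4: "ship"},
               [(1, 2), (1, 3), (2, 4), (3, 4)])
e2 = execution({7: "create order", 8: "pick item", 9: "pick item", 10: "ship"},
               [(7, 8), (7, 9), (8, 10), (9, 10)])

same_variant = nx.is_isomorphic(e1, e2, node_match=categorical_node_match("activity", None))
print("same object-centric variant:", same_variant)   # True
```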
Unlike 2D raster images, there is no single dominant representation for 3D visual data processing. Different formats such as point clouds, meshes, or implicit functions each have their advantages and weaknesses. Still, grid representations such as signed distance functions have attractive properties in 3D as well. In particular, they offer constant-time random access and are well suited for modern machine learning. Unfortunately, the storage size of a grid grows exponentially with its dimension. Hence, they often exceed memory limits even at moderate resolutions. This work explores various low-rank tensor formats, including the Tucker, tensor train, and quantics tensor train decompositions, to compress time-varying 3D data. Our method iteratively computes, voxelizes and compresses the truncated signed distance function of each frame, and applies tensor rank truncation to condense all frames into a single compressed tensor that represents the entire 4D scene. We show that low-rank tensor compression is extremely compact for storing and querying time-varying signed distance functions. It significantly reduces the memory footprint of 4D scenes while surprisingly preserving their geometric quality. Unlike existing iterative, learning-based approaches such as DeepSDF and NeRF, our method uses a closed-form algorithm with theoretical guarantees.
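The NumPy sketch below illustrates one of the named formats, a tensor-train decomposition computed by successive truncated SVDs (TT-SVD), applied to a small synthetic time-varying truncated SDF. The grid size, rank cap and moving-sphere volume are toy assumptions; the paper's actual pipeline and guarantees are not reproduced here.

```python
# Minimal TT-SVD compression of a 4-D (time x X x Y x Z) truncated SDF volume.
import numpy as np


def tt_svd(tensor, max_rank):
    """Decompose an N-D array into tensor-train cores via successive truncated SVDs."""
    shape, cores, r_prev = tensor.shape, [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(r_prev, shape[k], r))
        mat = (s[:r, None] * vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores


def tt_reconstruct(cores):
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out[0, ..., 0]


# Toy stand-in for a time-varying TSDF: a sphere drifting along x over 16 frames.
t, x, y, z = np.meshgrid(np.linspace(0, 1, 16), *([np.linspace(-1, 1, 32)] * 3), indexing="ij")
vol = np.clip(np.sqrt((x - 0.5 * t) ** 2 + y ** 2 + z ** 2) - 0.4, -0.1, 0.1)

cores = tt_svd(vol, max_rank=20)
err = np.linalg.norm(vol - tt_reconstruct(cores)) / np.linalg.norm(vol)
print("parameters in cores:", sum(c.size for c in cores), "relative error:", round(err, 4))
```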