This work presents a novel reconfigurable architecture for low-latency graph neural network (GNN) designs tailored to particle detectors. Accelerating GNNs for particle detectors is challenging because sub-microsecond latency is required to deploy the networks for online event selection in the Level-1 trigger of CERN Large Hadron Collider experiments. This paper proposes a custom code transformation with strength reduction for the matrix multiplication operations in interaction-network-based GNNs with fully connected graphs, which avoids expensive multiplications. It exploits sparsity patterns and binary adjacency matrices, and avoids irregular memory accesses, leading to reduced latency and improved hardware efficiency. In addition, we introduce an outer-product-based matrix multiplication approach, enhanced by the strength reduction, for low-latency designs. A fusion step is also introduced to further reduce the design latency. Furthermore, a GNN-specific algorithm-hardware co-design approach is presented, which not only finds designs with better latency but also discovers high-accuracy designs under a given latency constraint. Finally, a customizable template for this low-latency GNN hardware architecture has been designed and open-sourced; it enables the generation of low-latency FPGA designs with efficient resource utilization using a high-level synthesis tool. Evaluation results show that our FPGA implementation is 24 times faster and consumes 45 times less power than a GPU implementation. Compared to our previous FPGA implementation, this work achieves 6.51 to 16.7 times lower latency. Moreover, the latency of our FPGA design is low enough to enable the deployment of GNNs in a sub-microsecond, real-time collider trigger system, allowing it to benefit from improved accuracy.
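A minimal NumPy sketch of the strength-reduction idea (not the paper's HLS template): for a fully connected graph, the sender/receiver matrices of an interaction network are binary, so multiplying by them reduces to gathering and summing node-feature rows, with no real multiplications.

```python
import numpy as np

# Binary adjacency (sender) matrix of an interaction network on a fully
# connected graph: each row selects exactly one node, so "R_s @ X" is a gather.
n_nodes, n_feat = 4, 3
X = np.random.rand(n_nodes, n_feat)                    # node features
edges = [(s, r) for s in range(n_nodes) for r in range(n_nodes) if s != r]

R_s = np.zeros((len(edges), n_nodes))
for e, (s, _) in enumerate(edges):
    R_s[e, s] = 1.0

dense = R_s @ X                                        # reference matrix multiply
gathered = X[[s for s, _ in edges]]                    # strength-reduced: index, no multiplies
assert np.allclose(dense, gathered)
```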
The production of dark matter particles from confining dark sectors may lead to many novel experimental signatures. Depending on the details of the theory, dark quark production in proton-proton collisions could result in semi-visible jets of particles: collimated sprays of dark hadrons of which only some are detectable by particle collider experiments. The experimental signature is characterized by reconstructed missing momentum aligned with the visible components of the jets. This complex topology is sensitive to detector inefficiencies and mis-reconstruction that generate artificial missing momentum. With this work, we propose a signal-agnostic strategy to reject ordinary jets and identify semi-visible jets via anomaly detection techniques. A deep neural autoencoder network with jet substructure variables as input proves very useful for analyzing anomalous jets. The study focuses on the semi-visible jet signature; however, the technique can be applied to any new physics model that predicts signatures with jets from non-SM particles.
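A minimal sketch of the anomaly-detection setup, assuming a placeholder matrix `x_qcd` of jet-substructure variables for ordinary QCD jets; the paper's exact inputs and architecture may differ.

```python
import numpy as np
import tensorflow as tf

n_vars = 8
x_qcd = np.random.rand(10000, n_vars).astype("float32")   # placeholder substructure variables

# Dense autoencoder trained only on ordinary jets.
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_vars,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="relu"),            # bottleneck
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(n_vars),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_qcd, x_qcd, epochs=5, batch_size=256, verbose=0)

# Anomaly score: jets that reconstruct poorly (large MSE) are flagged as candidates.
def anomaly_score(x):
    return np.mean((autoencoder.predict(x, verbose=0) - x) ** 2, axis=1)
```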
Autoencoders have useful applications in high energy physics for anomaly detection, particularly for jets: collimated showers of particles produced in collisions such as those at the CERN Large Hadron Collider. We explore graph-based autoencoders, which operate on jets in their "particle cloud" representation and can exploit the interdependencies among the particles within a jet, for such tasks. Additionally, we develop a differentiable approximation to the energy mover's distance via a graph neural network, which may subsequently be used as the reconstruction loss function for autoencoders.
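A simplified sketch of the target quantity: the energy mover's distance is an optimal-transport cost between two particle clouds. Assuming equal multiplicity and uniform particle weights, it reduces to an assignment problem, which is not differentiable; the paper learns a GNN surrogate precisely so the quantity can serve as a reconstruction loss.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd_like(cloud_a, cloud_b):
    # cloud_*: (n_particles, 2) arrays of (eta, phi) coordinates, equal weights.
    cost = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)   # optimal matching of particles
    return cost[rows, cols].mean()

a = np.random.rand(16, 2)
b = np.random.rand(16, 2)
print(emd_like(a, b))
```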
The particle-flow (PF) algorithm is used in general-purpose particle detectors to reconstruct a comprehensive particle-level view of the collision by combining information from different subdetectors. A graph neural network (GNN) model, known as the machine-learned particle-flow (MLPF) algorithm, has been developed to substitute the rule-based PF algorithm. However, understanding the model's decisions is not straightforward, especially given the complexity of the set-to-set prediction task, the dynamic graph building, and the message-passing steps. In this paper, we adapt the layerwise relevance propagation technique for GNNs and apply it to the MLPF algorithm to gauge the relevance of nodes and features for its predictions. Through this process, we gain insight into the model's decision-making.
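A minimal sketch of the layerwise relevance propagation epsilon-rule for a single dense layer, the basic building block that gets adapted to the GNN's message-passing layers; the weights and relevances here are placeholders.

```python
import numpy as np

def lrp_dense(x, W, b, relevance_out, eps=1e-6):
    # x: (n_in,) inputs, W: (n_in, n_out) weights, relevance_out: (n_out,)
    z = x @ W + b                                # forward pre-activations
    s = relevance_out / (z + eps * np.sign(z))   # stabilized relevance per output
    return x * (W @ s)                           # redistribute relevance to the inputs

x = np.random.rand(5)
W = np.random.randn(5, 3)
b = np.zeros(3)
R_out = np.random.rand(3)
R_in = lrp_dense(x, W, b, R_out)
print(R_in, R_in.sum(), R_out.sum())             # relevance is approximately conserved
```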
We present an application of anomaly detection techniques based on deep recurrent autoencoders to the problem of detecting gravitational wave signals in laser interferometers. Trained on noise data, this class of algorithms can detect signals using an unsupervised strategy, i.e., without targeting a specific kind of source. We develop a custom architecture to analyze the data from two interferometers. We compare the obtained performance to that of other autoencoder architectures and a convolutional classifier. The unsupervised nature of the proposed strategy comes at a cost in accuracy compared with more traditional supervised techniques. On the other hand, there is a qualitative gain in generalizing beyond the set of pre-computed signal templates. The recurrent autoencoder outperforms the other autoencoders based on different architectures. The class of recurrent autoencoders presented in this paper could complement the search strategies used for gravitational wave detection and extend the reach of the ongoing detection campaigns.
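An illustrative sketch of a recurrent (LSTM) autoencoder on fixed-length strain segments, trained on noise only; the paper's custom two-interferometer architecture is more involved, and the data here is a random placeholder.

```python
import numpy as np
import tensorflow as tf

timesteps, channels = 128, 2          # e.g. two interferometer streams
noise = np.random.randn(1000, timesteps, channels).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, channels)),
    tf.keras.layers.LSTM(32),                             # encode to a latent vector
    tf.keras.layers.RepeatVector(timesteps),               # unroll latent over time
    tf.keras.layers.LSTM(32, return_sequences=True),       # decode
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(channels)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(noise, noise, epochs=2, batch_size=64, verbose=0)
# Segments with large reconstruction error are candidate signals.
```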
Computational units in artificial neural networks follow a simplified model of biological neurons. In the biological model, the output signal of a neuron runs down the axon, splits following the many branches at its end, and passes identically to all the downstream neurons of the network. Each of the downstream neurons uses its copy of this signal as one of many dendritic inputs, integrates them all, and fires an output if the result is above some threshold. In the artificial neural network, this translates to the fact that the nonlinear filtering of the signal is performed in the upstream neuron, meaning that in practice the same activation is shared by all the downstream neurons that use that signal as their input. Dendrites thus play a passive role. We propose a slightly more complex model for the biological neuron, where dendrites play an active role: the activation at the output of the upstream neuron becomes optional, and instead the signals going through each dendrite undergo independent nonlinear filterings before the linear combination. We implement this new model as a ReLU computational unit and discuss its biological plausibility. We compare this new computational unit with the standard one and describe it from a geometrical point of view. We provide a Keras implementation of this unit for fully connected and convolutional layers and estimate the change in FLOPs and weights. We then use these layers in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, obtaining performance improvements over standard ResNets of up to 1.73%. Finally, we prove a universal representation theorem for continuous functions on compact sets and show that this new unit has more representational power than its standard counterpart.
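A hedged sketch of one possible reading of such a unit as a custom Keras layer: each dendrite (each input-output connection) applies its own ReLU, with a per-connection bias, before the weighted sum, whereas the standard unit applies a single shared ReLU after the sum. The authors' published Keras layer may differ in its exact parameterization.

```python
import tensorflow as tf

class DendriticDense(tf.keras.layers.Layer):
    """Dense layer where each dendrite filters its input independently."""

    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        n_in = int(input_shape[-1])
        self.w = self.add_weight(shape=(n_in, self.units), initializer="glorot_uniform")
        self.b = self.add_weight(shape=(n_in, self.units), initializer="zeros")

    def call(self, x):
        # x: (batch, n_in) -> broadcast to (batch, n_in, units),
        # apply a per-dendrite ReLU, then sum over the inputs (no output activation).
        pre = x[..., None] * self.w + self.b
        return tf.reduce_sum(tf.nn.relu(pre), axis=1)

layer = DendriticDense(8)
y = layer(tf.random.normal((4, 16)))   # -> shape (4, 8)
```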
The paper addresses the problem of time offset synchronization in the presence of temperature variations, which lead to a non-Gaussian environment. In this context, regular Kalman filtering turns out to be suboptimal. A functional optimization approach is developed in order to approximate optimal estimation of the clock offset between master and slave. A numerical approximation is provided to this end, based on regular neural network training. Other heuristics are provided as well, based on spline regression. An extensive performance evaluation highlights the benefits of the proposed techniques, which can be easily generalized to several clock synchronization protocols and operating environments.
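A minimal sketch in the spirit of the spline-regression heuristic mentioned above: smooth noisy, heavy-tailed offset observations to track a slowly drifting (temperature-driven) clock offset. The drift model, noise model, and smoothing factor are placeholders, not the paper's settings.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0, 600, 300)                         # seconds
true_offset = 1e-3 * np.sin(2 * np.pi * t / 600)     # slow drift induced by temperature
obs = true_offset + 5e-5 * np.random.standard_t(df=3, size=t.size)  # heavy-tailed noise

# Smoothing spline regression of the observed offsets over time.
spline = UnivariateSpline(t, obs, s=len(t) * (5e-5) ** 2)
offset_hat = spline(t)
print("RMS error:", np.sqrt(np.mean((offset_hat - true_offset) ** 2)))
```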
The paper addresses state estimation for clock synchronization in the presence of factors affecting the quality of synchronization. Examples are temperature variations and delay asymmetry. These working conditions make synchronization a challenging problem in many wireless environments, such as Wireless Sensor Networks or WiFi. Dynamic state estimation is investigated, as it is essential to overcome non-stationary noise. The two-way timing message exchange synchronization protocol is taken as a reference. No a-priori assumptions are made on the stochastic environment and no temperature measurements are taken. The algorithms are fully specified offline, without the need to tune parameters according to the working conditions. The presented approach proves to be robust to a large set of temperature variations, different delay distributions, and levels of asymmetry in the transmission path.
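A short sketch of the standard two-way timing message exchange (as used in NTP/PTP-style protocols) that such estimators operate on: the slave observes four timestamps per round and forms raw offset and delay measurements, which are then filtered over time.

```python
def two_way_measurement(t1, t2, t3, t4):
    # t1: master send, t2: slave receive, t3: slave send, t4: master receive
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # exact only when path delays are symmetric
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # mean one-way delay
    return offset, delay

# Example round: 12 ms downlink, 8 ms uplink as seen by the offset slave clock.
print(two_way_measurement(t1=0.000, t2=0.012, t3=0.020, t4=0.028))
```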
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs that have many computational and memory constraints. In this Mobile AI challenge, we address this problem and ask the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to do high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to a 60 FPS rate when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
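An illustrative sketch of producing a fully INT8 model of the kind evaluated in the challenge, via post-training quantization in TFLite; the tiny model and random calibration images below are placeholders, and the board-specific deployment steps for the VS680 NPU are not shown.

```python
import numpy as np
import tensorflow as tf

# Toy 3x super-resolution network: a conv followed by pixel shuffle (depth_to_space).
sr_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(27, 3, padding="same", activation="relu"),
    tf.keras.layers.Lambda(lambda x: tf.nn.depth_to_space(x, 3)),
])

def representative_dataset():
    for _ in range(10):                                   # calibration samples
        yield [np.random.rand(1, 64, 64, 3).astype("float32")]

converter = tf.lite.TFLiteConverter.from_keras_model(sr_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
open("sr_int8.tflite", "wb").write(converter.convert())
```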
Feature selection is of great importance in Machine Learning, where it can be used to reduce the dimensionality of classification, ranking and prediction problems. The removal of redundant and noisy features can improve both the accuracy and scalability of the trained models. However, feature selection is a computationally expensive task with a solution space that grows combinatorially. In this work, we consider in particular a quadratic feature selection problem that can be tackled with the Quantum Approximate Optimization Algorithm (QAOA), already employed in combinatorial optimization. First, we represent the feature selection problem with the QUBO formulation, which is then mapped to an Ising spin Hamiltonian. Then we apply QAOA with the goal of finding the ground state of this Hamiltonian, which corresponds to the optimal selection of features. In our experiments, we consider seven different real-world datasets with dimensionality up to 21 and run QAOA both on a quantum simulator and, for small datasets, on the 7-qubit IBM (ibm-perth) quantum computer. We use the set of selected features to train a classification model and evaluate its accuracy. Our analysis shows that it is possible to tackle the feature selection problem with QAOA and that currently available quantum devices can be used effectively. Future studies could test a wider range of classification models as well as improve the effectiveness of QAOA by exploring better-performing optimizers for its classical step.
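A minimal sketch of the QUBO step described above: reward per-feature relevance on the diagonal, penalize pairwise redundancy off the diagonal, and seek the binary vector minimizing x^T Q x (equivalently, the ground state of the corresponding Ising Hamiltonian). The brute-force search below stands in for QAOA, and the relevance/redundancy scores are placeholders for dataset-derived values.

```python
import itertools
import numpy as np

n = 6
relevance = np.random.rand(n)                  # e.g. feature-target correlation
redundancy = np.random.rand(n, n)              # e.g. pairwise feature correlation
redundancy = (redundancy + redundancy.T) / 2
alpha = 0.5                                    # trade-off between relevance and redundancy

Q = alpha * redundancy.copy()
np.fill_diagonal(Q, -relevance)                # minimizing x^T Q x favors relevant features

best = min((np.array(x) for x in itertools.product([0, 1], repeat=n)),
           key=lambda x: x @ Q @ x)
print("selected features:", np.flatnonzero(best))
```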