Influenza viruses mutate rapidly and can pose a threat to public health, especially to vulnerable populations. Throughout history, influenza A viruses have caused pandemics across different species. Identifying the origin of a virus is important for preventing the spread of an outbreak. Recently, there has been growing interest in using machine learning algorithms to provide fast and accurate predictions for viral sequences. In this study, real test datasets and a variety of evaluation metrics were used to assess machine learning algorithms at different taxonomic levels. Since hemagglutinin is the major protein in the immune response, only hemagglutinin sequences were used, represented by position-specific scoring matrices (PSSM) and word embeddings. The results show that the 5-gram transformer neural network is the most effective algorithm for predicting the origin of viral sequences, achieving approximately 99.54% AUCPR, 98.01% F1 score, and 96.60% MCC at the higher taxonomic level, and approximately 94.74% AUCPR, 87.41% F1 score, and 80.79% MCC at the lower taxonomic level.
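To make the sequence representation concrete, here is a minimal sketch of turning protein sequences into overlapping 5-grams for a transformer-style classifier; the function names and toy sequences are illustrative assumptions, not the paper's code.

```python
# Hypothetical 5-gram tokenization of hemagglutinin protein sequences;
# the resulting integer ids would feed an embedding layer followed by a
# transformer encoder.

def kmers(seq: str, k: int = 5) -> list[str]:
    """Split a protein sequence into overlapping k-mers (here, 5-grams)."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def build_vocab(sequences: list[str], k: int = 5) -> dict[str, int]:
    """Assign every observed k-mer an integer id; 0 is kept for padding/unknown."""
    vocab: dict[str, int] = {}
    for seq in sequences:
        for gram in kmers(seq, k):
            vocab.setdefault(gram, len(vocab) + 1)
    return vocab

def encode(seq: str, vocab: dict[str, int], k: int = 5) -> list[int]:
    """Encode a sequence as k-mer ids (unseen k-mers map to 0)."""
    return [vocab.get(gram, 0) for gram in kmers(seq, k)]

train = ["MKAILVVLLYTFATANA", "MKTIIALSYIFCLALG"]   # toy HA fragments
vocab = build_vocab(train)
ids = encode(train[0], vocab)                        # input to the transformer
```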
Recursion is the fundamental paradigm for finitely describing potentially infinite objects. Since state-of-the-art reinforcement learning (RL) algorithms cannot reason directly about recursion, they must rely on the practitioner's ingenuity to design a suitable "flat" representation of the environment. The resulting manual feature constructions and approximations are cumbersome and error-prone; their lack of transparency hinders scalability. To overcome these challenges, we develop RL algorithms capable of computing optimal policies in environments described as a collection of Markov decision processes (MDPs) that can recursively invoke one another. Each constituent MDP is characterized by several entry and exit points corresponding to the input and output values of these invocations. These recursive MDPs (or RMDPs) are expressively equivalent to probabilistic pushdown systems (with the call stack playing the role of the pushdown stack) and can model probabilistic programs with recursive procedural calls. We introduce Recursive Q-learning, a model-free RL algorithm for RMDPs, and prove that it converges for finite, single-exit and deterministic multi-exit RMDPs under mild assumptions.
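As a rough illustration of the setting, and emphatically not the paper's Recursive Q-learning algorithm, the sketch below runs plain tabular Q-learning in a toy two-component recursive MDP: the simulator maintains the call stack, and the learner observes only the top frame. All names, rewards, and dynamics are invented.

```python
import random
from collections import defaultdict

class ToyRMDP:
    """Root component with one call site into a child component; the agent
    observes only the top stack frame (component, state)."""

    def reset(self):
        self.stack = [("root", 0)]            # call stack of (component, state)
        return self.stack[-1]

    def step(self, action):
        comp, s = self.stack[-1]
        if comp == "root" and s == 0 and action == 1:
            self.stack.append(("child", 0))   # push a frame: enter the child
            return self.stack[-1], 0.0, False
        s += 1                                # local transition
        reward = 1.0 if (comp == "child" and s == 2) else 0.1
        if s == 2 and len(self.stack) > 1:    # child exit: pop, resume caller
            self.stack.pop()
            comp, s = self.stack[-1][0], self.stack[-1][1] + 1
        self.stack[-1] = (comp, s)
        done = comp == "root" and s >= 2
        return (comp, s), reward, done

env = ToyRMDP()
Q = defaultdict(lambda: [0.0, 0.0])           # Q[(component, state)][action]
alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(2000):
    obs, done = env.reset(), False
    while not done:
        a = random.randrange(2) if random.random() < eps \
            else max((0, 1), key=lambda act: Q[obs][act])
        nxt, r, done = env.step(a)
        target = r + gamma * (0.0 if done else max(Q[nxt]))
        Q[obs][a] += alpha * (target - Q[obs][a])
        obs = nxt
```

The paper's Recursive Q-learning treats the recursive structure explicitly, whereas this baseline merely hides the stack inside the simulator; the sketch only hints at why the call stack matters for the observation.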
Influenza occurs every season and occasionally causes pandemics. Although its mortality rate is low, influenza is a major public health concern because it can be complicated by severe illnesses such as pneumonia. A fast, accurate and low-cost method for predicting the original host and subtype of influenza viruses could help reduce viral transmission and benefit resource-poor regions. In this work, we propose multi-channel neural networks to predict the antigenic subtype and host of influenza viruses from hemagglutinin and neuraminidase protein sequences. An integrated dataset containing complete protein sequences was used to produce a pre-trained model, and two additional datasets were used to test the model's performance. One test set contained complete protein sequences, and the other contained incomplete protein sequences. The results show that multi-channel neural networks are applicable for predicting the host and antigenic subtype of influenza viruses from both complete and partial protein sequences.
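A minimal sketch of what such a multi-channel architecture might look like, assuming integer-encoded sequences, one convolutional channel per protein, and two output heads; the layer sizes and class counts are illustrative guesses, not the paper's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB = 26       # amino-acid alphabet size (integer-encoded, 0 = padding)
MAX_LEN = 600    # assumed maximum protein length

def channel(name: str):
    """One convolutional channel over an integer-encoded protein sequence."""
    inp = layers.Input(shape=(MAX_LEN,), name=name)
    x = layers.Embedding(VOCAB, 64)(inp)
    x = layers.Conv1D(128, 9, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    return inp, x

ha_in, ha_feat = channel("ha_sequence")   # hemagglutinin channel
na_in, na_feat = channel("na_sequence")   # neuraminidase channel
merged = layers.concatenate([ha_feat, na_feat])
merged = layers.Dense(128, activation="relu")(merged)

host_out = layers.Dense(10, activation="softmax", name="host")(merged)
subtype_out = layers.Dense(18, activation="softmax", name="subtype")(merged)

model = Model([ha_in, na_in], [host_out, subtype_out])
model.compile(optimizer="adam",
              loss={"host": "sparse_categorical_crossentropy",
                    "subtype": "sparse_categorical_crossentropy"})
```

One design note: global max pooling makes each channel tolerant of truncated inputs, which fits the partial-sequence test setting described in the abstract.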
The rapid mutation of influenza viruses threatens public health, and reassortment in viruses with different hosts can lead to fatal pandemics. However, because influenza viruses can circulate among different species, it is difficult to detect the original host of a virus during or after an outbreak. Early and rapid detection of the viral host would therefore help reduce the further spread of the virus. We use various machine learning models, with features derived from position-specific scoring matrices (PSSM) and features learned from word embeddings and word encodings, to infer the origin host of a virus. The results show that the PSSM-based models reach an MCC of about 95% and an F1 score of about 96%, while the models with word embeddings obtain an MCC of about 96% and an F1 score of about 97%.
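The sketch below gives one simplified reading of the PSSM branch: derive position-specific log-odds scores from an alignment and feed the per-site scores of each sequence to a classical classifier. The pseudocount scheme and the random-forest choice are illustrative assumptions, not the paper's setup (full PSSMs are typically produced by tools such as PSI-BLAST).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AA = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(AA)}

def pssm(aligned: list[str]) -> np.ndarray:
    """Position-specific log-odds scores from aligned sequences, shape (L, 20)."""
    L = len(aligned[0])
    counts = np.ones((L, 20))                     # +1 pseudocount per cell
    for seq in aligned:
        for pos, aa in enumerate(seq):
            if aa in IDX:
                counts[pos, IDX[aa]] += 1
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(freqs / (1.0 / 20))            # uniform background

def features(seq: str, profile: np.ndarray) -> np.ndarray:
    """Score one aligned sequence against the profile, one value per site."""
    return np.array([profile[pos, IDX[aa]] if aa in IDX else 0.0
                     for pos, aa in enumerate(seq)])

aligned = ["MKAIL", "MKTIL", "MKAIV"]             # toy alignment
profile = pssm(aligned)
X = np.stack([features(s, profile) for s in aligned])
y = [0, 1, 0]                                     # toy host labels
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```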
The analysis of network structure is essential to many scientific areas, ranging from biology to sociology. As the computational task of clustering these networks into partitions, i.e., solving the community detection problem, is generally NP-hard, heuristic solutions are indispensable. The exploration of expedient heuristics has led to particularly promising approaches in the emerging technology of quantum computing. Motivated by the substantial hardware demands of all established quantum community detection approaches, we introduce a novel QUBO-based approach that needs only as many qubits as the graph has nodes and is represented by a QUBO matrix as sparse as the input graph's adjacency matrix. The substantial improvement in the sparsity of the QUBO matrix, which is typically very dense in related work, is achieved through the novel concept of separation nodes. Instead of assigning every node to a community directly, this approach relies on the identification of a separation-node set which, upon its removal from the graph, yields a set of connected components representing the core components of the communities. A greedy heuristic then assigns the nodes of the separation-node set to the identified community cores, and subsequent experimental results provide a proof of concept. This work hence presents a promising approach to NISQ-ready quantum community detection, catalyzing the application of quantum computers to the network-structure analysis of large-scale, real-world problem instances.
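To illustrate how a node-selection QUBO can inherit the sparsity of the adjacency matrix, the sketch below builds a vertex-cover-style QUBO whose only non-zeros sit on the diagonal and at edge positions; this objective is a hypothetical stand-in, not the paper's separation-node formulation.

```python
import numpy as np
import networkx as nx

def node_selection_qubo(G: nx.Graph, a: float = 1.0, b: float = 2.0) -> np.ndarray:
    """QUBO over x in {0,1}^n, x_i = 1 meaning node i is selected: pay `a`
    per selected node, penalty `b` for every edge with no selected endpoint.
    Non-zeros appear only on the diagonal and at edge positions."""
    n = G.number_of_nodes()
    Q = np.zeros((n, n))
    for i in G.nodes:
        Q[i, i] = a
    for i, j in G.edges:
        # b * (1 - x_i) * (1 - x_j) = b - b*x_i - b*x_j + b*x_i*x_j
        Q[i, i] -= b
        Q[j, j] -= b
        Q[min(i, j), max(i, j)] += b
    return Q

G = nx.karate_club_graph()
Q = node_selection_qubo(G)          # as sparse as the adjacency matrix
# x^T Q x is then minimized by an annealer or QAOA; the selected nodes would
# play the role of a separation-node set whose removal leaves community cores.
```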
In this work, a method for obtaining pixel-wise error bounds in Bayesian regularization of inverse imaging problems is introduced. The proposed method employs estimates of the posterior variance together with techniques from conformal prediction in order to obtain coverage guarantees for the error bounds, without making any assumption on the underlying data distribution. It is generally applicable to Bayesian regularization approaches, independent, e.g., of the concrete choice of the prior. Furthermore, the coverage guarantees can also be obtained when only approximate sampling from the posterior is possible. In particular, this allows the proposed framework to incorporate any learned prior in a black-box manner. Guaranteed coverage without assumptions on the underlying distributions is only achievable because the magnitude of the error bounds is, in general, unknown in advance. Nevertheless, experiments with multiple regularization approaches presented in the paper confirm that in practice the obtained error bounds are rather tight. To realize the numerical experiments, a novel primal-dual Langevin algorithm for sampling from non-smooth distributions is also introduced in this work.
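The following sketch shows a standard split-conformal calibration step in the spirit of the approach: posterior standard deviations scale a per-image worst-case score, and a single calibrated quantile turns them into pixel-wise bounds with marginal coverage. The concrete score, the variable names, and the toy data are assumptions.

```python
import numpy as np

def calibrate(errors: np.ndarray, stds: np.ndarray, alpha: float = 0.1) -> float:
    """errors, stds: (n, H, W) absolute reconstruction errors and posterior
    standard deviations on a held-out calibration set."""
    # Per-image score: worst pixel error measured in units of posterior std.
    scores = (errors / (stds + 1e-8)).reshape(len(errors), -1).max(axis=1)
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))       # conformal quantile index
    return float(np.sort(scores)[k - 1])

rng = np.random.default_rng(0)
errors = np.abs(rng.normal(size=(50, 8, 8)))      # toy calibration data
stds = np.full((50, 8, 8), 1.0)
lam = calibrate(errors, stds)
bound = lam * stds[0]   # pixel-wise bound for a new image: with probability
                        # >= 1 - alpha, all of its pixel errors lie inside
```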
Machine learning (ML) on graph-structured data has recently received deepened interest in the context of intrusion detection in the cybersecurity domain. Due to the increasing amounts of data generated by monitoring tools, as well as more and more sophisticated attacks, these ML methods are gaining traction. Knowledge graphs and their corresponding learning techniques, such as Graph Neural Networks (GNNs), with their ability to seamlessly integrate data from multiple domains using human-understandable vocabularies, are finding application in the cybersecurity domain. However, similar to other connectionist models, GNNs lack transparency in their decision making. This is especially important as the cybersecurity domain tends to produce a high number of false positive alerts, so that triage must be done by domain experts, requiring considerable manpower. We therefore address Explainable AI (XAI) for GNNs to enhance trust management, exploring the combination of symbolic and sub-symbolic methods that incorporate domain knowledge in the area of cybersecurity. We experimented with this approach by generating explanations in an industrial demonstrator system. The proposed method is shown to produce intuitive explanations for alerts across a diverse range of scenarios. Not only do the explanations provide deeper insights into the alerts, but they also lead to a reduction of false positive alerts by 66%, and by 93% when the fidelity metric is included.
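As a small illustration of the sub-symbolic side, the sketch below scores edge importance for a GNN prediction via gradients with respect to a soft edge mask, the idea underlying mask-based explainers; the model and data are toy stand-ins, not the demonstrator system from the paper.

```python
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """One round of message passing followed by a node classifier."""
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.lin = nn.Linear(2 * dim_in, dim_out)

    def forward(self, x, edge_index, edge_mask):
        src, dst = edge_index
        msgs = torch.zeros_like(x)
        msgs.index_add_(0, dst, x[src] * edge_mask.unsqueeze(1))  # masked messages
        return self.lin(torch.cat([x, msgs], dim=1))

x = torch.randn(4, 8)                          # 4 nodes, 8 features
edge_index = torch.tensor([[0, 1, 2],          # edges 0->1, 1->2, 2->3
                           [1, 2, 3]])
mask = torch.ones(3, requires_grad=True)       # soft mask, one entry per edge

model = TinyGNN(8, 2)
alert_logit = model(x, edge_index, mask)[3, 1] # "alert" class at node 3
alert_logit.backward()
edge_importance = mask.grad.abs()              # larger gradient = more influential
```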
Earthquakes, fire, and floods often cause structural collapses of buildings. However, the inspection of damaged buildings poses a high risk for emergency forces or is even impossible. We present three selected recent missions of the Robotics Task Force of the German Rescue Robotics Center in which both ground and aerial robots were used to explore destroyed buildings. We describe and reflect on the missions as well as the lessons learned from them. In order to make robots from research laboratories fit for real operations, realistic test environments were set up for outdoor and indoor use and tested in regular exercises by researchers and emergency forces. Based on this experience, the robots and their control software were significantly improved. Furthermore, top teams of researchers and first responders were formed, each with realistic assessments of the operational and practical suitability of robotic systems.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice, as well as the bottlenecks faced by the community, in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Modeling perception sensors is key for simulation-based testing of automated driving functions. Beyond weather conditions themselves, sensors are also subjected to object-dependent environmental influences such as tire spray caused by vehicles moving on wet pavement. In this work, a novel modeling approach for spray in lidar data is introduced. The model conforms to the Open Simulation Interface (OSI) standard and is based on the formation of detection clusters within a spray plume. The detections are rendered with a simple custom ray-casting algorithm, without the need for a fluid dynamics simulation or a physics engine. The model is subsequently used to generate training data for object detection algorithms, and it is shown that the model helps to significantly improve detection in real-world spray scenarios. Furthermore, a systematic real-world data set is recorded and published for analysis, model calibration, and validation of spray effects in active perception sensors. Experiments are conducted on a test track by driving over artificially watered pavement with varying vehicle speeds, vehicle types, and levels of pavement wetness. All models and data of this work are available open source.
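A simplified sketch of the cluster idea: sample detection clusters inside a speed-dependent plume behind a leading vehicle and intersect straight lidar rays with them, with no fluid dynamics or physics engine involved. All geometry and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def spray_clusters(vehicle_pos: np.ndarray, speed: float, n_base: int = 5) -> np.ndarray:
    """Sample cluster centers in a plume whose size grows with vehicle speed."""
    n = n_base + int(speed / 10)                  # more spray at higher speed
    length, width, height = 0.2 * speed, 2.0, 1.0
    offsets = rng.uniform([-length, -width / 2, 0.0],
                          [0.0, width / 2, height], (n, 3))
    return vehicle_pos + offsets                  # (n, 3) cluster centers

def ray_hit(origin: np.ndarray, direction: np.ndarray,
            centers: np.ndarray, radius: float = 0.3):
    """Approximate range to the first cluster hit by the ray, or None."""
    d = direction / np.linalg.norm(direction)
    rel = centers - origin
    t = rel @ d                                   # closest-approach parameters
    miss = np.linalg.norm(rel - np.outer(t, d), axis=1) > radius
    t[miss | (t < 0)] = np.inf
    return None if np.isinf(t.min()) else float(t.min())

centers = spray_clusters(np.array([20.0, 0.0, 0.5]), speed=80.0)
hit = ray_hit(np.zeros(3), np.array([1.0, 0.0, 0.02]), centers)
```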