This volume contains selected contributions from the machine learning challenge "Discover the Mysteries of the Maya", presented in the Discovery Challenge track of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2021). Remote sensing has greatly accelerated traditional archaeological landscape surveys in the forested regions of the ancient Maya. Typical exploration and discovery attempts, beyond focusing on entire ancient cities, also target individual buildings and structures. Recently, machine learning has been successfully applied to identify ancient Maya settlements. These efforts, while relevant, have concentrated on narrow areas and relied on high-quality aerial laser scanning (ALS) data, which covers only a small fraction of the territory where the ancient Maya once settled. Satellite image data produced by the European Space Agency (ESA) Sentinel missions, on the other hand, is abundant and, more importantly, publicly available. The challenge "Discover the Mysteries of the Maya" aimed at locating and identifying ancient Maya architecture (buildings, aguadas, and platforms) by performing integrated image segmentation across different types of satellite imagery (Sentinel-1 and Sentinel-2) and ALS (lidar) data.
Per-instance algorithm selection seeks to recommend, for a given problem instance and a given performance criterion, one or several suitable algorithms that are expected to perform well in that particular setting. Selection is classically done offline, using openly available information about the problem instance, or features extracted from the instance in a dedicated feature extraction step. This ignores valuable information that the algorithms accumulate during the optimization process. In this work, we propose an alternative, online algorithm selection scheme which we term per-run algorithm selection. In our approach, we start the optimization with a default algorithm and, after a certain number of iterations, extract instance features from the observed trajectory of this initial optimizer in order to determine whether to switch to another optimizer. We test this approach using CMA-ES as the default solver and a portfolio of six different optimizers as potential switching targets. In contrast to other recent work on online per-run algorithm selection, we warm-start the second optimizer using information accumulated during the first optimization phase. We show that our approach outperforms static algorithm selection. We also compare two different feature extraction principles, based on exploratory landscape analysis and on time series analysis of the internal state variables of CMA-ES, respectively. We show that a combination of both feature sets provides the most accurate recommendations for our test cases, which are taken from the BBOB function suite of the COCO platform and the YABBOB suite of the Nevergrad platform.
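The per-run selection loop described above (warm up with a default solver, extract features from its trajectory, then decide whether to switch) can be sketched in miniature. This is a toy illustration under loudly stated assumptions: the two "portfolio" optimizers are simple stand-ins (random search and Gaussian hill climbing, not CMA-ES), the trajectory features are crude statistics rather than exploratory landscape analysis, and the selector is a hand-written rule standing in for a trained model.

```python
import random

def sphere(x):
    """Toy objective: minimise the sphere function."""
    return sum(xi * xi for xi in x)

def random_search(f, x0, evals, rng):
    """Stand-in portfolio optimizer 1: uniform random search."""
    best_x, best_f = x0, f(x0)
    traj = [(best_x, best_f)]
    for _ in range(evals):
        x = [rng.uniform(-5, 5) for _ in x0]
        fx = f(x)
        traj.append((x, fx))
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f, traj

def local_search(f, x0, evals, rng):
    """Stand-in portfolio optimizer 2: Gaussian hill climbing."""
    best_x, best_f = x0, f(x0)
    traj = [(best_x, best_f)]
    for _ in range(evals):
        x = [xi + rng.gauss(0, 0.1) for xi in best_x]
        fx = f(x)
        traj.append((x, fx))
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f, traj

def trajectory_features(traj):
    """Cheap features from the observed trajectory: no extra evaluations,
    mirroring the key idea that selection reuses the solver's own samples."""
    values = [fv for _, fv in traj]
    return {"improvement": values[0] - min(values),
            "spread": max(values) - min(values)}

def per_run_selection(f, dim, budget, rng):
    """Warm up with the default solver, then decide whether to switch."""
    x0 = [rng.uniform(-5, 5) for _ in range(dim)]
    warmup = budget // 4
    x, fx, traj = local_search(f, x0, warmup, rng)  # default solver
    feats = trajectory_features(traj)
    # Hypothetical rule standing in for a trained selection model:
    # switch to global exploration if the default made little progress.
    if feats["improvement"] < 1e-3 * (1 + feats["spread"]):
        _, fx, _ = random_search(f, x, budget - warmup, rng)
    else:
        _, fx, _ = local_search(f, x, budget - warmup, rng)
    return fx

rng = random.Random(0)
final = per_run_selection(sphere, 3, budget=400, rng=rng)
```

Because both stand-in solvers only ever keep improvements, the final value can never be worse than the warm-up incumbent; the interesting design question, as in the paper, is how much the trajectory features help in choosing the second-phase solver.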
Landscape-aware algorithm selection approaches have so far mostly relied on landscape feature extraction as a preprocessing step, performed independently of the execution of the optimization algorithms in the portfolio. This introduces a significant computational-cost overhead for many practical applications, since features are extracted by sampling and evaluating the problem instance at hand, much like what the optimization algorithms themselves do along their search trajectories. As suggested by Jankovic et al. (EvoAPPs 2021), trajectory-based algorithm selection circumvents this costly feature extraction by computing landscape features from the points that the solver samples and evaluates during the optimization process. Features computed in this manner are used to train algorithm performance regression models, upon which a per-run algorithm selector is then built. In this work, we apply the trajectory-based approach to a portfolio of five algorithms. We study the quality and accuracy of the performance regression and algorithm selection models in the scenario of predicting the performance of different algorithms after a fixed budget of function evaluations. We rely on landscape features of the problem instance computed using a portion of that same budget of function evaluations. Moreover, we consider the possibility of switching between solvers once, which requires them to be warm-started: when we switch, the second solver continues the optimization process appropriately initialized with the information collected by the first solver. In this new context, we show promising performance of trajectory-based per-run algorithm selection with warm-starting.
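The warm-starting idea above (the second solver inherits state gathered by the first instead of restarting cold) can be illustrated with a deliberately simplified sketch. All details here are assumptions for illustration: the first phase is random sampling rather than a real portfolio solver, and the "transferred state" is just the incumbent point plus a step size estimated from the better half of the sample archive, a crude stand-in for transferring, say, a CMA-ES mean and covariance.

```python
import random
import statistics

def sphere(x):
    """Toy objective: minimise the sphere function."""
    return sum(xi * xi for xi in x)

def warm_start_switch(f, dim, budget, rng):
    """Phase 1 explores; phase 2 is a local solver warm-started from phase-1 state."""
    # Phase 1: random exploration, keeping the full sample archive.
    archive = []
    for _ in range(budget // 2):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        archive.append((x, f(x)))
    best_x, best_f = min(archive, key=lambda p: p[1])

    # Warm start: initial point = incumbent; step size = spread of the
    # better half of the archive (a stand-in for transferring solver state).
    good = sorted(archive, key=lambda p: p[1])[: len(archive) // 2]
    sigma = statistics.pstdev([x[0] for x, _ in good]) or 0.1

    # Phase 2: Gaussian local search continuing from the transferred state,
    # with a naive 1/5th-success-style step-size adaptation.
    for _ in range(budget - budget // 2):
        cand = [xi + rng.gauss(0, sigma) for xi in best_x]
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
            sigma *= 1.1
        else:
            sigma *= 0.95
    return best_f

rng = random.Random(0)
result = warm_start_switch(sphere, dim=3, budget=400, rng=rng)
```

The point of the sketch is the hand-off: phase 2 starts from the incumbent with a data-driven step size, so no evaluations are wasted re-learning what phase 1 already sampled, which is exactly the budget argument made for trajectory-based features.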
Deep spiking neural networks (SNNs) offer the promise of low-power artificial intelligence. However, training deep SNNs from scratch or converting deep artificial neural networks to SNNs without loss of performance has been a challenge. Here we propose an exact mapping from a network with Rectified Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron. For our constructive proof, we assume that an arbitrary multi-layer ReLU network with or without convolutional layers, batch normalization and max pooling layers was trained to high performance on some training set. Furthermore, we assume that we have access to a representative example of input data used during training and to the exact parameters (weights and biases) of the trained ReLU network. The mapping from deep ReLU networks to SNNs causes zero percent drop in accuracy on CIFAR10, CIFAR100 and the ImageNet-like data sets Places365 and PASS. More generally our work shows that an arbitrary deep ReLU network can be replaced by an energy-efficient single-spike neural network without any loss of performance.
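The core intuition behind such a mapping can be shown with a toy latency-coding example. To be clear, this is not the paper's exact construction: it is a minimal sketch, assuming a simple time-to-first-spike code in which a larger activation produces an earlier spike within a coding window `T_MAX` (a name invented here), so that decoding the spike time recovers the ReLU output exactly, one spike per neuron.

```python
T_MAX = 10.0  # assumed coding window for time-to-first-spike encoding

def relu(a):
    """Standard rectified linear unit."""
    return max(0.0, a)

def encode_spike_time(activation):
    """Latency code: larger (non-negative) activation -> earlier spike.
    A zero activation maps to the end of the window."""
    return T_MAX - activation

def decode_spike_time(t):
    """Invert the latency code to recover the activation."""
    return T_MAX - t

def relu_via_single_spike(weights, inputs, bias):
    """One 'neuron': compute the pre-activation, emit exactly one spike
    whose timing encodes ReLU(a), then decode it back."""
    a = sum(w * x for w, x in zip(weights, inputs)) + bias
    t = encode_spike_time(relu(a))
    return decode_spike_time(t)
```

In this idealised setting the round trip through spike timing is lossless, which is the zero-accuracy-drop property the abstract claims for the full construction; the actual paper additionally handles multi-layer propagation, convolutions, batch normalization, and max pooling, none of which this sketch attempts.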
Recent work has reported that AI classifiers trained on audio recordings can accurately predict severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection status. Here, we undertake a large-scale study of audio-based deep learning classifiers as part of the UK government's pandemic response. We collect and analyse a dataset of audio recordings from 67,842 individuals with linked metadata, including reverse transcription polymerase chain reaction (PCR) test outcomes, of whom 23,514 tested positive for SARS-CoV-2. Subjects were recruited via the UK government's National Health Service Test-and-Trace programme and the REal-time Assessment of Community Transmission (REACT) randomised surveillance survey. In an unadjusted analysis of our dataset, AI classifiers predict SARS-CoV-2 infection status with high accuracy (Receiver Operating Characteristic Area Under the Curve (ROC-AUC) 0.846 [0.838, 0.854]), consistent with the findings of previous studies. However, after matching on measured confounders, such as age, gender, and self-reported symptoms, our classifiers' performance is much weaker (ROC-AUC 0.619 [0.594, 0.644]). Upon quantifying the utility of audio-based classifiers in practical settings, we find them to be outperformed by simple predictive scores based on user-reported symptoms.
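The confounding effect described above can be reproduced on synthetic data. The sketch below is an illustration under assumed numbers (infection rates, score model, and stratum weighting are all invented here, and matching is simplified to stratification on a single binary symptom flag): a score that actually tracks symptoms rather than infection looks accurate in an unadjusted analysis, yet collapses towards chance once evaluated within symptom strata.

```python
import random

def roc_auc(scores_pos, scores_neg):
    """ROC-AUC as the probability a random positive outranks a random
    negative (ties count one half)."""
    wins = ties = 0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1
            elif sp == sn:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

rng = random.Random(1)
# Synthetic cohort: infection is more likely among symptomatic users, and
# the 'audio' score actually tracks symptoms, not infection itself.
cohort = []
for _ in range(2000):
    symptomatic = rng.random() < 0.5
    infected = rng.random() < (0.6 if symptomatic else 0.1)
    score = (1.0 if symptomatic else 0.0) + rng.gauss(0, 0.1)
    cohort.append((symptomatic, infected, score))

pos = [s for _, inf, s in cohort if inf]
neg = [s for _, inf, s in cohort if not inf]
unadjusted = roc_auc(pos, neg)  # inflated by the symptom confounder

def stratum_auc(flag):
    """AUC restricted to one symptom stratum (a crude matched analysis)."""
    p = [s for sym, inf, s in cohort if sym == flag and inf]
    n = [s for sym, inf, s in cohort if sym == flag and not inf]
    return roc_auc(p, n)

matched = (stratum_auc(True) + stratum_auc(False)) / 2  # near chance
```

Within each stratum the score is pure noise with respect to infection, so the stratified AUC sits near 0.5 while the unadjusted AUC is well above it, the same qualitative gap the abstract reports between 0.846 and 0.619.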
Since early in the coronavirus disease 2019 (COVID-19) pandemic, there has been interest in using artificial intelligence methods to predict COVID-19 infection status based on vocal audio signals, for example cough recordings. However, existing studies have limitations in terms of data collection and of the assessment of the performances of the proposed predictive models. This paper rigorously assesses state-of-the-art machine learning techniques used to predict COVID-19 infection status based on vocal audio signals, using a dataset collected by the UK Health Security Agency. This dataset includes acoustic recordings and extensive study participant meta-data. We provide guidelines on testing the performance of methods to classify COVID-19 infection status based on acoustic features and we discuss how these can be extended more generally to the development and assessment of predictive methods based on public health datasets.
The UK COVID-19 Vocal Audio Dataset is designed for the training and evaluation of machine learning models that classify SARS-CoV-2 infection status or associated respiratory symptoms using vocal audio. The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022, during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and some Omicron variant sublineages. Audio recordings of volitional coughs, exhalations, and speech were collected in the 'Speak up to help beat coronavirus' digital survey alongside demographic, self-reported symptom and respiratory condition data, and linked to SARS-CoV-2 test results. The UK COVID-19 Vocal Audio Dataset represents the largest collection of SARS-CoV-2 PCR-referenced audio recordings to date. PCR results were linked to 70,794 of 72,999 participants and 24,155 of 25,776 positive cases. Respiratory symptoms were reported by 45.62% of participants. This dataset has additional potential uses for bioacoustics research, with 11.30% of participants reporting asthma, and 27.20% with linked influenza PCR test results.
This paper describes the 5th edition of the Predicting Video Memorability Task as part of MediaEval2022. This year we have reorganised and simplified the task in order to facilitate a greater depth of inquiry. Similar to last year, two datasets are provided in order to facilitate generalisation; however, this year we have replaced the TRECVid2019 Video-to-Text dataset with the VideoMem dataset in order to remedy underlying data quality issues, and to prioritise short-term memorability prediction by elevating the Memento10k dataset as the primary dataset. Additionally, a fully fledged electroencephalography (EEG)-based prediction sub-task is introduced. In this paper, we outline the core facets of the task and its constituent sub-tasks, describing the datasets, evaluation metrics, and requirements for participant submissions.
The Predicting Media Memorability task in the MediaEval evaluation campaign has been running annually since 2018 and several different tasks and data sets have been used in this time. This has allowed us to compare the performance of many memorability prediction techniques on the same data and in a reproducible way and to refine and improve on those techniques. The resources created to compute media memorability are now being used by researchers well beyond the actual evaluation campaign. In this paper we present a summary of the task, including the collective lessons we have learned for the research community.
The task of automatic text summarization produces a concise and fluent text summary while preserving key information and overall meaning. Document-level summarization has seen significant improvements in recent years through models based on the Transformer architecture. However, their quadratic memory and time complexities with respect to the sequence length make them very expensive to use, especially with the long sequences required by document-level summarization. Our work addresses the problem of document-level summarization by studying how efficient Transformer techniques can be used to improve the automatic summarization of very long texts. In particular, we use the arXiv dataset, consisting of several scientific papers and their corresponding abstracts, as the baseline for this work. We then propose a novel retrieval-enhanced approach based on this architecture, which reduces the cost of generating a summary of the entire document by processing smaller chunks. The results were below the baselines, but suggest more efficient memory consumption and improved truthfulness.
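The chunk-then-retrieve step that makes this approach cheaper can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the splitter, the TF cosine scorer, and the function names are all assumptions here, standing in for whatever retriever the actual system uses before a Transformer summarizer processes only the selected chunks.

```python
import math
from collections import Counter

def chunk(text, size=40):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def tf_cosine(a, b):
    """Cosine similarity between term-frequency vectors of two strings."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_chunks(document, query, k=2, size=40):
    """Keep only the k chunks most relevant to the query; a downstream
    summarizer then attends over k*size tokens instead of the whole text."""
    chunks = chunk(document, size)
    return sorted(chunks, key=lambda c: tf_cosine(c, query), reverse=True)[:k]

# Toy document with two clearly separated topics, one chunk each.
doc = ("cat " * 40 + "dog " * 40).strip()
top = retrieve_chunks(doc, "dog", k=1)
```

The efficiency argument is visible in the shapes: with full-document attention the cost grows with the square of the document length, while summarizing only `k` retrieved chunks of `size` words bounds the attention cost by roughly `k * size**2`, independent of total document length.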