Anomaly detection on time series data is increasingly common across various industrial domains that monitor metrics in order to prevent potential accidents and economic losses. However, a scarcity of labeled data and ambiguous definitions of anomalies can complicate these efforts. Recent unsupervised machine learning methods have made remarkable progress in tackling this problem using either single-timestamp predictions or time series reconstructions. While traditionally considered separately, these methods are not mutually exclusive and can offer complementary perspectives on anomaly detection. This paper first highlights the successes and limitations of prediction-based and reconstruction-based methods with visualized time series signals and anomaly scores. We then propose AER (Auto-encoder with Regression), a joint model that combines a vanilla auto-encoder and an LSTM regressor to incorporate the successes and address the limitations of each method. Our model can produce bi-directional predictions while simultaneously reconstructing the original time series by optimizing a joint objective function. Furthermore, we propose several ways of combining the prediction and reconstruction errors through a series of ablation studies. Finally, we compare the performance of the AER architecture against two prediction-based methods and three reconstruction-based methods on 12 well-known univariate time series datasets from NASA, Yahoo, Numenta, and UCR. The results show that AER has the highest averaged F1 score across all datasets (a 23.5% improvement compared to ARIMA) while retaining a runtime similar to its vanilla auto-encoder and regressor components. Our model is available in Orion, an open-source benchmarking tool for time series anomaly detection.
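The joint objective described above can be pictured with a short sketch: a shared encoder feeds both a window-reconstruction head and a bi-directional prediction head, and the two errors are combined through a weighted loss. This is a minimal, hypothetical PyTorch illustration of the idea, not the authors' implementation (which ships with Orion); the layer sizes, the `alpha` weight, and all names are assumptions.

```python
import torch.nn as nn

class JointAutoencoderRegressor(nn.Module):
    """Illustrative sketch of a joint reconstruction + prediction model (not the official AER code)."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.decoder = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.recon_head = nn.Linear(hidden, 1)   # reconstructs every timestamp in the window
        self.pred_head = nn.Linear(hidden, 2)    # predicts one step before and one step after the window

    def forward(self, x):                         # x: (batch, window_size, 1)
        _, (h, _) = self.encoder(x)
        latent = h[-1]                            # (batch, hidden)
        dec_in = latent.unsqueeze(1).repeat(1, x.size(1), 1)
        dec_out, _ = self.decoder(dec_in)
        return self.recon_head(dec_out), self.pred_head(latent)

def joint_loss(recon, preds, window, pred_targets, alpha=0.5):
    """Weighted sum of reconstruction error and bi-directional prediction error."""
    mse = nn.functional.mse_loss
    return alpha * mse(recon, window) + (1 - alpha) * mse(preds, pred_targets)
```

At detection time the two error signals can be smoothed and combined (e.g., summed or multiplied) into a single anomaly score, which is the kind of combination choice the ablation studies above examine.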
Time series anomaly detection has applications in a wide range of research fields and industries, including manufacturing and healthcare. The presence of anomalies can indicate novel or unexpected events, such as production faults, system defects, or heart flutter, and is therefore of particular interest. The large size and complex patterns of time series have led researchers to develop specialised deep learning models for detecting anomalous patterns. This survey provides a structured and comprehensive overview of state-of-the-art deep learning models for time series anomaly detection. It provides a taxonomy based on the factors that divide anomaly detection models into different categories. Aside from describing the basic anomaly detection technique for each category, the advantages and limitations are also discussed. Furthermore, this study includes examples of deep anomaly detection in time series across various application domains in recent years. It finally summarises open issues in research and challenges faced while adopting deep anomaly detection models.
Recently, there has been a significant amount of interest in satellite telemetry anomaly detection (AD) using neural networks (NN). For AD purposes, the current approaches focus on either forecasting or reconstruction of the time series, and they cannot measure the level of reliability or the probability of correct detection. Although the Bayesian neural network (BNN)-based approaches are well known for time series uncertainty estimation, they are computationally intractable. In this paper, we present a tractable approximation for BNN based on the Monte Carlo (MC) dropout method for capturing the uncertainty in the satellite telemetry time series, without sacrificing accuracy. For time series forecasting, we employ an NN, which consists of several Long Short-Term Memory (LSTM) layers followed by various dense layers. We employ the MC dropout inside each LSTM layer and before the dense layers for uncertainty estimation. With the proposed uncertainty region and by utilizing a post-processing filter, we can effectively capture the anomaly points. Numerical results show that our proposed time series AD approach outperforms the existing methods from both prediction accuracy and AD perspectives.
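As a rough illustration of the MC dropout idea above, the sketch below keeps only the dropout machinery stochastic at inference, draws several forecasts, and flags observations that fall outside a mean ± k·std band. The band width `k`, the sample count, and the helper names are assumptions for illustration, not the paper's exact procedure (which additionally applies a post-processing filter).

```python
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    """Keep the network in eval mode overall, but switch dropout layers (and LSTMs, whose
    inter-layer dropout is only active in train mode) back to train mode so that repeated
    forward passes stay stochastic -- the essence of Monte Carlo dropout."""
    model.eval()
    for module in model.modules():
        if isinstance(module, (nn.Dropout, nn.LSTM)):
            module.train()

@torch.no_grad()
def mc_dropout_band(model: nn.Module, x: torch.Tensor, n_samples: int = 50, k: float = 3.0):
    """Draw n_samples stochastic forecasts and return the mean plus a mean ± k*std band."""
    enable_mc_dropout(model)
    samples = torch.stack([model(x) for _ in range(n_samples)])
    mean, std = samples.mean(dim=0), samples.std(dim=0)
    return mean, mean - k * std, mean + k * std

def flag_anomalies(y_true: torch.Tensor, lower: torch.Tensor, upper: torch.Tensor) -> torch.Tensor:
    """Mark a timestamp as anomalous when the observation leaves the uncertainty band."""
    return (y_true < lower) | (y_true > upper)
```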
Anomaly detection for traffic congestion is essential in intelligent transportation systems. Transportation agencies have two goals: monitoring the general traffic conditions in an area of interest and locating road segments under abnormal congestion states. Modeling congestion patterns can achieve these goals for city-wide roadways, which amounts to learning the distribution of multivariate time series (MTS). However, existing works are either not scalable or unable to capture the spatial information in MTS simultaneously. To this end, we propose a principled and comprehensive framework consisting of a data-driven generative approach that performs tractable density estimation to detect traffic anomalies. Our approach first clusters segments in the feature space and then uses conditional normalizing flows to identify anomalous temporal snapshots at the cluster level in an unsupervised setting. We then identify anomalies at the segment level by using a kernel density estimator on the anomalous clusters. Extensive experiments on synthetic datasets show that our approach significantly outperforms several state-of-the-art congestion anomaly detection and diagnosis methods in terms of recall and F1 score. We also use the generative model to sample labeled data, which can be used to train classifiers in a supervised setting, alleviating the lack of labeled data for anomaly detection in sparse settings.
The detection of anomalies in time series data is crucial in a wide range of applications, such as system monitoring, health care, or cyber security. While the vast number of available methods makes selecting the right method for a certain application hard enough, different methods have different strengths, e.g. regarding the type of anomalies they are able to find. In this work, we compare six unsupervised anomaly detection methods of differing complexity to answer two questions: do the more complex methods usually perform better, and are there specific anomaly types that those methods are tailored to? The comparison is done on the UCR anomaly archive, a recent benchmark dataset for anomaly detection. We compare the six methods by analyzing the experimental results on a dataset and anomaly-type level after tuning the necessary hyperparameters for each method. Additionally, we examine the ability of individual methods to incorporate prior knowledge about the anomalies and analyse the differences between point-wise and sequence-wise features. We show with broad experiments that the classical machine learning methods show superior performance compared to the deep learning methods across a wide range of anomaly types.
Anomalies in time series provide insights into critical scenarios across a range of industries, from banking and aerospace to information technology, security, and medicine. However, identifying anomalies in time series data is particularly challenging because of the ambiguous definition of anomalies, the frequent lack of labels, and the extremely complex temporal correlations present in such data. The LSTM autoencoder is an Encoder-Decoder scheme for anomaly detection based on long short-term memory networks that learns to reconstruct time series behavior and then uses the reconstruction error to identify anomalies. We introduce a Denoising Architecture as a complement to this LSTM Encoder-Decoder model and investigate its effect on real-world as well as artificially generated datasets. We demonstrate that the proposed architecture improves both accuracy and training speed, thereby making the LSTM autoencoder more efficient for unsupervised anomaly detection tasks.
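The denoising twist described above can be captured by a single training step: corrupt the input window and penalise the reconstruction against the clean target, so the autoencoder learns structure rather than memorising noise. A hypothetical sketch assuming Gaussian corruption and an arbitrary `noise_std`; the paper's architecture and corruption scheme may differ.

```python
import torch
import torch.nn as nn

def denoising_step(autoencoder: nn.Module, clean_window: torch.Tensor,
                   optimizer: torch.optim.Optimizer, noise_std: float = 0.1) -> float:
    """One denoising training step for any (e.g. LSTM) autoencoder:
    reconstruct the *clean* window from a noise-corrupted copy of it."""
    noisy = clean_window + noise_std * torch.randn_like(clean_window)
    recon = autoencoder(noisy)
    loss = nn.functional.mse_loss(recon, clean_window)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At test time, windows are fed without added noise and the reconstruction error
# against the observed window serves as the anomaly score.
```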
Unsupervised anomaly detection in time-series has been extensively investigated in the literature. Notwithstanding the relevance of this topic in numerous application fields, a complete and extensive evaluation of recent state-of-the-art techniques is still missing. The few efforts made to compare existing unsupervised time-series anomaly detection methods rigorously usually consider only standard performance metrics, namely precision, recall, and F1-score. Essential aspects for assessing their practical relevance are therefore neglected. This paper proposes an original and in-depth evaluation study of recent unsupervised anomaly detection techniques in time-series. Instead of relying solely on standard performance metrics, additional yet informative metrics and protocols are taken into account. In particular, (1) more elaborate performance metrics specifically tailored for time-series are used; (2) the model size and the model stability are studied; (3) an analysis of the tested approaches with respect to the anomaly type is provided; and (4) a clear and unique protocol is followed for all experiments. Overall, this extensive analysis aims to assess the maturity of state-of-the-art time-series anomaly detection, give insights regarding their applicability under real-world setups, and provide the community with a more complete evaluation protocol.
The accumulation of time series data and the absence of labels make time series anomaly detection (AD) a self-supervised deep learning task. Methods based on a single normality assumption can only touch certain aspects of normality and are insufficient to detect various anomalies. Among them, the contrastive learning methods adopted for AD always choose negative pairs from normal data, which runs counter to the purpose of the AD task. Existing methods based on multiple normality assumptions are usually two-staged, first applying a pre-training procedure whose objective may differ from AD, so that performance is limited by the pre-trained representations. This paper proposes a deep Contrastive One-Class Anomaly detection method (COCA) that combines the normality assumptions of contrastive learning and one-class classification. The key idea is to treat the representation and the reconstructed representation as a positive pair for negative-free contrastive learning, which we name sequence contrast. We then apply a contrastive loss function composed of invariance and variance terms, the former simultaneously optimizing the losses of both assumptions and the latter preventing hypersphere collapse. Extensive experiments on four real-world time series datasets show the superior performance of the proposed method, which achieves state-of-the-art results. The code is publicly available at https://github.com/ruiking04/coca.
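The invariance-plus-variance loss mentioned above can be sketched as follows, assuming the two views of a window are its latent representation and the representation of its reconstruction. The cosine-based invariance term, the hinge threshold `gamma`, and all names are illustrative assumptions rather than the exact COCA formulation; see the linked repository for the real code.

```python
import torch
import torch.nn.functional as F

def sequence_contrast_loss(z: torch.Tensor, z_rec: torch.Tensor,
                           gamma: float = 1.0, eps: float = 1e-4) -> torch.Tensor:
    """Negative-free contrastive objective over a positive pair (z, z_rec), each (batch, dim).

    Invariance term: pull each representation towards the representation of its reconstruction.
    Variance term: keep every latent dimension spread out across the batch,
    which guards against the trivial hypersphere-collapse solution.
    """
    invariance = (1.0 - F.cosine_similarity(z, z_rec, dim=-1)).mean()
    std_z = torch.sqrt(z.var(dim=0) + eps)
    std_r = torch.sqrt(z_rec.var(dim=0) + eps)
    variance = torch.relu(gamma - std_z).mean() + torch.relu(gamma - std_r).mean()
    return invariance + variance
```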
Today's cyber world is heavily multivariate. Metrics collected in extreme variety demand multivariate algorithms to detect anomalies properly. However, forecast-based algorithms, though widely proven, often perform sub-optimally or inconsistently across datasets. A key common issue is that they strive to be one-size-fits-all, whereas anomalies are distinctive in nature. We propose a method tailored to this distinction: FMUAD, a Forecast-based, Multi-aspect, Unsupervised Anomaly Detection framework. FMUAD explicitly and separately captures the signature traits of different anomaly types — spatial change, temporal change, and correlation change — with independent modules. The modules then jointly learn an optimal feature representation, which is highly flexible and intuitive, unlike most other models in this category. Extensive experiments show that our FMUAD framework consistently outperforms other state-of-the-art forecast-based anomaly detectors.
Time series (TS) anomaly detection (AD) plays an important role in various applications, e.g., fraud detection in finance and healthcare monitoring. Because anomalies are inherently unpredictable and highly diverse, and anomaly labels are missing from historical data, the AD problem is typically formulated as an unsupervised learning problem. The performance of existing solutions is often unsatisfactory, especially in data-scarce scenarios. To address this problem, we propose a novel self-supervised learning technique for AD in time series, namely DeepFIB. We model the problem as a fill-in-the-blank game by masking some elements in the TS and imputing them from the rest. Considering the two common anomaly shapes in TS data (point outliers and sequence outliers), we implement two masking strategies with many self-generated training samples. The corresponding self-imputation networks can extract more robust temporal relations than existing AD solutions and effectively facilitate identifying both types of anomalies. For continuous outliers, we also propose an anomaly localization algorithm that dramatically reduces AD errors. Experiments on various real-world TS datasets demonstrate that DeepFIB outperforms state-of-the-art methods by a large margin, achieving up to a 65.2% improvement in F1 score.
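The two masking strategies mentioned above (one aimed at point outliers, one at sequence outliers) can be illustrated with a small NumPy sketch; the mask ratio, segment length, and zero fill value are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def point_mask(window: np.ndarray, mask_ratio: float = 0.15, rng=None):
    """Mask scattered individual timestamps -- the strategy aimed at point outliers."""
    rng = rng or np.random.default_rng()
    masked = window.copy()
    idx = rng.choice(len(window), size=max(1, int(mask_ratio * len(window))), replace=False)
    masked[idx] = 0.0            # masked positions are to be imputed by the model
    return masked, idx

def sequence_mask(window: np.ndarray, seg_len: int = 16, rng=None):
    """Mask one contiguous segment -- the strategy aimed at sequence outliers.
    Assumes len(window) > seg_len."""
    rng = rng or np.random.default_rng()
    masked = window.copy()
    start = rng.integers(0, len(window) - seg_len)
    idx = np.arange(start, start + seg_len)
    masked[idx] = 0.0
    return masked, idx

# Training imputes the masked positions from the unmasked context; at test time the
# per-timestamp imputation error acts as the anomaly score (a value that cannot be
# "filled in" from its context is likely anomalous).
```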
Neuroevolution is one of the methods that can be used to learn optimal architectures during training. It uses evolutionary algorithms to generate the topology of artificial neural networks (ANNs) and their parameters. In this work, a modified neuroevolution technique incorporating multi-level optimization is presented. The approach adopts an evolutionary strategy based on bagging techniques, uses genetic operators to optimize single anomaly detection models, reduces the training dataset to accelerate the search process, and performs non-gradient fine-tuning. Multivariate anomaly detection, as an unsupervised learning task, is the case study used to test the presented approach. Single-model optimization is based on mutation and crossover operators and focuses on finding optimal window sizes, numbers of layers, layer depths, hyperparameters, etc., to improve the anomaly detection scores of new and already known models. The proposed framework and its protocol show that architectures can be found in a reasonable time that improve upon all well-known multivariate anomaly detection deep learning architectures. The work concentrates on improvements to the multi-level neuroevolution approach for anomaly detection; the main modifications are the methods for mixing group and single-model evolution, non-gradient fine-tuning, and a voting mechanism. The presented framework can be used as an efficient method for learning network architectures for any other unsupervised task that can use autoencoder architectures. Tests were run on the SWaT and WADI datasets, and the evolved architectures achieved the best scores among the deep learning models compared.
In the digitalization of energy systems, sensors and smart meters are increasingly used to monitor production, operation, and demand. Anomaly detection based on smart meter data is crucial for identifying potential risks and abnormal events at an early stage, which can serve as a reference for timely initiation of appropriate actions and improved management. However, smart meter data from energy systems often lack labels and contain noise and diverse patterns without distinct periodicity. Meanwhile, the vague definition of anomalies in different energy scenarios and the highly complex temporal correlations pose great challenges for anomaly detection. Many traditional unsupervised anomaly detection algorithms (such as cluster-based or distance-based models) are not robust to noise and do not fully exploit the temporal dependencies within time series or the additional dependencies across multiple variables (sensors). This paper proposes an unsupervised anomaly detection method based on a variational recurrent autoencoder with an attention mechanism. With "dirty" data from smart meters, our method pre-detects missing values and global anomalies in order to shrink their contribution during training. This paper makes a quantitative comparison with a VAE-based baseline approach and four other unsupervised learning methods, demonstrating the effectiveness and advantages of the proposed method. The proposed method is further validated through a real case study of detecting supply water temperature anomalies in an industrial heating plant.
Anomaly detection of multivariate time series is meaningful for system behavior monitoring. This paper proposes an anomaly detection method based on unsupervised Short- and Long-term Masked Representation learning (SLMR). The main idea is to extract the short-term local dependency patterns and long-term global trend patterns of the multivariate time series using multi-scale residual convolutions and gated recurrent units (GRUs), respectively. Furthermore, our approach can comprehend temporal contexts and feature correlations by combining a spatial-temporal masked self-supervised representation with a sequence split. Because features differ in importance, we introduce an attention mechanism to adjust the contribution of each feature. Finally, a forecasting-based model and a reconstruction-based model are integrated to attend to both single-timestamp prediction and the latent representation of the time series. Experiments show that our method outperforms other state-of-the-art models on three real-world datasets. Further analysis shows that our method performs well in terms of interpretability.
A new Lossy Causal Temporal Convolutional Neural Network Autoencoder for anomaly detection is proposed in this work. Our framework uses a rate-distortion loss and an entropy bottleneck to learn a compressed latent representation for the task. The main idea of using a rate-distortion loss is to introduce representation flexibility that ignores or becomes robust to unlikely events with distinctive patterns, such as anomalies. These anomalies manifest as unique distortion features that can be accurately detected in testing conditions. This new architecture allows us to train a fully unsupervised model that has high accuracy in detecting anomalies from a distortion score despite being trained with some portion of unlabelled anomalous data. This setting is in stark contrast to many of the state-of-the-art unsupervised methodologies that require the model to be trained only on "normal data". We argue that this partially violates the concept of unsupervised training for anomaly detection, as the model relies on an informed decision that separates normal from abnormal data for training. Additionally, there is evidence to suggest that this also affects the model's ability to generalise. We demonstrate that models that succeed in the paradigm where they are trained only on normal data fail to be robust when anomalous data is injected into the training. In contrast, our compression-based approach converges to a robust representation that tolerates some anomalous distortion. The robust representation achieved by a model using a rate-distortion loss can be used in a more realistic unsupervised anomaly detection scheme.
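As a rough sketch of the rate-distortion objective described above: the distortion term is the usual reconstruction error, and the rate term penalises how costly the latent code is to encode under a learned prior. The zero-mean factorized-Gaussian surrogate used here in place of an entropy bottleneck, and the trade-off weight `beta`, are simplifying assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def rate_distortion_loss(x: torch.Tensor, x_hat: torch.Tensor, z: torch.Tensor,
                         prior_logvar: torch.Tensor, beta: float = 0.1) -> torch.Tensor:
    """Distortion (reconstruction MSE) plus a rate term: the negative log-likelihood of the
    latent code under a learned zero-mean factorized Gaussian prior (a stand-in, up to
    constants, for an entropy bottleneck)."""
    distortion = nn.functional.mse_loss(x_hat, x)
    rate = 0.5 * ((z ** 2) / prior_logvar.exp() + prior_logvar).mean()
    return distortion + beta * rate

# At test time the distortion score of a window is a natural anomaly score: unlikely,
# distinctive patterns are exactly what the compressed representation chooses not to
# spend bits on, so they reconstruct poorly.
```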
Recent studies have shown that autoencoder-based models can achieve excellent performance on anomaly detection tasks thanks to their outstanding ability to fit complex data in an unsupervised manner. In this work, we propose a novel autoencoder-based model, named StackVAE-G, that brings significant efficiency and interpretability to multivariate time series anomaly detection. Specifically, we exploit the similarities across time series channels through stacked block-wise reconstruction with a weight-sharing scheme, which reduces the size of the learned model and mitigates overfitting to unknown noise in the training data. We also leverage a graph learning module to learn a sparse adjacency matrix that explicitly captures the stable inter-channel relational structure among multiple time series channels, enabling interpretable pattern reconstruction for the correlated channels. Combining these two modules, we introduce the stacking block-wise VAE (variational autoencoder) with GNN (graph neural network) model for multivariate time series anomaly detection. We conduct extensive experiments on three commonly used public datasets, showing that our model achieves performance comparable to (or even better than) state-of-the-art models while requiring far less computation and memory. Furthermore, we demonstrate that the adjacency matrix learned by our model accurately captures the inter-channel correlations and can provide valuable information for failure diagnosis applications.
Several data-driven approaches enable us to model time series data, including traditional regression-based modeling approaches (e.g., ARIMA). More recently, deep learning techniques have been introduced and explored in the context of time series analysis and forecasting. The main research question investigated here is how these variants of deep learning techniques perform in forecasting time series data. This paper compares two prominent deep learning modeling techniques: the recurrent neural network (RNN)-based long short-term memory (LSTM) and the convolutional neural network (CNN)-based temporal convolutional network (TCN), and reports their performance and training time. According to our experimental results, both modeling techniques perform comparably, with the TCN-based model slightly outperforming the LSTM. Moreover, the CNN-based TCN model builds a stable model faster than the RNN-based LSTM model.
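The structural difference behind the training-time result above is that a TCN replaces recurrence with dilated causal convolutions, which parallelise across the time axis instead of stepping through it. A minimal illustrative building block follows (real TCNs add residual connections, weight normalisation, and a stack of increasing dilations); names and defaults are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    """One dilated causal convolution layer: the output at time t depends only on inputs <= t."""

    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation     # pad only the past side
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                                 # x: (batch, channels, time)
        x = F.pad(x, (self.left_pad, 0))                  # causal (left-only) padding
        return F.relu(self.conv(x))
```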
In recent years, studies on time series anomaly detection (TAD) have reported high F1 scores on benchmark TAD datasets, giving the impression of clear improvements in TAD. However, most studies apply a peculiar evaluation protocol called point adjustment (PA) before scoring. In this paper, we theoretically and experimentally reveal that the PA protocol has a great possibility of overestimating detection performance; that is, even a random anomaly score can easily turn into a state-of-the-art TAD method. Therefore, comparing TAD methods after applying the PA protocol can lead to misleading rankings. Furthermore, we question the potential of existing TAD methods by showing that an untrained model obtains detection performance comparable to existing methods even when PA is not applied. Based on our findings, we propose a new baseline and an evaluation protocol. We expect that our study will help rigorous evaluation of TAD and lead to further improvement in future research.
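The point-adjustment protocol the paper critiques is simple enough to state in a few lines: if any single timestamp inside a ground-truth anomalous segment is flagged, every timestamp of that segment is credited as detected before precision, recall, and F1 are computed, which is exactly why even random scores can reach seemingly state-of-the-art adjusted F1. A sketch of the standard PA step (not the paper's own code; variable names are illustrative):

```python
import numpy as np

def point_adjust(pred: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Apply point adjustment: if any point inside a true anomalous segment is predicted as
    anomalous, mark the whole segment as detected before computing precision/recall/F1."""
    adjusted = pred.copy()
    in_segment, start = False, 0
    for i, lab in enumerate(labels):
        if lab == 1 and not in_segment:
            in_segment, start = True, i
        if in_segment and (lab == 0 or i == len(labels) - 1):
            end = i if lab == 0 else i + 1
            if pred[start:end].any():        # at least one hit anywhere in the segment...
                adjusted[start:end] = 1      # ...credits every point of the segment
            in_segment = False
    return adjusted
```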
Recent advances in AIoT technologies have led to an increasing popularity of utilizing machine learning algorithms to detect operational failures in cyber-physical systems (CPS). In its basic form, an anomaly detection module monitors the sensor measurements and actuator states from the physical plant and detects anomalies in these measurements to identify abnormal operating states. However, building an effective anomaly detection model is challenging, since the model must accurately detect anomalies in the presence of highly complex system dynamics and an unknown amount of sensor noise. In this work, we propose a novel time series anomaly detection method called Neural System Identification and Bayesian Filtering (NSIBF), in which a specially crafted neural network architecture is posed for system identification, i.e., capturing the dynamics of the CPS in a dynamic state-space model; a Bayesian filtering algorithm is then naturally applied on top of the identified state-space model to track the uncertainty of the system's hidden state over time. We provide qualitative as well as quantitative experiments with the proposed method on a synthetic dataset and three real-world CPS datasets, showing that NSIBF compares favorably to state-of-the-art methods for anomaly detection in CPS.
Change point detection (CPD), which detects abrupt changes in the data distribution, is recognized as one of the most significant tasks in time series analysis. Despite the extensive literature on offline CPD, unsupervised online CPD still faces major challenges, including scalability, hyperparameter tuning, and learning constraints. To mitigate some of these challenges, in this paper we propose a novel deep learning approach for unsupervised online CPD from multi-dimensional time series, named Adaptive LSTM-Autoencoder Change-Point Detection (ALACPD). ALACPD exploits an LSTM-autoencoder-based neural network to perform unsupervised online CPD. It continuously adapts to the incoming samples without keeping the previously received inputs, and is therefore memory-free. We perform an extensive evaluation on several real-world time series CPD benchmarks. We show that ALACPD, on average, ranks first among state-of-the-art CPD algorithms in terms of the quality of the time series segmentation, and it is on par with the best performers in terms of the accuracy of the estimated change points. The implementation of ALACPD is available online on GitHub at https://github.com/zahraatashgahi/alacpd.
Unsupervised anomaly detection aims to build models that effectively detect unseen anomalies by training only on normal data. Although previous reconstruction-based methods have made fruitful progress, their generalization ability is limited by two critical challenges. First, the training dataset contains only normal patterns, which limits the model's generalization ability. Second, the feature representations learned by existing models often lack representativeness, which hampers the ability to preserve the diversity of normal patterns. In this paper, we propose a novel approach called Adaptive Memory Network with Self-supervised Learning (AMSL) to address these challenges and enhance generalization in unsupervised anomaly detection. Based on a convolutional autoencoder structure, AMSL incorporates a self-supervised learning module to learn general normal patterns and an adaptive memory fusion module to learn rich feature representations. Experiments on four public multivariate time series datasets demonstrate that AMSL significantly improves performance compared with other state-of-the-art methods. Specifically, on the largest CAP sleep stage detection dataset with 900 million samples, AMSL outperforms the second-best baseline by more than 4% in both accuracy and F1 score. Apart from the enhanced generalization ability, AMSL is also more robust against input noise.