We describe a probabilistic, nonparametric method for anomaly detection based on a squared-loss objective function that has a simple analytical solution. The method emerges from extending recent work in nonparametric least-squares classification to include a "none-of-the-above" class which models anomalies in terms of non-anomalous training data. The method shares the flexibility of other kernel-based anomaly detection methods, yet is typically much faster to train and test. It can also be used to distinguish between multiple inlier classes and anomalies. The probabilistic nature of the output makes it straightforward to apply even when test data have structural dependencies; we show how a hidden Markov model framework can be incorporated in order to identify anomalous subsequences of a test sequence. Empirical results on datasets from several domains show the method to have comparable discriminative performance to popular alternatives, but with a clear speed advantage.
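To make the idea concrete, here is a minimal sketch of a squared-loss kernel anomaly scorer in the spirit described above, for the single-inlier-class case: a Gaussian-kernel model of the inlier posterior is fit by regularized least squares (which has a closed-form solution), and the anomaly probability is taken as one minus the clipped inlier posterior. This is an illustrative reconstruction, not the paper's exact formulation; the function names and the hyperparameters `sigma` (kernel bandwidth) and `rho` (regularizer) are assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_inlier_model(X, sigma=1.0, rho=0.1):
    # Least-squares fit of the inlier-class indicator (target 1 on all
    # training points): minimize ||K a - 1||^2 + rho ||a||^2.
    # The minimizer is available in closed form, so training is one solve.
    K = gaussian_kernel(X, X, sigma)
    n = X.shape[0]
    return np.linalg.solve(K.T @ K + rho * np.eye(n), K.T @ np.ones(n))

def anomaly_score(X_train, alpha, X_test, sigma=1.0):
    # Estimated inlier posterior, clipped to [0, 1]; the "none-of-the-above"
    # (anomaly) probability is the remaining mass.
    p_inlier = np.clip(gaussian_kernel(X_test, X_train, sigma) @ alpha, 0.0, 1.0)
    return 1.0 - p_inlier
```

A test point far from the training data receives kernel values near zero, hence an inlier posterior near zero and an anomaly score near one; points inside the training cloud score low. Because training reduces to a single linear solve, this family of methods avoids the iterative optimization of, e.g., one-class SVMs, which is the source of the speed advantage claimed in the abstract.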