We propose what appears to be the first anomaly detection framework that learns from positive examples only and is sensitive to substantial differences in the presentation and penalization of normal versus anomalous points. Our framework introduces a novel asymmetry between how false alarms (normal instances misclassified as anomalies) and missed anomalies (anomalies misclassified as normal) are penalized: each false alarm incurs a unit cost, whereas our model assumes that a single high global cost is incurred if one or more anomalies are missed. We define several natural notions of risk, along with efficient minimization algorithms. Our framework applies to any metric space with finite doubling dimension, under minimal assumptions that naturally generalize notions such as the margin in Euclidean spaces. We provide a theoretical analysis of the risk and show that, under mild conditions, our classifier is asymptotically consistent. The learning algorithms we propose are computationally and statistically efficient and admit a further tradeoff between running time and precision. Experimental results on real-world data are also provided.
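To make the cost structure concrete, the following Python sketch is an illustration only, not the paper's actual algorithm: a naive radius-based one-class classifier fit to normal examples alone, together with the asymmetric risk described above, in which each false alarm costs one unit and a single global cost is charged if at least one anomaly goes undetected. The names `fit_radius_classifier`, `asymmetric_risk`, `radius`, and `miss_cost` are hypothetical, and Euclidean distance stands in for an arbitrary metric.

```python
import numpy as np

def fit_radius_classifier(normal_points, radius):
    """One-class classifier: a point is declared 'normal' iff it lies
    within `radius` of some training (normal) example, else 'anomaly'.
    (Illustrative stand-in for a margin-based classifier in a metric space.)"""
    def predict(x):
        dists = np.linalg.norm(normal_points - x, axis=1)
        return "normal" if dists.min() <= radius else "anomaly"
    return predict

def asymmetric_risk(predict, eval_points, eval_labels, miss_cost):
    """Risk under the asymmetric cost structure: each false alarm
    (a normal point flagged as an anomaly) costs 1, and a single global
    cost `miss_cost` is charged if one or more anomalies are missed."""
    false_alarms = 0
    missed_any = False
    for x, label in zip(eval_points, eval_labels):
        pred = predict(x)
        if label == "normal" and pred == "anomaly":
            false_alarms += 1
        elif label == "anomaly" and pred == "normal":
            missed_any = True
    return false_alarms + (miss_cost if missed_any else 0.0)

# Toy usage: train on normal points only, evaluate on a labeled mix.
rng = np.random.default_rng(0)
train = rng.normal(size=(100, 2))                      # normal examples only
predict = fit_radius_classifier(train, radius=1.0)
eval_pts = np.vstack([rng.normal(size=(20, 2)),        # 20 normal points
                      rng.normal(5.0, 1.0, size=(3, 2))])  # 3 far-away anomalies
labels = ["normal"] * 20 + ["anomaly"] * 3
print(asymmetric_risk(predict, eval_pts, labels, miss_cost=50.0))
```

Note how the global `miss_cost` term captures the asymmetry: shrinking `radius` drives the probability of missing an anomaly toward zero at the price of more unit-cost false alarms, which is exactly the tradeoff the risk notions above are designed to balance.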