Fault diagnosis problems can be formulated as classification tasks. Owing to limited data and to uncertainty, classification algorithms are not perfectly accurate in practical applications. Maintenance decisions based on erroneous fault classifications result in inefficient resource allocation and/or operational disturbances. Knowing the accuracy of classifiers is therefore important for building confidence in the maintenance decisions derived from them. The average accuracy of a classifier on a test set of data patterns is often used as a measure of confidence in its performance. However, the performance of a classifier can vary across different regions of the input data space. Several techniques have been proposed to quantify the reliability of a classifier at the level of individual classifications, but many of them are applicable only to specific classifiers, such as ensembles and support vector machines. In this paper, we propose a meta-approach based on the typicalness framework (Kolmogorov's concept of randomness), which is independent of the applied classifier. We apply the approach to a case of fault diagnosis in railway turnout systems and compare the results obtained with both extreme learning machines and echo state networks.
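To make the classifier-independent typicalness idea concrete, the sketch below computes a conformal-style p-value for a new observation under each candidate fault class: a nonconformity score measures how "strange" the observation looks for a hypothesised label, and the p-value is the fraction of known examples that look at least as strange. This is a minimal illustration under assumed choices; the 1-NN score, the two-class toy data, and all function names are hypothetical and are not the paper's actual diagnostic pipeline.

```python
import numpy as np

def nn_nonconformity(x, X_same, X_other):
    """1-NN ratio score: distance to the nearest example of the
    hypothesised class divided by distance to the nearest example of
    any other class. Larger values = worse fit to that class."""
    d_same = np.min(np.linalg.norm(X_same - x, axis=1))
    d_other = np.min(np.linalg.norm(X_other - x, axis=1))
    return d_same / (d_other + 1e-12)

def typicalness(cal_scores, test_score):
    """p-value: proportion of calibration scores at least as large as
    the test score (with the +1 smoothing of conformal prediction)."""
    cal_scores = np.asarray(cal_scores)
    return (np.sum(cal_scores >= test_score) + 1) / (len(cal_scores) + 1)

# Illustrative two-class data standing in for healthy/faulty conditions.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(50, 2))   # class 0: "no fault"
X1 = rng.normal(3.0, 1.0, size=(50, 2))   # class 1: "fault"
x_new = np.array([2.8, 3.1])

for label, (same, other) in {0: (X0, X1), 1: (X1, X0)}.items():
    # Leave-one-out calibration scores for the hypothesised class.
    cal = np.array([nn_nonconformity(z, np.delete(same, i, axis=0), other)
                    for i, z in enumerate(same)])
    p = typicalness(cal, nn_nonconformity(x_new, same, other))
    print(f"class {label}: p-value = {p:.3f}")
```

The class with the largest p-value would be the prediction, and a low maximum p-value flags the individual classification as unreliable, regardless of which underlying classifier (e.g., an extreme learning machine or an echo state network) produced the features or scores.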