Feedforward neural networks, particularly multilayer perceptrons, are widely used in regression and classification tasks, so a reliable and practical measure of prediction confidence is essential. In this work, three alternative approaches to prediction confidence estimation are presented and compared: the maximum likelihood, approximate Bayesian, and bootstrap techniques. We consider prediction uncertainty owing to both data noise and model parameter misspecification. The methods are tested on a number of controlled artificial problems and on a real, industrial regression application, the prediction of paper "curl". Confidence estimation performance is assessed by calculating the mean and standard deviation of the prediction interval coverage probability. We show that treating the data noise variance as a function of the inputs is appropriate for the curl prediction task. Moreover, we show that the mean coverage probability can only gauge confidence estimation performance as an average over the input space, i.e., as global performance, and that the standard deviation of the coverage is unreliable as a measure of local performance. The approximate Bayesian approach is found to perform best in terms of global performance.
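The bootstrap technique and the coverage-based assessment described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a simple polynomial regressor as a stand-in for an MLP, assumes a constant data-noise variance estimated from residuals (the paper argues for modelling it as a function of the inputs), and uses synthetic heteroscedastic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression task with input-dependent noise, echoing the
# paper's point that noise variance may vary over the input space.
n = 200
x = rng.uniform(-3, 3, n)
sigma = 0.1 + 0.1 * np.abs(x)            # noise std depends on the input
y = np.sin(x) + rng.normal(0.0, sigma)

# Bootstrap ensemble: refit a simple model (a stand-in for an MLP) on
# resampled data; the ensemble spread estimates the uncertainty due to
# model parameter misspecification.
B, degree = 50, 5
preds = np.empty((B, n))
for b in range(B):
    idx = rng.integers(0, n, n)          # resample the training set
    coef = np.polyfit(x[idx], y[idx], degree)
    preds[b] = np.polyval(coef, x)

mean = preds.mean(axis=0)
model_var = preds.var(axis=0)

# Crude constant estimate of the data-noise variance from the residuals.
noise_var = np.mean((y - mean) ** 2)

# 95% prediction interval combining both sources of uncertainty.
half = 1.96 * np.sqrt(model_var + noise_var)
lower, upper = mean - half, mean + half

# Prediction interval coverage probability (PICP): the fraction of targets
# falling inside their intervals. The paper summarises confidence estimation
# performance by the mean and standard deviation of this quantity.
picp = np.mean((y >= lower) & (y <= upper))
print(f"PICP: {picp:.2f}")
```

For a 95% interval, a well-calibrated method should yield a PICP near 0.95 on average; as the abstract notes, this mean only reflects performance averaged over the input space, not local calibration.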