MLP in layer-wise form with applications to weight decay
Neural Computation
TAO-robust backpropagation learning algorithm
Neural Networks
Nonlinear Complex-Valued Extensions of Hebbian Learning: An Essay
Neural Computation
Robust Formulations for Training Multilayer Perceptrons
Neural Computation
Mining Adaptive Ratio Rules from Distributed Data Sources
Data Mining and Knowledge Discovery
Annealing robust fuzzy basis function for modelling with noise and outliers
International Journal of Computer Applications in Technology
Robust MCD-Based Backpropagation Learning Algorithm
ICAISC '08 Proceedings of the 9th International Conference on Artificial Intelligence and Soft Computing
Robust incremental growing multi-experts network
Applied Soft Computing
Robust LTS backpropagation learning algorithm
IWANN'07 Proceedings of the 9th International Work-Conference on Artificial Neural Networks
Fast robust learning algorithm dedicated to LMLS criterion
ICAISC'10 Proceedings of the 10th International Conference on Artificial Intelligence and Soft Computing: Part II
Outliers detection in environmental monitoring databases
Engineering Applications of Artificial Intelligence
Robust neural network for novelty detection on data streams
ICAISC'12 Proceedings of the 11th International Conference on Artificial Intelligence and Soft Computing - Volume Part I
Robust Learning Algorithm Based on Iterative Least Median of Squares
Neural Processing Letters
Most supervised neural networks (NNs) are trained by minimizing the mean squared error (MSE) over the training set. In the presence of outliers, the resulting NN model can differ significantly from the underlying system that generated the data. Two approaches are used to study the mechanism by which outliers affect the resulting models: the influence function and maximum likelihood. The mean log squared error (MLSE) is proposed as an error criterion that can easily be adopted by most supervised learning algorithms. Simulation results indicate that the proposed method is robust against outliers.
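The intuition behind the MLSE criterion can be illustrated with a short sketch. The exact form used in the paper is not given here, so the code below assumes the common log-squares variant (as in the LMLS criterion), L(e) = mean(log(1 + e²/2)): the loss grows only logarithmically for large residuals, so a single outlier contributes far less than it would under MSE, and the per-sample gradient (the influence of each point) stays bounded.

```python
import numpy as np

def mse(residuals):
    # Standard mean squared error: an outlier's contribution grows
    # quadratically, so one bad point can dominate the fit.
    return np.mean(residuals ** 2)

def mlse(residuals):
    # Assumed mean-log-squared-error form (LMLS-style):
    #   L = mean(log(1 + e^2 / 2))
    # Grows only logarithmically in |e|, damping outlier influence.
    return np.mean(np.log1p(residuals ** 2 / 2.0))

def mlse_influence(residuals):
    # Per-sample derivative dL_i/de = e / (1 + e^2/2).
    # Unlike MSE's influence (2e, unbounded), this is bounded:
    # its magnitude never exceeds sqrt(2)/2, attained at |e| = sqrt(2).
    return residuals / (1.0 + residuals ** 2 / 2.0)

if __name__ == "__main__":
    clean = np.array([0.1, -0.2, 0.05])
    with_outlier = np.array([0.1, -0.2, 10.0])
    print("MSE  clean vs outlier:", mse(clean), mse(with_outlier))
    print("MLSE clean vs outlier:", mlse(clean), mlse(with_outlier))
    print("Max |influence| under MLSE:",
          np.max(np.abs(mlse_influence(with_outlier))))
```

Because the influence is bounded, MLSE can be dropped into any gradient-based learning rule (backpropagation included) simply by replacing the MSE loss and its derivative, which is what makes it easy for most supervised algorithms to adopt.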