A global optimization strategy is proposed for training adaptive systems such as neural networks and adaptive filters (finite or infinite impulse response). Instead of adding random noise to the weights, as proposed in the past, additive random noise is injected directly into the desired signal. Experimental results show that this procedure also greatly speeds up the backpropagation algorithm. The method is easy to implement in practice: it preserves the backpropagation algorithm and requires only a single random generator, with a monotonically decreasing step size, per output channel. Hence, when the noise variance is appropriately scheduled, this is an ideal strategy for speeding up supervised learning and avoiding entrapment in local minima.
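To make the idea concrete, below is a minimal sketch in Python/NumPy, not the authors' reference implementation. It runs plain backpropagation on a one-hidden-layer tanh network for a toy regression task, but at each epoch adds zero-mean Gaussian noise with a monotonically decreasing standard deviation to the desired signal rather than to the weights. The network size, learning rate, and the 1/(1 + t/100) annealing schedule are illustrative assumptions, not settings taken from the paper.

```python
# Sketch: backpropagation with annealed noise injected into the desired
# signal (not the weights). All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x).
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
D = np.sin(X)                                  # desired signal (one output channel)

# One-hidden-layer network: tanh hidden units, linear output.
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.05
sigma0 = 0.5                                   # initial noise std dev (assumed value)
for epoch in range(2000):
    # Annealed noise on the desired signal: one generator per output
    # channel, with a monotonically decreasing scale.
    sigma = sigma0 / (1.0 + epoch / 100.0)
    D_noisy = D + rng.normal(scale=sigma, size=D.shape)

    # Forward pass.
    H = np.tanh(X @ W1 + b1)
    Y = H @ W2 + b2

    # Standard backprop on the squared error against the *noisy* target
    # (the constant factor of the loss gradient is absorbed into lr).
    E = Y - D_noisy
    gW2 = H.T @ E / len(X)
    gb2 = E.mean(axis=0)
    gH = (E @ W2.T) * (1.0 - H ** 2)           # tanh derivative
    gW1 = X.T @ gH / len(X)
    gb1 = gH.mean(axis=0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Evaluate against the clean desired signal.
Y_final = np.tanh(X @ W1 + b1) @ W2 + b2
print("final MSE vs. clean target:", float(np.mean((Y_final - D) ** 2)))
```

Note that the update rule itself is untouched; the only change relative to ordinary backpropagation is the noisy target, so the scheme is trivial to retrofit onto an existing trainer. As the noise scale shrinks toward zero, training reduces to standard backpropagation on the clean desired signal.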