It is well known that the addition of noise to the input data of a neural network during training can, in some circumstances, lead to significant improvements in generalization performance. Previous work has shown that such training with noise is equivalent to a form of regularization in which an extra term is added to the error function. However, the regularization term, which involves second derivatives of the error function, is not bounded below, and so can lead to difficulties if used directly in a learning algorithm based on error minimization. In this paper we show that, for the purposes of network training, the regularization term can be reduced to a positive semi-definite form that involves only first derivatives of the network mapping. For a sum-of-squares error function, the regularization term belongs to the class of generalized Tikhonov regularizers. Direct minimization of the regularized error function provides a practical alternative to training with noise.
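The equivalence described above can be checked numerically in a minimal setting. The sketch below is an illustrative assumption rather than code from the paper: it uses a linear model, for which the noise-averaged sum-of-squares error equals the noise-free error plus a penalty on the first derivatives of the mapping (here the Jacobian is just the weight matrix) exactly, with no higher-order terms. The variable names and the noise variance are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny linear model y = W @ x (hypothetical example for illustration).
W = rng.normal(size=(2, 3))
x = rng.normal(size=3)
t = rng.normal(size=2)
noise_var = 0.01  # variance of the zero-mean Gaussian input noise

def sse(y, t):
    """Sum-of-squares error."""
    return 0.5 * np.sum((y - t) ** 2)

# (a) Monte Carlo estimate of the expected error when Gaussian noise
#     is injected into the inputs during training.
n_samples = 200_000
eps = rng.normal(scale=np.sqrt(noise_var), size=(n_samples, 3))
noisy_error = np.mean([sse(W @ (x + e), t) for e in eps])

# (b) Regularized error: noise-free error plus a first-derivative
#     penalty -- here (noise_var / 2) times the squared Frobenius norm
#     of the network Jacobian, which for a linear model is W itself.
reg_error = sse(W @ x, t) + 0.5 * noise_var * np.sum(W ** 2)

print(noisy_error, reg_error)  # the two values should agree closely
```

For a nonlinear network the agreement holds only to leading order in the noise variance, which is why the reduced, positive semi-definite regularizer involving only first derivatives is used in place of the exact expansion.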