C-Mantec is a novel constructive neural network algorithm that combines competition between neurons with a stable, modified perceptron learning rule. Neuron learning is governed by the thermal perceptron rule, which ensures the stability of acquired knowledge while the architecture grows and the neurons compete for new incoming information. Competition makes it possible for existing neurons to keep learning even after new units have been added to the network, provided the incoming information is similar to their stored knowledge; this constitutes a major difference from existing constructive algorithms. The new algorithm is tested on two different sets of benchmark problems: a set of Boolean functions used in logic circuit design and a well-studied set of real-world problems. Both sets are used to analyze the size of the constructed architectures and the generalization ability obtained, and to compare the results with those of other standard, well-known classification algorithms. The problem of overfitting is also analyzed, and a new built-in method to avoid its effects is devised and successfully applied within an active learning paradigm that filters noisy examples. The results show that the new algorithm generates very compact neural architectures with state-of-the-art generalization capabilities.
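As a rough illustration of the learning rule the abstract refers to, the sketch below implements a single thermal perceptron update in the spirit of Frean's rule: the standard perceptron correction is damped by a factor exp(-|phi|/T) that depends on the neuron's net input phi and an annealed temperature T, which is what keeps already-acquired knowledge stable. The function name, the annealing schedule, and the bipolar target encoding are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def thermal_perceptron_update(w, x, target, T, lr=1.0):
    """One thermal perceptron step (in the spirit of Frean, 1992).

    w      : weight vector (bias folded in via x[0] == 1)
    x      : input pattern
    target : desired output in {-1, +1}
    T      : current temperature (annealed toward 0 during training)

    Misclassified patterns are corrected as in the classic perceptron,
    but the correction is damped by exp(-|phi| / T), so patterns whose
    net input is far from zero barely disturb the stored knowledge.
    """
    phi = np.dot(w, x)                      # net input of the neuron
    output = 1 if phi >= 0 else -1
    if output != target:                    # update only on errors
        damping = np.exp(-abs(phi) / T)     # thermal damping factor
        w = w + lr * target * x * damping
    return w

# Minimal usage on a toy pattern (illustrative values only)
rng = np.random.default_rng(0)
w = rng.normal(size=3)
x = np.array([1.0, 0.5, -1.2])              # x[0] = 1 acts as the bias input
for step in range(100):
    T = max(1e-3, 2.0 * (1 - step / 100))   # assumed linear annealing schedule
    w = thermal_perceptron_update(w, x, target=1, T=T)
```

Because the damping vanishes as T approaches zero, late updates become negligible, which is how a rule of this form can remain stable while a constructive algorithm keeps adding competing units.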