A pattern classification method called the neural tree network (NTN) is presented. The NTN consists of neural networks connected in a tree architecture; the networks recursively partition the feature space into subregions, and each terminal subregion is assigned a class label determined by the training data routed to it. Unlike multilayer perceptrons (MLPs), whose architecture must be specified before learning can begin, the NTN is grown by a learning algorithm. A heuristic learning algorithm based on minimizing the L1 norm of the error is used to grow the NTN, and this method is shown to produce fewer classification errors than the squared-error minimization used in backpropagation. An optimal pruning algorithm is given to improve the generalization of the NTN. Simulation results are presented for Boolean function learning tasks and a speaker-independent vowel recognition task; the NTN compares favorably with both neural networks and decision trees.
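The tree-growing idea in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm: the class name, the binary majority-class-versus-rest split target at each node, and the batch perceptron rule (a subgradient step on the absolute-error criterion, standing in for the L1-norm heuristic) are all illustrative assumptions, and the optimal pruning stage is omitted entirely.

```python
import numpy as np

class NeuralTreeNode:
    """Sketch of one node of a neural tree network (NTN).

    Each internal node holds a single perceptron that routes samples to a
    left or right child; each leaf stores the majority class of the
    training data routed to it.  Growing the tree replaces fixing an MLP
    architecture in advance.  (Illustrative sketch; the paper's L1-norm
    heuristic and optimal pruning algorithm are not reproduced here.)
    """

    def __init__(self, max_depth=4, epochs=100, lr=0.1):
        self.max_depth = max_depth
        self.epochs = epochs
        self.lr = lr
        self.w = None      # perceptron weights (last entry is the bias)
        self.left = None
        self.right = None
        self.label = None  # set only at leaves

    def _train_split(self, X, y):
        # Batch perceptron updates driven only by misclassified samples:
        # a subgradient step on the absolute-error (L1) criterion rather
        # than the squared-error gradient used in backpropagation.
        Xb = np.hstack([X, np.ones((len(X), 1))])
        classes, counts = np.unique(y, return_counts=True)
        t = np.where(y == classes[np.argmax(counts)], 1.0, -1.0)
        w = np.zeros(Xb.shape[1])
        for _ in range(self.epochs):
            pred = np.where(Xb @ w >= 0, 1.0, -1.0)
            mis = pred != t
            if not mis.any():
                break
            w += self.lr * Xb[mis].T @ t[mis]
        return w

    def fit(self, X, y, depth=0):
        classes, counts = np.unique(y, return_counts=True)
        if len(classes) == 1 or depth >= self.max_depth:
            self.label = classes[np.argmax(counts)]
            return self
        self.w = self._train_split(X, y)
        side = np.hstack([X, np.ones((len(X), 1))]) @ self.w >= 0
        if side.all() or not side.any():
            # Degenerate split: stop growing and label by majority class.
            self.label = classes[np.argmax(counts)]
            return self
        self.left = NeuralTreeNode(self.max_depth, self.epochs,
                                   self.lr).fit(X[side], y[side], depth + 1)
        self.right = NeuralTreeNode(self.max_depth, self.epochs,
                                    self.lr).fit(X[~side], y[~side], depth + 1)
        return self

    def predict(self, X):
        # Route each sample down the tree until a leaf label is reached.
        out = []
        for x in np.asarray(X):
            node = self
            while node.label is None:
                node = node.left if np.append(x, 1.0) @ node.w >= 0 else node.right
            out.append(node.label)
        return np.array(out)
```

On two well-separated clusters the root perceptron learns a clean split, both children become pure leaves, and the tree stops growing on its own; with harder targets (e.g. XOR-like Boolean functions) the recursion carves the space into several linearly separable subregions instead.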