This paper studies the classification mechanisms of multilayer perceptrons (MLPs) with sigmoid activation functions (SAFs). The viewpoint is presented that, in the input space, the hyperplanes on which the hidden basis functions take the value zero do not act as decision boundaries, and such hyperplanes do not necessarily pass through the marginal regions between different classes. For solving an n-class problem, a single-hidden-layer perceptron needs at least ⌈log₂(n−1)⌉ hidden nodes. The final number of hidden neurons depends on the shapes and regions of the sample distributions, not on the number of samples or the input dimensionality. Accordingly, an empirical formula for selecting a good initial number of hidden nodes is proposed. The ranks of the response matrices of the hidden layers should serve as the main criterion for pruning or growing the existing hidden neurons. A perceptron with a fixed structure ought to be trained more than once from different initial weights for a given classification task, and only the set of weights and biases with the best generalization performance should be retained. Finally, three examples are given to verify these viewpoints.
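The abstract makes three operational recommendations: start the hidden-layer size near the lower bound ⌈log₂(n−1)⌉, train the same fixed structure several times from different initial weights and keep the best-generalizing run, and use the rank of the hidden-layer response matrix to decide whether units are redundant. The sketch below is not the authors' code; it is a minimal illustration of those three steps assuming scikit-learn's MLPClassifier, a synthetic dataset, and illustrative hyperparameters (number of restarts, iteration budget).

```python
# Minimal sketch (assumptions: scikit-learn MLP, synthetic data) of three
# ideas from the abstract: the ceil(log2(n-1)) lower bound on hidden nodes,
# multiple training rounds from different starting weights, and the rank of
# the hidden-layer response matrix as a pruning/growing signal.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def min_hidden_nodes(n_classes: int) -> int:
    """Lower bound on hidden nodes for an n-class problem: ceil(log2(n-1))."""
    return max(1, int(np.ceil(np.log2(n_classes - 1))))

n_classes = 4
X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_classes=n_classes, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                            random_state=0)

h = min_hidden_nodes(n_classes)  # start from the theoretical minimum

# Train the same fixed structure several rounds from different starting
# weights; retain only the round with the best generalization performance.
best_score, best_model = -np.inf, None
for seed in range(5):
    clf = MLPClassifier(hidden_layer_sizes=(h,), activation="logistic",
                        max_iter=2000, random_state=seed)
    clf.fit(X_tr, y_tr)
    score = clf.score(X_val, y_val)
    if score > best_score:
        best_score, best_model = score, clf

# Rank of the hidden-layer response matrix: a rank below the number of
# hidden units suggests redundant units that are candidates for pruning;
# a full-rank, poorly fitting layer suggests growing it instead.
hidden = 1.0 / (1.0 + np.exp(-(X_tr @ best_model.coefs_[0]
                               + best_model.intercepts_[0])))
print("hidden units:", h,
      "| response-matrix rank:", np.linalg.matrix_rank(hidden),
      "| best validation accuracy:", round(best_score, 3))
```

Restarting from different initial weights matters here because a sigmoid MLP's loss surface has many local minima; keeping only the best-validating run is a cheap guard against a poor draw of initial weights.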