On the Discriminatory Power of Adaptive Feed-Forward Layered Networks
IEEE Transactions on Pattern Analysis and Machine Intelligence
The problem of multiclass pattern classification using adaptive layered networks is addressed. A special class of networks is discussed: feed-forward networks with a linear final layer that perform generalized linear discriminant analysis. This class is sufficiently generic to encompass the behavior of arbitrary feed-forward nonlinear networks. Training the network uses a least-squares approach that combines a generalized inverse computation to solve for the final-layer weights with a nonlinear optimization scheme to solve for the parameters of the nonlinearities. A general analytic form for the feature extraction criterion is derived and interpreted for specific forms of target coding and error weighting. An important aspect of the approach is to show how a priori information regarding nonuniform class membership, uneven distribution between training and test sets, and misclassification costs may be exploited in a regularized manner during the training phase of the networks.
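The training procedure described above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: it assumes a single tanh hidden layer, one-hot target coding, and per-sample error weights chosen (hypothetically) as inverse class frequencies to compensate for nonuniform class membership. Each iteration solves the weighted least-squares problem for the linear final layer exactly via a pseudoinverse, then takes a gradient step on the hidden-layer (nonlinear) parameters for the same criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-class problem (illustrative only): well-separated Gaussian classes
k, d, h, n = 3, 2, 10, 150
means = np.array([[3.0, 0.0], [-3.0, 3.0], [-3.0, -3.0]])
y = rng.integers(0, k, size=n)
X = means[y] + rng.normal(size=(n, d))
T = np.eye(k)[y]                          # one-hot target coding

# Per-sample error weights: inverse class frequency, a hypothetical choice
# encoding a priori information about nonuniform class membership
counts = np.bincount(y, minlength=k)
w = (n / (k * counts.astype(float)))[y]

W1 = rng.normal(scale=0.5, size=(d, h))   # parameters of the nonlinearities
b1 = np.zeros(h)

def features(X, W1, b1):
    """Nonlinear feature extraction preceding the linear final layer."""
    return np.tanh(X @ W1 + b1)

lr = 0.05
for _ in range(50):
    Phi = features(X, W1, b1)
    # Generalized-inverse solve for final-layer weights W2, minimizing the
    # weighted criterion  sum_i w_i * ||phi_i @ W2 - t_i||^2
    s = np.sqrt(w)[:, None]
    W2 = np.linalg.pinv(s * Phi) @ (s * T)
    # Nonlinear optimization step on the hidden layer for the same criterion
    err = (Phi @ W2 - T) * w[:, None]     # weighted residuals, shape (n, k)
    dPhi = (err @ W2.T) * (1.0 - Phi**2)  # backpropagate through tanh
    W1 -= lr * (X.T @ dPhi) / n
    b1 -= lr * dPhi.sum(axis=0) / n

pred = (features(X, W1, b1) @ W2).argmax(axis=1)
acc = (pred == y).mean()
```

Alternating an exact linear solve with gradient updates on the nonlinear parameters exploits the structure highlighted in the abstract: for fixed hidden features, the final layer is a generalized linear discriminant problem with a closed-form weighted least-squares solution.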