This paper presents a novel technique that separates the pattern representations in each hidden layer to facilitate classification tasks. The technique requires that patterns in the same class have nearby representations and that patterns in different classes have distant representations. This pairwise requirement is applied to any two training patterns to train a selected hidden layer of an MLP or RNN. The MLP can be trained layer by layer, in a feedforward fashion, to obtain well-separated representations. The trained MLP can then serve as a kind of kernel function for categorizing multiple classes.
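The layer-by-layer training described above can be sketched as follows. This is a minimal illustration, assuming a contrastive-style hinge loss over all pattern pairs as the separation criterion; the loss form, the `margin` parameter, the layer widths, and the PyTorch setup are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch: train an MLP one hidden layer at a time so that same-class
# patterns get nearby representations and different-class patterns get
# distant ones. The loss and hyperparameters are assumptions.
import torch
import torch.nn as nn

def pairwise_separation_loss(z, labels, margin=2.0):
    """Pull same-class representations together; push different-class
    representations at least `margin` apart (hinge penalty)."""
    dist = torch.cdist(z, z)                            # all pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-class mask
    pull = dist[same].pow(2).mean()
    push = torch.clamp(margin - dist[~same], min=0).pow(2).mean()
    return pull + push

def train_layerwise(x, labels, widths, epochs=200, lr=1e-2):
    """Train each hidden layer on the frozen outputs of the previous one,
    then stack the trained layers into a feedforward MLP."""
    layers, inputs = [], x
    for width in widths:
        layer = nn.Sequential(nn.Linear(inputs.shape[1], width), nn.Tanh())
        opt = torch.optim.Adam(layer.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = pairwise_separation_loss(layer(inputs), labels)
            loss.backward()
            opt.step()
        layers.append(layer)
        inputs = layer(inputs).detach()   # freeze and feed forward
    return nn.Sequential(*layers)

# Toy usage: two Gaussian blobs, one per class.
torch.manual_seed(0)
x = torch.cat([torch.randn(50, 4) + 2, torch.randn(50, 4) - 2])
y = torch.cat([torch.zeros(50, dtype=torch.long),
               torch.ones(50, dtype=torch.long)])
mlp = train_layerwise(x, y, widths=[8, 2])
```

After training, the stacked `mlp` maps inputs into a space where class clusters are separated, so its output can be fed to any simple multi-class classifier, consistent with the abstract's view of the trained MLP as a kernel-like mapping.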