This paper explores the integration of learning modes into a single neural network structure, in which layers of neurons, and even individual neurons, adopt different modes. There are several reasons to explore modal learning in neural networks. One motivation is to overcome the inherent limitations of any single mode (for example, some modes memorise specific features while others average across features, and either approach may be appropriate depending on the circumstances); another is inspiration from neuroscience, cognitive science and human learning, where no serious model can be built without considering multiple modes; a third is non-stationary input data, or time-variant learning objectives, where the required mode is a function of time. Several modal learning ideas are presented: the Snap-Drift Neural Network (SDNN), which toggles its learning between two modes, either unsupervised or guided by performance feedback (reinforcement); a general approach to swapping between several learning modes in real time; and an adaptive function neural network (ADFUNN), in which adaptation applies simultaneously both to the weights and to the individual neuron activation functions.
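The mode-toggling idea behind snap-drift can be illustrated with a toy sketch. The abstract does not specify the update rules, so the following is a minimal illustrative assumption: "snap" is modelled as a fuzzy-AND (element-wise minimum) that memorises features common to the weight vector and the input, "drift" as an LVQ-style averaging step toward the input, and a scalar performance-feedback signal decides when to switch modes. The class name `SnapDriftUnit` and the toggle-on-poor-feedback policy are hypothetical, not the published SDNN algorithm.

```python
import numpy as np

def snap(w, x):
    # "snap" mode (assumed): fuzzy-AND intersection, memorising
    # the features that w and x have in common
    return np.minimum(w, x)

def drift(w, x, alpha=0.1):
    # "drift" mode (assumed): LVQ-style averaging toward the input
    return w + alpha * (x - w)

class SnapDriftUnit:
    """Toy single-neuron sketch of modal learning: the unit toggles
    between snap and drift updates based on performance feedback."""

    def __init__(self, dim, seed=0):
        self.w = np.random.default_rng(seed).random(dim)
        self.mode = "snap"

    def update(self, x, feedback_ok):
        # Hypothetical reinforcement policy: poor feedback switches
        # the learning mode before the weight update is applied.
        if not feedback_ok:
            self.mode = "drift" if self.mode == "snap" else "snap"
        self.w = snap(self.w, x) if self.mode == "snap" else drift(self.w, x)
        return self.w
```

Under this sketch, snap steps can only shrink weights toward the shared features of recent inputs (memorisation), while drift steps pull weights toward the running average of inputs, matching the memorise-versus-average distinction drawn in the abstract.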