This paper presents two novel neural networks based on snap-drift in the context of self-organisation and sequence learning. The snap-drift neural network employs modal learning that combines two modes: fuzzy AND learning (snap) and Learning Vector Quantisation (drift). We present the snap-drift self-organising map (SDSOM) and the recurrent snap-drift neural network (RSDNN). The SDSOM uses the standard SOM architecture, in which a layer of input nodes connects to the self-organising map layer and the weight update consists of either snap (the minimum of input and weight) or drift (LVQ, as in the SOM). The RSDNN uses a simple recurrent network (SRN) architecture, with the hidden layer values copied back to the input layer. A form of reinforcement learning is deployed in which the mode is switched between snap and drift when performance drops, and in which adaptation is probabilistic: the probability of a neuron being adapted is reduced as performance increases. The algorithms are evaluated on several well-known data sets, and the networks are found to learn effectively and faster than alternative neural network methods.
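As a rough illustration of the weight update described above, the following sketch contrasts the two modes. It is not taken from the paper: it assumes a single winning neuron, inputs and weights normalised to [0, 1], and illustrative names (snap_drift_update, maybe_adapt, learning rate alpha).

```python
import numpy as np

def snap_drift_update(weights, x, winner, mode, alpha=0.1):
    """One snap-drift update of the winning neuron's weight vector.

    weights : (n_neurons, n_inputs) weight matrix
    x       : input pattern, shape (n_inputs,)
    winner  : index of the best-matching neuron
    mode    : 'snap'  -> fuzzy AND learning (element-wise min of input and weight)
              'drift' -> LVQ/SOM-style move of the weight vector toward the input
    alpha   : drift learning rate (illustrative value)
    """
    w = weights[winner]
    if mode == 'snap':
        weights[winner] = np.minimum(x, w)        # snap: min(input, weight)
    else:
        weights[winner] = w + alpha * (x - w)     # drift: LVQ-style step toward x
    return weights


def maybe_adapt(rng, performance):
    """Probabilistic adaptation: adapt with lower probability as performance rises.

    performance is assumed to lie in [0, 1]; rng is e.g. np.random.default_rng().
    """
    return rng.random() > performance
```

In the scheme described in the abstract, the choice between the two modes is driven by performance (switching when performance drops), so a training loop would toggle `mode` accordingly and gate each neuron's update with a check like `maybe_adapt` before applying it.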