Maximum and Minimum Likelihood Hebbian Learning for Exploratory Projection Pursuit
ICANN '02 Proceedings of the International Conference on Artificial Neural Networks
A General Class of Neural Networks for Principal Component Analysis and Factor Analysis
IDEAL '00 Proceedings of the Second International Conference on Intelligent Data Engineering and Automated Learning, Data Mining, Financial Engineering, and Intelligent Agents
Reinforcement Learning Reward Functions for Unsupervised Learning
ISNN '07 Proceedings of the 4th International Symposium on Neural Networks: Advances in Neural Networks
A review is given of a new artificial neural network architecture in which the weights converge to the principal component subspace. The weights learn by simple Hebbian learning alone, yet require no clipping, normalization, or weight decay. The net self-organizes using negative feedback of activation from a set of "interneurons" to the input neurons. By allowing this negative feedback from the interneurons to act on other interneurons, we can introduce the necessary asymmetry to cause convergence to the actual principal components. Simulations and analysis confirm such convergence.
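The abstract describes the learning rule only in words. The sketch below is one plausible NumPy reading of it, not the authors' implementation: each interneuron's activation is subtracted back from the input, the residual drives a plain Hebbian update, and the asymmetry is assumed to be a hierarchical ordering in which interneuron i also receives feedback from interneurons 1..i-1 (a Sanger-style deflation). The function name `negative_feedback_pca` and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def negative_feedback_pca(X, n_components, lr=0.01, epochs=50):
    """Negative-feedback network sketch: interneuron activations are fed
    back (subtracted) from the inputs, and the weights learn by a simple
    Hebbian update on the residual -- no clipping, normalization, or
    weight decay."""
    n_features = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_components, n_features))
    for _ in range(epochs):
        for x in X:
            y = W @ x  # interneuron activations
            for i in range(n_components):
                # Assumed asymmetric feedback: interneuron i sees the
                # input minus the feedback of interneurons 1..i, which
                # breaks the symmetry so the weights converge to the
                # actual principal components rather than just spanning
                # the principal subspace.
                e = x - W[: i + 1].T @ y[: i + 1]  # residual after feedback
                W[i] += lr * y[i] * e              # Hebbian update on residual
    return W

# Example: recover the leading directions of correlated 2-D data.
X = rng.normal(size=(1000, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
X -= X.mean(axis=0)  # the network assumes zero-mean inputs
W = negative_feedback_pca(X, n_components=2)
print(W / np.linalg.norm(W, axis=1, keepdims=True))  # ~ eigenvectors of cov(X)
```

With the asymmetric feedback removed (each interneuron subtracting only the full reconstruction), the same rule would converge to the principal subspace without ordering the components, which matches the distinction the abstract draws.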