In previous contributions, the second author of this paper presented a new class of algorithms for the orthonormal learning of linear neural networks with p inputs and m outputs, based on the equations describing the dynamics of a massive rigid frame on the Stiefel manifold. These algorithms exhibit good numerical stability, strong adherence to the constraint sub-manifold, and good controllability of the learning dynamics, but they are not entirely satisfactory from a computational-complexity standpoint. In the recent literature, efficient methods of integration on the Stiefel manifold have been proposed by several authors; see, for example, Phys. D 156 (2001) 219; Numer. Algorithms 32 (2003) 163; J. Numer. Anal. 21 (2001) 463; Numer. Math. 83 (1999) 599. Inspired by these approaches, in this paper we propose a new, efficient representation of the learning equations and a new way to integrate them. Numerical experiments show that the new formulation leads to significant computational savings, especially when p ≫ m. The effectiveness of the algorithms is substantiated by experiments on principal subspace analysis and independent component analysis, carried out with both synthetic and real-world data.
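To make the geometric ingredients concrete, the following is a minimal Python sketch of one retraction-based learning step on the Stiefel manifold St(p, m): project the Euclidean gradient onto the tangent space, take a step, and retract back onto the manifold. This is an illustration under stated assumptions, not the paper's rigid-body formulation or its reduced representation: the QR retraction, the principal-subspace cost, and all names and parameters (stiefel_step, eta, C) are hypothetical choices made for this example.

import numpy as np

def stiefel_step(W, G, eta):
    """One illustrative learning step on the Stiefel manifold St(p, m).

    W   : p x m matrix with W.T @ W = I_m (the orthonormal weight frame)
    G   : p x m Euclidean gradient of the cost at W
    eta : learning-rate / integration step size
    """
    # Project G onto the tangent space at W, so the update does not
    # leave the constraint sub-manifold to first order.
    S = W.T @ G
    T = G - W @ (S + S.T) / 2.0
    # Retract back onto the manifold with a thin QR decomposition.
    Q, R = np.linalg.qr(W - eta * T)
    # Normalize column signs so the retraction varies continuously with eta.
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

# Toy usage: principal subspace analysis on synthetic data, with p >> m.
rng = np.random.default_rng(0)
p, m = 200, 4
X = rng.standard_normal((p, 1000))                 # p-dimensional samples
C = X @ X.T / X.shape[1]                           # sample covariance
W = np.linalg.qr(rng.standard_normal((p, m)))[0]   # random orthonormal start
for _ in range(100):
    W = stiefel_step(W, -C @ W, eta=0.1)           # gradient of -0.5 tr(W^T C W)
print(np.linalg.norm(W.T @ W - np.eye(m)))         # stays ~0: constraint holds

The thin QR factorization keeps the per-step cost at O(p m^2) and never forms a p x p matrix, which is precisely the regime, p ≫ m, where the computational savings discussed in the abstract arise.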