Pattern recognition: human and mechanical
Non-negative matrix factorization based methods for object recognition. Pattern Recognition Letters.
Learning Image Components for Object Recognition. The Journal of Machine Learning Research.
Underdetermined blind source separation based on sparse representation. IEEE Transactions on Signal Processing.
Coupled principal component analysis. IEEE Transactions on Neural Networks.
Sparse component analysis and blind source separation of underdetermined mixtures. IEEE Transactions on Neural Networks.
Convergence analysis of a deterministic discrete time system of Oja's PCA learning algorithm. IEEE Transactions on Neural Networks.
Modulated Hebb-Oja learning rule: a method for principal subspace analysis. IEEE Transactions on Neural Networks.
Boolean Factor Analysis by Attractor Neural Network. IEEE Transactions on Neural Networks.
New measure of Boolean factor analysis quality. ICANNGA'11: Proceedings of the 10th International Conference on Adaptive and Natural Computing Algorithms, Part I.
Factor analysis is used in a number of applications. One example is image recognition, where it is often necessary to learn representations of the underlying components of images, such as objects, object parts, or features. Another example is data compression, where the original data are transformed into a space of lower dimension. The goal of factor analysis is to find the underlying factors (factor loadings) and the contributions of these factors to the original observations (factor scores). Recently, we proposed a method of Boolean factor analysis based on the ability of a Hopfield-like network to create attractors corresponding to factors [19]. An obstacle to using this network for Boolean factor analysis is the appearance of two global spurious attractors that bear no relation to the internal structure of the analyzed signals. To eliminate these attractors, we had to modify the common Hopfield network architecture by adding a special inhibitory neuron. The existence of the two global attractors and their elimination by the special inhibitory neuron were demonstrated by Frolov et al. [19] only through computer simulations. Since the appearance of these attractors is a novel and important phenomenon, in this paper we investigate it both analytically and by additional computer simulations, in order to verify it and explain its origin.
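The core mechanism referred to above, a Hopfield-like network whose stored patterns become attractors of the recall dynamics, can be illustrated in a few lines. The sketch below is our own minimal construction, not the authors' exact model from [19]: it stores one bipolar "factor" pattern with a Hebbian rule and shows that synchronous sign updates pull a corrupted input back to the stored attractor (the pattern sizes and the `train`/`recall` helper names are illustrative assumptions).

```python
# Minimal Hopfield-style sketch (illustrative, not the model of [19]):
# a bipolar pattern stored via the Hebbian rule becomes an attractor
# of the synchronous sign-update dynamics.

def train(patterns):
    """Hebbian weights w[i][j] = (1/n) * sum_p p[i]*p[j], zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=20):
    """Iterate synchronous sign updates until a fixed point is reached."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        nxt = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
               for i in range(n)]
        if nxt == s:          # fixed point: an attractor of the dynamics
            break
        s = nxt
    return s

factor = [1, 1, 1, 1, -1, -1, -1, -1]   # the stored "factor" pattern
w = train([factor])
noisy = [1, -1, 1, 1, -1, -1, 1, -1]    # two bits flipped
print(recall(w, noisy) == factor)        # → True
```

In the full Boolean-factor-analysis setting many overlapping factors are stored at once, which is exactly where the two global spurious attractors discussed in the paper arise; this toy example only shows the basic attractor behavior that the network architecture builds on.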