Fundamentals of digital image processing
Associative neural memories
Fractal image compression: theory and application
Principal component neural networks: theory and applications
Robust image association by recurrent neural subnetworks
Neural Processing Letters
A subspace method for maximum likelihood target detection
ICIP '95: Proceedings of the 1995 International Conference on Image Processing, Vol. 3
An automatic system for model-based coding of faces
DCC '95: Proceedings of the Conference on Data Compression
Learning Patterns and Pattern Sequences by Self-Organizing Nets of Threshold Elements
IEEE Transactions on Computers
Image coding based on a fractal theory of iterated contractive image transformations
IEEE Transactions on Image Processing
Based on two image compression schemes, MIT and RNS, it is shown that similar object images can be associated using their intermediate representations. Both methods can therefore be applied to large image databases for two goals: high-quality image compression and reliable search for queries by image content. The MIT scheme of Moghaddam and Pentland is specialized to face images; it moves the image comparison task from the high-dimensional image space to a low-dimensional principal subspace spanned by eigenfaces, and the closest point in the subspace is used for image association. The RNS scheme of the author represents images (not limited to a certain scene type) by recurrent neural subnetworks which, together with a competition layer, create an associative memory. A single recurrent subnetwork N_i is designed for the i-th image and implements a stochastic nonlinear operator F_i. It can be shown that under realistic assumptions F_i has a unique attractor located in the vicinity of the original image. When a noisy, incomplete, or distorted image is presented at the input, the associative recall proceeds in two stages: first, the competition layer finds the most invariant subnetwork; then the selected recurrent subnetwork reconstructs the original image in a few iterations.
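The eigenface-based association described for the MIT scheme can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy data, the subspace dimension `k`, and the helper names `project` and `associate` are assumptions, and SVD on mean-centered images stands in for the eigenface computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" database: 20 images of 8x8 pixels, flattened (assumed data).
faces = rng.random((20, 64))
mean = faces.mean(axis=0)
centered = faces - mean

# Principal subspace spanned by the k leading right singular vectors,
# playing the role of eigenfaces.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
k = 5
eigenfaces = Vt[:k]                      # (k, 64)

codes = centered @ eigenfaces.T          # stored low-dimensional representations

def project(img):
    """Map an image into the low-dimensional principal subspace."""
    return eigenfaces @ (img - mean)

def associate(query):
    """Associate a query with the database image closest in the subspace."""
    q = project(query)
    return int(np.argmin(np.linalg.norm(codes - q, axis=1)))

# A mildly corrupted version of image 7 still associates to image 7.
noisy = faces[7] + 0.02 * rng.standard_normal(64)
print(associate(noisy))
```

The key point of the scheme survives even in this sketch: comparison happens among k-dimensional codes, not 64-dimensional (in practice much larger) images.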
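The two-stage RNS recall can likewise be sketched under a simplifying assumption: each recurrent subnetwork is modeled here as a plain contractive operator F_i whose unique fixed point is the stored image. The contraction form, the factor `BETA`, and all names are illustrative stand-ins for the paper's stochastic nonlinear operators, not the actual RNS construction.

```python
import numpy as np

rng = np.random.default_rng(1)
stored = rng.random((5, 64))   # five stored images, 8x8 flattened (assumed data)
BETA = 0.5                     # contraction factor, |BETA| < 1 (assumption)

def F(i, x):
    """Stand-in operator of subnetwork i: its unique attractor is stored[i]."""
    return stored[i] + BETA * (x - stored[i])

def recall(x, iters=30):
    # Stage 1: the competition layer selects the most invariant subnetwork,
    # i.e. the one whose operator changes the input the least.
    i = int(np.argmin([np.linalg.norm(F(j, x) - x) for j in range(len(stored))]))
    # Stage 2: iterate the selected subnetwork; the state converges
    # geometrically to the attractor near the original image.
    for _ in range(iters):
        x = F(i, x)
    return i, x

# A noisy version of stored image 3 is routed to subnetwork 3 and reconstructed.
noisy = stored[3] + 0.1 * rng.standard_normal(64)
winner, reconstruction = recall(noisy)
```

Since ||F_i(x) - x|| = (1 - BETA)·||x - stored[i]||, the invariance criterion in stage 1 reduces here to nearest-neighbor selection, and each iteration in stage 2 halves the remaining distance to the stored image.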