We describe how to formulate matching and combinatorial problems of vision and neural network theory by generalizing elastic and deformable template models to include binary matching elements. Techniques from statistical physics, which can be interpreted as computing marginal probability distributions, are then used to analyze these models; they (1) relate the models to existing theories and (2) give insight into the relations between, and relative effectiveness of, those theories. In particular, we exploit the power of statistical techniques to impose global constraints on the set of allowable states of the binary matching elements. The binary elements can then be eliminated analytically before minimization. We demonstrate that this is preferable to existing methods that impose such constraints by adding bias terms to the energy functions. We give applications to winner-take-all networks, correspondence for stereo and long-range motion, the traveling salesman problem, deformable template matching, learning, content-addressable memories, and models of brain development. The biological plausibility of these networks is briefly discussed.
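As an illustrative sketch (not code from the paper), the core statistical-physics idea can be shown on the simplest case, a winner-take-all network: for binary elements V_a with energy E(V) = sum_a V_a * E_a and the global constraint sum_a V_a = 1, summing the Boltzmann distribution over only the allowed states gives Z = sum_a exp(-beta * E_a), so the marginals P(V_a = 1) form a softmax and the binary elements drop out analytically. The function names and the cost values below are our own for illustration.

```python
import itertools
import math

def marginals_bruteforce(costs, beta):
    """Marginals P(V_a = 1) by enumerating all binary states and
    keeping only those satisfying the global one-hot constraint."""
    n = len(costs)
    Z = 0.0
    probs = [0.0] * n
    for state in itertools.product([0, 1], repeat=n):
        if sum(state) != 1:       # global constraint: exactly one element on
            continue
        energy = sum(v * c for v, c in zip(state, costs))
        w = math.exp(-beta * energy)
        Z += w
        for a, v in enumerate(state):
            probs[a] += v * w
    return [p / Z for p in probs]

def marginals_analytic(costs, beta):
    """Binary elements removed analytically: with the constraint built into
    the state sum, Z = sum_a exp(-beta * E_a) and the marginals are a softmax."""
    weights = [math.exp(-beta * c) for c in costs]
    Z = sum(weights)
    return [w / Z for w in weights]

if __name__ == "__main__":
    costs = [1.0, 0.5, 2.0]   # hypothetical energies E_a
    beta = 2.0                # inverse temperature
    print(marginals_bruteforce(costs, beta))
    print(marginals_analytic(costs, beta))
```

The two routines agree exactly; the analytic form needs no search over binary states, which is the advantage the abstract claims over adding bias (penalty) terms to the energy function to enforce the constraint approximately.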