We present a cortical-like strategy for obtaining reliable estimates of the motion of objects in a scene toward or away from the observer (motion in depth), from local measurements of binocular parameters derived by directly comparing the results of monocular spatiotemporal filtering operations performed on stereo image pairs. The approach is suitable for hardware implementation, in which such parameters can be obtained via a feedforward computation (i.e., collection, comparison, and pointwise operations) on the outputs of the nodes of recurrent VLSI lattice networks performing local computations. These networks act as efficient computational structures for embedded analog filtering operations in smart vision sensors. Extensive simulations on both synthetic and real-world image sequences demonstrate the validity of the approach, which yields high-level information about the 3D structure of the scene directly from sensory data, without resorting to explicit scene reconstruction.
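As a rough illustration of the phase-based flavor of such binocular measurements (a minimal digital sketch, not the analog lattice-network circuitry described above), the code below estimates disparity from the interocular phase difference of complex Gabor filter outputs, and motion in depth from the temporal change of that disparity. All function names, the filter parameters, and the 1-D setting are assumptions made for illustration.

```python
import numpy as np

def gabor_phase(signal, omega, sigma=8.0):
    # Local phase from convolution with a complex Gabor kernel:
    # a digital stand-in for the monocular spatiotemporal filtering stage.
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega * x)
    return np.angle(np.convolve(signal, kernel, mode='same'))

def phase_based_disparity(left, right, omega):
    # Disparity d such that left(x) = right(x - d), recovered from the
    # interocular phase difference: d = (phi_R - phi_L) / omega.
    dphi = gabor_phase(right, omega) - gabor_phase(left, omega)
    dphi = np.angle(np.exp(1j * dphi))  # wrap to (-pi, pi]
    return dphi / omega

# Demo: a sinusoidal pattern whose disparity shrinks from frame to frame,
# i.e. an object moving in depth toward the observer.
omega = 2 * np.pi / 16
x = np.arange(256)
right = np.cos(omega * x)
left_t0 = np.cos(omega * (x - 4))   # disparity 4 px at time t
left_t1 = np.cos(omega * (x - 3))   # disparity 3 px at time t+1

d0 = phase_based_disparity(left_t0, right, omega)
d1 = phase_based_disparity(left_t1, right, omega)
motion_in_depth = d1[128] - d0[128]  # temporal change of disparity, approx -1 px/frame
```

The temporal derivative of phase-based disparity is only one of several ways to obtain motion in depth; the feedforward collection-and-comparison scheme in the abstract operates directly on the raw filter outputs rather than on explicit phase values.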