We present a bioinspired model for detecting spatiotemporal features, built on artificial retina response models. Event-driven processing is implemented with four kinds of cells that encode image contrast and temporal information. We evaluate how the accuracy of motion processing depends on local contrast, using a multiscale, rank-order coding scheme to select the most important cues from the retinal inputs. We also develop several algorithmic variants that integrate the temporal feature results, yielding an improved bioinspired matching algorithm with high stability, low error, and low computational cost. Finally, we define a dynamic and versatile multimodal attention operator that drives the system to focus on different target features, such as motion, color, and texture.
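To illustrate the general idea behind rank-order coding, the sketch below encodes a vector of retinal filter responses as a firing order (stronger responses fire earlier) and decodes it with rank-dependent weights, so the earliest spikes carry the most information. This is a minimal, generic sketch and not the paper's exact scheme; the function names and the decay parameter are illustrative assumptions.

```python
import numpy as np

def rank_order_encode(responses):
    # Rank-order coding: stronger filter responses fire earlier,
    # so the code is simply the indices sorted by decreasing magnitude.
    return np.argsort(-np.abs(responses))

def rank_order_decode(order, n, decay=0.9):
    # Assign each input a weight that decays geometrically with its
    # firing rank: the earliest (most salient) spikes dominate.
    # 'decay' is an illustrative parameter, not taken from the paper.
    weights = np.zeros(n)
    weights[order] = decay ** np.arange(len(order))
    return weights

# Example: four retinal filter responses.
responses = np.array([0.1, -2.0, 0.5, 1.5])
order = rank_order_encode(responses)        # [1, 3, 2, 0]
weights = rank_order_decode(order, len(responses))
```

Because only the firing order (not the exact response values) is transmitted, the code is invariant to global contrast scaling, which is one reason rank-order schemes pair naturally with contrast-dependent retinal inputs.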