A new biologically inspired vision sensor made of approximately one hundred "eyes" is presented, suitable for real-time acquisition and processing of 3-D image sequences. The device, named the Panoptic camera, consists of a layered arrangement of roughly 100 classical CMOS imagers distributed over a hemisphere 13 cm in diameter. The Panoptic camera is a polydioptric system: each imager has its own view of the world and its own distinct focal point, which is a defining feature of the system. With appropriate signal processing, this enables the recording of 3-D information such as omnidirectional stereoscopy or depth estimation. The algorithms governing the reconstruction of the image seen by an omnidirectional observer located at any point inside the hemisphere are presented. A two-layer, FPGA-based hardware architecture has been developed that is capable of executing these algorithms and flexible enough to support additional image processing in real time. The details of this architecture, its internal blocks, the mapping of the algorithms onto those blocks, and the device calibration procedure are presented, together with imaging results.
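The omnidirectional reconstruction described above can be pictured as follows: for each viewing direction, the cameras whose fields of view cover that direction each contribute a sample, and the samples are blended into one pixel of the virtual observer's image. The sketch below illustrates this idea only; the function name, the angular-proximity weighting, and the assumption that each camera's sample along the direction is already available (rather than obtained by projecting into its image plane) are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def reconstruct_direction(omega, cam_dirs, cam_pixels, fov_deg=60.0):
    """Blend the samples of all cameras that see direction omega.

    omega      : 3-vector, viewing direction of the virtual observer
    cam_dirs   : (N, 3) unit optical-axis directions of the N imagers
    cam_pixels : (N,) intensity each imager records along omega
                 (hypothetical input: a real system would obtain it by
                 projecting omega into each camera's image plane)
    fov_deg    : assumed full field of view of each imager, in degrees
    """
    omega = np.asarray(omega, dtype=float)
    omega /= np.linalg.norm(omega)
    # Angle between omega and each camera's optical axis
    cos_ang = np.clip(cam_dirs @ omega, -1.0, 1.0)
    ang = np.degrees(np.arccos(cos_ang))
    # Keep only the cameras whose field of view contains omega
    mask = ang < fov_deg / 2.0
    if not mask.any():
        return None  # direction not covered by any imager
    # Weight contributions by angular proximity to the optical axis
    w = fov_deg / 2.0 - ang[mask]
    return float(np.average(cam_pixels[mask], weights=w))
```

Repeating this for every direction on a sampled sphere yields the full omnidirectional view; the weighting scheme (here, linear falloff toward the edge of each imager's field of view) is one plausible choice among several.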