We present a system and techniques for synthesizing views for three-dimensional video teleconferencing. Instead of performing complex 3D scene acquisition, we trade storage/hardware for computation, i.e., we use more cameras. While it is expensive to capture a scene directly from all possible viewpoints, we observed that participants' viewpoints usually remain at a constant height (eye level) during video teleconferencing. We can therefore restrict the possible viewpoints to a virtual plane without sacrificing much realism, which significantly reduces the number of cameras required. We demonstrate a real-time system that uses a linear array of cameras to perform light-field style rendering. The simplicity and robustness of light field rendering, combined with the naturally limited view volume in video teleconferencing, allow us to synthesize photo-realistic views per user request at interactive rates.
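The core idea of restricting viewpoints to a line of cameras can be illustrated with a minimal sketch. The function below is a hypothetical simplification (not the paper's actual pipeline): given images from a linear camera array, it synthesizes a novel view at position `x` by linearly blending the two nearest cameras, the basic 1D light-field interpolation step, ignoring depth correction and focal-plane warping.

```python
import numpy as np

def synthesize_view(x, cam_positions, images):
    """Blend the two cameras nearest to viewpoint x along the array.

    cam_positions: sorted 1D array of camera x-coordinates.
    images: list of same-shaped float arrays, one per camera.
    This is a simplified stand-in for light-field interpolation;
    a real system would also reproject through a focal plane.
    """
    idx = np.searchsorted(cam_positions, x)
    # Clamp viewpoints outside the array to the end cameras.
    if idx == 0:
        return images[0].copy()
    if idx >= len(cam_positions):
        return images[-1].copy()
    x0, x1 = cam_positions[idx - 1], cam_positions[idx]
    w = (x - x0) / (x1 - x0)  # blend weight toward the right camera
    return (1.0 - w) * images[idx - 1] + w * images[idx]
```

Because the blend uses only the two nearest cameras, rendering cost is independent of the array size, which is what makes the approach attractive for an interactive teleconferencing setting.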