Panoramic stereo video textures
ICCV '11 Proceedings of the 2011 International Conference on Computer Vision
Most methods for synthesizing panoramas assume that the scene is static. A few methods have been proposed for synthesizing stereo or motion panoramas, but there has been little attempt to synthesize panoramas that have both stereo and motion. Synthesizing stereo motion panoramas poses several challenges: ensuring temporal synchronization between the left and right views in each frame, avoiding spatial distortion of moving objects, and looping the video continuously in time. We have recently developed a stereo motion panorama method that tries to address some of these challenges. The method blends space-time regions of a video XYT volume, such that the blending regions are distinct and translate over time. This article presents a perception experiment that evaluates one aspect of the method, namely how well observers can detect such blending regions. We measure detection time thresholds for different blending widths, for different scenes, and for monoscopic versus stereoscopic videos. Our results suggest that blending may be more effective in image regions that do not contain coherent moving objects that can be tracked over time. For example, we found that moving water and partly transparent smoke were blended more effectively than swaying branches. We also found that performance in the task was roughly the same for mono versus stereo videos.
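The core idea of blending space-time regions of a video XYT volume along seams that translate over time can be illustrated with a minimal sketch. The function below is a hypothetical simplification for illustration only, not the authors' implementation: it blends two video volumes across a vertical seam that drifts horizontally frame by frame, using a linear ramp whose width corresponds to the "blending width" varied in the experiment.

```python
import numpy as np

def blend_xyt(left_vol, right_vol, seam_x0, speed, width):
    """Blend two XYT video volumes of shape (T, H, W) along a
    vertical seam that translates horizontally over time.
    Illustrative sketch only -- parameter names and the linear
    ramp are assumptions, not the published method.

    seam_x0: seam column at t = 0
    speed:   seam translation in pixels per frame
    width:   blending width in pixels
    """
    T, H, W = left_vol.shape
    out = np.empty((T, H, W), dtype=float)
    x = np.arange(W)
    for t in range(T):
        seam = (seam_x0 + speed * t) % W  # seam position drifts each frame
        # Linear ramp: weight 1 well left of the seam, 0 well right of it,
        # transitioning over `width` pixels centered on the seam.
        alpha = np.clip((seam + width / 2 - x) / width, 0.0, 1.0)
        out[t] = alpha * left_vol[t] + (1.0 - alpha) * right_vol[t]
    return out

# Toy example: two constant volumes, so blended values stay in [0, 1].
T, H, W = 4, 8, 16
a = np.zeros((T, H, W))
b = np.ones((T, H, W))
mix = blend_xyt(a, b, seam_x0=8, speed=2, width=4)
```

A wider `width` spreads the transition over more pixels, which (as the experiment measures) can make the blending region harder for observers to detect, particularly in regions without trackable moving objects.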