Artificial Intelligence - Special volume on computer vision
Video is widespread as an entertainment and information source in consumer, military, and broadcast television applications. Typically, however, the video is simply presented to the viewer with only minimal manipulation. One example is chroma-keying (often used in news and weather broadcasts), in which pixels of a specific color are detected and used to switch between video sources. In the past few years, the advent of digital video and increases in computational power have made more complex manipulation feasible. In this paper we present highlights of our work on annotating video by aligning features extracted from the video with a reference set of features. Video insertion and annotation require manipulating the video stream to composite synthetic imagery and information with real video imagery. The manipulation may involve only the 2D image space or the full 3D scene space. The key problems to be solved are: (i) indexing and matching to determine the location of insertion, (ii) stable and jitter-free tracking to compute the time variation of the camera, and (iii) seamlessly blended insertion for an authentic viewing experience. We illustrate our approach to these problems with three example scenarios: (i) 2D synthetic pattern insertion in live video, (ii) annotation of aerial imagery through geo-registration with stored reference imagery and annotations, and (iii) 3D object insertion into video of a 3D scene.
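As a concrete illustration of the chroma-keying technique mentioned above, the sketch below replaces pixels near a key color with pixels from a second source. This is a minimal illustration of the general idea, not the paper's implementation; the function name `chroma_key` and the default `key_color` and `threshold` values are assumptions chosen for the example.

```python
import numpy as np

def chroma_key(frame, insert, key_color=(0, 255, 0), threshold=80.0):
    """Composite `insert` over `frame` wherever `frame` is close to key_color.

    frame, insert: HxWx3 uint8 arrays of the same shape.
    key_color, threshold: illustrative defaults (pure green, loose tolerance).
    """
    # Per-pixel Euclidean distance from the key color.
    diff = frame.astype(np.float64) - np.array(key_color, dtype=np.float64)
    dist = np.linalg.norm(diff, axis=-1)
    # Pixels within `threshold` of the key color are switched to the insert.
    mask = dist < threshold
    out = frame.copy()
    out[mask] = insert[mask]
    return out
```

In a broadcast setting the same mask would typically be softened (feathered) near its boundary so that the switch between sources is not visible as a hard edge.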
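The alignment of video features to reference features can be made concrete with a small example: given matched 2D feature coordinates, a least-squares fit recovers a 2D transform that maps one set onto the other. The sketch below fits an affine transform as a stand-in for the general alignment step; the paper's actual alignment models and estimation method are not specified here, and the function name `fit_affine` is an assumption for illustration.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.

    src, dst: Nx2 arrays of matched feature coordinates (N >= 3).
    Returns a 2x3 matrix A with dst ~= src @ A[:, :2].T + A[:, 2].
    """
    n = src.shape[0]
    # Augment source coordinates with a column of ones so the translation
    # is estimated jointly with the linear part.
    X = np.hstack([src, np.ones((n, 1))])       # N x 3 design matrix
    B, *_ = np.linalg.lstsq(X, dst, rcond=None) # 3 x 2 solution
    return B.T                                  # 2 x 3 affine matrix
```

In practice the feature matches would contain outliers, so such a fit would be wrapped in a robust estimator rather than applied to the raw correspondences directly.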