Camera motion information facilitates the inference of higher-level semantic descriptions in many video applications, e.g., video retrieval. However, an efficient and accurate method for annotating videos with camera motion information remains elusive. In our recent work we have investigated fusing captured video with a continuous stream of sensor metadata. For these so-called sensor-rich videos we present a system, called Motch, which precisely partitions a video document into subshots, automatically characterizes the camera motions, and supports subshot browsing through an interactive, map-based interface. Moreover, the system computes and presents motion-type statistics for each video in real time and renders the different subshots distinctively on the map, synchronized with video playback.
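The abstract does not detail how subshot partitioning from sensor metadata might work. The following is a minimal sketch of one plausible approach, not the authors' actual algorithm: given a stream of timestamped compass headings, label each sample interval as "pan" or "static" by thresholding the angular velocity, then merge consecutive intervals with the same label into subshots. The threshold value and the `Subshot` record are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Subshot:
    start_t: float   # seconds
    end_t: float
    motion: str      # "pan" or "static"

def angular_diff(a: float, b: float) -> float:
    """Smallest signed difference between two headings, in degrees."""
    return (b - a + 180.0) % 360.0 - 180.0

def segment_subshots(samples, pan_rate_deg_s=5.0):
    """Partition a (timestamp, heading) stream into motion-labelled subshots.

    An interval is labelled "pan" when its absolute angular velocity exceeds
    pan_rate_deg_s (an assumed threshold); consecutive intervals with the
    same label are merged into one subshot.
    """
    subshots = []
    for (t0, h0), (t1, h1) in zip(samples, samples[1:]):
        rate = abs(angular_diff(h0, h1)) / max(t1 - t0, 1e-9)
        label = "pan" if rate > pan_rate_deg_s else "static"
        if subshots and subshots[-1].motion == label:
            subshots[-1].end_t = t1   # extend the current subshot
        else:
            subshots.append(Subshot(t0, t1, label))
    return subshots

# Example: the camera holds still, pans right, then holds still again.
stream = [(0.0, 90.0), (1.0, 90.5), (2.0, 91.0),
          (3.0, 110.0), (4.0, 130.0),
          (5.0, 130.5), (6.0, 131.0)]
for s in segment_subshots(stream):
    print(s.motion, s.start_t, s.end_t)
# → static 0.0 2.0
#   pan 2.0 4.0
#   static 4.0 6.0
```

Per-video motion-type statistics of the kind the system reports could then be derived directly from this output, e.g., by summing the durations of all subshots sharing a label.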