MediaQ is a novel online media management system for collecting, organizing, sharing, and searching mobile multimedia content using automatically tagged geospatial metadata. User-generated videos can be uploaded to MediaQ from users' smartphones (iPhone and Android) and displayed accurately on a map interface according to their automatically sensed geospatial and other metadata. The MediaQ system provides the following distinct features. First, individual frames of videos (or any meaningful video segments) are automatically annotated with objective metadata that capture four dimensions of the real world: the capture time (when), the camera location and viewing direction (where), several keywords (what), and people (who). We term this W4 metadata; it is obtained using camera sensors together with geospatial and computer vision techniques. Second, a new approach to collecting multimedia data from the public has been implemented using spatial crowdsourcing, which allows media content to be collected in a coordinated manner for a specific purpose. Lastly, flexible video search features are implemented using W4 metadata, such as directional queries for selecting multimedia with a specific viewing direction. This paper presents the design of a comprehensive mobile multimedia management system, MediaQ, and shares our experience in its implementation. Our extensive real-world experimental case studies demonstrate that MediaQ can be an effective and comprehensive solution for various mobile multimedia applications.
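To make the W4-metadata and directional-query ideas concrete, the following is a minimal illustrative sketch (not MediaQ's actual implementation; the record fields and function names are hypothetical). It models a per-frame W4 record and filters frames whose compass viewing direction lies within a tolerance of a query direction, handling the 360° wrap-around:

```python
import math
from dataclasses import dataclass, field

@dataclass
class W4Metadata:
    """Hypothetical per-frame W4 record: when / where / what / who."""
    timestamp: float                 # when: capture time (UNIX seconds)
    lat: float                       # where: camera latitude
    lon: float                       # where: camera longitude
    heading_deg: float               # where: compass viewing direction, 0-360
    keywords: list = field(default_factory=list)  # what: auto-generated tags
    people: list = field(default_factory=list)    # who: detected persons

def angular_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two compass headings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def directional_query(frames, query_heading: float, tolerance_deg: float):
    """Return frames whose viewing direction is within +/- tolerance of the query."""
    return [f for f in frames
            if angular_diff(f.heading_deg, query_heading) <= tolerance_deg]
```

For example, a query for frames looking roughly north (heading 0°, tolerance 20°) would match a frame with heading 355° but not one with heading 180°, because the angular difference is computed modulo 360°.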