This paper describes a video annotation tool based on a new, flexible model that provides several perspectives on the same video content. The model was designed to support multiple views of the same video data, so that users with different requirements can work through the most appropriate interface. These views, called video-lenses, each highlight a specific aspect of the video content being annotated. Annotations are made through a timeline-based interface with multiple tracks, where each track corresponds to a given video-lens. The MPEG-7 standard is used to store and exchange the annotation data. The annotation tool (VAnnotator) is being developed within Vizard, an ambitious project that aims to define a new paradigm for video navigation, annotation, editing and retrieval. The Vizard project involves users from both the production/archiving area and the consumer-electronics area, who help define and validate the annotation requirements and functionality.
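To make the model concrete, the following is a minimal sketch of the track/lens structure described above: each video-lens owns a timeline track of interval annotations, and the set of tracks is serialized to an MPEG-7-flavoured XML skeleton. All names (`VideoLens` string labels, `Track`, `Annotation`, element names) are illustrative assumptions, not taken from the VAnnotator implementation, and the XML produced is a simplified stand-in rather than a conformant MPEG-7 document (real MPEG-7 expresses segments with `VideoSegment` description schemes and `MediaTime` datatypes).

```python
from dataclasses import dataclass, field
from xml.etree import ElementTree as ET

# Hypothetical data model: one Track per video-lens, each holding
# interval annotations on the media timeline.

@dataclass
class Annotation:
    start_s: float   # media time (seconds) where the annotated interval begins
    end_s: float     # media time (seconds) where it ends
    label: str       # free-text or controlled-vocabulary description

@dataclass
class Track:
    lens: str                                    # e.g. "camera-motion", "content"
    annotations: list = field(default_factory=list)

    def add(self, start_s: float, end_s: float, label: str) -> None:
        self.annotations.append(Annotation(start_s, end_s, label))

def to_mpeg7_like_xml(tracks: list) -> str:
    """Serialize all tracks to a simplified MPEG-7-style XML string.

    Note: element names approximate MPEG-7 vocabulary but this is
    NOT a schema-valid MPEG-7 document.
    """
    root = ET.Element("Mpeg7")
    desc = ET.SubElement(root, "Description")
    for track in tracks:
        for ann in track.annotations:
            seg = ET.SubElement(desc, "VideoSegment", lens=track.lens)
            ET.SubElement(seg, "TextAnnotation").text = ann.label
            ET.SubElement(seg, "MediaTimePoint").text = f"{ann.start_s:.1f}"
            ET.SubElement(seg, "MediaDuration").text = f"{ann.end_s - ann.start_s:.1f}"
    return ET.tostring(root, encoding="unicode")

# Two lenses over the same video, each with its own timeline track.
motion = Track("camera-motion")
motion.add(0.0, 4.5, "pan left")
content = Track("content")
content.add(2.0, 10.0, "interview, two speakers")
xml = to_mpeg7_like_xml([motion, content])
```

Keeping tracks independent per lens mirrors the multi-view idea: an archivist's interface might expose only the "content" track, while a production interface overlays all tracks on one timeline.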