This paper proposes an integrated framework for analyzing human actions in video streams. Unlike most current approaches, which rely solely on automatic spatiotemporal analysis of sequences, the proposed method introduces an implicit user-in-the-loop concept for dynamically mining semantics and annotating video streams. This work sets a new and ambitious goal: to recognize, model, and properly exploit the "average user's" selections, preferences, and perception in order to extract content semantics dynamically. The proposed approach is expected to add significant value to the hundreds of billions of non-annotated or inadequately annotated video streams existing on the Web, on file servers, in databases, and elsewhere. Furthermore, expert annotators can gain important knowledge about user preferences, selections, styles of searching, and perception.
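The user-in-the-loop idea can be illustrated with a toy sketch: treat the query terms behind each user's video selection as implicit semantic labels, and accumulate them as evidence for annotations. This is only an assumed simplification for illustration; the class and method names below (`ImplicitFeedbackAnnotator`, `record_selection`) are hypothetical and do not come from the paper, whose actual model is not detailed here.

```python
from collections import defaultdict


class ImplicitFeedbackAnnotator:
    """Toy sketch of implicit user-in-the-loop annotation (hypothetical API)."""

    def __init__(self):
        # video_id -> {tag: accumulated evidence weight}
        self.weights = defaultdict(lambda: defaultdict(float))

    def record_selection(self, query_terms, video_id, boost=1.0):
        # When a user selects a video returned for a query, treat each
        # query term as weak implicit evidence of that video's semantics.
        for term in query_terms:
            self.weights[video_id][term] += boost

    def annotations(self, video_id, min_weight=1.0):
        # Keep only tags whose accumulated evidence passes a threshold,
        # filtering out one-off noise as the threshold is raised.
        return sorted(
            tag for tag, w in self.weights[video_id].items() if w >= min_weight
        )


# Two simulated user sessions selecting the same video for related queries.
ann = ImplicitFeedbackAnnotator()
ann.record_selection(["goal", "soccer"], "vid42")
ann.record_selection(["soccer", "header"], "vid42")
print(ann.annotations("vid42"))                  # all observed tags
print(ann.annotations("vid42", min_weight=2.0))  # only repeatedly confirmed tags
```

Raising `min_weight` trades recall for precision: a tag confirmed by many independent "average user" selections is more trustworthy than one seen once, which mirrors the paper's aim of mining consensus semantics from aggregate user behavior.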