Automatic detection of generic concepts such as indoor, outdoor, and faces in image and video data has reached a satisfactory level of performance on high-quality content from broadcast TV or movies. However, it remains a challenge to apply such detection to interpreting the high-level semantics of events in visual lifelogs captured by wearable cameras, because the poorer image quality and the varied activities of the wearer make these events difficult to categorise automatically. In this paper, we propose an interestingness-based semantic aggregation and representation algorithm to tackle the problem of event management and representation in visual lifelogging. Semantic concept interestingness is calculated by fusing image-level concept detections, which are then exploited to select a representation for each semantic event, correlated with the various event topics. Experimental results show the efficacy of our algorithm both in fusing semantics at the event level and in selecting representations for event management in visual lifelogging.
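The idea of fusing image-level concept detections into an event-level profile, then choosing a representative image, can be sketched as follows. This is a minimal illustration under assumed simplifications, not the paper's actual algorithm: it assumes per-image concept scores are fused by simple averaging, and the representative keyframe is the image whose score vector is closest (by squared error) to the fused profile. All function names and data are hypothetical.

```python
from statistics import mean

def fuse_event_concepts(image_scores):
    """Fuse per-image concept scores into one event-level profile by
    averaging each concept across the event's images (an assumed fusion
    scheme for illustration; the paper's interestingness-based fusion
    may differ)."""
    concepts = image_scores[0].keys()
    return {c: mean(img[c] for img in image_scores) for c in concepts}

def select_representative(image_scores, event_profile):
    """Pick the index of the image whose concept scores are closest
    (squared error) to the fused event-level profile."""
    def dist(img):
        return sum((img[c] - event_profile[c]) ** 2 for c in event_profile)
    return min(range(len(image_scores)), key=lambda i: dist(image_scores[i]))

# Hypothetical example: three images from one lifelog event,
# with detector scores for two concepts.
event = [
    {"indoor": 0.9, "face": 0.2},
    {"indoor": 0.7, "face": 0.8},
    {"indoor": 0.8, "face": 0.5},
]
profile = fuse_event_concepts(event)
keyframe = select_representative(event, profile)  # image 2 matches the profile best
```

The averaging step stands in for whatever concept fusion is used; the point is that event-level semantics are derived from image-level detections, and the representation is then chosen against that event-level summary rather than from any single image in isolation.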