Much of the current work on determining multimedia semantics from multimedia artifacts is based on using either context or content. When leveraged thoroughly, each can independently provide the content descriptions used to build content-based applications. However, there are few cases where multimedia semantics are determined through an integrated analysis of content and context. In this keynote talk we present one such example system, in which we use an integrated combination of the two to automatically structure large collections of images taken by a SenseCam, a device from Microsoft Research that passively records a person's daily activities. This paper describes the post-processing we perform on SenseCam images in order to present a structured, organised visualisation of the highlights of each of the wearer's days.
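One way to picture the kind of content-plus-context integration described above is event segmentation: a day's stream of SenseCam images is split into "events" whenever either the capture-time gap (context) or the visual dissimilarity between consecutive images (content) becomes large. The sketch below is purely illustrative; the `Capture` type, `segment_events` function, and all thresholds are hypothetical and are not the paper's actual algorithm.

```python
# Hypothetical sketch of content+context event segmentation for a day's
# SenseCam captures. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Capture:
    timestamp: float        # context: seconds since midnight
    features: list[float]   # content: e.g. a colour-histogram feature vector


def feature_distance(a: list[float], b: list[float]) -> float:
    """L1 distance between two feature vectors (a simple content measure)."""
    return sum(abs(x - y) for x, y in zip(a, b))


def segment_events(captures: list[Capture],
                   time_gap: float = 300.0,
                   content_gap: float = 0.5) -> list[list[Capture]]:
    """Start a new event when the time gap between consecutive captures
    (context) OR their content distance exceeds its threshold."""
    if not captures:
        return []
    events, current = [], [captures[0]]
    for prev, cur in zip(captures, captures[1:]):
        if (cur.timestamp - prev.timestamp > time_gap
                or feature_distance(prev.features, cur.features) > content_gap):
            events.append(current)
            current = []
        current.append(cur)
    events.append(current)
    return events
```

A richer system along these lines would fuse further context sources (light, motion, and temperature sensors, location) with stronger content analysis, but the OR-of-thresholds boundary test above conveys the basic idea of letting either modality trigger a structural break.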