Video Handling with Music and Speech Detection
IEEE MultiMedia
Recognizing User Context via Wearable Sensors
ISWC '00 Proceedings of the 4th IEEE International Symposium on Wearable Computers
StartleCam: A Cybernetic Wearable Camera
ISWC '98 Proceedings of the 2nd IEEE International Symposium on Wearable Computers
Context-based video retrieval system for the life-log applications
MIR '03 Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval
Wearable imaging system for summarizing personal experiences
ICME '03 Proceedings of the 2003 International Conference on Multimedia and Expo - Volume 2
Audio-visual intent-to-speak detection for human-computer interaction
ICASSP '00 Proceedings of the 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume 04
Content-based video parsing and indexing based on audio-visual interaction
IEEE Transactions on Circuits and Systems for Video Technology
Challenges and Opportunities of Context-Aware Information Access
UDM '05 Proceedings of the International Workshop on Ubiquitous Data Management
Continuous archival and analysis of user data in virtual and immersive game environments
CARPE '05 Proceedings of the 2nd ACM workshop on Continuous archival and retrieval of personal experiences
Practical experience recording and indexing of Life Log video
CARPE '05 Proceedings of the 2nd ACM workshop on Continuous archival and retrieval of personal experiences
SEVA: sensor-enhanced video annotation
Proceedings of the 13th annual ACM international conference on Multimedia
PERSONE: personalized experience recording and searching on networked environment
Proceedings of the 3rd ACM workshop on Continuous archival and retrieval of personal experiences
Recognizing context for annotating a live life recording
Personal and Ubiquitous Computing - Memory and Sharing of Experiences
Toward a Common Event Model for Multimedia Applications
IEEE MultiMedia
Prototyping Applications to Document Human Experiences
IEEE Pervasive Computing
Challenges in interface and interaction design for context-aware augmented memory systems
Proceedings of the 8th ACM SIGCHI New Zealand chapter's international conference on Computer-human interaction: design centered HCI
Video summarisation: A conceptual framework and survey of the state of the art
Journal of Visual Communication and Image Representation
Ubigraphy: a third-person viewpoint life log
CHI '08 Extended Abstracts on Human Factors in Computing Systems
A Practical Activity Capture Framework for Personal, Lifetime User Modeling
UM '07 Proceedings of the 11th international conference on User Modeling
Feasibility of Personalized Affective Video Summaries
Affect and Emotion in Human-Computer Interaction
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Ubiquitous Home: Retrieval of Experiences in a Home Environment
IEICE - Transactions on Information and Systems
SEVA: Sensor-enhanced video annotation
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)
Ubiquitous Computing for Capture and Access
Foundations and Trends in Human-Computer Interaction
A method for analyzing work tasks and status by video-and-PC-monitoring system
AMC '09 Proceedings of the 2009 workshop on Ambient media computing
MyMemex: a web service-based personal memex system
ISI'09 Proceedings of the 2009 IEEE international conference on Intelligence and security informatics
Personalized life log media system in ubiquitous environment
ICUCT'06 Proceedings of the 1st international conference on Ubiquitous convergence technology
Extracting meaningful contexts from mobile life log
IDEAL'07 Proceedings of the 8th international conference on Intelligent data engineering and automated learning
ELVIS: Entertainment-led video summaries
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)
A language-based approach to indexing heterogeneous multimedia lifelog
International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction
Personalization in multimedia retrieval: A survey
Multimedia Tools and Applications
Exploiting mobile contexts for Petri-net to generate a story in cartoons
Applied Intelligence
The emotional economy for the augmented human
Proceedings of the 2nd Augmented Human International Conference
"Life portal": an information access scheme based on life logs
HCII'11 Proceedings of the 1st international conference on Human interface and the management of information: interacting with information - Volume Part II
SharedLife: towards selective sharing of augmented personal memories
Reasoning, Action and Interaction in AI Theories and Systems
Multi-video summary and skim generation of sensor-rich videos in geo-space
Proceedings of the 3rd Multimedia Systems Conference
Restrain from pervasive logging employing geo-temporal policies
Proceedings of the 10th Asia Pacific Conference on Computer Human Interaction
Efficient storage and retrieval of geo-referenced video from moving sensors
Proceedings of the 21st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems
In this paper, we present the continuous capture of our life log with various sensors and additional data, and propose effective retrieval methods that use this context and content. Our life log system records video, audio, acceleration, gyro, and GPS sensor data, together with annotations, documents, web pages, and emails. In our previous studies [8], [9], we presented a retrieval methodology that relies mainly on context information derived from sensor data. In this paper, we extend that methodology with two additional functions: (1) spatio-temporal sampling, which extracts key frames for summarization; and (2) conversation scene detection. In the first, key frames for summarization are extracted from time and location (GPS) data. Because our life log captures dense location data, we can also exploit its derivatives, i.e., the speed and acceleration of the wearer's movement, to select the summarizing key frames. We also introduce content analysis for conversation scene detection. Our previous work investigated context-based retrieval, in contrast to the majority of image/video retrieval studies, which focus on content-based retrieval. Here, we analyze visual and audio content to detect conversation scenes, which serve as important tags for retrieving life log data. We describe the present system and the additional functions, together with preliminary results for the latter.
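The spatio-temporal sampling described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes GPS fixes aligned with video frame identifiers, derives speed from successive positions, and selects a key frame when the wearer stops or after a fixed distance of movement. All names and thresholds (`speed_stop`, `min_gap_m`) are hypothetical.

```python
# Hypothetical sketch of spatio-temporal key-frame sampling from a GPS track.
# Assumes each GPS fix is paired with the id of the video frame captured at
# that instant; thresholds are illustrative, not taken from the paper.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def sample_key_frames(track, speed_stop=0.5, min_gap_m=50.0):
    """track: list of (t_seconds, lat, lon, frame_id), sorted by time.
    Emit a key frame when the wearer's speed drops below speed_stop m/s
    (they stopped somewhere), or every min_gap_m metres while moving."""
    keys = []
    dist_since_key = 0.0
    stopped = False
    for prev, cur in zip(track, track[1:]):
        dt = cur[0] - prev[0]
        d = haversine_m(prev[1], prev[2], cur[1], cur[2])
        speed = d / dt if dt > 0 else 0.0
        dist_since_key += d
        if speed < speed_stop and not stopped:   # wearer just came to a stop
            keys.append(cur[3])
            stopped = True
            dist_since_key = 0.0
        elif speed >= speed_stop:
            stopped = False
            if dist_since_key >= min_gap_m:      # spatial sampling while moving
                keys.append(cur[3])
                dist_since_key = 0.0
    return keys
```

Sampling by distance travelled rather than by elapsed time is what makes the summary spatio-temporal: long stationary stretches contribute few frames, while movement through new locations contributes more.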