The automatic analysis of real-life, long-term behavior and dynamics of individuals and groups from mobile sensor data constitutes an emerging and challenging domain. We present a framework to classify people's daily routines (defined by day type and by group affiliation type) from real-life data collected with mobile phones, which includes physical location information (derived from cell tower connectivity) and social context (given by person-proximity information derived from Bluetooth). We propose and compare single- and multi-modal routine representations at multiple time scales, each capable of highlighting different features of the data, to determine which best characterizes the underlying structure of the daily routines. Using a massive data set of 87,000+ hours spanning four months of the life of 30 university students, we show that the integration of location and social context and the use of multiple time scales in our method are effective, producing accuracies of over 80% for the two daily routine classification tasks investigated, with significant performance gains over the single-modal cues.
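To make the idea of a multi-modal, multi-time-scale routine representation concrete, the sketch below builds one feature vector for a single day from hourly observations. All names, location labels, and window sizes are illustrative assumptions, not the paper's actual representation: each hour carries a coarse location label (standing in for cell-tower-derived location) and a count of proximate people (standing in for Bluetooth proximity), and features are computed over windows of several sizes and concatenated.

```python
from collections import Counter

# Hypothetical day of data: for each of 24 hours, a coarse location label
# (stand-in for cell-tower-derived location) and a count of proximate
# people (stand-in for Bluetooth proximity). All values are invented.
DAY = [("home", 0)] * 8 + [("campus", 3)] * 8 + [("gym", 1)] * 2 + [("home", 0)] * 6

LOCATIONS = ["home", "campus", "gym", "other"]  # assumed label vocabulary

def location_histogram(hours):
    """Fraction of the window spent at each known location."""
    counts = Counter(loc for loc, _ in hours)
    return [counts.get(loc, 0) / len(hours) for loc in LOCATIONS]

def social_feature(hours):
    """Mean number of proximate people over the window."""
    return sum(n for _, n in hours) / len(hours)

def multiscale_features(day, block_hours=(24, 12, 6)):
    """Concatenate location and social features computed over windows of
    several sizes, yielding one multi-modal, multi-scale vector per day."""
    feats = []
    for block in block_hours:
        for start in range(0, 24, block):
            window = day[start:start + block]
            feats.extend(location_histogram(window))
            feats.append(social_feature(window))
    return feats

vec = multiscale_features(DAY)
# (4 location bins + 1 social feature) x (1 + 2 + 4 windows) = 35 features
print(len(vec))
```

A vector of this form could then be fed to any standard classifier for the day-type or group-affiliation labels; coarser windows capture the overall shape of the day while finer windows preserve when within the day activities occur.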