The connector service-predicting availability in mobile contexts
MLMI'06 Proceedings of the Third international conference on Machine Learning for Multimodal Interaction
Context-aware computing systems are characterized by their ability to incorporate user state information into their decision logic. One example application is the smart mobile telephone: ideally, it should consider both social factors (i.e., the known relationship between contactor and contactee) and environmental factors (i.e., the contactee's current locale and activity) when deciding how to handle an incoming request for communication. Toward providing this kind of user state information and improving a mobile phone's ability to handle calls intelligently, we present work on inferring environmental factors from sensory data and using this information to predict user interruptibility. Specifically, we learn the structure and parameters of a user state model from continuous ambient audio and from periodic still images, and attempt to associate the learned states with user-reported interruptibility levels. We report experimental results using this technique on real data, and show how such an approach can allow adaptation to specific user preferences.
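The core idea — learning unsupervised states from ambient sensor features, then labeling each state with the user's self-reported interruptibility — can be illustrated with a minimal sketch. This is not the paper's actual model (which learns structure and parameters from audio and images); it is a toy stand-in that clusters a one-dimensional "audio energy" feature with k-means and maps each cluster to the majority-reported interruptibility label. All feature values, label names, and the choice of two states are illustrative assumptions.

```python
# Hedged sketch: cluster ambient-audio features into "user states",
# then tag each state with the majority user-reported interruptibility.
# Features, labels, and state count are illustrative assumptions, not
# the paper's actual model.
from collections import Counter


def kmeans_1d(xs, k, iters=20):
    # Crude 1-D k-means; centroids seeded evenly from the sorted data.
    sorted_xs = sorted(xs)
    cents = [sorted_xs[int((i + 0.5) * len(xs) / k)] for i in range(k)]
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: abs(x - cents[c])) for x in xs]
        for c in range(k):
            members = [x for x, a in zip(xs, assign) if a == c]
            if members:
                cents[c] = sum(members) / len(members)
    return cents, assign


def state_interruptibility(features, reports, k=2):
    """Map each learned state to the majority-reported interruptibility."""
    _, assign = kmeans_1d(features, k)
    mapping = {}
    for state in range(k):
        votes = [r for a, r in zip(assign, reports) if a == state]
        mapping[state] = Counter(votes).most_common(1)[0][0] if votes else None
    return mapping, assign


# Toy trace: low audio energy ~ quiet office, high ~ meeting (assumed).
energy = [0.1, 0.2, 0.15, 0.9, 0.8, 0.85, 0.12, 0.95]
reports = (["interruptible"] * 3 + ["busy"] * 3
           + ["interruptible", "busy"])
mapping, assign = state_interruptibility(energy, reports, k=2)
```

Once the mapping is learned, an incoming call can be handled by assigning the current audio frame to its nearest state and looking up that state's interruptibility label — which is also where per-user adaptation enters, since the labels come from each user's own reports.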