The trend toward pervasive computing necessitates finding and implementing appropriate ways for users to interact with devices. We believe the future of interaction with pervasive devices lies in attentive user interfaces: systems that pay attention to what users do so that they can attend to what users need. Such systems track user behavior, model user interests, and anticipate user desires and actions. Beyond developing the technologies, applications, and scenarios that support attentive user interfaces, there remains the problem of evaluating the utility of the attentive approach. With this last point in mind, we observed users in an "office of the future", where information is accessed on displays via verbal commands. Based on users' verbal data and eye-gaze patterns, our results suggest that people naturally address individual devices rather than the office as a whole.
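The core attentive-interface idea described above — tracking where users look and routing their verbal commands to the attended device rather than broadcasting to the whole office — can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, data shape, and dwell threshold are assumptions made purely for illustration.

```python
from collections import defaultdict

# Assumed threshold: how long a device must be fixated before we treat
# it as the addressee of a verbal command (hypothetical value).
DWELL_THRESHOLD_S = 1.0

def attended_device(gaze_samples):
    """Return the device the user has looked at longest, if its total
    dwell time exceeds the threshold; otherwise None.

    gaze_samples: iterable of (device_id, fixation_duration_seconds).
    """
    dwell = defaultdict(float)
    for device_id, duration in gaze_samples:
        dwell[device_id] += duration
    if not dwell:
        return None
    device, total = max(dwell.items(), key=lambda kv: kv[1])
    return device if total >= DWELL_THRESHOLD_S else None

# A verbal command issued now would be routed to the attended display
# rather than to the office as a whole.
samples = [("display_left", 0.4), ("display_right", 0.9), ("display_right", 0.6)]
print(attended_device(samples))  # -> display_right
```

In a real attentive system the gaze model would of course be probabilistic and continuous rather than a simple dwell accumulator, but the sketch captures the routing decision the study examines: individual devices, not the room, are the natural addressees.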