This paper shows how a software toolkit enables graphic designers to build camera-based interactive environments quickly, without prior experience in user-interface design or machine vision. The Attentive Interaction Design Toolkit, a vision-based input toolkit, analyzes the faces found in an image stream and reports facial expression, body motion, and attentive activity. The results are written to a text file that humans and programs alike can read easily. A four-day workshop demonstrated that Flash-savvy architecture students could construct interactive spaces (e.g. Eat-Eat-Eat, TaiKer-KTV, and ScreamMarket) driven by the body and head motions of a group of people.
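The pipeline the abstract describes (vision analysis feeding a human-readable text file that a Flash client polls) can be sketched as follows. This is a minimal illustration, not the toolkit's actual code: the detector is stubbed out, and the key=value file format is an assumption, since the source does not specify the toolkit's protocol. A real implementation would replace `detect_faces` with calls to a computer-vision library such as OpenCV.

```python
import time

def detect_faces(frame):
    """Stub for per-frame vision analysis.
    The real toolkit runs face/motion detection here; we return
    hard-coded results to illustrate the data flow only."""
    return [
        {"id": 0, "x": 120, "y": 80, "motion": "head-turn-left"},
        {"id": 1, "x": 340, "y": 95, "motion": "still"},
    ]

def write_attention_data(faces, path="attention.txt"):
    """Publish one line per detected face in a simple key=value
    format (hypothetical) that both a person and a Flash or other
    client polling the file can parse."""
    lines = [f"t={time.time():.2f} n={len(faces)}"]
    for f in faces:
        lines.append(
            f"face id={f['id']} x={f['x']} y={f['y']} motion={f['motion']}"
        )
    text = "\n".join(lines) + "\n"
    with open(path, "w") as fh:
        fh.write(text)
    return text

# One iteration of the loop: analyze a frame, then publish the results.
print(write_attention_data(detect_faces(None)))
```

Decoupling the vision code from the client through a plain text file is what lets designers with no machine-vision background consume the analysis from whatever environment they already know.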