Emerging technologies provide platforms for new devices, applications, and user interfaces. These technologies have shown potential in early research, but their true utility and measures of success lie in their ability to reflect and enhance the capabilities of the people who use them. My research seeks to address this problem by thoroughly examining and understanding humans, hardware, and software in order to create tools that enable users in new ways and meet real needs. In this talk, I will discuss both sides of the coin: the potential and the limitations of emerging input technologies that require fundamentally different user interface designs to realize their full utility. With particular focus on the area of multi-touch and surface computing, I will describe how leveraging and mirroring human motor, cognitive, and social abilities and needs can produce interfaces that are both learnable and capable of high-bandwidth communication between the user and the computer. Further, such leverage and reflection also ensures that the resulting tools solve real problems and enable their users in ways that a traditional mouse-based user interface does not.