Communications of the ACM, special issue on computer augmented environments: back to the real world.
Passive real-world interface props for neurosurgical visualization. CHI '94 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Bricks: laying the foundations for graspable user interfaces. CHI '95 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Tilting operations for small screen interfaces. Proceedings of the 9th Annual ACM Symposium on User Interface Software and Technology.
Tangible bits: towards seamless interfaces between people, bits and atoms. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems.
Squeeze me, hold me, tilt me! An exploration of manipulative user interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Embodied User Interfaces: Towards Invisible User Interfaces. Proceedings of the IFIP TC2/TC13 WG2.7/WG13.4 Seventh Working Conference on Engineering for Human-Computer Interaction.
There has been widespread interest in augmented reality and physically based user interfaces [e.g., 1, 2, 3, 4, 5, 6, 7, 8] over the past six years. A goal of these efforts is to seamlessly blend the affordances and strengths of physically manipulable objects with virtual environments or artifacts, thereby leveraging the particular strengths of each. This approach lets us break free of indirectly manipulating tiny representations trapped within a computer display by exploiting form factor, physical motor skills, and naturalistic associations.

Our work is distinguished from previous work in that we are not exploring separate input devices; rather, we employ sensor technologies to make the physical artifact itself the input device. In other words, we are investigating situations in which the physical manipulations are directly integrated with the device or artifact being controlled. We consider these user interfaces to be physically embodied. Our approach has been to experiment with various form factors of several handheld devices, implement some commonly used functions, such as navigation, and iterate on the resulting prototypes.

This video shows three implementations of manipulative user interfaces on PDAs supporting three simple real-world tasks: navigation through long sequential lists, navigation within a book or document by pages, and document annotation. In addition to these three tasks, this document describes a fourth implementation not shown in the video: navigation within a book by "chunks," or by relative location.
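The abstract does not specify how tilt is mapped to list scrolling, so the following is only a minimal sketch of one plausible mapping: a dead zone so small hand tremors leave the list still, then a scroll rate that grows with tilt angle up to a saturation point. All names and constants here (`scroll_rate`, `DEAD_ZONE_DEG`, `MAX_TILT_DEG`, `MAX_ROWS_PER_SEC`) are illustrative assumptions, not taken from the systems described.

```python
import math

# Illustrative constants (assumptions, not from the original prototypes).
DEAD_ZONE_DEG = 5.0      # tilts smaller than this are ignored
MAX_TILT_DEG = 45.0      # tilts beyond this scroll at the maximum rate
MAX_ROWS_PER_SEC = 20.0  # scrolling speed at full tilt

def scroll_rate(pitch_deg: float) -> float:
    """Map a tilt angle (degrees; positive = tilted away from the user,
    negative = toward the user) to a signed scroll rate in rows/second."""
    magnitude = abs(pitch_deg)
    if magnitude < DEAD_ZONE_DEG:
        return 0.0  # inside the dead zone: hold the list still
    # Scale linearly between the dead-zone edge and the saturation angle.
    span = min(magnitude, MAX_TILT_DEG) - DEAD_ZONE_DEG
    rate = MAX_ROWS_PER_SEC * span / (MAX_TILT_DEG - DEAD_ZONE_DEG)
    return math.copysign(rate, pitch_deg)
```

In a sketch like this, the dead zone and saturation ceiling are the two design choices that matter most: the former keeps the display stable while the device is merely held, and the latter bounds how fast content can fly past at extreme tilts.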