Grasping reality through illusion—interactive graphics serving science
CHI '88 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
A synthetic visual environment with hand gesturing and voice input
CHI '89 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Hands-on interaction with virtual environments
UIST '89 Proceedings of the 2nd annual ACM SIGGRAPH symposium on User interface software and technology
A hand gesture interface device
CHI '87 Proceedings of the SIGCHI/GI Conference on Human Factors in Computing Systems and Graphics Interface
Graphics Processing on a Graphics Supercomputer
IEEE Computer Graphics and Applications
Dialogue structures for virtual worlds
CHI '91 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
On temporal-spatial realism in the virtual reality environment
UIST '91 Proceedings of the 4th annual ACM symposium on User interface software and technology
The decoupled simulation model for virtual reality systems
CHI '92 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
In recent years, a number of research groups have implemented various versions of the virtual world concept [2, 4, 6, 7]. A common thread among these virtual worlds is a direct-manipulation user interface based on a glove device, with the position and orientation of the hand registered by a tracking device. To explore this paradigm, a new project was started at IBM Research in 1989 to build a virtual laboratory for scientists and engineers. Our first step is to integrate the glove and space-tracking devices with real-time graphics on a graphics superworkstation. A simple bouncing-ball virtual world has been created to test the underlying software and to fine-tune interactive performance.

Our initial emphasis is on understanding the limitations of the various system components and obtaining the best interactive performance from the system. With the current state of technology, the glove and tracking devices can generate far more data than the graphics update process can use. Both the rendering process and the processes handling the device serial ports are CPU intensive. Our first design problem is therefore how to distribute the processing and match the incoming data rates of the input devices to the update rate of the graphics. Moreover, after a new position from the tracker is received by the graphics process, it is displayed only at the next frame update, so the hand image always lags behind the motion of the real hand. Our second design problem is to find techniques that compensate for this inherent lag. This abstract describes the specific approaches we use to solve these problems and some useful insights gained from experimenting with lag-time reduction by position prediction.
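The first design problem, decoupling a device process that samples the tracker at full rate from a render loop that consumes only the newest reading each frame, can be sketched with a single-slot "latest value" buffer. This is a hypothetical Python illustration of the rate-matching idea only; the original system ran on a graphics superworkstation and its actual process structure is not reproduced here.

```python
import threading

class LatestSample:
    """Single-slot buffer between a fast device reader and a slower
    render loop: writes overwrite the slot, so the renderer never
    queues up stale tracker data."""

    def __init__(self):
        self._lock = threading.Lock()
        self._sample = None

    def put(self, sample):
        # Device process: called at the tracker's full data rate.
        with self._lock:
            self._sample = sample

    def get(self):
        # Render loop: called once per frame; samples that arrived
        # since the previous frame are simply discarded.
        with self._lock:
            return self._sample

slot = LatestSample()
for i in range(5):       # tracker delivers samples 0..4 between frames
    slot.put(i)
latest = slot.get()      # the renderer sees only the newest sample
```

The overwrite-on-write slot is one simple way to match a high device data rate to a lower frame rate: the renderer always works with the freshest position instead of draining a backlog.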
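Lag reduction by position prediction can be illustrated in its simplest form by linear extrapolation: estimate the hand's velocity from the two most recent tracker samples and project the position forward by the known display lag. This is only a sketch under the assumption of roughly constant velocity over one frame; the function and parameter names are illustrative and are not taken from the original system.

```python
def predict_position(p_prev, p_curr, dt_sample, lag):
    """Extrapolate the tracked position forward by `lag` seconds,
    assuming constant hand velocity between samples.
    p_prev, p_curr: successive (x, y, z) tracker readings,
    dt_sample: time between those readings, lag: display latency."""
    velocity = [(c - p) / dt_sample for p, c in zip(p_prev, p_curr)]
    return [c + v * lag for c, v in zip(p_curr, velocity)]

# Hand moved from (0, 0, 0) to (2, 4, 6) over one sample interval;
# predict half an interval further ahead to hide the display lag.
predicted = predict_position([0, 0, 0], [2, 4, 6], dt_sample=1.0, lag=0.5)
# predicted == [3.0, 6.0, 9.0]
```

A caveat worth noting with any such predictor is that extrapolation amplifies sensor noise and overshoots when the hand changes direction, which is why tuning prediction against measured lag requires experimentation.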