Complex 3D interaction tasks require the manipulation of a large number of input parameters. Spatial input devices can be constructed such that their structure reflects the task at hand. In that case, the somatosensory cues a user receives during device manipulation, as well as the user's expectations, are consistent with the visual cues from the virtual environment. Intuitively, such a match between the device's spatial structure and the task at hand would seem to allow more natural and direct interaction. However, the exact effects on aspects such as task performance, intuitiveness, and user comfort are as yet unknown.

The goal of this work is to study the effects of input device structure on user performance in complex interaction tasks. Two factors are investigated: the relation between the frame of reference of a user's actions and the frame of reference of the virtual object being manipulated, and the relation between the type of motion a user performs with the input device and the type of motion of the virtual object.

These factors are addressed in a user study using different input device structures. Subjects are asked to translate a virtual object along an axis, where the structure of the input device reflects this task to different degrees. First, the action subjects need to perform to translate the object is either a translation or a rotation. Second, the action is performed either in the same frame of reference as the virtual object, or in a fixed, separately located frame of reference.

Results show that task completion times are lowest when the input device allows the user to make the same type of motion in the same coordinate system as the virtual object. When either factor does not match, task completion times increase significantly.
Therefore, it may be advantageous to structure an input device such that both its frame of reference and the type of motion it requires match the frame of reference and motion type of the virtual object being manipulated.
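The two factors above can be illustrated with a minimal sketch of how a raw device reading might be mapped to the object's 1-D translation under each condition. All names, the rotary gain, and the fixed-frame axis are assumptions for illustration, not details taken from the study:

```python
import numpy as np

def device_to_object_translation(reading, motion_type, frame, object_axis_world):
    """Signed displacement of the virtual object along its translation axis.

    reading: for 'translation', a 3-D device displacement vector (metres);
             for 'rotation', a scalar device rotation angle (radians).
    motion_type: 'translation' or 'rotation' (the motion-type factor).
    frame: 'object' (device aligned with the object's axis) or
           'fixed' (device operated in a fixed, separately located frame)
           (the frame-of-reference factor).
    object_axis_world: unit vector of the object's translation axis in world space.
    """
    gain = 0.05  # assumed metres of object travel per radian of device rotation
    if motion_type == "rotation":
        # Rotary input: the angle is scaled into a linear displacement,
        # so the motion types of device and object do not match.
        return gain * reading
    if frame == "object":
        # Matched frames: project the device displacement onto the object's axis.
        return float(np.dot(reading, object_axis_world))
    # Fixed frame: the device's own axis drives the object regardless of how the
    # object's axis is oriented in the scene; assumed here to be the world x-axis.
    return float(np.dot(reading, np.array([1.0, 0.0, 0.0])))
```

In the fully matched condition (translation input in the object's frame), device motion maps directly onto object motion; the other conditions interpose a scaling or a change of axis, which is one way to make the structural mismatch studied in the paper concrete.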