This paper explores the use of a guessability study to examine child-defined gestures with the Kinect. Applying a Wizard-of-Oz approach, gestures were elicited from six children (ages 3--8) through a series of 22 task stimuli spanning object manipulation, navigation-based tasks, and spatial interaction. Gestures were video recorded, transcribed, and coded by three researchers using an inductive, qualitative method of analysis. Five themes emerged from the data: (1) the influence of 2D touchscreens on children's interactions in 3D, (2) the role of contextual cues in designing a stimulus set, (3) individual preferences for dominant styles of interaction, (4) the different approaches children employ to simulate the same object path, and (5) allocentric versus egocentric approaches to manipulating objects on screen. While we did not achieve strong consensus among the gestures produced by the children in our study, our results provide a basis for further refinement of the stimulus set and methodology for future work examining child-defined gestures for whole-body interfaces.
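In the gesture-elicitation literature this abstract draws on, "consensus" is typically quantified with the agreement score from Wobbrock et al.'s guessability methodology: for each task stimulus (referent), participants' gestures are grouped by identity, and the score is the sum over groups of the squared fraction of participants in that group. A minimal sketch (the function name and example gesture labels are illustrative, not from the paper):

```python
from collections import Counter

def agreement_score(gestures):
    """Agreement score for one referent, per Wobbrock et al.'s
    guessability methodology: A_r = sum over identical-gesture
    groups P_i of (|P_i| / |P_r|)^2.  Ranges from 1/|P_r| (no two
    participants agree) up to 1.0 (all participants agree)."""
    total = len(gestures)
    if total == 0:
        return 0.0
    # Group identical gestures, then sum squared proportions.
    return sum((count / total) ** 2 for count in Counter(gestures).values())

# Hypothetical example: 4 of 6 children make the same "swipe" gesture.
print(agreement_score(["swipe", "swipe", "swipe", "swipe", "point", "grab"]))
# (4/6)^2 + (1/6)^2 + (1/6)^2 = 0.5
```

A low score on this measure across many referents corresponds to the weak consensus reported above; averaging the per-referent scores gives a single agreement figure for the whole stimulus set.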