Exploring concepts collaboratively: considering how Wii interact

  • Authors:
  • Christopher Foster, Liz Burd, Andrew Hatch

  • Affiliations:
  • Durham University, South Road, Durham, UK (all authors)

  • Venue:
  • Proceedings of the International Conference on Advanced Visual Interfaces
  • Year:
  • 2010

Abstract

This abstract describes ongoing research on human-to-human interaction between early-career computer scientists as they explore a complex collaborative concept-mapping task performed by a co-located group using a large wall-projected display. We have investigated the effects of input configuration and mode of input on human-to-human interaction at the computer through the use of gesture-based controllers. Applying Bales' Interaction Process Analysis (IPA) [1] supports identifying the trade-offs involved in choosing from the plethora of available input devices and displays when investigating interaction and knowledge discovery. Increasingly, Higher Education departments are encouraged to plan and deploy technologies that facilitate interaction as a fundamental principle of moving into the Interaction Age; specifically, 'new tools are needed to support informal learning activities, in particular processes associated with conceptual development' [2].

This need for new tools resulted in the creation of WiiDraw, software through which single or multiple users can concurrently interact with and manipulate concept-mapping diagrams using gestural input. Gaming interfaces such as the Nintendo Wii provide options for creating gesture-based input beyond the move-and-click capability of a mouse, offering new modes of interaction to groups who create and work with conceptual knowledge on large screens. Other systems have used mouse input [3] and laser pointers [4] to explore large-screen interaction, yet none has emerged as a clear choice for a range of applications, and there may be no single best-fit option.

We therefore present a study of eleven groups who completed a concept-mapping task on a shared wall display, to determine how the configuration and mode of input influenced the amount of interaction. The experiment consisted of a single between-groups factor of input configuration with two levels (one controller and two controllers) and a single within-groups factor of interaction style with two levels (controller with gestures disabled and controller with gestures enabled). IPA was applied to data obtained from video recordings of each session. ANOVA indicated a main effect of number of controllers, F(1,18)=6.38, p=0.02, with more interactions when dyads had one controller (M=432, SD=93) than two controllers (M=310, SD=140). A main effect of gestures was also evident, F(1,18)=5.08, p=0.04, with more interactions occurring with gestures (M=420, SD=119) than without (M=310, SD=129) (see Figure 1). The interaction effect was not statistically significant. The results indicate that one controller afforded higher levels of human-to-human interaction, and that gestures also increased the number of interactions observed. Further analysis describes the differences in type of interaction and their impact upon knowledge discovery. However, it appears Wii interact more when gesturing with concepts.
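The reported statistics correspond to a 2x2 mixed design: a between-groups factor (one vs. two controllers) and a within-groups factor (gestures enabled vs. disabled), with IPA interaction counts as the dependent variable. The sketch below is purely illustrative and is not the authors' analysis code; it assumes a hypothetical long-format table of per-dyad IPA counts (file name and column names are invented) and uses the pingouin library to run an analogous mixed ANOVA.

```python
# Illustrative sketch only: hypothetical data and column names, not the study's code.
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per dyad per within-subjects condition.
# 'interactions' would be the total number of IPA-coded acts from the video data.
df = pd.read_csv("ipa_counts.csv")  # columns: dyad, controllers, gestures, interactions

aov = pg.mixed_anova(
    data=df,
    dv="interactions",      # dependent variable: IPA interaction count
    between="controllers",  # between-groups factor: one vs. two controllers
    within="gestures",      # within-groups factor: gestures enabled vs. disabled
    subject="dyad",         # each dyad is observed under both gesture conditions
)
print(aov[["Source", "F", "p-unc"]])  # main effects and the interaction term
```

Under these assumptions, the output table would list F and uncorrected p values for the controllers main effect, the gestures main effect, and their interaction, mirroring the three tests reported in the abstract.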