Wireless Face Interface: Using voluntary gaze direction and facial muscle activations for human-computer interaction

  • Authors:
  • Outi Tuisku, Veikko Surakka, Toni Vanhala, Ville Rantanen, Jukka Lekkala

  • Affiliations:
  • Research Group for Emotions, Sociality, and Computing, Tampere Unit for Computer-Human Interaction (TAUCHI), School of Information Sciences, University of Tampere, Kanslerinrinne 1, FI-33014 University of Tampere (Tuisku, Surakka, Vanhala)
  • Sensor Technology and Biomeasurements, Department of Automation Science and Engineering, Tampere University of Technology, P.O. Box 692, FI-33101 Tampere, Finland (Rantanen, Lekkala)

  • Venue:
  • Interacting with Computers
  • Year:
  • 2012

Abstract

The present aim was to investigate the functionality of a new wireless prototype called Face Interface. The prototype combines voluntary gaze direction and facial muscle activations for pointing at and selecting objects on a computer screen, respectively. The subjective and objective functionality of the prototype was evaluated with a series of pointing tasks using either frowning (i.e., the frowning technique) or raising the eyebrows (i.e., the raising technique) as the selection technique. Pointing task times and accuracies were measured using three target diameters (25, 30, and 40 mm), seven pointing distances (60, 120, 180, 240, 260, 450, and 520 mm), and eight pointing angles (0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°). The results showed that the raising technique was a faster selection technique than the frowning technique for objects presented at pointing distances from 60 mm to 260 mm. For those pointing distances, the overall pointing task times were 2.4 s for the frowning technique and 1.6 s for the raising technique. Fitts' law computations showed that the correlations for the Fitts' law model were r = 0.77 for the frowning technique and r = 0.51 for the raising technique. Further, the index of performance (IP) was 1.9 bits/s for the frowning technique and 5.4 bits/s for the raising technique. Based on the results, the prototype functioned well and was adjustable so that two different facial activations could be used in combination with gaze direction for pointing at and selecting objects on a computer screen.
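For readers unfamiliar with the Fitts' law quantities reported above, the sketch below shows how the index of difficulty (ID) and index of performance (IP, i.e., throughput) are commonly computed in pointing studies of this kind. It assumes the Shannon formulation ID = log2(D/W + 1); the abstract does not state which formulation the authors used, and the 1.6 s movement time plugged in is only the overall raising-technique mean, so the numbers are illustrative rather than a reproduction of the paper's analysis.

    import math

    # Target diameters (W) and pointing distances (D) used in the study, in mm.
    WIDTHS = [25, 30, 40]
    DISTANCES = [60, 120, 180, 240, 260, 450, 520]

    def index_of_difficulty(distance_mm, width_mm):
        # Shannon formulation of Fitts' index of difficulty, in bits (assumed here).
        return math.log2(distance_mm / width_mm + 1)

    def index_of_performance(id_bits, movement_time_s):
        # Throughput in bits/s: index of difficulty divided by movement time.
        return id_bits / movement_time_s

    # Illustrative example: D = 240 mm, W = 30 mm, and a hypothetical 1.6 s
    # movement time (the overall raising-technique mean reported above).
    id_bits = index_of_difficulty(240, 30)
    print("ID = %.2f bits, IP = %.2f bits/s" % (id_bits, index_of_performance(id_bits, 1.6)))

In the paper's analysis, mean task times across conditions would be regressed against ID to obtain the reported correlations (r = 0.77 and r = 0.51), rather than computed for a single condition as in this example.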