The impact of three interfaces for 360-degree video on spatial cognition

  • Authors:
  • Wutthigrai Boonsuk; Stephen Gilbert; Jonathan Kelly

  • Affiliations:
  • Iowa State University, Ames, Iowa, United States (all authors)

  • Venue:
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Year:
  • 2012

Abstract

In this paper, we describe an experiment designed to evaluate the effectiveness of three interfaces for surveillance or remote control using live 360-degree video feeds from a person or vehicle in the field. Video feeds are simulated using a game engine. While locating targets within a 3D terrain using a 2D 360-degree interface, participants indicated perceived egocentric directions to targets and later placed targets on an overhead view of the terrain. Interfaces were compared based on target-finding and map-placement performance. Results suggest that 1) non-seamless interfaces with visual boundaries facilitate spatial understanding, 2) correct perception of self-to-object relationships is not correlated with understanding of object-to-object relationships within the environment, and 3) increased video game experience corresponds with better spatial understanding of an environment observed in 360 degrees. This work can assist researchers of panoramic video systems in evaluating the optimal interface for observation and teleoperation of remote systems.