Models of gaze control for manipulation tasks

  • Authors:
  • Jose Nunez-Varela; Jeremy L. Wyatt

  • Affiliations:
  • University of Birmingham, Birmingham, UK (both authors)

  • Venue:
  • ACM Transactions on Applied Perception (TAP)
  • Year:
  • 2013

Abstract

Human studies have shown that gaze shifts are mostly driven by the current task demands. In manipulation tasks, gaze leads action to the next manipulation target. One explanation is that fixations gather information about task-relevant properties, where task relevance is signalled by reward. This work presents new computational models of gaze shifting, in which the agent imagines ahead in time the informational effects of possible gaze fixations. Building on our previous work, the contributions of this article are: (i) the presentation of two new gaze control models, (ii) comparison of their performance to our previous model, (iii) results showing the fit of all these models to previously published human data, and (iv) integration of a visual search process. The first new model selects the gaze that most reduces the positional uncertainty of landmarks (Unc), and the second maximises expected rewards by reducing positional uncertainty (RU). Our previous approach maximises the expected gain in cumulative reward by reducing positional uncertainty (RUG). To address contribution (ii), the models are tested on a simulated humanoid robot performing a manipulation task, and each model's performance is characterised by varying three environmental variables. This experiment provides evidence that the RUG model has the best overall performance. To address contribution (iii), we compare the hand-eye coordination timings of the models in the robot simulation to those obtained from human data. This provides evidence that only the models that incorporate both uncertainty and reward (RU and RUG) match the human data.
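
The abstract does not give the models' equations, but the three selection criteria can be illustrated with a minimal sketch. The Python snippet below is a hypothetical illustration, not the authors' implementation: it assumes each landmark's positional uncertainty is a scalar variance, that fixating a landmark reduces its variance through a simple Gaussian observation model, and that the expected reward of acting on a landmark falls as its positional uncertainty grows. All names, constants, and the discounting scheme are illustrative assumptions.

```python
"""Illustrative sketch (not the paper's implementation) of the three
gaze-selection criteria named in the abstract: Unc, RU and RUG."""

import math

# Hypothetical prior positional variances (m^2) for three landmarks.
prior_var = {"cup": 0.04, "jar": 0.09, "lid": 0.01}

# Hypothetical reward for successfully manipulating each landmark.
reward = {"cup": 1.0, "jar": 2.0, "lid": 0.5}

OBS_NOISE = 0.005  # assumed variance of a single fixation's measurement
GAMMA = 0.9        # assumed discount factor for cumulative reward (RUG)


def posterior_var(target, fixated):
    """Predicted variance of `target` after fixating `fixated`
    (product-of-Gaussians update for the fixated landmark only)."""
    if target != fixated:
        return prior_var[target]
    return prior_var[target] * OBS_NOISE / (prior_var[target] + OBS_NOISE)


def expected_reward(var, r, tolerance=0.05):
    """Reward weighted by an assumed success probability that decays as the
    positional standard deviation grows relative to a grasp tolerance."""
    return math.exp(-math.sqrt(var) / tolerance) * r


def score_unc(fixation):
    """Unc: total predicted reduction in positional uncertainty."""
    return sum(prior_var[l] - posterior_var(l, fixation) for l in prior_var)


def score_ru(fixation):
    """RU: expected reward of the best next action given reduced uncertainty."""
    return max(expected_reward(posterior_var(l, fixation), reward[l])
               for l in prior_var)


def score_rug(fixation):
    """RUG: expected gain in discounted cumulative reward from fixating,
    relative to acting under the prior uncertainty (one-step lookahead)."""
    gain = 0.0
    for t, l in enumerate(sorted(reward, key=reward.get, reverse=True)):
        before = expected_reward(prior_var[l], reward[l])
        after = expected_reward(posterior_var(l, fixation), reward[l])
        gain += (GAMMA ** t) * (after - before)
    return gain


if __name__ == "__main__":
    for name, score in (("Unc", score_unc), ("RU", score_ru), ("RUG", score_rug)):
        print(f"{name:>3} model fixates: {max(prior_var, key=score)}")
```

In this toy setup the Unc criterion favours the landmark with the largest prior uncertainty regardless of its usefulness, while RU and RUG also weigh how much the reduced uncertainty improves the expected (or discounted cumulative) reward, mirroring the distinction the abstract draws between the three models.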