Learning representations for animated motion sequence and implied motion recognition

  • Authors:
  • Georg Layher; Martin A. Giese; Heiko Neumann

  • Affiliations:
  • Institute for Neural Information Processing, Dept. of Engineering and Computer Sciences, Ulm University, Germany; Section for Computational Sensomotorics, Dept. for Cognitive Neurology, University Clinic Tübingen, Germany; Institute for Neural Information Processing, Dept. of Engineering and Computer Sciences, Ulm University, Germany

  • Venue:
  • ICANN'12: Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning, Volume Part I
  • Year:
  • 2012


Abstract

The detection and categorization of animate motion is a crucial task underlying social interaction and decision-making. Neural representations of perceived animate objects are built in cortical area STS, a region of convergent input from intermediate-level form and motion representations. Populations of STS cells exist that respond selectively to specific action sequences, such as walking. It is still unclear how and to what extent form and motion information contribute to the generation of such representations, and what kinds of mechanisms underlie the learning processes. This paper develops a cortical model architecture for the unsupervised learning of animated motion sequence representations. We demonstrate how the model automatically selects significant motion patterns as well as meaningful static snapshot categories from continuous video input. These keyposes correspond to articulated postures that are used to probe the trained network, eliciting implied motion perception from static views. We also show how sequence-selective representations are learned in STS by fusing snapshot and motion input, and how learned feedback connections enable predictions about future input. Network simulations demonstrate the computational capacity of the proposed model.
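
For illustration only, the following is a minimal NumPy sketch of the three ingredients named in the abstract; it is not the authors' cortical implementation. Competitive Hebbian learning selects snapshot prototypes (keyposes) from frames, asymmetric transition weights learned from their temporal order make a model unit sequence selective, and the same weights support a prediction of the next keypose. All names (`learn_keyposes`, `learn_transitions`) and the synthetic walker stimulus are hypothetical stand-ins for the paper's biologically grounded network dynamics.

```python
import numpy as np

def learn_keyposes(frames, n_prototypes=8, lr=0.05, epochs=20, seed=0):
    """Competitive (winner-take-all) Hebbian learning of snapshot prototypes.

    frames: (T, D) array of flattened, L2-normalized video frames.
    The winning prototype is pulled toward each input and renormalized,
    so prototypes settle on frequently occurring articulated postures
    ("keyposes").
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_prototypes, frames.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in frames:
            k = np.argmax(W @ x)          # most similar prototype wins
            W[k] += lr * (x - W[k])       # Hebbian pull toward the input
            W[k] /= np.linalg.norm(W[k])
    return W

def learn_transitions(frames, W):
    """Asymmetric weights from the temporal order of winning keyposes.

    A[i, j] grows when keypose j is active at t-1 and keypose i at t; this
    crudely stands in for the learned lateral/feedback connections that make
    a unit sequence selective and let it predict the next input.
    """
    A = np.zeros((W.shape[0], W.shape[0]))
    prev = np.argmax(W @ frames[0])
    for x in frames[1:]:
        cur = np.argmax(W @ x)
        A[cur, prev] += 1.0
        prev = cur
    return A / A.max()

def sequence_response(frames, W, A):
    """Mean response of a sequence-selective unit: large only when each
    snapshot follows the previous one in the learned temporal order."""
    prev = np.argmax(W @ frames[0])
    r = []
    for x in frames[1:]:
        cur = np.argmax(W @ x)
        r.append(A[cur, prev])
        prev = cur
    return float(np.mean(r))

if __name__ == "__main__":
    # Synthetic "walker": four noisy postures cycled in a fixed order.
    rng = np.random.default_rng(1)
    T, D = 400, 64
    postures = rng.standard_normal((4, D))
    frames = postures[(np.arange(T) // 10) % 4] + 0.1 * rng.standard_normal((T, D))
    frames /= np.linalg.norm(frames, axis=1, keepdims=True)

    W = learn_keyposes(frames)
    A = learn_transitions(frames, W)
    print("ordered response :", sequence_response(frames, W, A))
    # Shuffling the frames destroys the temporal order, so the response drops.
    print("shuffled response:", sequence_response(frames[rng.permutation(T)], W, A))
    # The same asymmetric weights yield a prediction of the next keypose.
    print("predicted next keypose:", np.argmax(A[:, np.argmax(W @ frames[-1])]))
```

Running the script shows a high mean response for the ordered walker sequence and a markedly lower one for the same frames shuffled, which is the qualitative signature of sequence selectivity the abstract attributes to the trained STS-level representations.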