Combining embedded accelerometers with computer vision for recognizing food preparation activities

  • Authors:
  • Sebastian Stein; Stephen J. McKenna

  • Affiliations:
  • University of Dundee, Dundee, United Kingdom (both authors)

  • Venue:
  • Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '13)
  • Year:
  • 2013

Abstract

This paper introduces a publicly available dataset of complex activities that involve manipulative gestures. The dataset captures people preparing mixed salads and contains more than 4.5 hours of accelerometer and RGB-D video data, detailed annotations, and an evaluation protocol for comparison of activity recognition algorithms. Providing baseline results for one possible activity recognition task, this paper further investigates modality fusion methods at different stages of the recognition pipeline: (i) prior to feature extraction through accelerometer localization, (ii) at feature level via feature concatenation, and (iii) at classification level by combining classifier outputs. Empirical evaluation shows that fusing information captured by these sensor types can considerably improve recognition performance.
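
Illustrative code sketch

The fusion stages named in the abstract can be made concrete with a short sketch. The Python code below is a minimal illustration under stated assumptions, not the paper's implementation: it contrasts stage (ii), feature-level fusion by concatenating per-frame accelerometer and video features, with stage (iii), classification-level fusion by combining per-modality classifier outputs. The feature dimensions, the synthetic data, and the choice of random-forest classifiers are all assumptions made for the sketch; stage (i), accelerometer localization, operates on the raw sensor streams and is not sketched here.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for per-frame features. The dimensions (32 for
    # accelerometer statistics, 128 for visual descriptors) and the random
    # data are assumptions for illustration only.
    N = 1000
    X_acc = rng.normal(size=(N, 32))    # accelerometer features
    X_vid = rng.normal(size=(N, 128))   # RGB-D video features
    y = rng.integers(0, 10, size=N)     # 10 hypothetical activity labels

    train, test = np.arange(800), np.arange(800, N)

    # (ii) Feature-level fusion: concatenate the two feature vectors and
    # train a single classifier on the joint representation.
    X_fused = np.hstack([X_acc, X_vid])
    clf_fused = RandomForestClassifier(n_estimators=100, random_state=0)
    clf_fused.fit(X_fused[train], y[train])
    pred_feature_level = clf_fused.predict(X_fused[test])

    # (iii) Classification-level fusion: train one classifier per modality
    # and combine their outputs; here the class-probability estimates are
    # simply averaged before taking the argmax.
    clf_acc = RandomForestClassifier(n_estimators=100, random_state=0)
    clf_vid = RandomForestClassifier(n_estimators=100, random_state=0)
    clf_acc.fit(X_acc[train], y[train])
    clf_vid.fit(X_vid[train], y[train])
    proba = (clf_acc.predict_proba(X_acc[test]) +
             clf_vid.predict_proba(X_vid[test])) / 2.0
    pred_classifier_level = clf_acc.classes_[proba.argmax(axis=1)]

    # With random features both accuracies sit at chance (about 0.1); the
    # sketch shows the mechanics of the fusion stages, not meaningful scores.
    print("feature-level fusion accuracy:   ", (pred_feature_level == y[test]).mean())
    print("classifier-level fusion accuracy:", (pred_classifier_level == y[test]).mean())

Averaging class probabilities is only one way to combine classifier outputs; weighted sums or a learned combiner are common alternatives. On real features, the relative merit of fusing at each stage is an empirical question, which is what the dataset's evaluation protocol is designed to let different methods compare.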