Kinect=IMU? Learning MIMO Signal Mappings to Automatically Translate Activity Recognition Systems across Sensor Modalities

  • Authors:
  • Oresti Banos;Alberto Calatroni;Miguel Damas;Hector Pomares;Ignacio Rojas;Hesam Sagha;José del R. Millán;Gerhard Tröster;Ricardo Chavarriaga;Daniel Roggen

  • Venue:
  • ISWC '12 Proceedings of the 2012 16th Annual International Symposium on Wearable Computers (ISWC)
  • Year:
  • 2012

Abstract

We propose a method to automatically translate a preexisting activity recognition system, devised for a source sensor domain S, so that it can operate on a newly discovered target sensor domain T, possibly of a different modality. First, we use MIMO system identification techniques to obtain a function that maps the signals of S to T. This mapping is then used to translate the recognition system across the sensor domains. We demonstrate the approach on a 5-class gesture recognition problem, translating between a vision-based skeleton tracking system (Kinect) and inertial measurement units (IMUs). In this scenario, an adequate mapping can be learned from as little as a single gesture (3 seconds) of data. The accuracy after Kinect→IMU or IMU→Kinect translation is 4% below the baseline for the same limb. Translating across modalities and also to an adjacent limb yields an accuracy 8% below baseline. We discuss the sources of error and means for improvement. The approach is independent of the sensor modalities. It supports multimodal activity recognition and more flexible real-world activity recognition system deployments.
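The core idea of the abstract can be sketched in code. The paper applies MIMO system identification to learn the S→T signal mapping; the sketch below simplifies this to a static linear MIMO map estimated by least squares on simulated data (channel counts, sample rate, and all variable names are illustrative assumptions, not the authors' setup).

```python
import numpy as np

# Minimal sketch: learn a static linear MIMO mapping between two
# sensor domains via least squares. This is a simplified stand-in
# for the MIMO system identification used in the paper; all
# dimensions and names here are hypothetical.

rng = np.random.default_rng(0)

# Simulated signals: 300 samples (~3 s of one gesture at 100 Hz)
# from a source domain S (e.g. 3 skeleton-joint coordinates) and a
# target domain T (e.g. 6 IMU channels), related by an unknown
# linear map plus measurement noise.
n_samples, n_src, n_tgt = 300, 3, 6
S = rng.standard_normal((n_samples, n_src))
W_true = rng.standard_normal((n_src, n_tgt))
T = S @ W_true + 0.01 * rng.standard_normal((n_samples, n_tgt))

# Identify the mapping W such that S @ W approximates T.
W, *_ = np.linalg.lstsq(S, T, rcond=None)

# Translate new source-domain signals into the target domain, so a
# recognizer trained on T-domain signals can process them directly.
S_new = rng.standard_normal((50, n_src))
T_hat = S_new @ W

print(T_hat.shape)  # (50, 6)
```

Once `T_hat` is computed, the preexisting recognition pipeline (feature extraction, classifier) trained on domain T runs unchanged on the translated signals; in the paper's setting, a dynamic MIMO model would additionally capture temporal dependencies between the domains.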