Distributed modular toolbox for multi-modal context recognition

  • Authors:
  • David Bannach, Kai Kunze, Paul Lukowicz, Oliver Amft

  • Affiliations:
  • Institute for Computer Systems and Networks, UMIT, Hall in Tyrol, Austria (Bannach, Kunze, Lukowicz); Wearable Computing Lab, ETH Zurich, Switzerland (Amft)

  • Venue:
  • ARCS '06: Proceedings of the 19th International Conference on Architecture of Computing Systems
  • Year:
  • 2006

Abstract

We present a GUI-based C++ toolbox that allows building distributed, multi-modal context recognition systems by plugging together reusable, parameterizable components. The goals of the toolbox are to simplify the steps from prototypes to online implementations on low-power mobile devices, to facilitate portability between platforms, and to foster easy adaptation and extensibility. The main features of the toolbox we focus on here are a set of parameterizable algorithms, including different filters, feature computations, and classifiers; a runtime environment that supports complex synchronous and asynchronous data flows; encapsulation of hardware-specific aspects, including sensors and data types (e.g., int vs. float); and the ability to outsource parts of the computation to remote devices. In addition, components are provided for group-wise, event-based sensor synchronization and data labeling. We describe the architecture of the toolbox and illustrate its functionality on two case studies that are part of the downloadable distribution.
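
The abstract describes the component model only at a high level. As a rough illustration of what "plugging together reusable, parameterizable components" can look like in C++, here is a minimal sketch of a dataflow pipeline (filter feeding a classifier). All names here (Component, MeanFilter, ThresholdClassifier, connect) are hypothetical and are not taken from the toolbox's actual API.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// A component consumes one sample and forwards its output downstream.
struct Component {
    virtual ~Component() = default;
    virtual void process(double sample) = 0;
    void connect(Component* downstream) { next_ = downstream; }
protected:
    void emit(double value) { if (next_) next_->process(value); }
private:
    Component* next_ = nullptr;
};

// Parameterizable sliding-mean filter (window size set at construction).
class MeanFilter : public Component {
public:
    explicit MeanFilter(std::size_t window) : window_(window) {}
    void process(double sample) override {
        buf_.push_back(sample);
        if (buf_.size() > window_) buf_.erase(buf_.begin());
        double sum = 0;
        for (double v : buf_) sum += v;
        emit(sum / static_cast<double>(buf_.size()));
    }
private:
    std::size_t window_;
    std::vector<double> buf_;
};

// Trivial classifier: labels each smoothed sample against a threshold.
class ThresholdClassifier : public Component {
public:
    explicit ThresholdClassifier(double threshold) : threshold_(threshold) {}
    void process(double sample) override {
        std::cout << (sample > threshold_ ? "active" : "idle")
                  << " (" << sample << ")\n";
    }
private:
    double threshold_;
};

int main() {
    MeanFilter filter(3);            // parameterized at construction
    ThresholdClassifier clf(0.5);
    filter.connect(&clf);            // "plugging together" the pipeline

    // A fixed sample stream stands in for a sensor reader component.
    for (double s : {0.1, 0.2, 0.9, 1.0, 0.8, 0.1})
        filter.process(s);
}
```

In the sketch, components are wired by pointer and run synchronously in one thread; the toolbox itself additionally supports asynchronous data flows and distribution across devices, which a real implementation would handle with queues and network transports rather than direct calls.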