Building reliable activity models using hierarchical shrinkage and mined ontology

  • Authors: Emmanuel Munguia Tapia, Tanzeem Choudhury, Matthai Philipose
  • Affiliations: Massachusetts Institute of Technology, Cambridge, MA; Intel Research Seattle, Seattle, WA; Intel Research Seattle, Seattle, WA
  • Venue: PERVASIVE'06 Proceedings of the 4th international conference on Pervasive Computing
  • Year: 2006

Abstract

Activity inference based on object use has received considerable recent attention. Such inference requires statistical models that map activities to the objects used in performing them. Proposed techniques for constructing these models (hand definition, learning from data, and web extraction) all share the problem of model incompleteness: it is difficult to either manually or automatically identify all the possible objects that may be used to perform an activity, or to accurately calculate the probability with which they will be used. In this paper, we show how to use auxiliary information, called an ontology, about the functional similarities between objects to mitigate the problem of model incompleteness. We show how to extract a large, relevant ontology automatically from WordNet, an online lexical reference system for the English language. We adapt a statistical smoothing technique, called shrinkage, to apply this similarity information to counter the incompleteness of our models. Our results highlight two advantages of performing shrinkage. First, overall activity recognition accuracy improves by 15.11% by including the ontology to re-estimate the parameters of models that are automatically mined from the web. Shrinkage can therefore serve as a technique for making web-mined activity models more attractive. Second, smoothing yields an increased recognition accuracy when objects not present in the incomplete models are used while performing an activity. When we replace 100% of the objects with other objects that are functionally similar, we get an accuracy drop of only 33% when using shrinkage as opposed to 91.66% (equivalent to random guessing) without shrinkage. If training data is available, shrinkage further improves classification accuracy.
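The core idea of shrinkage here can be sketched in a few lines: the probability that an object is used in an activity is re-estimated by mixing the raw (possibly zero) leaf estimate with estimates pooled at each ancestor class of the object in the ontology, so functionally similar objects share probability mass. The following is a minimal illustrative sketch, not the paper's actual algorithm or data: the toy ontology, the activity counts, and the mixing weights (`lambdas`) are all hypothetical placeholders for the WordNet-derived hierarchy and learned weights the paper uses.

```python
# Sketch of hierarchical shrinkage over a toy object ontology.
# ONTOLOGY, counts, and lambdas are illustrative assumptions, not the
# paper's mined WordNet ontology or its fitted mixture weights.
from collections import defaultdict

# Hypothetical WordNet-style hierarchy: object -> ancestor classes,
# ordered from most specific to most general.
ONTOLOGY = {
    "teapot": ["vessel", "container", "artifact"],
    "kettle": ["vessel", "container", "artifact"],
    "mug":    ["vessel", "container", "artifact"],
    "spoon":  ["cutlery", "tool", "artifact"],
}

def shrunk_probs(counts, lambdas=(0.6, 0.2, 0.1, 0.1)):
    """Re-estimate P(object | activity) by mixing the leaf estimate
    with estimates pooled at each ancestor level (weights sum to 1)."""
    total = sum(counts.values())
    # Pool observed counts at each ancestor class level.
    level_counts = [defaultdict(float) for _ in range(3)]
    for obj, c in counts.items():
        for lvl, anc in enumerate(ONTOLOGY[obj]):
            level_counts[lvl][anc] += c
    probs = {}
    for obj in ONTOLOGY:
        mix = lambdas[0] * (counts.get(obj, 0) / total if total else 0.0)
        for lvl, anc in enumerate(ONTOLOGY[obj]):
            anc_total = sum(level_counts[lvl].values())
            # Spread an ancestor's pooled mass evenly over its members.
            members = sum(1 for o in ONTOLOGY if ONTOLOGY[o][lvl] == anc)
            if anc_total:
                mix += lambdas[lvl + 1] * (level_counts[lvl][anc] / anc_total) / members
        probs[obj] = mix
    return probs

# A web-mined "make tea" model that never mentions "mug" still assigns
# it nonzero probability, because mugs share the "vessel" class.
p = shrunk_probs({"teapot": 5, "kettle": 3})
```

This is the mechanism behind both reported results: re-estimating web-mined model parameters with the ontology, and remaining robust when an unseen but functionally similar object (here, `mug`) is used in place of a modeled one.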