Elaborating Sensor Data using Temporal and Spatial Commonsense Reasoning

  • Authors:
  • Bo Morgan;Push Singh

  • Affiliations:
  • Massachusetts Institute of Technology;Massachusetts Institute of Technology

  • Venue:
  • BSN '06 Proceedings of the International Workshop on Wearable and Implantable Body Sensor Networks
  • Year:
  • 2006

Abstract

Ubiquitous computing has established a vision of computation in which computers are so deeply integrated into our lives that they become both invisible and everywhere. For computers to stay out of sight and out of mind, they will need a deeper understanding of human life. LifeNet [1] is a computational model of human life that attempts to anticipate and predict what humans do in the world from a first-person point of view. LifeNet draws on a general knowledge base [2] built from assertions about the world contributed by the web community at large. In this work, we extend this general knowledge with sensor data gathered in vivo. Adding these sensor-network data to LifeNet enables a bidirectional learning process: bottom-up segregation of sensor data and top-down conceptual constraint propagation, which corrects the current metric assumptions in the LifeNet conceptual model using sensor measurements. In addition to learning general commonsense metrics of physical time and space, LifeNet will also learn metrics specific to a particular lab space and the chances of recognizing specific individual human activities, and will thus be able to make both general and specific spatial/temporal inferences, such as predicting how many people are in a given room and what they might be doing. This paper discusses the following topics: (1) details of the LifeNet probabilistic human model, (2) a description of the Plug sensor network used in this research, and (3) an experimental design for evaluating the LifeNet learning method.
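
The abstract describes combining a top-down commonsense prior with bottom-up sensor evidence to infer things like room occupancy. The paper does not give an implementation here, but a minimal sketch of that general idea, using a toy Bayesian update with an assumed Poisson sensor model and made-up numbers, might look like the following (all names, rates, and priors are hypothetical illustrations, not the authors' method):

```python
# Hypothetical sketch, not the LifeNet implementation: combine a
# commonsense (top-down) prior over room occupancy with a likelihood
# derived from sensor readings (bottom-up) to get a posterior estimate
# of how many people are in a room.

from math import exp, factorial


def poisson(k: int, lam: float) -> float:
    """Poisson probability mass, used here as a simple sensor model."""
    return (lam ** k) * exp(-lam) / factorial(k)


def infer_occupancy(prior: dict[int, float], motion_events: int) -> dict[int, float]:
    """Return the normalized posterior P(n people | motion_events),
    assuming each person triggers roughly 3 motion events per interval."""
    posterior = {
        n: p * poisson(motion_events, lam=max(3.0 * n, 0.1))
        for n, p in prior.items()
    }
    total = sum(posterior.values())
    return {n: p / total for n, p in posterior.items()}


if __name__ == "__main__":
    # Top-down prior: commonsense says a lab mid-afternoon usually
    # holds a few people (illustrative numbers only).
    prior = {0: 0.1, 1: 0.2, 2: 0.4, 3: 0.2, 4: 0.1}
    # Bottom-up evidence: the sensor network reported 7 motion events.
    posterior = infer_occupancy(prior, motion_events=7)
    for n, p in sorted(posterior.items()):
        print(f"P({n} people | sensors) = {p:.3f}")
```

Running this toy example shifts probability mass toward higher occupancy counts than the prior alone would suggest, illustrating (in miniature) how sensor measurements can correct a purely commonsense estimate.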