SIP-based context distribution: does aggregation pay off?

  • Authors: Alisa Devlic
  • Affiliations: Appear Networks & Royal Institute of Technology (KTH), Kista, Sweden
  • Venue: ACM SIGCOMM Computer Communication Review
  • Year: 2010

Abstract

Context-aware applications need quick access to current context information in order to adapt their behavior before this context changes. To achieve this, the context distribution mechanism has to discover, in a timely manner, context sources that can provide a particular context type, and then acquire and distribute context information from these sources to the applications that requested this type of information. This paper reviews state-of-the-art context distribution mechanisms against a set of identified requirements, and then introduces a resource list-based subscription/notification mechanism for context sharing. This SIP-based mechanism enables subscription to a resource list containing the URIs of multiple context sources that can provide the same context type, and delivery of aggregated notifications containing context updates from each of these sources. Aggregation of context is thought to be important because it reduces the network traffic between the entities involved in context distribution. However, it introduces an additional delay due to waiting for context updates and aggregating them. To investigate whether this aggregation actually pays off, we measured and compared the time needed by an application to receive context updates after subscribing to a particular resource list (using RLS) versus after subscribing to each of the individual context sources (using SIMPLE), for different numbers of context sources. Our results show that RLS aggregation outperforms the SIMPLE presence mechanism with three or more context sources, regardless of the size of their context updates. Database performance was identified as a major bottleneck during aggregation; we therefore used in-memory tables and prepared statements, improving database access time by up to 57% and reducing aggregation time by up to 34%. With this reduction, and with an increase in context size, we pushed the aggregation payoff threshold closer to two context sources.
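
To illustrate the two subscription styles compared in the abstract, below is a minimal sketch (not the authors' implementation) that builds simplified SUBSCRIBE requests for per-source SIMPLE subscriptions versus a single RLS resource-list subscription. The URIs, the helper function, and all header details other than the 'eventlist' option tag and the RLMI/multipart notification format (which come from RFC 4662) are illustrative assumptions; required headers such as Via, Call-ID, and CSeq are omitted for brevity.

    # Sketch: SIMPLE (one subscription per context source) vs. RLS (one
    # subscription to a resource list that groups the same sources).

    def build_subscribe(request_uri: str, event: str, use_eventlist: bool) -> str:
        """Build a simplified SIP SUBSCRIBE request as plain text."""
        headers = [
            f"SUBSCRIBE {request_uri} SIP/2.0",
            f"To: <{request_uri}>",
            "From: <sip:app@example.com>;tag=1234",   # hypothetical application URI
            f"Event: {event}",
            "Expires: 3600",
        ]
        if use_eventlist:
            # Resource-list subscription (RFC 4662): the 'eventlist' option tag
            # is advertised, and notifications arrive as multipart/related
            # bodies with an application/rlmi+xml root plus per-source parts.
            headers += [
                "Supported: eventlist",
                "Accept: application/rlmi+xml, multipart/related, application/pidf+xml",
            ]
        else:
            headers.append("Accept: application/pidf+xml")
        return "\r\n".join(headers) + "\r\n\r\n"


    # Hypothetical context sources that all provide the same context type.
    context_sources = [f"sip:sensor{i}@example.com" for i in range(1, 4)]

    # SIMPLE: one SUBSCRIBE (and one NOTIFY stream) per context source.
    simple_subscriptions = [
        build_subscribe(uri, "presence", use_eventlist=False)
        for uri in context_sources
    ]

    # RLS: a single SUBSCRIBE to a resource-list URI grouping the same sources;
    # the server aggregates their updates into fewer, larger notifications.
    rls_subscription = build_subscribe(
        "sip:temperature-list@rls.example.com", "presence", use_eventlist=True
    )

    print(f"SIMPLE subscriptions sent: {len(simple_subscriptions)}")
    print("RLS subscriptions sent: 1")

The trade-off measured in the paper follows directly from this structure: the RLS case sends a single subscription and receives aggregated notifications (less traffic), at the cost of the server waiting for and merging updates before notifying the application.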