In this paper, we propose a framework to support the bridging of applications and computer-vision-based sensor networks. We argue that the semantic gap in video-based sensor networks, that is, the difference between the raw data a sensor network collects and the information an application actually needs, can only be addressed by systems support that allows users and computing systems to meet in the middle. We first outline the vision of the system we are working towards. We then describe initial experiments conducted with a functional component of the system, applied to real-world video data collected by intelligent transportation systems researchers.