We study distributed strategies for classification of multiple targets in a wireless sensor network. The maximum number of targets is known a priori, but the actual number of distinct targets present in any given event is assumed unknown. The target signals are modeled as zero-mean Gaussian processes with distinct temporal power spectral densities, and the noise-corrupted node measurements are assumed to be spatially independent. The proposed classifiers have a simple distributed architecture: local hard decisions from each node are communicated over noisy links to a manager node, which optimally fuses them to make the final decision. A natural strategy for the local hard decisions is to use the optimal local classifier. A key problem with the optimal local classifier is that the number of hypotheses increases exponentially with the maximum number of targets. We propose two suboptimal (mixture density and Gaussian) local classifiers that are based on a natural but coarser repartitioning of the hypothesis space, whose complexity grows only linearly with the number of targets. We show that an exponentially decreasing probability of error with the number of nodes can be guaranteed with an arbitrarily small but nonvanishing communication power per node. Numerical results based on real data demonstrate the remarkable practical advantage of decision fusion: an acceptably small probability of error can be attained by fusing a moderate number of unreliable local decisions. Furthermore, the performance of the suboptimal mixture density classifier is comparable to that of the optimal local classifier, making it an attractive choice in practice.
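The decision-fusion architecture described above can be illustrated with a toy Monte Carlo sketch: each node forms an unreliable local hard decision, the noisy link flips the transmitted bit with some probability, and the manager node fuses the received bits. This sketch uses simple majority voting on a binary hypothesis rather than the paper's optimal fusion rule for multiple targets, and all error rates below are hypothetical, but it shows the key phenomenon: the fused error probability falls rapidly as the number of nodes grows, even though each individual decision is quite unreliable.

```python
import random

def simulate_fusion(n_nodes, p_local_err=0.3, p_link_err=0.05,
                    n_trials=2000, seed=0):
    """Monte Carlo estimate of the fused error probability when a
    manager node majority-votes over unreliable binary local decisions
    received over independent noisy (bit-flipping) links.

    Illustrative stand-in for optimal decision fusion; parameter
    values are hypothetical, not taken from the paper."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_trials):
        truth = rng.randint(0, 1)          # which hypothesis is active
        votes = 0
        for _ in range(n_nodes):
            # Local hard decision, wrong with probability p_local_err.
            d = truth if rng.random() >= p_local_err else 1 - truth
            # Noisy link to the manager flips the bit with p_link_err.
            if rng.random() < p_link_err:
                d = 1 - d
            votes += d
        fused = 1 if 2 * votes > n_nodes else 0   # majority vote at manager
        if fused != truth:
            errors += 1
    return errors / n_trials

if __name__ == "__main__":
    for n in (1, 5, 25, 51):
        print(n, simulate_fusion(n))
```

With the hypothetical rates above, each received bit is wrong roughly a third of the time, yet fusing a few dozen such decisions already drives the error probability to a few percent or less, mirroring the paper's observation that a moderate number of unreliable local decisions suffices in practice.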