Data loss in wireless sensing applications is inevitable, and while there have been many attempts at coping with this issue, recent developments in Compressive Sensing (CS) provide a new and attractive perspective. Since many physical signals of interest are known to be sparse or compressible, employing CS not only compresses the data and reduces the effective transmission rate, but also improves the robustness of the system to channel erasures. This is possible because reconstruction algorithms for compressively sampled signals are not hampered by the stochastic nature of wireless link disturbances, which has traditionally plagued attempts at proactively handling the effects of these errors. In this paper, we propose that if CS is employed for source compression, it can further be exploited as an application-layer erasure coding strategy for recovering missing data. We show that CS erasure coding (CSEC) with random sampling is efficient for handling missing data over erasure channels, paralleling the performance of BCH codes, with the added benefit of graceful degradation of the reconstruction error even when the amount of missing data far exceeds the designed redundancy. Furthermore, since CSEC is equivalent to nominal oversampling in the incoherent measurement basis, it is computationally cheaper than conventional erasure coding. We support our proposal through extensive performance studies.
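The key idea above can be illustrated with a small numerical sketch (this is an illustration of the general CSEC principle, not the paper's exact experimental setup; the dimensions, sparsity level, and erasure count are assumed for demonstration). A sparse signal is compressively sampled with a modest number of extra random measurements; a random subset of measurements is then erased, and the signal is reconstructed from the survivors by basis pursuit (L1 minimization), here cast as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k = 50, 3            # signal length and sparsity (assumed toy values)
m = 25                  # measurement count, oversampled to build in redundancy

# A k-sparse signal in the canonical basis.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Random Gaussian measurement matrix: each row is one CS measurement.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true

# Channel erasures: 5 of the 25 measurements are lost at random.
keep = rng.choice(m, m - 5, replace=False)
Phi_rx, y_rx = Phi[keep], y[keep]

# Basis pursuit as an LP over z = [x, u]:
#   minimize sum(u)  subject to  -u <= x <= u  and  Phi_rx @ x = y_rx
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)],
                 [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([Phi_rx, np.zeros((len(keep), n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y_rx,
              bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
x_hat = res.x[:n]
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.2e}")
```

Because the decoder treats the surviving rows of the measurement matrix as just another valid random sampling, the erased measurements require no special handling, and further erasures beyond the designed redundancy degrade the reconstruction error gradually rather than catastrophically.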