Joint manifolds for data fusion

  • Authors and affiliations:
  • Mark A. Davenport (Department of Statistics, Stanford University, Stanford, CA)
  • Chinmay Hegde (Department of Electrical and Computer Engineering, Rice University, Houston, TX)
  • Marco F. Duarte (Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ)
  • Richard G. Baraniuk (Department of Electrical and Computer Engineering, Rice University, Houston, TX)

  • Venue:
  • IEEE Transactions on Image Processing - Special section on distributed camera networks: sensing, processing, communication, and implementation
  • Year:
  • 2010

Abstract

The emergence of low-cost sensing architectures for diverse modalities has made it possible to deploy sensor networks that capture a single event from many vantage points and in multiple modalities. In many scenarios, these networks acquire large amounts of very high-dimensional data. For example, even a relatively small network of cameras can generate massive amounts of high-dimensional image and video data. One way to cope with this data deluge is to exploit low-dimensional data models. Manifold models provide a particularly powerful theoretical and algorithmic framework for capturing the structure of data governed by a small number of parameters, as is often the case in a sensor network. However, these models do not typically take into account dependencies among multiple sensors. We thus propose a new joint manifold framework for data ensembles that exploits such dependencies. We show that joint manifold structure can lead to improved performance in a variety of signal processing tasks, including classification and manifold learning. Additionally, recent results concerning random projections of manifolds enable us to formulate a scalable and universal dimensionality reduction scheme that efficiently fuses the data from all sensors.
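The last point of the abstract lends itself to a brief illustration. The sketch below is a toy example, not code from the paper: it assumes that each sensor's observation of a sample can be concatenated into one long "joint" vector, which is then compressed with a single random Gaussian projection, the standard Johnson-Lindenstrauss-style construction that the manifold random-projection results build on. The function name fuse_by_random_projection and the synthetic camera data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_by_random_projection(sensor_data, m):
    """Stack per-sensor observations into joint vectors and randomly project.

    sensor_data : list of arrays, one per sensor, each of shape (num_samples, n_j)
                  (hypothetical toy input, not the paper's data)
    m           : target dimension of the fused representation
    """
    # Concatenate each sample's sensor readings into one long joint vector.
    joint = np.hstack(sensor_data)          # shape (num_samples, sum_j n_j)
    n = joint.shape[1]
    # Random Gaussian projection; the 1/sqrt(m) scaling approximately
    # preserves pairwise distances with high probability.
    Phi = rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))
    return joint @ Phi.T                    # shape (num_samples, m)

# Toy example: 3 "cameras" observing 100 samples of a 1-parameter event,
# each returning a 256-dimensional measurement.
theta = rng.uniform(size=(100, 1))
cameras = [np.cos(2 * np.pi * theta * (j + 1) + np.arange(256)) for j in range(3)]
fused = fuse_by_random_projection(cameras, m=20)
print(fused.shape)  # (100, 20)
```

Note that the same random matrix compresses the entire concatenated observation at once, so the fused representation can be computed without first reconstructing or aligning the individual sensor signals; this is the sense in which the scheme is scalable and universal.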