The emergence of low-cost sensing architectures for diverse modalities has made it possible to deploy sensor networks that capture a single event from a large number of vantage points and across multiple modalities. In many scenarios, these networks acquire large amounts of very high-dimensional data. For example, even a relatively small network of cameras can generate massive amounts of high-dimensional image and video data. One way to cope with this data deluge is to exploit low-dimensional data models. Manifold models provide a particularly powerful theoretical and algorithmic framework for capturing the structure of data governed by a small number of parameters, as is often the case in a sensor network. However, these models do not typically take into account dependencies among multiple sensors. We thus propose a new joint manifold framework for data ensembles that exploits such dependencies. We show that joint manifold structure can lead to improved performance for a variety of signal processing algorithms for applications including classification and manifold learning. Additionally, recent results concerning random projections of manifolds enable us to formulate a scalable and universal dimensionality reduction scheme that efficiently fuses the data from all sensors.
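The fusion scheme sketched in the abstract can be illustrated with a minimal example: each sensor's observation is concatenated into a single point on the joint manifold, and one random projection compresses the concatenated vector. The dimensions, the simulated sinusoidal signals, and the choice of a Gaussian projection matrix below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

J, N = 4, 256   # assumed: 4 sensors, each producing an N-dimensional signal
M = 32          # assumed target (reduced) dimension, with M << J*N

# Simulated observations of a common event governed by one shared parameter
theta = rng.uniform(0, 1)
signals = [np.sin(2 * np.pi * theta * np.arange(N) / N + j) for j in range(J)]

# Joint-manifold point: the concatenation of all sensor signals
x_joint = np.concatenate(signals)            # shape (J*N,)

# Random Gaussian projection, scaled so that norms are preserved on average
Phi = rng.standard_normal((M, J * N)) / np.sqrt(M)
y = Phi @ x_joint                            # fused, compressed measurement

print(y.shape)
print(np.linalg.norm(y) / np.linalg.norm(x_joint))
```

Because a single projection acts on the concatenated vector, the compression is universal in the sense that it does not depend on which sensor produced which coordinates; the norm ratio printed above concentrates near 1, consistent with the distance-preservation guarantees for random projections of manifolds.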