Improving hyperspectral classifiers: the difference between reducing data dimensionality and reducing classifier parameter complexity

  • Authors:
  • Asbjørn Berge; Anne Schistad Solberg

  • Affiliations:
  • Department of Informatics, University of Oslo, Norway (both authors)

  • Venue:
  • SCIA'07 Proceedings of the 15th Scandinavian conference on Image analysis
  • Year:
  • 2007


Abstract

Hyperspectral data is usually high dimensional, and ground truth pixels are often scarce. Thus the task of applying even a simple classifier such as the Gaussian Maximum Likelihood (GML) classifier usually forces the analyst to reduce the complexity of the implicit parameter estimation task. For decades, the common perception in the literature has been that the solution is to reduce data dimensionality. However, as a result by Cover [1] shows, reducing dimensionality increases the risk of making the classification problem more complex. Using the simple GML classifier, we compare state-of-the-art dimensionality reduction strategies with a recently proposed strategy for sparsifying parameter estimates in full dimension [2]. Results show that reducing parameter estimation complexity by fitting sparse models in full dimension has a slight edge over the common approaches.
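To make the parameter estimation burden concrete, the sketch below is a minimal GML classifier: one mean vector and one full covariance matrix per class, so each class requires d(d+1)/2 covariance estimates in d dimensions. This is an illustrative implementation only, not the paper's code; the small ridge term added to each covariance is an assumption for numerical stability, and the sparse-model alternative of [2] is not shown here.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_gml(X, y):
    """Fit a Gaussian Maximum Likelihood classifier: one mean and one
    full covariance matrix per class, plus empirical class priors."""
    classes = np.unique(y)
    params = {}
    for c in classes:
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        # A full covariance has d*(d+1)/2 free parameters per class --
        # the estimation burden that grows quickly with dimensionality.
        # The small ridge (an assumption, not from the paper) keeps the
        # matrix invertible when ground truth pixels are scarce.
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        params[c] = (mu, cov, len(Xc) / len(X))
    return params

def predict_gml(params, X):
    """Assign each sample to the class with the highest log-posterior
    (Gaussian log-likelihood plus log-prior)."""
    labels = list(params.keys())
    scores = np.column_stack([
        multivariate_normal.logpdf(X, mean=mu, cov=cov) + np.log(prior)
        for mu, cov, prior in params.values()
    ])
    return np.array([labels[i] for i in scores.argmax(axis=1)])
```

On two well-separated synthetic Gaussian classes this recovers the labels almost perfectly; the paper's point is that with few labeled pixels and many bands, the covariance estimates become unreliable unless the analyst reduces dimensionality or, as in [2], sparsifies the parameter estimates in full dimension.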