Tensor Learning for Regression

  • Authors:
  • Weiwei Guo, Irene Kotsia, Ioannis Patras

  • Affiliations:
  • School of Computer Science and Electronic Engineering, Queen Mary, University of London, London, U.K. (all authors)

  • Venue:
  • IEEE Transactions on Image Processing
  • Year:
  • 2012

Abstract

In this paper, we exploit the advantages of tensorial representations and propose several tensor learning models for regression. The models are based on the canonical/parallel-factor (CP) decomposition of tensors of multiple modes and allow the simultaneous projection of an input tensor onto more than one direction along each mode. Two empirical risk functions are studied, namely the square loss and the $\epsilon$-insensitive loss. The former leads to higher rank tensor ridge regression (TRR) and the latter to higher rank support tensor regression (STR), both formulated using the Frobenius norm for regularization. We also use the group-sparsity norm for regularization, which favors a low-rank decomposition of the tensorial weight; in this way, the rank is selected automatically during learning, yielding the optimal-rank TRR and STR. Experiments conducted on head-pose, human-age, and 3-D body-pose estimation, using real data from publicly available databases, verified not only the superiority of tensors over their vector counterparts but also the efficiency of the proposed algorithms.
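
To make the higher rank TRR idea concrete, the sketch below is a rough illustration only, not the authors' exact algorithm: for 2-mode (matrix) inputs it constrains the weight to a rank-$R$ CP form $W = UV^\top$ and alternates closed-form ridge solves over the two factors under the square loss with Frobenius-norm regularization. All names here (higher_rank_trr, ridge_solve, lam) are hypothetical.

```python
import numpy as np

def higher_rank_trr(X, y, R=2, lam=1.0, n_iter=50, seed=0):
    """Alternating-least-squares sketch of higher rank tensor ridge
    regression for 2-mode (matrix) inputs.

    X : array of shape (n, d1, d2), matrix-valued samples
    y : array of shape (n,), scalar targets
    The weight matrix is constrained to W = U @ V.T (rank-R CP form);
    each factor is updated by a ridge solve while the other is fixed.
    """
    n, d1, d2 = X.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((d1, R))
    V = rng.standard_normal((d2, R))
    b = 0.0

    def ridge_solve(F, t):
        # Closed-form ridge regression: argmin_w ||F w - t||^2 + lam ||w||^2
        d = F.shape[1]
        return np.linalg.solve(F.T @ F + lam * np.eye(d), F.T @ t)

    for _ in range(n_iter):
        # Update U: <U V^T, X_i> = vec(U)^T vec(X_i V), so this is a
        # standard ridge problem in vec(U) with features vec(X_i V).
        F = np.stack([(Xi @ V).ravel() for Xi in X])       # (n, d1*R)
        U = ridge_solve(F, y - b).reshape(d1, R)

        # Update V: <U V^T, X_i> = vec(V)^T vec(X_i^T U).
        F = np.stack([(Xi.T @ U).ravel() for Xi in X])      # (n, d2*R)
        V = ridge_solve(F, y - b).reshape(d2, R)

        # Update bias from the residual of the current multilinear fit.
        b = float(np.mean(y - np.einsum('nij,ij->n', X, U @ V.T)))

    return U, V, b
```

Swapping the square loss for the $\epsilon$-insensitive loss in each factor update would give an STR-style variant, and replacing the Frobenius penalty with a group-sparsity penalty over the rank-1 components is the mechanism the abstract describes for automatic rank selection.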