Multiple feature fusion based on co-training approach and time regularization for place classification in wearable video

  • Authors:
  • Vladislavs Dovgalecs;Rémi Mégret;Yannick Berthoumieu

  • Affiliations:
  • IMS Laboratory, University of Bordeaux, UMR5218 CNRS, Talence, France (all authors)

  • Venue:
  • Advances in Multimedia
  • Year:
  • 2013

Abstract

The analysis of video acquired with a wearable camera is a challenge that the multimedia community is facing with the proliferation of such sensors in various applications. In this paper, we focus on the problem of automatic visual place recognition in a weakly constrained environment, targeting the indexing of video streams by topological place recognition. We propose to combine several machine learning approaches in a time-regularized framework for image-based place recognition indoors. The framework combines the power of multiple visual cues and integrates the temporal continuity information of video. We extend it with a computationally efficient semisupervised method that leverages unlabeled video sequences for improved indexing performance. The proposed approach was applied to challenging video corpora. Experiments on a public database and a real-world video sequence database show the gain brought by the different stages of the method.
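
To make the combination described in the abstract concrete, the sketch below (Python with scikit-learn; not the authors' code) illustrates the general idea of co-training over two feature views with pseudo-labeling of unlabeled frames, followed by a simple moving-average smoothing of class probabilities as a stand-in for the time regularization described in the paper. The feature views, classifier choice, confidence-based selection, and smoothing window are all illustrative assumptions.

    # Minimal sketch, assuming two precomputed visual feature views X1, X2 per frame
    # and a partially labeled frame-label vector y (-1 marks unlabeled frames).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def co_train(X1, X2, y, n_rounds=5, n_add=10):
        """Co-training: each view pseudo-labels its most confident unlabeled frames."""
        y = y.copy()
        clf1 = LogisticRegression(max_iter=1000)
        clf2 = LogisticRegression(max_iter=1000)
        for _ in range(n_rounds):
            labeled = y != -1
            if labeled.all():
                break
            clf1.fit(X1[labeled], y[labeled])
            clf2.fit(X2[labeled], y[labeled])
            unlabeled_idx = np.flatnonzero(~labeled)
            # Pseudo-labels produced from one view enlarge the training set
            # available to the other view in the next round.
            for clf, X in ((clf1, X1), (clf2, X2)):
                conf = clf.predict_proba(X[unlabeled_idx]).max(axis=1)
                top = unlabeled_idx[np.argsort(conf)[-n_add:]]
                y[top] = clf.predict(X[top])
        return clf1, clf2

    def temporal_smooth(proba, window=15):
        """Simplified time regularization: moving-average class probabilities over frames.
        Returns, for each frame, the index of the winning class (into clf.classes_)."""
        kernel = np.ones(window) / window
        smoothed = np.column_stack(
            [np.convolve(proba[:, c], kernel, mode="same") for c in range(proba.shape[1])]
        )
        return smoothed.argmax(axis=1)

    # Usage sketch: late fusion of the two views, then temporal regularization.
    #   clf1, clf2 = co_train(X1, X2, y_partial)
    #   proba = 0.5 * (clf1.predict_proba(X1) + clf2.predict_proba(X2))
    #   frame_labels = temporal_smooth(proba)

Averaging the two views' probabilities is only one simple fusion rule; the point of the sketch is the division of labor between the two feature views (co-training on unlabeled video) and the temporal step that exploits the continuity of wearable video.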