Learning from mobile contexts to minimize the mobile location search latency

  • Authors:
  • Ling-Yu Duan; Rongrong Ji; Jie Chen; Hongxun Yao; Tiejun Huang; Wen Gao

  • Affiliations:
  • Ling-Yu Duan, Jie Chen, Tiejun Huang: Institute of Digital Media, Peking University, Beijing 100871, China
  • Rongrong Ji, Wen Gao: Institute of Digital Media, Peking University, Beijing 100871, China and Visual Intelligence Laboratory, Harbin Institute of Technology, Heilongjiang 150001, China
  • Hongxun Yao: Visual Intelligence Laboratory, Harbin Institute of Technology, Heilongjiang 150001, China

  • Venue:
  • Signal Processing: Image Communication
  • Year:
  • 2013

Abstract

We propose to learn an extremely compact visual descriptor from mobile contexts for low-bit-rate mobile location search. Our scheme combines location-related side information from the mobile device to adaptively supervise the design of the compact visual descriptor in a flexible manner, making it well suited to searching for locations or landmarks over a bandwidth-constrained wireless link. Along with the proposed compact descriptor learning, we introduce PKUBench, a large-scale, context-aware mobile visual search benchmark dataset, which serves as the first comprehensive benchmark for quantitatively evaluating how cheaply available mobile contexts can help mobile visual search systems. The proposed contextual-learning-based compact descriptor is shown to outperform existing works in terms of compression rate and retrieval effectiveness.
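The sketch below illustrates, in Python, one way the general idea can be read: GPS context sent with a query selects a small, location-specific visual vocabulary, and each local descriptor is then reduced to a few-bit codeword index before transmission over the uplink. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation; the function names, codebook sizes, and toy data are hypothetical.

```python
import numpy as np

def nearest_codebook(gps, codebook_centers):
    """Pick the codebook whose geographic center is closest to the query GPS."""
    dists = np.linalg.norm(codebook_centers - np.asarray(gps), axis=1)
    return int(np.argmin(dists))

def encode_compact(descriptors, vocabulary):
    """Quantize each local descriptor to its nearest visual word.
    A small vocabulary means few bits per descriptor, which is what makes the
    query cheap to send over a bandwidth-constrained wireless link."""
    # descriptors: (N, D) local features; vocabulary: (K, D) visual words
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    word_ids = d2.argmin(axis=1)                       # N indices in [0, K)
    bits_per_desc = int(np.ceil(np.log2(len(vocabulary))))
    return word_ids, bits_per_desc

# --- toy usage (all values hypothetical) -----------------------------------
rng = np.random.default_rng(0)
codebook_centers = rng.uniform([-90, -180], [90, 180], size=(10, 2))  # geo centers
vocabularies = [rng.normal(size=(256, 128)) for _ in range(10)]       # 256-word books

query_gps = (39.99, 116.30)               # query location reported by the device
query_desc = rng.normal(size=(300, 128))  # 300 local descriptors from the query image

book_id = nearest_codebook(query_gps, codebook_centers)
codes, bits = encode_compact(query_desc, vocabularies[book_id])
print(f"codebook {book_id}: {len(codes)} descriptors at {bits} bits each "
      f"~ {len(codes) * bits / 8:.0f} bytes to transmit")
```

In this reading, the location context does the supervision offline (one compact vocabulary per geographic region) and the adaptation online (picking the vocabulary to quantize against), so only short codeword indices, rather than raw descriptors or the image itself, cross the wireless link.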