Towards a universal and limited visual vocabulary

  • Authors:
  • Jian Hou; Zhan-Shen Feng; Yong Yang; Nai-Ming Qi

  • Affiliations:
  • Jian Hou, Zhan-Shen Feng: School of Computer Science and Technology, Xuchang University, China; Yong Yang, Nai-Ming Qi: School of Astronautics, Harbin Institute of Technology, Harbin, China

  • Venue:
  • ISVC'11: Proceedings of the 7th International Conference on Advances in Visual Computing - Volume Part II
  • Year:
  • 2011

Abstract

Bag-of-visual-words is a popular image representation that has found wide application in the image processing community. While its potential has been explored in many respects, it still follows a basic operating mode: for a given dataset, a vocabulary is trained with k-means-like clustering methods. A vocabulary obtained this way is data dependent, i.e., for each new dataset a new vocabulary must be trained. Building on previous research on determining the optimal vocabulary size, in this paper we investigate the possibility of building a universal visual vocabulary of limited size with optimal performance. We analyze why such a vocabulary should exist and conduct extensive experiments on three challenging datasets to validate this hypothesis. We believe this work sheds new light on ultimately obtaining a universal visual vocabulary of limited size that can be used with any dataset to obtain the best or near-best performance.
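
For context, the following is a minimal sketch of the data-dependent pipeline the abstract describes: local descriptors are clustered with k-means to form a visual vocabulary, and each image is then represented as a histogram of visual-word assignments. It is an illustrative reconstruction, not the authors' code; the descriptor type (SIFT-like 128-dimensional vectors), the vocabulary size K, and the use of scikit-learn's KMeans are all assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    K = 1000  # assumed vocabulary size; choosing it well is the prior work the abstract builds on

    def train_vocabulary(descriptors, k=K):
        # Cluster pooled local descriptors (e.g. SIFT, shape (N, 128)) into k visual words.
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)

    def bow_histogram(image_descriptors, vocabulary):
        # Quantize one image's descriptors against the vocabulary and count word occurrences.
        words = vocabulary.predict(image_descriptors)
        hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)  # L1-normalize so histograms are comparable across images

    # The data-dependent mode the paper questions: train_vocabulary must be re-run
    # on every new dataset. A universal vocabulary would fix the codebook once and
    # reuse bow_histogram across datasets with best or near-best performance.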