Towards Optimal Naive Bayes Nearest Neighbor

  • Authors:
  • Régis Behmo (NLPR / LIAMA, Institute of Automation, Chinese Academy of Sciences)
  • Paul Marcombes (NLPR / LIAMA, Institute of Automation, Chinese Academy of Sciences and IMAGINE, LIGM, Université Paris-Est)
  • Arnak Dalalyan (IMAGINE, LIGM, Université Paris-Est)
  • Véronique Prinet (NLPR / LIAMA, Institute of Automation, Chinese Academy of Sciences)

  • Venue:
  • ECCV'10 Proceedings of the 11th European Conference on Computer Vision: Part IV
  • Year:
  • 2010


Abstract

Naive Bayes Nearest Neighbor (NBNN) is a feature-based image classifier that achieves an impressive degree of accuracy [1] by exploiting 'Image-to-Class' distances and by avoiding quantization of local image descriptors. It is based on the hypothesis that each local descriptor is drawn from a class-dependent probability measure. The density of the latter is estimated by a non-parametric kernel estimator, which is further simplified under the assumption that the normalization factor is class-independent. While leading to significant simplification, the assumption underlying the original NBNN is too restrictive and considerably degrades its generalization ability. The goal of this paper is to address this issue. As we relax the incriminated assumption, we are faced with a parameter-selection problem, which we solve by hinge-loss minimization. We also show that our modified formulation naturally generalizes to optimal combinations of feature types. Experiments conducted on several datasets show that the gain over the original NBNN may reach up to 20 percentage points. We also take advantage of the linearity of optimal NBNN to perform classification by detection through efficient sub-window search [2], with yet another performance gain. As a result, our classifier outperforms, in terms of misclassification error, methods based on support vector machines and bags of quantized features on some datasets.
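To make the decision rule the abstract refers to concrete, below is a minimal sketch of NBNN classification, assuming NumPy and SciPy are available. The function names (fit_nbnn, classify_nbnn) and the per-class alpha/beta parameterization of the relaxed variant are illustrative assumptions, not the authors' code: the original NBNN corresponds to alpha = 1 and beta = 0 for all classes, while the paper's correction amounts to learning class-dependent parameters (e.g. by hinge-loss minimization) rather than assuming a class-independent normalization.

```python
# Minimal NBNN sketch. Hypothetical names; not the authors' implementation.
import numpy as np
from scipy.spatial import cKDTree

def fit_nbnn(train_descriptors_by_class):
    """Build one k-d tree per class from that class's pooled local descriptors.

    train_descriptors_by_class: dict mapping class label -> (n_c, d) array.
    """
    return {c: cKDTree(descs) for c, descs in train_descriptors_by_class.items()}

def classify_nbnn(trees, query_descriptors, alpha=None, beta=None):
    """Predict the class minimizing the Image-to-Class distance.

    For each class, sum the squared distances from every query descriptor to
    its nearest neighbor among that class's training descriptors. If alpha
    and beta (dicts of per-class scale/offset, an assumed parameterization)
    are given, apply the class-dependent affine correction instead of the
    original class-independent normalization.
    """
    best_class, best_score = None, np.inf
    for c, tree in trees.items():
        nn_dists, _ = tree.query(query_descriptors)  # nearest-neighbor distance per descriptor
        score = np.sum(nn_dists ** 2)                # original NBNN Image-to-Class distance
        if alpha is not None and beta is not None:
            score = alpha[c] * score + beta[c]       # relaxed, class-dependent variant
        if score < best_score:
            best_class, best_score = c, score
    return best_class
```

Note that the corrected score stays linear in the per-descriptor nearest-neighbor terms, which is the property the abstract exploits to combine feature types and to run efficient sub-window search [2].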