k-nearest neighbors associative memory model for face recognition

  • Authors:
  • Bai-ling Zhang; Yuan Miao; Gopal Gupta

  • Affiliations:
  • School of Computer Science and Mathematics, Victoria University, VIC, Australia; School of Computer Science and Mathematics, Victoria University, VIC, Australia; School of Computer Science and Software Engineering, Monash University, VIC, Australia

  • Venue:
  • AI'05: Proceedings of the 18th Australian Joint Conference on Advances in Artificial Intelligence
  • Year:
  • 2005

Abstract

Associative memory (AM) models for human face recognition have previously been studied in psychology and neuroscience. A kernel-based AM model (KAM) was recently proposed and demonstrated good recognition performance. KAM first transforms the input into a kernel feature space and then reconstructs the input from the kernel features. For a given subject, KAM uses all of the training samples to build the model, regardless of the query face image. This not only incurs unnecessary overhead in model building when the number of samples is large, but also makes the model less robust when there are outliers in the training samples, for example those caused by occlusion or illumination changes. In this paper, an improved associative memory model is investigated by combining KAM with the k-nearest neighbors classification algorithm. Named the k-Nearest Neighbors Associative Memory (kNN-AM), the model takes into account the closeness between a query face image and the training prototype face images. A modular scheme for applying the proposed kNN-AM to face recognition is discussed. As a multi-class classification problem, face recognition can be carried out simply by determining which associative memory model best describes a given query face image. Results of extensive experiments on several well-known face databases show that kNN-AM achieves very satisfactory recognition accuracy.
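The classification scheme described in the abstract — build one associative memory per subject from the k training prototypes nearest the query, then assign the query to the subject whose memory reconstructs it with the smallest error — can be sketched as follows. This is not the paper's kernel implementation: as a simplifying assumption it uses a plain linear auto-associative memory (projection onto the span of the selected prototypes via the pseudoinverse) in place of KAM, and the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def knn_am_classify(query, subjects, k=3):
    """Classify `query` (a flattened face vector) by the reconstruction
    error of per-subject associative memories built from only the k
    training prototypes nearest to the query.

    `subjects` maps a class label to an (n_samples, dim) array of
    training vectors for that subject.
    """
    best_label, best_err = None, np.inf
    for label, X in subjects.items():
        # kNN step: keep only the k prototypes closest to the query,
        # so distant (possibly outlying) samples do not enter the model.
        dists = np.linalg.norm(X - query, axis=1)
        P = X[np.argsort(dists)[:k]].T          # dim x k prototype matrix
        # Linear auto-associative memory: W = P P^+ projects any input
        # onto the subspace spanned by the selected prototypes.
        W = P @ np.linalg.pinv(P)
        # Reconstruction error: how poorly this subject's memory
        # "remembers" the query.
        err = np.linalg.norm(query - W @ query)
        if err < best_err:
            best_label, best_err = label, err
    # Multi-class decision: the subject whose memory best describes
    # the query wins.
    return best_label
```

The kernel variant would replace the linear projection with a reconstruction computed in the kernel feature space, but the per-subject model building and minimum-error decision rule are the same.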