Text-independent speaker authentication with spiking neural networks

  • Authors:
  • Simei Gomes Wysoski; Lubica Benuskova; Nikola Kasabov

  • Affiliations:
  • Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland, New Zealand (all authors)

  • Venue:
  • ICANN'07: Proceedings of the 17th International Conference on Artificial Neural Networks
  • Year:
  • 2007

Abstract

This paper presents a novel system that performs text-independent speaker authentication using new spiking neural network (SNN) architectures. Each speaker is represented by a set of prototype vectors trained with a standard Hebbian rule and a winner-takes-all approach. For every speaker there is a separate spiking network that computes normalized similarity scores of Mel Frequency Cepstral Coefficient (MFCC) features with respect to speaker and background models. Experiments on the VidTimit dataset show that the system performs comparably to a benchmark method based on vector quantization. A key property of the system is that it can be optimized for performance, speed, and energy efficiency. A procedure to create and merge neurons is also presented, enabling adaptive, on-line training in an evolvable way.
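
The pipeline described in the abstract (winner-takes-all Hebbian training of per-speaker prototype vectors over MFCC frames, followed by a similarity score normalized against a background model) can be sketched roughly as below. This is a minimal rate-based illustration rather than the authors' spiking implementation; the function names, the Euclidean distance measure, and all parameter values are assumptions made for the sketch.

```python
import numpy as np

def train_prototypes(frames, n_prototypes=16, lr=0.05, epochs=10, seed=0):
    """Winner-takes-all Hebbian training of prototype vectors.

    `frames` is an (n_frames, n_mfcc) array of MFCC feature vectors for one
    speaker. For each frame, the closest prototype wins (WTA) and is moved
    toward the frame (Hebbian-style update) - a rate-based stand-in for the
    paper's spiking rule.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(frames), n_prototypes, replace=False)
    protos = frames[idx].astype(float).copy()
    for _ in range(epochs):
        for x in frames:
            winner = np.argmin(np.linalg.norm(protos - x, axis=1))  # WTA step
            protos[winner] += lr * (x - protos[winner])             # Hebbian update
    return protos

def normalized_score(frames, speaker_protos, background_protos):
    """Average frame similarity to the speaker model, normalized by a
    background (impostor) model; positive values favour the claimed speaker."""
    def avg_dist(protos):
        return np.mean([np.min(np.linalg.norm(protos - x, axis=1)) for x in frames])
    return avg_dist(background_protos) - avg_dist(speaker_protos)

# Usage (hypothetical): accept the identity claim if
# normalized_score(test_frames, speaker_protos, background_protos) > threshold.
```

In this reading, the background model plays the same normalizing role as in conventional vector-quantization speaker verification, which is the benchmark the paper compares against.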