Novel vector quantiser design using reinforced learning as a pre-process

  • Authors and affiliations:
  • Wenhuan Xu, Department of Electrical Engineering and Electronics, Signal Processing and Communications Group, University of Liverpool, Liverpool, UK
  • Asoke K. Nandi, Department of Electrical Engineering and Electronics, Signal Processing and Communications Group, University of Liverpool, Liverpool, UK
  • Jihong Zhang, Information Engineering Faculty, Shenzhen University, Shenzhen, PR China
  • Kenneth G. Evans, Department of Electrical Engineering and Electronics, Signal Processing and Communications Group, University of Liverpool, Liverpool, UK

  • Venue:
  • Signal Processing
  • Year:
  • 2005


Abstract

This paper presents the development and evaluation of a new approach to the design of optimised codebooks for vector quantisation (VQ). A reinforced learning (RL) strategy is proposed which exploits the advantages offered by fuzzy clustering algorithms, competitive learning, and knowledge of the training vector and codevector configurations. RL is used as a pre-process before a conventional VQ algorithm such as the generalised Lloyd algorithm (GLA) or the fuzzy k-means (FKM) algorithm. At each iteration of RL, codevectors move intelligently and intentionally toward an improved codebook design. This is distinct from simulated annealing (SA) and genetic algorithm (GA) techniques, in which random variation is introduced into the movement of the codevectors. The new strategy reduces the likelihood that, in the final design, codevectors will be overcrowded in high-density regions of the training vector space, and that too few codevectors will settle in low-density regions. Experiments demonstrate that this yields a more effective representation of the training vectors by the codevectors and that the final codebook is nearer to the optimal solution in applications such as image compression. It has been found that GLA and FKM yield improved codebook quality in this application when RL is used as a pre-process. The investigations have also indicated that RL is insensitive to the selection of both the initial codebook and a learning-rate control parameter, which is the only parameter RL adds beyond those of the standard FKM.
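
The abstract does not detail the RL pre-process itself, but it is positioned as a front end to standard codebook design. As a point of reference, a minimal sketch of the baseline generalised Lloyd algorithm (GLA) that RL is said to precede might look like the following; the function name, parameters, and random initialisation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def generalised_lloyd(training_vectors, codebook_size, n_iters=50, seed=0):
    """Baseline GLA codebook design (illustrative sketch):
    alternate nearest-codevector assignment with centroid updates."""
    rng = np.random.default_rng(seed)
    # Assumed initialisation: codevectors drawn from the training set.
    idx = rng.choice(len(training_vectors), size=codebook_size, replace=False)
    codebook = training_vectors[idx].astype(float).copy()
    for _ in range(n_iters):
        # Assign each training vector to its nearest codevector (squared error).
        dists = ((training_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Move each codevector to the mean of its assigned training vectors.
        for k in range(codebook_size):
            members = training_vectors[labels == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook
```

Because this Lloyd-style iteration only moves each codevector locally, its final quality depends heavily on the initial codebook, which is the sensitivity the proposed RL pre-process is reported to reduce.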