Improving the GRLVQ algorithm by the cross entropy method

  • Authors:
  • Abderrahmane Boubezoul; Sébastien Paris; Mustapha Ouladsine

  • Affiliations:
  • Laboratory of Information and Systems Sciences (LSIS UMR), University Paul Cézanne, Marseille Cedex 20, France (all three authors)

  • Venue:
  • ICANN'07: Proceedings of the 17th International Conference on Artificial Neural Networks
  • Year:
  • 2007


Abstract

This paper discusses an alternative approach to parameter optimization for prototype-based learning algorithms that minimize an objective function by gradient search. The proposed approach is a stochastic optimization method called the Cross-Entropy (CE) method. The CE method is used to tackle the initialization sensitivity problem of the original Generalized Learning Vector Quantization (GLVQ) algorithm and its variants, and to locate globally optimal solutions. We focus on a variant (GRLVQ) that uses a weighted norm instead of the Euclidean norm in order to select the most relevant features. The results in this paper indicate that the CE method can successfully be applied to this kind of problem and efficiently generates high-quality solutions. Highly competitive numerical results on real-world data sets are also reported.
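To make the idea concrete, below is a minimal sketch (not the authors' implementation) of how the CE method can replace gradient search for fitting GRLVQ-style prototypes and relevance weights. It assumes a Gaussian sampling distribution over the concatenated prototype coordinates and relevance weights, and uses the training misclassification rate as the CE performance score; the paper may use a different score (e.g., the GLVQ cost), and names such as `ce_grlvq`, `n_elite`, and `smoothing` are illustrative.

```python
# Minimal cross-entropy (CE) optimization sketch for GRLVQ-style prototypes
# and relevance weights. Assumptions (not from the paper): Gaussian sampling
# distribution, misclassification rate as the CE score, elite-based updates
# with smoothing.
import numpy as np

def weighted_dist(X, protos, lam):
    # Relevance-weighted squared distance: d_lambda(x, w) = sum_i lam_i (x_i - w_i)^2
    diff = X[:, None, :] - protos[None, :, :]            # shape (n, p, d)
    return np.einsum('npd,d->np', diff ** 2, lam)         # shape (n, p)

def error_rate(theta, X, y, proto_labels, d):
    # Decode a candidate parameter vector and score it by training error.
    p = proto_labels.size
    protos = theta[:p * d].reshape(p, d)
    lam = np.abs(theta[p * d:])
    lam = lam / (lam.sum() + 1e-12)                        # normalized relevances
    winners = weighted_dist(X, protos, lam).argmin(axis=1)
    return np.mean(proto_labels[winners] != y)

def ce_grlvq(X, y, protos_per_class=1, n_samples=200, n_elite=20,
             n_iter=100, smoothing=0.7, rng=None):
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    proto_labels = np.repeat(np.unique(y), protos_per_class)
    # Initialize the sampling distribution around the class means.
    init_protos = np.vstack([X[y == c].mean(axis=0) for c in proto_labels])
    mu = np.concatenate([init_protos.ravel(), np.full(d, 1.0 / d)])
    sigma = np.full(mu.size, X.std())
    for _ in range(n_iter):
        # 1) Draw candidate parameter vectors from N(mu, diag(sigma^2)).
        thetas = mu + sigma * rng.standard_normal((n_samples, mu.size))
        # 2) Score each candidate.
        scores = np.array([error_rate(t, X, y, proto_labels, d) for t in thetas])
        # 3) Re-fit the sampling distribution to the elite samples, with smoothing.
        elite = thetas[np.argsort(scores)[:n_elite]]
        mu = smoothing * elite.mean(axis=0) + (1 - smoothing) * mu
        sigma = smoothing * elite.std(axis=0) + (1 - smoothing) * sigma
        if sigma.max() < 1e-4:                             # distribution has collapsed
            break
    return mu, proto_labels
```

Because each iteration only requires evaluating sampled candidates, the CE loop is insensitive to the initial prototype placement in a way that gradient descent on the GLVQ cost is not: poor starting points simply receive low scores and are discarded when the sampling distribution is re-fit to the elite set.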