Constrained Learning Vector Quantization or Relaxed k-Separability

  • Authors:
  • Marek Grochowski; Włodzisław Duch

  • Affiliations:
  • Department of Informatics, Nicolaus Copernicus University, Toruń, Poland (both authors)

  • Venue:
  • ICANN '09 Proceedings of the 19th International Conference on Artificial Neural Networks: Part I
  • Year:
  • 2009

Abstract

Neural networks and other sophisticated machine learning algorithms frequently miss simple solutions that can be discovered by more constrained learning methods. The transition from a single neuron solving linearly separable problems, to a multi-threshold neuron solving k-separable problems, to neurons implementing prototypes solving q-separable problems, is investigated. Using the Learning Vector Quantization (LVQ) approach, this transition is presented as going from two prototypes defining a single hyperplane, to many collinear prototypes defining parallel hyperplanes, to unconstrained prototypes defining a Voronoi tessellation. For most datasets, relaxing the collinearity condition improves accuracy at the cost of increased model complexity, but for data with an inherent logical structure, LVQ algorithms with constraints significantly outperform the original LVQ and many other algorithms.
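
The constrained variant described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' exact algorithm: it runs the standard LVQ1 update and, as one plausible way to impose the constraint, re-projects the prototypes after each epoch onto their common best-fit line, so that nearest-prototype classification reduces to parallel hyperplanes (k-separability). All function and variable names, and the projection step itself, are assumptions made for this sketch.

```python
import numpy as np

def lvq1_collinear(X, y, prototypes, proto_labels, lr=0.05, epochs=50, seed=0):
    """Sketch of LVQ1 with a collinearity constraint on the prototypes.

    After each epoch the prototypes are projected back onto their first
    principal axis, so they stay on one common line; nearest-prototype
    classification then reduces to parallel hyperplanes (k-separability).
    """
    rng = np.random.default_rng(seed)
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x = X[i]
            j = int(np.argmin(np.linalg.norm(P - x, axis=1)))  # winning prototype
            step = lr if proto_labels[j] == y[i] else -lr      # LVQ1: attract / repel
            P[j] += step * (x - P[j])
        # constraint: project prototypes onto their best-fit (first PC) line
        c = P.mean(axis=0)
        _, _, Vt = np.linalg.svd(P - c, full_matrices=False)
        w = Vt[0]                                  # direction of the common line
        P = c + np.outer((P - c) @ w, w)           # prototypes are collinear again
    return P, proto_labels

# 2-D XOR: not linearly separable, but 3-separable along the diagonal,
# which a set of collinear prototypes can express (toy usage example)
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y = np.array([0, 0, 1, 1])
P0 = X + 0.05                      # one prototype per point (toy initialization)
P, plabels = lvq1_collinear(X, y, P0, y.copy())
pred = plabels[np.argmin(np.linalg.norm(X[:, None] - P[None], axis=2), axis=1)]
```

Dropping the projection step turns this back into ordinary LVQ1 with unconstrained prototypes (a Voronoi tessellation), which matches the abstract's trade-off: more flexibility, but a more complex model.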