Modification of nested hyperrectangle exemplar as a proposition of information fusion method

  • Authors:
  • Michał Woźniak

  • Affiliations:
  • Systems and Computer Networks, Faculty of Electronics, Wroclaw University of Technology, Wroclaw, Poland

  • Venue:
  • IDEAL'09 Proceedings of the 10th international conference on Intelligent data engineering and automated learning
  • Year:
  • 2009


Abstract

Traditional approaches to combining classifiers build committees on the basis of the outputs of simple classifiers. There are two main concepts: the first uses the class labels given by the simple classifiers, while the second is based on their discriminant functions. However, classifier fusion need not be performed as a mix of outputs; it can instead be performed as a fusion of the training information the classifiers use to make decisions. Note that classifiers use two general forms of information: learning sets and rules. This paper presents a concept of information fusion in which both types of information are used together during NGE (Nested Generalized Exemplar) learning. NGE is a learning method that generalizes a given training set into a set of hyperrectangles in an n-dimensional Euclidean space. The NGE algorithm can be considered a descendant of minimal-distance lazy classifiers such as the k-NN (k-Nearest Neighbor) classifier. For a new example, the class of the smallest hyperrectangle that contains it is predicted. If the new example does not lie within any hyperrectangle, the algorithm predicts the class of the closest generalized exemplar, determined by its Euclidean distance to the nearest generalized hyperrectangle. This paper describes a version of the NGE model that produces a set of hyperrectangles on the basis of both a training set and a set of rules. The quality of the proposed modification of NGE (called RB-NGE) has been evaluated and compared with the original NGE and k-NN in computer experiments.
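The NGE decision rule described in the abstract can be sketched as follows. This is a minimal illustrative sketch of the generic NGE classification step only (not the paper's RB-NGE rule-based construction); the class and function names are hypothetical, and it assumes axis-aligned hyperrectangles with per-dimension lower/upper bounds.

```python
import numpy as np

class Hyperrectangle:
    """A generalized exemplar: an axis-aligned box with a class label."""
    def __init__(self, lower, upper, label):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)
        self.label = label

    def contains(self, x):
        return bool(np.all((self.lower <= x) & (x <= self.upper)))

    def volume(self):
        return float(np.prod(self.upper - self.lower))

    def distance(self, x):
        # Euclidean distance from x to the box (0 if x is inside):
        # clip x to the bounds and measure the remaining gap.
        nearest = np.clip(x, self.lower, self.upper)
        return float(np.linalg.norm(x - nearest))

def classify(rectangles, x):
    x = np.asarray(x, dtype=float)
    # Rule 1: if x lies inside one or more (possibly nested)
    # hyperrectangles, predict the class of the smallest one.
    containing = [r for r in rectangles if r.contains(x)]
    if containing:
        return min(containing, key=lambda r: r.volume()).label
    # Rule 2: otherwise predict the class of the nearest
    # hyperrectangle by Euclidean distance.
    return min(rectangles, key=lambda r: r.distance(x)).label

# Toy exemplar set with one nested pair of hyperrectangles.
rects = [
    Hyperrectangle([0, 0], [4, 4], "A"),
    Hyperrectangle([1, 1], [2, 2], "B"),  # nested inside the first
    Hyperrectangle([6, 6], [8, 8], "C"),
]
print(classify(rects, [1.5, 1.5]))  # inside A and B; smaller box wins: B
print(classify(rects, [9.0, 7.0]))  # outside all boxes; nearest is C
```

The nesting tie-break (smallest containing box wins) is what lets NGE carve exceptions out of larger generalized regions, which is the property the paper's rule-based modification builds on.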