Dynamic data condensation for classification

  • Authors:
  • Dymitr Ruta

  • Affiliations:
  • Chief Technology Office, Research & Venturing, British Telecommunications Group (BT), Ipswich, UK

  • Venue:
  • ICAISC'06: Proceedings of the 8th International Conference on Artificial Intelligence and Soft Computing
  • Year:
  • 2006

Abstract

Despite increasing amounts of data and rapidly growing computational power, current state-of-the-art pattern recognition models still cannot handle the massive and noisy corporate data warehouses that have become the reality of today's businesses. Moreover, real-time and adaptive systems often require frequent model retraining, which further hinders their use. It is therefore necessary to build the classification model on a much smaller representative subset of the original dataset. Various condensation methods, ranging from data sampling to density retention models, attempt to capture the summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. The proposed family of models, called Dynamic Data Condensation (DDC), combines dynamic condensation, data editing and noise filtering in an attempt to maximally reduce the labelled dataset without harming the performance of a classifier trained on the reduced set. The condensation is achieved by data merging and repositioning imposed by an electrostatic-type field applied to the data, which attracts data from the same class but repels data from different classes, thereby trying to improve class separability. Initial experiments demonstrate that DDC outperforms competitive data condensation methods in terms of both data reduction and classification performance, and is therefore considered a better preprocessing step for classification.
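
The mechanism described in the abstract, same-class attraction combined with between-class repulsion followed by merging of nearby points, can be illustrated with a minimal sketch. The inverse-square force law, step size, merge radius and all function names below are assumptions chosen for illustration only, not the authors' DDC formulation.

```python
import numpy as np

def condensation_step(X, y, step=0.05, eps=1e-6):
    """Move each point along the net force from all other points:
    same-class points attract, different-class points repel (inverse-square)."""
    diff = X[None, :, :] - X[:, None, :]           # diff[i, j] = X[j] - X[i]
    dist = np.linalg.norm(diff, axis=-1) + eps     # pairwise distances
    sign = np.where(y[:, None] == y[None, :], 1.0, -1.0)  # attract vs. repel
    np.fill_diagonal(sign, 0.0)                    # no self-interaction
    force = (sign / dist**2)[:, :, None] * (diff / dist[:, :, None])
    return X + step * force.sum(axis=1)

def merge_close_points(X, y, radius=0.1):
    """Greedily merge same-class points closer than `radius` into their mean
    (a simple stand-in for the data-merging component described above)."""
    keep = np.ones(len(X), dtype=bool)
    for i in range(len(X)):
        if not keep[i]:
            continue
        close = keep & (y == y[i]) & (np.linalg.norm(X - X[i], axis=1) < radius)
        X[i] = X[close].mean(axis=0)
        close[i] = False        # keep point i as the merged representative
        keep[close] = False     # drop the points merged into it
    return X[keep], y[keep]

# Toy usage: two Gaussian classes condensed over a few iterations.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
for _ in range(10):
    X = condensation_step(X, y)
    X, y = merge_close_points(X, y)
print(f"Condensed to {len(X)} representative points")
```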