Induction of compact neural network trees through centroid based dimensionality reduction

  • Authors:
  • Hirotomo Hayashi; Qiangfu Zhao

  • Affiliation:
  • Department of Computer and Information Systems, The University of Aizu, Aizuwakamatsu, Japan (both authors)

  • Venue:
  • SMC'09: Proceedings of the 2009 IEEE International Conference on Systems, Man and Cybernetics
  • Year:
  • 2009

Abstract

The neural network tree (NNTree) is a hybrid model for machine learning. Compared with single-model, fully connected neural networks, NNTrees are better suited to structural learning and faster at decision making. To make NNTrees easier to realize, we have tried to induce more compact NNTrees through dimensionality reduction. So far, we have used principal component analysis (PCA) and linear discriminant analysis (LDA) for dimensionality reduction, and confirmed that in most cases the LDA-based approach yields very compact NNTrees without degrading performance. One drawback of the LDA-based approach is that the cost of finding the transformation matrix can be very high for large databases. To solve this problem, in this paper we investigate the efficiency and efficacy of two centroid-based approaches for NNTree induction. One maps each datum directly to the class centroids; the other finds the least-square-error approximation of each datum using the centroids. Experimental results show that both approaches, although simple, are comparable to the LDA-based approach in most cases.
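The abstract only outlines the two centroid-based reductions, so the following is a minimal sketch rather than the authors' implementation. It assumes a feature matrix `X` with integer class labels `y`, reads "mapping each datum to the class centroids" as computing each datum's distances to the class centroids, and implements the least-squares variant by solving for combination coefficients over the centroids; all function names and the usage data are hypothetical.

```python
import numpy as np

def class_centroids(X, y):
    """Return an (n_classes, d) matrix whose rows are the per-class means."""
    classes = np.unique(y)
    return np.vstack([X[y == c].mean(axis=0) for c in classes])

def map_to_centroids(X, centroids):
    """Approach 1 (one plausible reading): represent each datum by its
    Euclidean distances to the class centroids, reducing d features to
    n_classes features."""
    return np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)

def least_squares_coefficients(X, centroids):
    """Approach 2: approximate each datum as a linear combination of the
    centroids in the least-squares sense and keep the coefficients as the
    reduced representation."""
    # Solve min_W || centroids.T @ W - X.T ||^2, one column per sample.
    W, *_ = np.linalg.lstsq(centroids.T, X.T, rcond=None)
    return W.T  # shape (n_samples, n_classes)

# Hypothetical usage on random data with 3 classes and 20 original features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = rng.integers(0, 3, size=100)
C = class_centroids(X, y)
X_dist = map_to_centroids(X, C)            # (100, 3)
X_lsq = least_squares_coefficients(X, C)   # (100, 3)
```

In both variants the reduced dimensionality equals the number of classes, which is what keeps the induced NNTrees compact; unlike LDA, neither requires an eigendecomposition of large scatter matrices, which is the cost the abstract attributes to the LDA-based approach.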