This paper evaluates the performance of a number of novel extensions of the hyperbox neural network algorithm, a method that uses different modes of learning for supervised classification problems. One hyperbox per class is defined that covers the full range of attribute values in the class. Each hyperbox has one or more neurons associated with it, which model the class distribution. During prediction, points falling into only one hyperbox can be classified immediately, with the neural outputs used only when points lie in overlapping regions of hyperboxes. Decomposing the learning problem into easier and harder regions allows extremely efficient classification. We introduce an unsupervised clustering stage in each hyperbox, followed by supervised learning of a neuron per cluster. Both random and heuristic-driven initialisation of the cluster centres and initial weight vectors are considered. We also consider an adaptive activation function for use in the neural mode. The performance and computational efficiency of the hyperbox methods are evaluated on artificial datasets and publicly available real datasets, and compared with results obtained on the same datasets using Support Vector Machine, decision tree, k-nearest neighbour, and Multilayer Perceptron (with backpropagation) classifiers. We conclude that the method performs competitively and is computationally efficient, and we provide recommendations for best usage of the method based on the results on artificial datasets and an evaluation of its sensitivity to initialisation.
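The two-mode decision scheme described in the abstract can be illustrated with a minimal sketch: fit one axis-aligned hyperbox per class from the attribute ranges, classify a point directly when exactly one hyperbox contains it, and fall back to a second mode only in overlap regions. The fallback below is a simple nearest-centroid tie-break standing in for the paper's neural mode; function names and that substitution are this sketch's assumptions, not the authors' implementation.

```python
import numpy as np

def fit_hyperboxes(X, y):
    """One hyperbox per class: per-attribute min/max over that class's
    training points, plus the class centroid for the fallback mode."""
    boxes = {}
    for c in np.unique(y):
        Xc = X[y == c]
        boxes[c] = (Xc.min(axis=0), Xc.max(axis=0), Xc.mean(axis=0))
    return boxes

def predict(x, boxes):
    # "Easy" region: the point lies inside exactly one class hyperbox.
    inside = [c for c, (lo, hi, _) in boxes.items()
              if np.all(x >= lo) and np.all(x <= hi)]
    if len(inside) == 1:
        return inside[0]
    # "Hard" region (overlapping boxes, or no box): a nearest-centroid
    # tie-break stands in here for the paper's neural mode.
    candidates = inside if inside else list(boxes)
    return min(candidates, key=lambda c: np.linalg.norm(x - boxes[c][2]))
```

With well-separated classes most test points fall in exactly one box and skip the fallback entirely, which is the source of the method's efficiency claim.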