Fuzzy ARTMAP neural networks have been proven to be good classifiers on a variety of classification problems. However, the time that Fuzzy ARTMAP takes to converge to a solution increases rapidly as the number of training patterns grows. In this paper we examine the time Fuzzy ARTMAP takes to converge to a solution, and we propose a coarse-grain parallelization technique, based on a pipeline approach, to speed up the training process. In particular, we have parallelized Fuzzy ARTMAP without the match-tracking mechanism. We provide a series of theorems and associated proofs that characterize this parallel implementation of Fuzzy ARTMAP without match-tracking. Results obtained on a Beowulf cluster with three large databases show linear speedup as a function of the number of processors used in the pipeline. The databases used in our experiments are the Forest CoverType database from the UCI Machine Learning Repository and two artificial databases of 16-dimensional Gaussian-distributed data belonging to two distinct classes, with different amounts of overlap (5% and 15%).
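The abstract describes the artificial databases only at this level of detail. As a minimal sketch (not the authors' generator), two-class 16-dimensional Gaussian data of this kind could be produced as follows; the `mean_offset` parameter controlling class overlap is an assumption for illustration, not the paper's parameterization:

```python
import numpy as np

def make_gaussian_classes(n_per_class=1000, dim=16, mean_offset=1.0, seed=0):
    # Hypothetical sketch: two Gaussian-distributed classes in `dim`
    # dimensions. The distance between the class means (`mean_offset`)
    # controls how much the class-conditional densities overlap.
    rng = np.random.default_rng(seed)
    x0 = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, dim))
    x1 = rng.normal(loc=mean_offset, scale=1.0, size=(n_per_class, dim))
    X = np.vstack([x0, x1])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

X, y = make_gaussian_classes()
```

Decreasing `mean_offset` moves the class means closer together and raises the overlap (and hence the Bayes error) of the generated problem.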