When neural networks are applied to large-scale real-world classification problems, a major drawback is their inefficient use of network resources. A natural way to overcome this drawback is to decompose the problem into several smaller sub-problems, following the "divide-and-conquer" methodology. This paper presents a hybrid task-decomposition method, OP-RPHP (Output Parallelism with Recursive Percentage-based Hybrid Pattern training). OP-RPHP combines both class decomposition and domain decomposition in its architecture, thereby incorporating the advantages of both methods. The sub-networks of OP-RPHP can be grown and trained in parallel on separate processing units to reduce training time. To reduce training time further, a reduced pattern training algorithm is introduced; the reduction parameter p associated with this algorithm is optimized to obtain the maximum reduction in training time without compromising classification accuracy. Our approach is tested on four benchmark classification problems from the UCI repository of machine learning databases. The results show that OP-RPHP with reduced pattern training outperforms the conventional OP and RPHP algorithms in both classification accuracy and training time.
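The abstract does not give pseudocode for the reduced pattern training step. As a minimal sketch, percentage-based pattern reduction amounts to training each sub-network on only a fraction p of the full pattern set, which cuts per-epoch cost roughly in proportion to p. The function name `reduced_pattern_subset` and its signature below are illustrative assumptions, not the authors' API.

```python
import random

def reduced_pattern_subset(patterns, p, seed=0):
    """Select a random fraction p of the training patterns.

    Hypothetical sketch of percentage-based reduced pattern training:
    a sub-network is trained on this subset instead of the full set,
    so per-epoch training cost scales roughly with p. The reduction
    parameter p would be tuned so accuracy is not compromised.
    """
    if not 0.0 < p <= 1.0:
        raise ValueError("reduction parameter p must lie in (0, 1]")
    rng = random.Random(seed)  # fixed seed for reproducibility
    k = max(1, round(p * len(patterns)))  # keep at least one pattern
    return rng.sample(patterns, k)

# Example: keep 20% of 100 patterns -> 20 patterns per training pass.
subset = reduced_pattern_subset(list(range(100)), p=0.2)
```

In the recursive setting described above, a fresh subset would be drawn at each stage of the recursion, so that over many stages the sub-networks collectively see most of the data while each individual pass stays cheap.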