This paper introduces an efficient training algorithm for a dendrite morphological neural network (DMNN). Given p classes of patterns C^k, k = 1, 2, ..., p, the algorithm takes the patterns of all classes and opens an n-dimensional hypercube HC^n large enough that every class element lies inside it. The size of HC^n can be chosen so that the border elements lie on some of its faces, or it can be enlarged beyond them; the latter choice makes the trained DMNN a very robust classifier in the presence of noise at testing time, as shown later in the paper. In a second step, the algorithm divides HC^n into 2^n smaller hypercubes and checks whether each of them encloses patterns of only one class. If so, the learning process stops and the DMNN is designed. If a hypercube encloses patterns of more than one class, it is in turn divided into 2^n smaller hypercubes, and this verification is repeated recursively on each smaller hypercube until the stopping criterion is satisfied, at which point the DMNN is designed. The algorithm was tested on benchmark problems and its performance compared against several reported algorithms, showing its superiority.
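To make the subdivision procedure concrete, the following is a minimal Python sketch of the training loop described above. It is an illustration under stated assumptions, not the authors' implementation: the `margin` parameter stands in for the enlarged-hypercube option, a `max_depth` cap is added as a practical stopping guard (the paper stops on class purity alone), and `classify` simply returns the label of the first enclosing box rather than evaluating the DMNN's dendrite computations. The names `train_dmnn`, `classify`, `margin`, and `max_depth` are all hypothetical.

```python
import numpy as np

def train_dmnn(X, y, margin=0.0, max_depth=10):
    """Recursively subdivide the bounding hypercube until each cell
    encloses patterns of a single class. Returns a list of
    (lower_corner, upper_corner, label) boxes, which correspond to the
    hyperboxes a DMNN's dendrites would encode. `margin` enlarges the
    initial hypercube, mimicking the paper's 'bigger size' option for
    noise robustness. `max_depth` is an added guard, not part of the
    paper's stopping criterion."""
    lo = X.min(axis=0) - margin
    hi = X.max(axis=0) + margin
    boxes = []

    def subdivide(lo, hi, depth):
        # Select the patterns enclosed by the current hypercube.
        inside = np.all((X >= lo) & (X <= hi), axis=1)
        labels, counts = np.unique(y[inside], return_counts=True)
        if labels.size == 0:
            return  # empty cell: no box needed
        if labels.size == 1 or depth == max_depth:
            # Pure cell (or depth cap hit): emit one box, labelled
            # with the majority class of the enclosed patterns.
            boxes.append((lo.copy(), hi.copy(), labels[np.argmax(counts)]))
            return
        # Mixed classes: halve every dimension, producing 2^n
        # sub-hypercubes, and recurse on each of them. This naive
        # enumeration is practical only for low-dimensional data.
        mid = (lo + hi) / 2.0
        n = lo.size
        for corner in range(2 ** n):
            new_lo, new_hi = lo.copy(), hi.copy()
            for d in range(n):
                if (corner >> d) & 1:
                    new_lo[d] = mid[d]
                else:
                    new_hi[d] = mid[d]
            subdivide(new_lo, new_hi, depth + 1)

    subdivide(lo, hi, 0)
    return boxes

def classify(x, boxes):
    """Simplified decision rule: the label of the first box enclosing
    x, or None if x falls outside every box."""
    for lo, hi, label in boxes:
        if np.all(x >= lo) and np.all(x <= hi):
            return label
    return None
```

A small two-class usage example, again purely illustrative:

```python
X = np.array([[0.2, 0.3], [0.8, 0.9], [0.1, 0.4], [0.7, 0.8]])
y = np.array([0, 1, 0, 1])
boxes = train_dmnn(X, y, margin=0.1)   # one split yields two pure boxes
print(classify(np.array([0.15, 0.35]), boxes))  # expected: 0
```

Note that because the sub-hypercubes are closed boxes, a pattern lying exactly on a splitting plane is counted in more than one cell; this does not affect the purity test, and the `max_depth` guard prevents unbounded recursion when patterns of different classes coincide.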