The relationship between the sequential hard c-means (SHCM) and learning vector quantization (LVQ) clustering algorithms is discussed, together with the impact and interaction of these two families of methods with Kohonen's self-organizing feature map (SOFM), which is not a clustering method but often lends ideas to clustering algorithms. A generalization of LVQ that updates all nodes for a given input vector is proposed. The network attempts to find a minimum of a well-defined objective function. The learning rules depend on the degree of distance match to the winner node: the lesser the degree of match with the winner, the greater the impact on nonwinner nodes. Numerical results indicate that the terminal prototypes generated by this modification of LVQ are generally insensitive to initialization and independent of the choice of learning coefficient. The IRIS data obtained by E. Anderson (1939) are used to illustrate the proposed method, and results are compared with the standard LVQ approach.
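The update scheme described above can be sketched in code. This is a minimal illustration, not the paper's exact learning rules: the weighting used for the nonwinner nodes (a share of the learning rate that grows as the winner's match worsens) is an assumption made for demonstration purposes only.

```python
import numpy as np

def glvq_style_update(prototypes, x, lr):
    """One step of a GLVQ-style rule in which every prototype
    moves toward the input x.

    Illustrative weighting (an assumption, not the paper's rule):
    the winner receives the full learning rate lr, and each
    nonwinner receives a share of lr that increases as the
    winner's distance match worsens.
    """
    d = np.linalg.norm(prototypes - x, axis=1)  # distance of each node to x
    win = int(np.argmin(d))                     # winner node index
    total = d.sum() + 1e-12                     # guard against division by zero
    # Poorer winner match (larger d[win] relative to total)
    # -> greater impact on the nonwinner nodes.
    weights = np.full(len(prototypes), lr * d[win] / total)
    weights[win] = lr
    prototypes += weights[:, None] * (x - prototypes)
    return prototypes
```

For example, after one call every prototype is at least as close to the input as before, with the winner moving the most.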