Self-Organizing Feature Maps (SOFMs) are extensively used for dimensionality reduction and for revealing the inherent structure of data. A novel SOFM model based on the notion of aggregate/reduced ordering (R-ordering) of vector sets is proposed and applied to the segmentation of color images. The so-called Cross-Order Distance Matrix is defined in order to measure the similarity between local histograms corresponding to ordered sets of color vectors. Color images are regarded as two-dimensional (2-D) vector fields, so basic image processing algorithms must be modified: color is represented as a vector rather than a scalar gray-level variable. Operators utilizing several distance and similarity measures are adopted in order to quantify the color distribution within a sliding window. The proposed window-based SOFM uses sets of one, two, or more color vectors to approximate the local color distribution within each sliding window. Each set constitutes a separate node of the SOFM, which is trained on a sequence of ordered input sets of color vectors. A 3x3 window is used to capture color components in the uniform color space (L*u*v*), and the color vectors within the sliding window are R-ordered. During training, the neuron with the smallest aggregate distance to the input (i.e., the greatest similarity) is activated. Segmentation results suggest that clustered nodes represent populations of pixels in rather compact image segments featuring similar texture.
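The aggregate/reduced (R-) ordering step described above can be sketched as follows: each color vector in a window is ranked by the sum of its distances to all other vectors in that window, so the most "central" vector comes first and outliers come last. This is a minimal NumPy sketch under the common definition of R-ordering with Euclidean distances, not the authors' implementation; the window values are illustrative.

```python
import numpy as np

def r_order(vectors):
    """R-order a set of color vectors.

    Each vector is assigned an aggregate distance: the sum of its
    Euclidean distances to every other vector in the set. Vectors are
    then sorted by this scalar, ascending (most central first).
    Returns the ordered vectors and their aggregate distances.
    """
    # Pairwise differences: shape (n, n, 3) for n color vectors.
    diffs = vectors[:, None, :] - vectors[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)   # pairwise Euclidean distances
    agg = dists.sum(axis=1)                 # aggregate distance per vector
    order = np.argsort(agg)                 # ascending rank
    return vectors[order], agg[order]

# Nine 3-D color vectors from a 3x3 window in an L*u*v*-like space
# (values are made up for illustration; one vector is an outlier).
window = np.array([
    [50., 10., 10.], [52., 11.,  9.], [51., 10., 11.],
    [49.,  9., 10.], [50., 12., 10.], [51., 11., 10.],
    [48., 10.,  9.], [90., 40., 40.], [50., 10., 12.],
])
ordered, agg = r_order(window)
```

Because the outlier is far from the tight cluster, its aggregate distance is the largest and it is placed last in the ordered set; a window-based SOFM node can then be compared against such ordered sets via an aggregate (cross-order) distance.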