The task of discovering natural groupings of input patterns, or clustering, is an important aspect of machine learning and pattern analysis. In this paper, we study the widely used spectral clustering algorithm, which clusters data using the eigenvectors of a similarity/affinity matrix derived from a data set. In particular, we aim to solve two critical issues in spectral clustering: (1) how to automatically determine the number of clusters, and (2) how to perform effective clustering given noisy and sparse data. An analysis of the characteristics of the eigenspace is carried out which shows that (a) not every eigenvector of a data affinity matrix is informative and relevant for clustering; (b) eigenvector selection is critical because using uninformative/irrelevant eigenvectors could lead to poor clustering results; and (c) the corresponding eigenvalues cannot be used for relevant eigenvector selection given a realistic data set. Motivated by this analysis, a novel spectral clustering algorithm is proposed which differs from previous approaches in that only informative/relevant eigenvectors are employed for determining the number of clusters and performing clustering. The key element of the proposed algorithm is a simple but effective relevance learning method which measures the relevance of an eigenvector according to how well it can separate the data set into different clusters. Our algorithm was evaluated using synthetic data sets as well as real-world data sets generated from two challenging visual learning problems. The results demonstrate that our algorithm is able to estimate the cluster number correctly and reveal the natural grouping of the input data/patterns even given sparse and noisy data.
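The overall pipeline the abstract describes (affinity matrix, eigen-decomposition, relevance-based eigenvector selection, then clustering on the selected eigenvectors) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the relevance score used here is a simple 1-D Fisher-style separation ratio at the median, standing in for the paper's relevance learning method, and the affinity kernel, candidate count, and k-means details are all assumptions.

```python
import numpy as np

def spectral_cluster_relevant(X, k, sigma=1.0, n_candidates=6):
    """Toy spectral clustering that keeps only eigenvectors judged informative.

    The relevance score below is a simple stand-in for the paper's
    relevance-learning step, not the authors' exact measure.
    """
    # Gaussian affinity matrix with zeroed diagonal
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)

    # Symmetrically normalised affinity D^{-1/2} A D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(1))
    L = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

    # Leading eigenvectors (largest eigenvalues first)
    vals, vecs = np.linalg.eigh(L)
    cand = vecs[:, np.argsort(vals)[::-1][:n_candidates]]

    def relevance(v):
        # How cleanly does a threshold at the median split the entries?
        # Bimodal (cluster-indicating) eigenvectors score high; near-constant
        # or noise-like eigenvectors score low.
        t = np.median(v)
        lo, hi = v[v <= t], v[v > t]
        if lo.size == 0 or hi.size == 0:
            return 0.0
        return (lo.mean() - hi.mean()) ** 2 / (lo.var() + hi.var() + 1e-12)

    scores = np.array([relevance(cand[:, j]) for j in range(cand.shape[1])])
    # Keep only eigenvectors whose relevance is comparable to the best one
    Y = cand[:, scores >= 0.5 * scores.max()]
    Y = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)

    # Plain k-means with deterministic farthest-point initialisation
    centers = [Y[0]]
    for _ in range(1, k):
        d2 = ((Y[:, None, :] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(Y[np.argmax(d2)])
    centers = np.array(centers)
    for _ in range(50):
        labels = ((Y[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([Y[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels
```

On two well-separated Gaussian blobs, the top non-trivial eigenvector is sharply bimodal and dominates the relevance ranking, while the near-constant leading eigenvector and the noise-like trailing ones are filtered out, illustrating point (a) of the analysis above.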