In this paper, we develop a game theoretic approach for clustering features in a learning problem. Feature clustering can serve as an important preprocessing step in many problems, such as feature selection and dimensionality reduction. In this approach, we view features as rational players of a coalitional game in which they form coalitions (or clusters) among themselves in order to maximize their individual payoffs. We show how the Nash Stable Partition (NSP), a well-known concept in coalitional game theory, provides a natural way of clustering features. Through this approach, one can obtain desirable properties of the clusters by choosing appropriate payoff functions. For a small number of features, the NSP based clustering can be found by solving an integer linear program (ILP). However, for a large number of features, the ILP based approach does not scale well, and hence we propose a hierarchical approach. Interestingly, a key result that we prove on the equivalence between a k-size NSP of a coalitional game and the minimum k-cut of an appropriately constructed graph comes in handy for large scale problems. In this paper, we use the feature selection problem (in a classification setting) as a running example to illustrate our approach. We conduct experiments to demonstrate the efficacy of our approach.
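To make the graph-based view concrete, the following is a minimal sketch of feature clustering via a similarity graph that is greedily split with repeated minimum cuts, a standard surrogate for the minimum k-cut problem the abstract refers to. It is not the paper's NSP/ILP procedure: the choice of numpy/networkx, absolute Pearson correlation as the edge weight, and greedy Stoer-Wagner splitting are assumptions made purely for illustration.

# A minimal sketch (not the authors' exact algorithm): partition features by
# building a similarity graph over them and repeatedly applying the cheapest
# minimum cut, a greedy surrogate for minimum k-cut. Edge weights are
# absolute Pearson correlations, which is an illustrative assumption.
import numpy as np
import networkx as nx

def cluster_features(X, k):
    """Partition the columns (features) of data matrix X into k clusters."""
    n_features = X.shape[1]
    sim = np.abs(np.corrcoef(X, rowvar=False))  # feature-feature similarity

    # Complete weighted graph whose nodes are feature indices.
    G = nx.Graph()
    for i in range(n_features):
        for j in range(i + 1, n_features):
            G.add_edge(i, j, weight=float(sim[i, j]))

    # Greedy minimum k-cut: split the component admitting the cheapest cut.
    components = [set(G.nodes)]
    while len(components) < k:
        best = None
        for idx, comp in enumerate(components):
            if len(comp) < 2:
                continue
            cut_value, (part_a, part_b) = nx.stoer_wagner(G.subgraph(comp))
            if best is None or cut_value < best[0]:
                best = (cut_value, idx, set(part_a), set(part_b))
        _, idx, part_a, part_b = best
        components[idx:idx + 1] = [part_a, part_b]
    return components

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))
    X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=200)  # correlate features 0 and 3
    print(cluster_features(X, k=3))

The greedy splitting step mirrors how an approximate minimum k-cut can stand in for an exhaustive search over k-size partitions when the number of features is large, which is the regime where the abstract notes the ILP formulation stops scaling.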