Linear discriminant dimensionality reduction
ECML PKDD'11 Proceedings of the 2011 European conference on Machine learning and knowledge discovery in databases - Volume Part I
Multi-subspace representation and discovery
ECML PKDD'11 Proceedings of the 2011 European conference on Machine learning and knowledge discovery in databases - Volume Part II
Self-taught dimensionality reduction on the high-dimensional small-sized data
Pattern Recognition
Exact top-k feature selection via l2,0-norm constraint
IJCAI'13 Proceedings of the Twenty-Third international joint conference on Artificial Intelligence
In many machine learning and data mining applications, we aim not only to build accurate {\em black box} predictors but also to discover predictive patterns in the data that improve our interpretation and understanding of the underlying physical, biological, and other natural processes. Sparse representation is one focus of this direction, and structural sparsity has recently attracted increasing attention. Structural sparsity is often achieved by imposing l2/l1 norms. In this paper, we present the explicit l2/l0 norm to achieve structural sparsity directly. To tackle the intractable l2/l0 optimization, we develop a general Lipschitz auxiliary function that leads to simple iterative algorithms: in each iteration, the induced sub-problem is solved optimally, and convergence is guaranteed. Furthermore, the local convergence rate is theoretically bounded. We test our optimization techniques on the multi-task feature learning problem. Experimental results show that our approaches outperform other approaches on both synthetic and real-world data sets.
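To make the l2/l0 constraint concrete: for a weight matrix W (features × tasks), the l2,0 "norm" counts the rows with nonzero l2 norm, so constraining it to at most k selects at most k features shared across all tasks. The sketch below is an illustrative projected-gradient stand-in for such a constraint, not the paper's Lipschitz auxiliary-function algorithm; the function names and the least-squares multi-task loss are assumptions for the example.

```python
import numpy as np

def project_l20(W, k):
    """Project W onto {W : ||W||_{2,0} <= k}: keep the k rows with the
    largest l2 norm and zero out the rest (hard row thresholding)."""
    row_norms = np.linalg.norm(W, axis=1)
    keep = np.argsort(row_norms)[-k:]  # indices of the k largest rows
    P = np.zeros_like(W)
    P[keep] = W[keep]
    return P

def projected_gradient_l20(X, Y, k, lr=0.002, iters=2000):
    """Illustrative solver (assumed, not from the paper) for the
    multi-task least-squares problem
        min_W ||XW - Y||_F^2  s.t.  ||W||_{2,0} <= k,
    alternating a gradient step with the l2,0 projection."""
    d, t = X.shape[1], Y.shape[1]
    W = np.zeros((d, t))
    for _ in range(iters):
        grad = 2 * X.T @ (X @ W - Y)       # gradient of the squared loss
        W = project_l20(W - lr * grad, k)  # step, then enforce the constraint
    return W
```

By construction, every iterate has at most k nonzero rows, so exactly k features are selected jointly for all tasks, which is the structural sparsity pattern the abstract describes.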