Maxi-Min discriminant analysis via online learning
Neural Networks
In this paper, we propose a novel discriminant analysis method, called Minimal Distance Maximization (MDM). In contrast to traditional LDA, which in effect maximizes the average divergence among classes, MDM seeks a low-dimensional subspace that maximizes the minimal (worst-case) divergence among classes. This "minimal" criterion avoids a known weakness of LDA's "average" criterion, which tends to merge similar classes with small divergence when applied to multi-class data. Furthermore, we formulate the worst-case problem as a convex problem, making the algorithm scalable to larger data sets. Experimental results demonstrate the advantages of the proposed method over five competitive approaches on one synthetic and six real-life data sets.
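The contrast between the "average" and "worst-case" criteria can be illustrated with a small NumPy sketch. This is not the paper's actual convex formulation; the function names and the toy projection are illustrative assumptions, used only to show how an average-divergence objective can be dominated by one well-separated class while the minimum exposes two nearly merged ones.

```python
import numpy as np

def pairwise_projected_distances(means, W):
    """Squared distances between class means after projecting onto subspace W."""
    proj = means @ W  # (k, d) @ (d, r) -> (k, r)
    k = proj.shape[0]
    dists = []
    for i in range(k):
        for j in range(i + 1, k):
            dists.append(np.linalg.norm(proj[i] - proj[j]) ** 2)
    return np.array(dists)

def average_objective(means, W):
    # "Average" criterion in the spirit of LDA: mean pairwise divergence
    return pairwise_projected_distances(means, W).mean()

def worst_case_objective(means, W):
    # "Worst-case" criterion in the spirit of MDM: minimal pairwise divergence
    return pairwise_projected_distances(means, W).min()

# Three class means in 2-D: two close together, one far away
means = np.array([[0.0, 0.0], [0.5, 0.0], [10.0, 0.0]])
W = np.eye(2, 1)  # toy projection onto the first coordinate axis

avg = average_objective(means, W)      # dominated by the distant third class
worst = worst_case_objective(means, W) # exposes the two nearly merged classes
```

Here `avg` is large (about 63.5) even though the first two classes are almost on top of each other, while `worst` is only 0.25; a subspace chosen by maximizing the minimum would refuse to sacrifice that smallest gap.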