Multiclass penalized likelihood pattern classification algorithm
ICONIP'12: Proceedings of the 19th International Conference on Neural Information Processing, Part III
Penalized likelihood is a general approach in which an objective function is defined as the log-likelihood of the data minus a term penalizing non-smooth solutions. Maximizing this objective yields a solution that trades off the faithfulness of the fit against its smoothness. Most work on penalized likelihood has focused on the regression problem, and comparatively little has addressed classification. In this paper we propose a new classification method based on penalized likelihood (for the two-class case). We introduce a novel penalty term based on the K nearest neighbors, and simple analytical derivations lead to an algorithm that provably converges to the global optimum. Moreover, the algorithm is very simple to implement and typically converges in two or three iterations. We also introduce two variants of the method: one that distance-weights the K-nearest-neighbor contributions, and one that handles imbalanced class distributions. Extensive experiments comparing the proposed method with several well-known classification methods show that it achieves one of the top ranks in classification performance at a fairly small computational cost.
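The abstract does not spell out the penalty term or the update rule, so the following is only a minimal sketch of the general idea: estimate a posterior probability at each training point by maximizing the Bernoulli log-likelihood minus a smoothness penalty that pulls each estimate toward the average over its K nearest neighbors. The specific penalty form, the optimizer (projected gradient ascent here), and all parameter values are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def knn_indices(X, k):
    # Indices of each point's k nearest neighbors (excluding itself),
    # computed from pairwise Euclidean distances.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def penalized_likelihood_fit(X, y, k=3, lam=1.0, steps=200, lr=0.05):
    """Estimate p_i = P(y=1 | x_i) at each training point by maximizing
    Bernoulli log-likelihood minus lam * sum_i (p_i - mean of neighbors' p)^2.
    Illustrative sketch only; the paper's actual penalty and solver differ."""
    n = len(y)
    nbrs = knn_indices(X, k)
    p = np.full(n, 0.5)          # start from the uninformative estimate
    eps = 1e-6                   # keep p strictly inside (0, 1)
    for _ in range(steps):
        nbr_mean = p[nbrs].mean(axis=1)
        grad_ll = y / p - (1 - y) / (1 - p)   # gradient of the log-likelihood
        grad_pen = 2 * lam * (p - nbr_mean)   # gradient of the smoothness
                                              # penalty (treating nbr_mean fixed)
        p = np.clip(p + lr * (grad_ll - grad_pen), eps, 1 - eps)
    return p

# Toy usage: two well-separated Gaussian clusters, one per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
p = penalized_likelihood_fit(X, y)
print((p[:20] < 0.5).all(), (p[20:] > 0.5).all())
```

With well-separated clusters, each point's neighbors share its label, so the penalty barely opposes the likelihood and the estimates move cleanly toward 0 for class 0 and 1 for class 1; a larger `lam` would smooth the estimates across a noisy class boundary instead.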