In many applications of data mining, we know beforehand that the response variable should be increasing (or decreasing) in the attributes. Such relations between response and attributes are called monotone. In this paper we present a new algorithm to compute an optimal monotone classification of a data set for convex loss functions. Moreover, we show how the algorithm can be extended to compute all optimal monotone classifications with little additional effort. Monotone relabeling is useful for at least two reasons. Firstly, models trained on relabeled data sets often have better predictive performance than models trained on the original data. Secondly, relabeling is an important building block for the construction of monotone classifiers. We apply the new algorithm to investigate the effect on the prediction error of relabeling the training sample for k-nearest neighbour classification and classification trees. In contrast to previous work in this area, we consider all optimal monotone relabelings. The results show that, for small training samples, relabeling the training data results in significantly better predictive performance.
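The abstract's algorithm handles partially ordered attribute vectors and general convex loss functions; those details are not given here. As a hedged illustration of the underlying idea of monotone relabeling, the sketch below covers only the simplest special case, which I am assuming for illustration: a totally ordered sample under squared loss, where the optimal monotone relabeling is given by the classical pool-adjacent-violators algorithm (isotonic regression). The function name `pava` and the block representation are my own choices, not from the paper.

```python
def pava(y):
    """Relabel the sequence y to the closest non-decreasing sequence
    under squared loss (pool-adjacent-violators algorithm).

    Illustrative sketch only: the paper's algorithm covers partial
    orders and arbitrary convex losses, which this does not.
    """
    # Each block holds [sum_of_values, count]; its fitted label is sum/count.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Merge adjacent blocks while their means violate monotonicity,
        # i.e. mean(prev) > mean(last); compare via cross-multiplication
        # to avoid division.
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    # Expand each block back to one fitted label per original observation.
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out
```

For example, the non-monotone labels `[1, 3, 2, 4]` are relabeled to `[1, 2.5, 2.5, 4]`: the violating pair (3, 2) is pooled to its mean, which minimizes the total squared change among all non-decreasing relabelings.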