Loss optimal monotone relabeling of noisy multi-criteria data sets
Information Sciences: an International Journal
A method to restore stochastic monotonicity of noisy multi-criteria data sets through relabeling is presented. By formulating the problem as a weighted maximum independent set problem on a comparability graph, it is possible to compute optimal relabelings with respect to a cumulative label frequency loss function. We demonstrate how to formulate the problem in this manner and discuss why it requires objects to be relabeled rather than deleted. More precisely, we formulate the zero-one, L1 and squared cumulative label frequency losses, and provide a weighting function for each. We also investigate these loss functions in the related context of restoring regular monotonicity, where objects carry a single label rather than a label distribution. Finally, we apply the approach to some closely related example data sets and discuss notable findings.
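To make the single-label (regular monotonicity) case concrete, the sketch below builds a conflict graph between comparable objects whose labels contradict the dominance order and selects a largest already-monotone subset as an independent set; the objects outside that subset are the ones that would be relabeled. The toy data, the unit weights, and the brute-force independent-set search are assumptions for illustration only and do not reproduce the paper's efficient comparability-graph algorithm or its cumulative label frequency losses.

```python
from itertools import combinations

# Toy data (hypothetical): each object has a criteria vector and a class label.
objects = {
    "a": ((1, 1), 2),
    "b": ((2, 1), 1),  # dominates "a" but carries a lower label -> violation
    "c": ((2, 2), 3),
    "d": ((1, 2), 1),  # dominates "a" but carries a lower label -> violation
}

def dominates(x, y):
    """x dominates y if x is at least as good as y on every criterion."""
    return all(xi >= yi for xi, yi in zip(x, y))

# Conflict edges: comparable pairs whose labels violate monotonicity.
conflicts = set()
names = list(objects)
for u, v in combinations(names, 2):
    (cu, lu), (cv, lv) = objects[u], objects[v]
    if (dominates(cu, cv) and lu < lv) or (dominates(cv, cu) and lv < lu):
        conflicts.add(frozenset((u, v)))

def is_independent(subset):
    """True if no conflict edge has both endpoints inside the subset."""
    return not any(edge <= set(subset) for edge in conflicts)

# Brute-force maximum independent set with unit weights: the largest subset of
# objects that is already monotone.  Exhaustive search is only feasible for a
# tiny example like this one; the paper exploits the graph structure instead.
best = max(
    (s for r in range(len(names) + 1) for s in combinations(names, r)
     if is_independent(s)),
    key=len,
)

print("monotone core kept: ", sorted(best))
print("objects to relabel:", sorted(set(names) - set(best)))
```

On this toy instance the conflict edges are {a, b} and {a, d}, so the monotone core is {b, c, d} and only object a would need a new label; assigning it a label no greater than those of the objects dominating it restores monotonicity.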