Feature selection for ordinal regression
Proceedings of the 2010 ACM Symposium on Applied Computing
An experimental study of different ordinal regression methods and measures
HAIS'12 Proceedings of the 7th international conference on Hybrid Artificial Intelligent Systems - Volume Part II
Adaptive metric learning vector quantization for ordinal classification
Neural Computation
Evolutionary extreme learning machine for ordinal regression
ICONIP'12 Proceedings of the 19th international conference on Neural Information Processing - Volume Part III
Using micro-documents for feature selection: The case of ordinal text classification
Expert Systems with Applications: An International Journal
Learning in probabilistic graphs exploiting language-constrained patterns
NFMCP'12 Proceedings of the First international conference on New Frontiers in Mining Complex Patterns
Exploitation of pairwise class distances for ordinal classification
Neural Computation
Kernelizing the proportional odds model through the empirical kernel mapping
IWANN'13 Proceedings of the 12th international conference on Artificial Neural Networks: advances in computational intelligence - Volume Part I
Can machine learning techniques help to improve the common fisheries policy?
IWANN'13 Proceedings of the 12th international conference on Artificial Neural Networks: advances in computational intelligence - Volume Part II
An organ allocation system for liver transplantation based on ordinal regression
Applied Soft Computing
Feature selection for ordinal text classification
Neural Computation
Ordinal regression (OR -- also known as ordinal classification) has received increasing attention in recent years, owing to its importance in IR applications such as learning to rank and product review rating. However, research has largely overlooked the fact that typical applications of OR involve datasets that are highly imbalanced. On an imbalanced dataset, when a system is tested with an evaluation measure conceived for balanced datasets, a trivial system that assigns all items to a single class (typically, the majority class) may even outperform genuinely engineered systems. Moreover, if such a measure is used for parameter optimization, the resulting parameter choice may make the system behave very much like a trivial one. To avoid this, evaluation measures that can handle imbalance must be used. We propose a simple way to turn standard OR measures into measures robust to imbalance. We also show that, when used on balanced datasets, the two versions of each measure coincide, and we therefore argue that our measures should become the standard choice for OR.
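The abstract does not spell out the transformation, but a standard way to make an OR error measure robust to imbalance is macro-averaging: compute the measure separately on the items of each true class, then average over classes so every class carries equal weight. The sketch below applies this idea to mean absolute error (MAE) as an illustration; the macro-averaged variant shown here is an assumption on our part, not necessarily the exact measure the paper proposes.

```python
def mae(y_true, y_pred):
    """Standard MAE over all items: dominated by the majority class."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_mae(y_true, y_pred):
    """Macro-averaged MAE: per-class MAE, averaged with equal class weight."""
    classes = sorted(set(y_true))
    per_class = []
    for c in classes:
        errs = [abs(t - p) for t, p in zip(y_true, y_pred) if t == c]
        per_class.append(sum(errs) / len(errs))
    return sum(per_class) / len(classes)

# On imbalanced data, a trivial majority-class predictor looks good
# under standard MAE, while macro-averaging exposes the neglected class.
y_true  = [1, 1, 1, 1, 2]
trivial = [1, 1, 1, 1, 1]
print(mae(y_true, trivial))        # 0.2
print(macro_mae(y_true, trivial))  # 0.5

# On a balanced dataset the two versions coincide, as the abstract claims.
y_bal_true = [1, 1, 2, 2, 3, 3]
y_bal_pred = [1, 2, 2, 3, 3, 3]
print(mae(y_bal_true, y_bal_pred))        # 0.3333...
print(macro_mae(y_bal_true, y_bal_pred))  # 0.3333...
```

Macro-averaging preserves the ordinal character of the underlying measure (distant misclassifications still cost more), which is why it is a natural fit for OR, whereas class-weighted accuracy would discard the class ordering.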