In many application areas of machine learning, prior knowledge about the monotonicity of relations between the response variable and the predictor variables is readily available. Monotonicity may also be an important model requirement when decisions, such as acceptance/rejection decisions, must be explained and justified. We propose a modified nearest neighbour algorithm for constructing monotone classifiers from data. We first make the training data monotone with as few label changes as possible. The relabeled data set can be viewed as a monotone classifier with the lowest possible error rate on the training data. This relabeled data is then used as the training sample by a modified nearest neighbour algorithm, whose predictions are guaranteed to satisfy the monotonicity constraints and which is therefore more likely to be accepted by the intended users. Our experiments show that monotone kNN often outperforms standard kNN on problems where the monotonicity constraints apply.
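The prediction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the training data has already been relabeled to be monotone, and it enforces monotonicity by clamping an ordinary kNN vote into the interval of labels permitted by the comparable training points (points dominated by the query give a lower bound, points dominating it give an upper bound). The function names are hypothetical.

```python
import numpy as np

def dominates(a, b):
    # a dominates b if a >= b componentwise in every predictor
    return np.all(a >= b)

def monotone_knn_predict(X, y, x, k=3):
    """Predict an ordinal label for query x from monotone training data (X, y).

    Hypothetical sketch: a plain k-NN majority vote is clamped to the
    interval [lower, upper] implied by the monotonicity constraints,
    so the returned label can never violate them.
    """
    # Labels of training points dominated by x give a lower bound;
    # labels of training points dominating x give an upper bound.
    lower = max((yi for xi, yi in zip(X, y) if dominates(x, xi)), default=min(y))
    upper = min((yi for xi, yi in zip(X, y) if dominates(xi, x)), default=max(y))

    # Standard k-NN prediction: majority label among the k nearest points.
    d = np.linalg.norm(X - x, axis=1)
    nn = np.argsort(d)[:k]
    vals, counts = np.unique(y[nn], return_counts=True)
    pred = vals[np.argmax(counts)]

    # Clamp into the feasible interval; because the training data is
    # monotone, lower <= upper always holds.
    return int(min(max(pred, lower), upper))
```

Because the bounds come from the (monotone) training sample itself, any query that dominates another query necessarily receives a label at least as high, which is the guarantee the abstract refers to.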