Research on bias in machine learning algorithms has generally been concerned with the impact of bias on predictive accuracy. We believe that there are other factors that should also play a role in the evaluation of bias. One such factor is the stability of the algorithm; in other words, the repeatability of the results. If we obtain two sets of data from the same phenomenon, with the same underlying probability distribution, then we would like our learning algorithm to induce approximately the same concepts from both sets of data. This paper introduces a method for quantifying stability, based on a measure of the agreement between concepts. We also discuss the relationships among stability, predictive accuracy, and bias.
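The idea above can be sketched in code: draw two independent samples from the same distribution, induce a concept from each, and score stability as the agreement between the two induced concepts on a common set of evaluation points, averaged over repeated trials. This is a minimal illustrative sketch, not the paper's actual measure; the 1-D threshold learner, the `sample` distribution, and the grid-based agreement score are all assumptions introduced here for illustration.

```python
import random

def train_threshold_classifier(data):
    """Hypothetical 1-D learner: pick the threshold that best
    separates the two classes on the training data."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(x for x, _ in data):
        acc = sum((x > t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda x, t=best_t: int(x > t)

def agreement(model_a, model_b, eval_points):
    # Fraction of evaluation points on which the two induced
    # concepts agree -- a simple stand-in for an agreement measure.
    return sum(model_a(x) == model_b(x) for x in eval_points) / len(eval_points)

def estimate_stability(sample_fn, learner, eval_points, trials=20, seed=0):
    """Average agreement between concepts induced from pairs of
    independent samples drawn from the same distribution."""
    rng = random.Random(seed)
    scores = []
    for _ in range(trials):
        m1 = learner(sample_fn(rng))
        m2 = learner(sample_fn(rng))
        scores.append(agreement(m1, m2, eval_points))
    return sum(scores) / len(scores)

def sample(rng, n=50, noise=0.1):
    # Toy phenomenon: label is 1 when x > 0.5, with 10% label noise.
    pts = []
    for _ in range(n):
        x = rng.random()
        y = int(x > 0.5)
        if rng.random() < noise:
            y = 1 - y
        pts.append((x, y))
    return pts

grid = [i / 100 for i in range(100)]
stability = estimate_stability(sample, train_threshold_classifier, grid)
```

For this easy, low-noise concept the learned thresholds cluster near 0.5, so the estimated stability is close to 1; increasing the noise rate or shrinking the sample size drives the score down, which is the behavior a stability measure should exhibit.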