Bias/variance analysis is a useful tool for investigating the performance of machine learning algorithms. Conventional analysis decomposes loss into errors due to aspects of the learning process, but in relational domains, the inference process used for prediction is an additional source of error. Collective inference techniques introduce such error both through the use of approximate inference algorithms and through variation in the availability of test-set information. To date, the impact of inference error on model performance has not been investigated. We propose a new bias/variance framework that decomposes loss into errors due to both the learning and inference processes. We evaluate the performance of three relational models on both synthetic and real-world datasets and show that (1) inference can be a significant source of error, and (2) the models exhibit different types of errors as data characteristics are varied.
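To make the conventional decomposition concrete, the sketch below illustrates the standard squared-loss case that the proposed framework extends: predictions from models trained on many resampled training sets are split, at a single test point, into squared bias and variance. This is a minimal illustration, not the paper's relational framework; the learner (a sample-mean predictor) and the function name `decompose_squared_loss` are hypothetical.

```python
import random
import statistics

def decompose_squared_loss(preds, y_true):
    """Conventional bias/variance decomposition for squared loss at one
    test point, given predictions from models trained on different samples."""
    mean_pred = statistics.fmean(preds)
    bias_sq = (mean_pred - y_true) ** 2                               # (E[yhat] - y)^2
    variance = statistics.fmean((p - mean_pred) ** 2 for p in preds)  # E[(yhat - E[yhat])^2]
    expected_loss = statistics.fmean((p - y_true) ** 2 for p in preds)
    return bias_sq, variance, expected_loss

random.seed(0)
# Toy learner: predicts the mean of a noisy 10-example training sample.
y_true = 2.0
preds = [statistics.fmean(y_true + random.gauss(0, 1) for _ in range(10))
         for _ in range(200)]

bias_sq, variance, loss = decompose_squared_loss(preds, y_true)
# For squared loss the decomposition is exact: E[(yhat - y)^2] = bias^2 + variance.
assert abs(loss - (bias_sq + variance)) < 1e-9
```

In relational domains, the framework proposed above further splits these terms so that error introduced by the collective inference procedure (approximate inference, varying test-set information) is accounted for separately from error introduced by learning.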