We present a theoretical analysis framework that shows how ensembles of collective classifiers can improve predictions for graph data. We show that collective ensemble classification reduces errors due to variance in learning and, more interestingly, in inference. We also present an empirical framework comprising several ensemble techniques for classifying relational data with collective inference. The methods span single- and multiple-graph network approaches, and are evaluated on both synthetic and real-world classification tasks. Our experimental results, supported by our theoretical justifications, confirm that the best performers are ensemble algorithms that explicitly target both the learning and inference processes and aim to reduce the errors associated with each.
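The idea of averaging over an ensemble of collective classifiers can be illustrated with a minimal sketch. This is not the paper's actual method: the toy graph, the random priors standing in for classifiers learned on different samples, and the neighbor-averaging inference step are all assumptions made for illustration. Each ensemble member runs its own collective inference from a noisy starting point, and averaging the members' predictions damps the variance contributed by both learning and inference.

```python
import random

# Hypothetical illustration of ensemble collective classification.
# The graph, priors, and inference rule below are assumptions for this
# sketch, not the method from the paper.

random.seed(0)

# Toy undirected graph: two homophilous clusters joined by one edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

labels = {0: 1, 5: 0}                       # known (training) node labels
unknown = [n for n in adj if n not in labels]

def collective_inference(prior, n_iter=10):
    """Iterative relational inference: each unlabeled node's estimate is
    repeatedly replaced by the mean of its neighbors' current estimates,
    while labeled nodes stay fixed (a simple relaxation-labeling step)."""
    est = dict(prior)
    for _ in range(n_iter):
        for n in unknown:
            est[n] = sum(est[m] for m in adj[n]) / len(adj[n])
    return est

def one_member():
    """One ensemble member: a noisy prior (standing in for a classifier
    learned on a resampled training set) followed by collective inference."""
    prior = {n: labels.get(n, random.random()) for n in adj}
    return collective_inference(prior)

# Ensemble step: average predictions across members. This reduces the
# variance arising from both the noisy learned priors and the
# initialization of the inference process.
members = [one_member() for _ in range(25)]
avg = {n: sum(m[n] for m in members) / len(members) for n in unknown}

for n in sorted(unknown):
    print(n, round(avg[n], 2))
```

In this sketch the nodes near the labeled node 0 converge toward class 1 and those near node 5 toward class 0; the ensemble average is much more stable across random seeds than any single member's output, which is the variance-reduction effect the abstract describes.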