Bias-Variance Analysis for Ensembling Regularized Multiple Criteria Linear Programming Models

  • Authors:
  • Peng Zhang; Xingquan Zhu; Yong Shi

  • Affiliations:
  • FEDS Research Center, Chinese Academy of Sciences, Beijing, China 100190
  • Department of Computer Science & Engineering, Florida Atlantic University, Boca Raton, USA 33431
  • FEDS Research Center, Chinese Academy of Sciences, Beijing, China 100190, and College of Information Science & Technology, University of Nebraska at Omaha, Nebraska, USA

  • Venue:
  • ICCS 2009 Proceedings of the 9th International Conference on Computational Science
  • Year:
  • 2009

Abstract

Regularized Multiple Criteria Linear Programming (RMCLP) models have recently been shown to be effective for data classification. While these models are becoming increasingly important to the data mining community, little work has been done to systematically investigate RMCLP models from the perspective of common machine learning theory. The absence of such theoretical analysis leaves important questions, such as whether RMCLP is a strong and stable learner, unanswered in practice. In this paper, we carry out a systematic investigation of RMCLP using a well-known statistical analysis approach: bias-variance decomposition. We decompose RMCLP's error into three parts: bias error, variance error, and noise error. Our experiments show that RMCLP's error comes mainly from its bias error, whereas its variance error remains relatively low. This observation indicates that RMCLP is stable but not strong. Consequently, a boosting-based ensembling mechanism is likely to further improve RMCLP models to a large extent.
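The bias-variance decomposition described in the abstract can be illustrated with a minimal Monte Carlo sketch. This is not the paper's RMCLP model: a plain least-squares linear fit stands in as a hypothetical high-bias, low-variance learner, and the true function, noise level, and sample sizes are all illustrative assumptions. Many training sets are drawn, the learner is refit on each, and the squared bias, variance, and irreducible noise are estimated at a grid of test points.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # Assumed ground-truth regression function (illustrative only).
    return np.sin(x)

def fit_predict(x_train, y_train, x_test):
    # Stand-in learner: degree-1 polynomial least squares.
    # A linear model cannot capture sin(x) on [0, pi], so it is
    # deliberately high-bias and (for n_train=30) low-variance.
    coeffs = np.polyfit(x_train, y_train, deg=1)
    return np.polyval(coeffs, x_test)

n_datasets, n_train = 200, 30   # number of resampled training sets / size of each
noise_sd = 0.3                  # standard deviation of additive label noise
x_test = np.linspace(0.0, np.pi, 50)

# Refit the learner on many independent training sets and record predictions.
preds = np.empty((n_datasets, x_test.size))
for i in range(n_datasets):
    x_tr = rng.uniform(0.0, np.pi, n_train)
    y_tr = true_fn(x_tr) + rng.normal(0.0, noise_sd, n_train)
    preds[i] = fit_predict(x_tr, y_tr, x_test)

mean_pred = preds.mean(axis=0)
bias2 = np.mean((mean_pred - true_fn(x_test)) ** 2)  # squared bias error
variance = np.mean(preds.var(axis=0))                # variance error
noise = noise_sd ** 2                                # irreducible noise error

print(f"bias^2 = {bias2:.4f}, variance = {variance:.4f}, noise = {noise:.4f}")
```

For this stand-in learner the squared bias dominates the variance, mirroring the "stable but not strong" profile the paper reports for RMCLP; boosting targets exactly this regime, since averaging many reweighted fits attacks bias rather than variance.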