Besides classification performance, training time is a second important factor that affects the suitability of a classification algorithm for an unknown dataset. An algorithm with slightly lower accuracy may be preferred if its training time is significantly lower. Additionally, an estimate of the required training time of a pattern recognition task is very useful if the result has to be available within a certain amount of time. Meta-learning is often used to predict the suitability or performance of classifiers using different learning schemes and features. Landmarking features in particular have been used very successfully in the past: the accuracies of simple learners are used to predict the performance of a more sophisticated algorithm. In this work, we investigate the quantitative prediction of the training time for several target classifiers. Different sets of meta-features are evaluated according to their suitability for predicting the actual run-times of a parameter optimization by grid search. Additionally, we adapt the concept of landmarking to time prediction: instead of their accuracies, the run-times of simple learners are used as feature values. We evaluated the approach on real-world datasets from the UCI machine learning repository and StatLib. The run-times of five different classification algorithms are predicted and evaluated using two different performance measures. The promising results show that the approach is able to reasonably predict the training time, including a parameter optimization. Furthermore, different sets of meta-features seem to be necessary for different target algorithms in order to achieve the highest prediction performance.
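The core idea of time-based landmarking can be sketched in a few lines: time a handful of cheap landmark learners on the dataset, use those run-times as the meta-feature vector, and feed the vector to a meta-regressor trained on previously measured grid-search times. The sketch below is illustrative only; the function names (`landmark_times`, `predict_runtime`), the choice of a 1-nearest-neighbour meta-regressor, and the toy meta-example store are assumptions, not the paper's actual implementation.

```python
import time


def landmark_times(fit_fns, X, y):
    """Fit each cheap landmark learner on (X, y) and record its wall-clock
    training time. The resulting vector is the meta-feature description of
    the dataset (run-times instead of accuracies, as in time landmarking)."""
    features = []
    for fit in fit_fns:
        t0 = time.perf_counter()
        fit(X, y)  # e.g. a decision stump, 1-NN, or naive Bayes learner
        features.append(time.perf_counter() - t0)
    return features


def predict_runtime(meta_features, meta_examples):
    """Minimal 1-nearest-neighbour meta-regressor (an assumed stand-in for
    the paper's regression scheme): return the recorded grid-search time of
    the stored dataset whose landmark-time vector is closest."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, runtime = min(meta_examples,
                     key=lambda ex: sq_dist(ex[0], meta_features))
    return runtime
```

In use, `meta_examples` would hold one entry per previously processed dataset: its landmark-time vector paired with the measured training time (including parameter optimization) of the target classifier on that dataset.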