Forward Feature Selection Based on Approximate Markov Blanket
ISNN'12 Proceedings of the 9th international conference on Advances in Neural Networks - Volume Part II
Given any modeling problem, variable selection is a preprocessing step that selects the variables most relevant to the output variable. Forward selection is the most straightforward strategy for variable selection; its application using mutual information is simple, intuitive and effective, and is common in the machine learning literature. However, the problem of when to stop the forward process has no direct, satisfactory solution, owing to the inaccuracies of mutual information estimation, especially as the number of variables considered increases. This work proposes a modified stopping criterion for this variable selection methodology that uses the Markov blanket concept. As will be shown, this approach can improve the performance and applicability of the stopping criterion of a forward selection process based on mutual information.
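The forward selection process described above can be sketched as follows. This is a minimal illustration, not the paper's method: it greedily adds the candidate variable that most increases the mutual information between the selected set and the target, and stops with a simple gain threshold as a stand-in for the Markov-blanket-based criterion. The `mutual_info`, `forward_select`, and `threshold` names are hypothetical, and the empirical MI estimator assumes discrete-valued variables.

```python
from collections import Counter
from math import log2

def mutual_info(xs, ys):
    # Empirical mutual information I(X;Y) in bits for discrete sequences.
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

def forward_select(features, target, threshold=0.01):
    # Greedy forward selection: at each step, add the candidate whose
    # inclusion yields the largest MI gain between the joint selected
    # set and the target. The threshold stop is a simplification of the
    # Markov-blanket-based criterion discussed in the text.
    selected = []
    remaining = list(features)
    current = [()] * len(target)   # joint value of selected vars, per sample
    base_mi = 0.0
    while remaining:
        best_gain, best = float("-inf"), None
        for f in remaining:
            joint = [cur + (v,) for cur, v in zip(current, f)]
            gain = mutual_info(joint, target) - base_mi
            if gain > best_gain:
                best_gain, best = gain, f
        if best_gain < threshold:  # stop: no candidate adds enough information
            break
        selected.append(best)
        remaining.remove(best)
        current = [cur + (v,) for cur, v in zip(current, best)]
        base_mi += best_gain
    return selected
```

The stopping test is exactly where MI estimation inaccuracy bites: with many selected variables the joint distribution is estimated from sparse counts, so spurious positive gains can keep the loop running, which motivates the more robust criterion proposed in the paper.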