Feature generation is the problem of automatically constructing good features for a given target learning problem. While most feature generation algorithms follow either the filter or the wrapper approach, this paper focuses on embedded feature generation. We propose a general scheme for embedding feature generation in a wide range of tree-based learning algorithms, including single decision trees, random forests, and tree boosting. It is based on formalizing feature construction as a sequential decision-making problem, addressed by a tractable Monte Carlo search algorithm coupled with node splitting. This leads to fast algorithms that are applicable to large-scale problems. We empirically analyze the performance of these tree-based learners, with and without the feature generation capability, on several standard datasets.
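The idea of coupling Monte Carlo feature search with node splitting can be illustrated with a minimal sketch. All names, the constructor grammar (binary arithmetic over base features), and the rollout scheme below are illustrative assumptions, not the paper's actual algorithm: each rollout samples a random constructed feature, and candidates are scored by the information gain of the best threshold split they induce at the current node.

```python
import math
import random

def entropy(labels):
    """Shannon entropy of a label multiset."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def info_gain(values, labels, threshold):
    """Information gain of splitting on value <= threshold."""
    left = [y for v, y in zip(values, labels) if v <= threshold]
    right = [y for v, y in zip(values, labels) if v > threshold]
    if not left or not right:
        return 0.0
    n = len(labels)
    return (entropy(labels)
            - len(left) / n * entropy(left)
            - len(right) / n * entropy(right))

# A tiny constructor grammar: binary arithmetic over base features
# (an assumed stand-in for whatever feature language is used).
OPS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

def sample_feature(n_base, depth, rng):
    """One Monte Carlo rollout: sample a random constructed feature,
    represented as a function mapping an example (tuple) to a scalar."""
    if depth == 0 or rng.random() < 0.5:
        i = rng.randrange(n_base)            # fall back to a base feature
        return lambda row, i=i: row[i]
    op = rng.choice(OPS)
    f = sample_feature(n_base, depth - 1, rng)
    g = sample_feature(n_base, depth - 1, rng)
    return lambda row: op(f(row), g(row))

def monte_carlo_split(X, y, n_rollouts, rng, max_depth=2):
    """Search for the constructed feature and threshold with the best
    information gain at this node."""
    best_gain, best_feat, best_thr = 0.0, None, None
    for _ in range(n_rollouts):
        feat = sample_feature(len(X[0]), max_depth, rng)
        vals = [feat(row) for row in X]
        for thr in sorted(set(vals))[:-1]:   # candidate thresholds
            g = info_gain(vals, y, thr)
            if g > best_gain:
                best_gain, best_feat, best_thr = g, feat, thr
    return best_gain, best_feat, best_thr
```

On XOR-like data, where no single base feature is informative, a rollout that happens to sample the product x1*x2 yields a perfect split — exactly the kind of constructed feature that a plain axis-aligned splitter cannot find on its own.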