High-dimensional feature spaces with relatively few samples usually lead to poor classifier performance in machine learning, neural network, and data mining systems. This paper presents a comparative analysis of correlation-based and causal feature selection for ensemble classifiers. MLP and SVM are used as base classifiers and compared with Naive Bayes and Decision Tree. According to the results, the correlation-based feature selection algorithm eliminates more redundant and irrelevant features, and provides slightly better accuracy with less complexity than causal feature selection. Ensembles built with the Bagging algorithm improve accuracy under both correlation-based and causal feature selection.