This paper presents practical aspects of feature set decomposition in classification problems using decision trees. Feature set decomposition generalizes feature selection, a task used extensively in data mining. Feature selection aims to provide a representative subset of features from which a single classifier is constructed. Feature set decomposition, by contrast, partitions the original set of features into several subsets and builds a classifier for each subset; the classifiers are then combined to classify new instances. To examine this idea, a general framework that searches for helpful decomposition structures is proposed. This framework nests many algorithms, two of which are tested empirically on a set of benchmark datasets. The first algorithm performs a serial search, using a new Vapnik-Chervonenkis dimension bound for multiple oblivious trees as its evaluation scheme. The second algorithm performs a multi-search, using a wrapper evaluation scheme. This work indicates that feature set decomposition can increase the accuracy of decision trees.
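The decompose-train-combine loop described above can be sketched in a few lines of plain Python. This is an illustration only, not the paper's method: the paper searches over decomposition structures and grows oblivious decision trees, whereas this sketch fixes the partition in advance and uses one-level decision stumps as the per-subset classifiers. All function names and the toy dataset below are invented for the example.

```python
from collections import Counter

def train_stump(X, y, features):
    """Fit a one-level decision stump restricted to a feature subset:
    scan every (feature, threshold) pair and keep the one that makes
    the fewest training errors."""
    best, best_err = None, float("inf")
    for f in features:
        for t in sorted({row[f] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            l_lab = Counter(left).most_common(1)[0][0]
            r_lab = Counter(right).most_common(1)[0][0] if right else l_lab
            err = (sum(lbl != l_lab for lbl in left)
                   + sum(lbl != r_lab for lbl in right))
            if err < best_err:
                best, best_err = (f, t, l_lab, r_lab), err
    return best

def predict_stump(stump, row):
    f, t, l_lab, r_lab = stump
    return l_lab if row[f] <= t else r_lab

def decompose_and_train(X, y, partition):
    """Build one classifier per feature subset in the decomposition."""
    return [train_stump(X, y, subset) for subset in partition]

def predict(ensemble, row):
    """Combine the subset classifiers by unweighted majority vote."""
    return Counter(predict_stump(s, row) for s in ensemble).most_common(1)[0][0]

# Toy data: the label is the majority of three binary features, so no
# single feature determines it, but combining one classifier per
# feature recovers it exactly.
X = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 0, 1),
     (1, 1, 0), (1, 1, 1), (0, 1, 0), (1, 0, 0)]
y = [0, 0, 1, 1, 1, 1, 0, 0]

partition = [[0], [1], [2]]   # one feature subset per classifier
ensemble = decompose_and_train(X, y, partition)
```

Here each stump votes with its own feature's value, and the majority vote over the three subsets reproduces the target concept, which is the basic mechanism that feature set decomposition exploits.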