Selection of relevant features and examples in machine learning
Artificial Intelligence - Special issue on relevance
Wrappers for feature subset selection
Artificial Intelligence - Special issue on relevance
Estimating dependency structure as a hidden variable
NIPS '97 Proceedings of the 1997 conference on Advances in neural information processing systems 10
Edge exclusion tests for graphical Gaussian models
Learning in graphical models
An improved Bayesian structural EM algorithm for learning Bayesian networks for clustering
Pattern Recognition Letters
Feature subset selection by Bayesian network-based optimization
Artificial Intelligence
Clustering Algorithms
Feature Selection for Knowledge Discovery and Data Mining
Feature Selection for Knowledge Discovery and Data Mining
Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation
Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation
Knowledge Acquisition Via Incremental Conceptual Clustering
Machine Learning
Efficient Feature Selection in Conceptual Clustering
ICML '97 Proceedings of the Fourteenth International Conference on Machine Learning
Feature Selection as a Preprocessing Step for Hierarchical Clustering
ICML '99 Proceedings of the Sixteenth International Conference on Machine Learning
ICCBR '95 Proceedings of the First International Conference on Case-Based Reasoning Research and Development
Dimensionality Reduction of Unsupervised Data
ICTAI '97 Proceedings of the 9th International Conference on Tools with Artificial Intelligence
Dependency-based feature selection for clustering symbolic data
Intelligent Data Analysis
Building classifiers using Bayesian networks
AAAI'96 Proceedings of the thirteenth national conference on Artificial intelligence - Volume 2
The Bayesian structural EM algorithm
UAI'98 Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence
An experimental comparison of several clustering and initialization methods
UAI'98 Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence
Learning mixtures of DAG models
UAI'98 Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence
UAI'94 Proceedings of the Tenth international conference on Uncertainty in artificial intelligence
Subspace clustering for high dimensional data: a review
ACM SIGKDD Explorations Newsletter - Special issue on learning from imbalanced datasets
Simultaneous Feature Selection and Clustering Using Mixture Models
IEEE Transactions on Pattern Analysis and Machine Intelligence
Intelligent sensory evaluation: Concepts, implementations, and applications
Mathematics and Computers in Simulation
Optimisation of garment design using fuzzy logic and sensory evaluation techniques
Engineering Applications of Artificial Intelligence
Hybrid prediction model for Type-2 diabetic patients
Expert Systems with Applications: An International Journal
A hybrid prediction model with F-score feature selection for type II Diabetes databases
Proceedings of the 1st Amrita ACM-W Celebration on Women in Computing in India
Nearest-neighbor guided evaluation of data reliability and its applications
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Consensus self-organized models for fault detection (COSMO)
Engineering Applications of Artificial Intelligence
Investigating a novel GA-based feature selection method using improved KNN classifiers
International Journal of Information and Communication Technology
Journal of Global Optimization
An evaluation of filter and wrapper methods for feature selection in categorical clustering
IDA'05 Proceedings of the 6th international conference on Advances in Intelligent Data Analysis
International Journal of Data Warehousing and Mining
Feature selection with SVD entropy: Some modification and extension
Information Sciences: an International Journal
This paper introduces a novel enhancement for unsupervised learning of conditional Gaussian networks that benefits from feature selection. Our proposal is based on the assumption that, in the absence of labels reflecting the cluster membership of each case in the database, features that exhibit low correlation with the remaining features can be considered irrelevant to the learning process. Thus, we suggest performing this process using only the relevant features. Every irrelevant feature is then added to the learned model to obtain an explanatory model for the original database, which is our primary goal. We present a simple and, therefore, efficient measure to assess the relevance of the features for the learning process. Moreover, the form of this measure allows us to calculate a relevance threshold that automatically identifies the relevant features. The experimental results reported for synthetic and real-world databases show that our proposal distinguishes between relevant and irrelevant features and accelerates the learning process, while still obtaining good explanatory models for the original database.
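A minimal sketch of the idea described above, under stated assumptions: here the relevance of a feature is taken to be its average absolute correlation with the remaining features, and the threshold defaults to the mean relevance across all features. The function names (feature_relevance, split_relevant) and these particular choices of measure and threshold are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def feature_relevance(X):
    """Relevance of each feature: mean absolute correlation with the other
    features (an assumed stand-in for the paper's relevance measure)."""
    corr = np.corrcoef(X, rowvar=False)   # feature-by-feature correlation matrix
    np.fill_diagonal(corr, 0.0)           # ignore self-correlation
    return np.abs(corr).mean(axis=1)      # average |correlation| per feature

def split_relevant(X, threshold=None):
    """Split feature indices into relevant / irrelevant sets.
    If no threshold is given, use the mean relevance (an assumed default)."""
    relevance = feature_relevance(X)
    if threshold is None:
        threshold = relevance.mean()
    relevant = np.where(relevance >= threshold)[0]
    irrelevant = np.where(relevance < threshold)[0]
    return relevant, irrelevant

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)  # make feature 1 correlated with 0
    relevant, irrelevant = split_relevant(X)
    print("relevant:", relevant, "irrelevant:", irrelevant)
```

Following the paper's workflow, the clustering model would then be learned only on X[:, relevant], after which the irrelevant features are appended to the learned model to yield an explanatory model for the complete database.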