Feature construction is essential for solving many complex learning problems. Unfortunately, constructing features usually means searching a very large space of possibilities and is often computationally demanding. In this work, we propose a case-based approach to feature construction. Learning tasks are stored in a case base together with a corresponding set of constructed features and can be retrieved to speed up feature construction for new tasks. The essential part of our method is a new representation model for learning tasks and a corresponding distance measure. Learning tasks are compared using relevance weights on a common set of base features only; therefore, the case base can be built and queried very efficiently. In this respect, our approach is unique and enables us to apply case-based feature construction not only on a large scale but also in distributed learning scenarios in which communication costs play an important role. We derive a distance measure for heterogeneous learning tasks by stating a set of necessary conditions. Although the conditions are quite basic, they constrain the set of applicable methods to a surprisingly small number.
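The retrieval idea described above can be illustrated with a minimal sketch: each stored task is represented by a vector of relevance weights over a shared set of base features, and a new task retrieves its nearest neighbor from the case base. The Euclidean distance over normalized weights used here is an illustrative assumption, not the measure the paper derives from its necessary conditions; the `retrieve` and `task_distance` names are likewise hypothetical.

```python
import math

def task_distance(w_a, w_b):
    """Distance between two learning tasks, each given as relevance
    weights over the same ordered set of base features.
    Assumption for illustration: Euclidean distance after normalizing
    each weight vector to sum to 1 (zero vectors are left unscaled)."""
    norm = lambda w: [x / (sum(w) or 1.0) for x in w]
    a, b = norm(w_a), norm(w_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(case_base, query_weights):
    """Return the stored case whose relevance weights are closest to the
    query task; its constructed features could then seed feature
    construction for the new task."""
    return min(case_base, key=lambda case: task_distance(case["weights"], query_weights))

# Hypothetical case base: two stored tasks over three base features.
case_base = [
    {"name": "audio-genre", "weights": [1.0, 0.0, 0.0], "features": ["mfcc_mean"]},
    {"name": "speech-vs-music", "weights": [0.0, 1.0, 0.0], "features": ["zero_crossings"]},
]
nearest = retrieve(case_base, [0.9, 0.1, 0.0])  # closest to "audio-genre"
```

Because comparison happens only on the compact weight vectors, a query never needs to ship raw data between nodes, which is why the approach suits the distributed, communication-constrained settings mentioned above.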