We propose a well-founded method for ranking a pool of m trained classifiers by their suitability for the current input of n instances. It can be used both for dynamically selecting a single classifier and for weighting the base classifiers in an ensemble. No classifiers are executed during the process, so the n instances on which the selection is based may be unlabeled; this is rare in previous work. The method works by comparing the training distribution of each classifier with the input distribution. Hence, the feasibility for unsupervised classification comes at the price of maintaining a small sample of the training data for each classifier in the pool. In the general case, the method's time and space requirements depend on t, the size of the stored sample from each classifier's training distribution. For the commonly used Gaussian and polynomial kernel functions, however, the method can be executed more efficiently. In our experiments the proposed method was found to be accurate.
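The core idea, comparing each classifier's stored training sample against the current input batch with a kernel two-sample statistic and ranking by closeness, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the Gaussian-kernel MMD estimate, the function names, and the bandwidth parameter `sigma` are all assumptions standing in for whatever kernel distribution distance the method actually uses.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of a and b.
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2 * a @ b.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy between the
    # empirical distributions of x (n input points) and y (t stored points).
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy

def rank_classifiers(input_batch, training_samples, sigma=1.0):
    # Rank classifier indices by how close each classifier's stored training
    # sample is to the current input batch (smaller MMD^2 ranks first).
    # No classifier is ever executed, and no labels are needed.
    scores = [mmd2(input_batch, s, sigma) for s in training_samples]
    return sorted(range(len(scores)), key=lambda i: scores[i])
```

Computed naively as above, each comparison costs time quadratic in the sample sizes, which matches the abstract's remark that the general case depends on t; for specific kernels such as the Gaussian and polynomial ones, the statistic admits faster evaluation.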