Classification based on multiple-resolution data view
ICANN'10 Proceedings of the 20th International Conference on Artificial Neural Networks: Part III
We consider the problem of selecting parameters in a classifier based on an average of kernel density estimators, where each estimator corresponds to a different data "resolution". The parameters of the estimators are adjusted to minimize a surrogate of the misclassification ratio. On benchmark data sets, we experimentally compare the misclassification ratio and the selected parameters of the introduced algorithm with those of the algorithm's baseline version. To place the classification results in a wider context, we also compare them with the results of other popular classifiers.
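The abstract's idea can be illustrated with a minimal sketch: a classifier that averages per-class Gaussian kernel density estimates over several bandwidths (one per "resolution"), with the bandwidth set chosen to minimize held-out misclassification as a stand-in for the paper's surrogate criterion. All names (`MultiResolutionKDEClassifier`, `select_bandwidths`) and the Gaussian-kernel choice are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def gaussian_kde(train, x, h):
    """Gaussian KDE of points `train` evaluated at `x` with bandwidth `h`."""
    d = (x[:, None, :] - train[None, :, :]) / h          # (n_eval, n_train, dim)
    k = np.exp(-0.5 * (d ** 2).sum(axis=-1))             # unnormalized kernels
    return k.mean(axis=1) / (h ** train.shape[1])        # density up to a constant

class MultiResolutionKDEClassifier:
    """Averages class-conditional KDEs over several bandwidths ("resolutions")."""
    def __init__(self, bandwidths):
        self.bandwidths = list(bandwidths)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.X_by_class_ = {c: X[y == c] for c in self.classes_}
        self.priors_ = {c: float(np.mean(y == c)) for c in self.classes_}
        return self

    def predict(self, X):
        # Score each class by prior * mean of its KDEs across all bandwidths.
        scores = np.stack([
            self.priors_[c] * np.mean(
                [gaussian_kde(self.X_by_class_[c], X, h) for h in self.bandwidths],
                axis=0)
            for c in self.classes_])
        return self.classes_[scores.argmax(axis=0)]

def select_bandwidths(X_tr, y_tr, X_val, y_val, candidate_sets):
    """Pick the bandwidth set minimizing validation misclassification ratio
    (a simple surrogate-style selection, hypothetical)."""
    best, best_err = None, np.inf
    for bws in candidate_sets:
        clf = MultiResolutionKDEClassifier(bws).fit(X_tr, y_tr)
        err = float(np.mean(clf.predict(X_val) != y_val))
        if err < best_err:
            best, best_err = bws, err
    return best, best_err
```

For example, `select_bandwidths` can be called with candidate sets such as `[[0.5], [2.0], [0.5, 2.0]]` to compare single-resolution estimators against their average.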