Transformations of symbolic data for continuous data oriented models
ICANN/ICONIP'03 Proceedings of the 2003 joint international conference on Artificial neural networks and neural information processing
Many pattern recognition methods require numerical inputs, so using nominal datasets with them requires transforming the data into numerical form. Usually this transformation encodes each nominal attribute as a group of binary attributes, one for each possible nominal value. For certain methods, however (e.g., those requiring linearly separable data representations), this approach can be improved. In this paper, several alternatives for improving SVM (Support Vector Machine) accuracy on nominal data are evaluated. Some approaches convert nominal attributes into continuous ones using distance metrics such as the VDM (Value Difference Metric). Others combine the SVM with another classifier that can work directly with nominal data (e.g., a Decision Tree). An experimental validation over 27 datasets shows that Cascading with an SVM at Level-2 and a Decision Tree at Level-1 compares favorably both with other combinations of these base classifiers and with VDM.
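The two encodings the abstract contrasts can be sketched in a few lines. This is a minimal illustration, not the authors' code: `one_hot` builds the usual one-binary-attribute-per-value representation, and `vdm_encoding` maps each nominal value to its vector of conditional class probabilities P(class | value), the quantity the Value Difference Metric is computed from. Function names and the toy data are illustrative assumptions.

```python
from collections import Counter, defaultdict

def one_hot(values):
    """One-hot encode a list of nominal values: one binary column per
    distinct value, 1 where the row takes that value, 0 elsewhere."""
    categories = sorted(set(values))
    encoded = [[1 if v == c else 0 for c in categories] for v in values]
    return encoded, categories

def vdm_encoding(values, labels):
    """Map each nominal value to [P(c1|value), P(c2|value), ...], the
    per-class conditional probabilities underlying the VDM distance."""
    classes = sorted(set(labels))
    counts = defaultdict(Counter)
    for v, y in zip(values, labels):
        counts[v][y] += 1
    return {v: [counts[v][c] / sum(counts[v].values()) for c in classes]
            for v in counts}

# Toy nominal attribute with binary class labels (illustrative only).
enc, cats = one_hot(["red", "blue", "red"])
mapping = vdm_encoding(["red", "blue", "red"], [1, 0, 1])
```

Under the paper's Cascading scheme, the Level-1 Decision Tree's outputs would similarly be appended as extra continuous attributes before training the Level-2 SVM.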