Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
EuroGP'08: Proceedings of the 11th European Conference on Genetic Programming
This paper presents a new algorithm for handling nominal attributes in Support Vector Classification by modifying the most popular encoding approach. A nominal attribute with M states is translated into M points in an (M − 1)-dimensional space whose positions are flexible and adjustable; the final positions are chosen by minimizing the leave-one-out error. This strategy overcomes a shortcoming of the most popular approach, which assumes that any two distinct attribute values are equally dissimilar. Preliminary experiments also show the superiority of our new algorithm.