In classification problems, machine learning algorithms often rely on the assumption that (dis)similar inputs lead to (dis)similar outputs. Two questions then naturally arise: what does it mean for two inputs to be similar, and how can this similarity be exploited in a learning algorithm? In support vector machines, similarity between input examples is expressed implicitly by a kernel function that computes inner products in the feature space. For numerical input examples the concept of an inner product is easy to define; for discrete structures such as sequences of symbolic data, however, it is far less obvious. This article describes an approach to SVM learning for symbolic data that can serve, under certain circumstances, as an alternative to the bag-of-words approach. The latter first transforms symbolic data into vectors of numerical data, which are then passed as arguments to one of the standard kernel functions. In contrast, we propose kernels that operate on the symbolic data directly.
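To make the contrast concrete, the following is a minimal illustrative sketch in Python, not the specific kernels proposed in the article: a simple substring-counting (spectrum) kernel that computes an inner product directly on symbolic sequences, next to a bag-of-words baseline that first vectorizes the data and then applies a standard linear inner product.

```python
from collections import Counter

def spectrum_kernel(s: str, t: str, p: int = 3) -> int:
    """Illustrative p-spectrum string kernel: the inner product in a
    feature space indexed by all length-p substrings, computed directly
    on the symbolic sequences without an explicit vectorization step."""
    cs = Counter(s[i:i + p] for i in range(len(s) - p + 1))
    ct = Counter(t[i:i + p] for i in range(len(t) - p + 1))
    return sum(cs[u] * ct[u] for u in cs)

def bag_of_words_kernel(s: str, t: str) -> int:
    """Bag-of-words baseline: map each text to a word-count vector first,
    then take a standard (linear) inner product of those vectors."""
    cs, ct = Counter(s.split()), Counter(t.split())
    return sum(cs[w] * ct[w] for w in cs)

# The spectrum kernel sees sub-word structure that bag-of-words discards:
print(spectrum_kernel("kernel methods", "kernel functions"))     # 5 shared 3-grams
print(bag_of_words_kernel("kernel methods", "kernel functions")) # 1 shared word
```

Because the spectrum kernel counts shared substrings rather than shared whole words, the two inputs above score 5 under it but only 1 under the bag-of-words baseline; kernels defined directly on symbolic data can thus capture similarity that is lost once the data is flattened into word-count vectors.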