Kernel Methods for Pattern Analysis
A study on convolution kernels for shallow semantic parsing
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL '04)
Kernel methods, syntax and semantics for relational text categorization
Proceedings of the 17th ACM Conference on Information and Knowledge Management (CIKM '08)
Efficient linearization of tree kernel functions
Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL '09)
Fast support vector machines for structural kernels
Proceedings of the 2011 European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD '11), Part III
Structured lexical similarity via convolution kernels on dependency trees
Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '11)
In recent years, machine learning (ML) has been increasingly used to solve complex tasks in different disciplines, ranging from Data Mining to Information Retrieval and Natural Language Processing (NLP). These tasks often require the processing of structured input; for example, the ability to extract salient features from syntactic/semantic structures is critical to many NLP systems. Mapping such structured data into explicit feature vectors for ML algorithms requires considerable expertise, intuition, and deep knowledge of the target linguistic phenomena. Kernel Methods (KM) are powerful ML tools (see e.g., (Shawe-Taylor and Cristianini, 2004)) that can alleviate the data representation problem. They substitute feature-based similarities with similarity functions, i.e., kernels, defined directly between training/test instances, e.g., syntactic trees, so explicit feature vectors are no longer needed. Additionally, kernel engineering, i.e., the composition or adaptation of several prototype kernels, facilitates the design of effective similarities required for new tasks, e.g., (Moschitti, 2004; Moschitti, 2008).
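To make the idea concrete, the following is a minimal sketch of a convolution tree kernel of the kind the cited works build on (the subset-tree kernel of Collins and Duffy): it computes the similarity of two parse trees directly, by counting their common tree fragments, without ever materializing a feature vector. The names `Node`, `delta`, and `tree_kernel` are illustrative, not from any particular library.

```python
from dataclasses import dataclass, field
from typing import Iterator, List

@dataclass
class Node:
    """A node of a syntactic parse tree: a label plus ordered children."""
    label: str
    children: List["Node"] = field(default_factory=list)

def production(n: Node) -> tuple:
    """The grammar production rooted at n, e.g. (S, (NP, VP))."""
    return (n.label, tuple(c.label for c in n.children))

def nodes(t: Node) -> Iterator[Node]:
    """All nodes of t, in preorder."""
    yield t
    for c in t.children:
        yield from nodes(c)

def delta(n1: Node, n2: Node, lam: float = 1.0) -> float:
    """(Decayed) number of common tree fragments rooted at n1 and n2."""
    if not n1.children or not n2.children:
        return 0.0                      # leaves root no fragments
    if production(n1) != production(n2):
        return 0.0
    if all(not c.children for c in n1.children):
        return lam                      # pre-terminal: exactly one fragment
    prod = lam                          # recurse over aligned children
    for c1, c2 in zip(n1.children, n2.children):
        prod *= 1.0 + delta(c1, c2, lam)
    return prod

def tree_kernel(t1: Node, t2: Node, lam: float = 1.0) -> float:
    """K(T1, T2): common fragments summed over all node pairs."""
    return sum(delta(a, b, lam) for a in nodes(t1) for b in nodes(t2))
```

Given the parses of "the dog runs" and "the cat runs", the kernel credits all the structure they share (the S, NP, VP, D, and V subtrees) and ignores the mismatched noun, even though the implicit fragment space is exponential in tree size. The decay factor `lam` downweights larger fragments, as in the standard formulation.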