An Efficient Digital Search Algorithm by Using a Double-Array Structure
IEEE Transactions on Software Engineering
Large Margin Classification Using the Perceptron Algorithm
Machine Learning - The Eleventh Annual Conference on Computational Learning Theory
Grafting: fast, incremental feature selection by gradient descent in function space
The Journal of Machine Learning Research
Efficient support vector classifiers for named entity recognition
COLING '02 Proceedings of the 19th International Conference on Computational Linguistics - Volume 1
Fast methods for kernel-based text analysis
ACL '03 Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics - Volume 1
A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data
The Journal of Machine Learning Research
Linear-time dependency analysis for Japanese
COLING '04 Proceedings of the 20th International Conference on Computational Linguistics
Online Passive-Aggressive Algorithms
The Journal of Machine Learning Research
Tracking the best hyperplane with a simple budget Perceptron
Machine Learning
The Forgetron: A Kernel-Based Perceptron on a Budget
SIAM Journal on Computing
Structure compilation: trading structure for features
Proceedings of the 25th International Conference on Machine Learning
splitSVM: fast, space-efficient, non-heuristic, polynomial kernel computation for NLP applications
HLT-Short '08 Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers
An approximate approach for training polynomial kernel SVMs in linear time
ACL '07 Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions
Practical structured learning techniques for natural language processing
A fast boosting-based learner for feature-rich tagging and chunking
CoNLL '08 Proceedings of the Twelfth Conference on Computational Natural Language Learning
Cross-task knowledge-constrained self training
EMNLP '08 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Learning combination features with L1 regularization
NAACL-Short '09 Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers
Stochastic gradient descent training for L1-regularized log-linear models with cumulative penalty
ACL '09 Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP - Volume 1
Polynomial to linear: efficient classification with conjunctive features
EMNLP '09 Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing - Volume 3
Bounded Kernel-Based Online Learning
The Journal of Machine Learning Research
Training and Testing Low-degree Polynomial Data Mappings via Linear SVM
The Journal of Machine Learning Research
Identifying constant and unique relations by using time-series text
EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
This paper proposes an efficient online method for training a classifier with many conjunctive features. We employ a kernel computation technique called kernel slicing, which explicitly considers conjunctions among frequent features when computing the polynomial kernel, to combine the merits of linear and kernel-based training. To improve the scalability of this training, we reuse the temporal margins of partial feature vectors and terminate unnecessary margin computations early. Experiments on dependency parsing and hyponymy-relation extraction demonstrate that our method trains a classifier orders of magnitude faster than kernel-based online learning, while retaining its space efficiency.
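The equivalence the abstract builds on — that a low-degree polynomial kernel over binary features behaves like a linear model over explicit feature conjunctions — can be illustrated for degree 2. This is a minimal sketch of that equivalence only, not the paper's kernel-slicing method; all function names here are illustrative:

```python
from itertools import combinations

def poly2_kernel(x, z):
    """Degree-2 polynomial kernel (1 + x.z)**2 over binary feature sets."""
    shared = len(x & z)
    return (1 + shared) ** 2

def expand(x):
    """Explicit conjunctive map phi(x) with phi(x).phi(z) == poly2_kernel(x, z).

    Expanding (1 + s)**2 = 1 + 3s + 2*C(s, 2) for s shared binary features
    gives weight 1 for the bias, sqrt(3) per single feature, and sqrt(2)
    per pairwise conjunction.
    """
    phi = {(): 1.0}                       # bias term
    for f in x:
        phi[(f,)] = 3 ** 0.5              # single features
    for f, g in combinations(sorted(x), 2):
        phi[(f, g)] = 2 ** 0.5            # pairwise conjunctions
    return phi

def dot(p, q):
    """Sparse dot product of two feature-to-value maps."""
    return sum(v * q[k] for k, v in p.items() if k in q)
```

Because the expanded map is linear, frequent features can be handled with explicit conjunction weights while the kernel form is kept for the rest — the trade-off between linear and kernel-based computation that kernel slicing exploits.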