Which system differences matter?: using l1/l2 regularization to compare dialogue systems
SIGDIAL '11 Proceedings of the SIGDIAL 2011 Conference
The richness of multimodal dialogue makes the space of possible features needed to describe it very large relative to the amount of training data. Conventional classifier learners either require large amounts of data to avoid overfitting or generalize poorly to unseen examples. To learn dialogue classifiers from a rich feature set with fewer data points than features, we apply a recent technique, ℓ1-regularized logistic regression. We demonstrate this approach empirically on real data from Project LISTEN's Reading Tutor, which displays a story on a computer screen and listens to a child read aloud. We train a classifier that predicts task completion (i.e., whether the student will finish reading the story) with 71% accuracy on a balanced, unseen test set. To characterize differences in children's behavior when they choose the story they read, we likewise train and test a classifier that infers, with 73.6% accuracy, who chose the story based on the ensuing dialogue. Both classifiers significantly outperform baselines and reveal relevant features of the dialogue.
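The setting the abstract describes — far more candidate features than labeled dialogues — can be sketched with off-the-shelf tools. The snippet below is a minimal illustration (not the authors' code or data), using scikit-learn's ℓ1-penalized logistic regression on synthetic stand-in data with more features than examples; the feature counts and regularization strength `C` are arbitrary choices for the demo.

```python
# Minimal sketch of l1-regularized logistic regression in a
# "more features than training examples" regime (synthetic data,
# not Project LISTEN's; feature counts and C are illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 40 "dialogues" described by 200 candidate features, of which
# only the first three actually predict the (binary) outcome.
X = rng.normal(size=(40, 200))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

# The l1 penalty drives most coefficients to exactly zero,
# so the fitted model doubles as a feature selector.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X, y)

nonzero = np.flatnonzero(clf.coef_[0])
print(f"nonzero coefficients: {len(nonzero)} of 200")
```

Inspecting which coefficients survive the penalty is what lets this kind of classifier "reveal relevant features of the dialogue", as the abstract puts it, rather than serving only as a black-box predictor.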