A maximum entropy classifier can be used to extract sentences from documents. Experiments using technical documents show that such a classifier tends to treat features in a categorical manner, resulting in worse performance than extracting sentences with a naive Bayes classifier. Adding an optimised prior to the maximum entropy classifier improves performance over and above that of naive Bayes, even when naive Bayes is also extended with a similar prior. Further experiments show that, given extremely informative features, maximum entropy yields excellent results. Naive Bayes, in contrast, cannot exploit such features and so fundamentally limits sentence extraction performance.
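To make the setup concrete, here is a minimal sketch of the kind of model the abstract describes: a binary maximum entropy (logistic) classifier over sentence features, trained by gradient ascent on the log-likelihood penalised by a zero-mean Gaussian prior on the weights (the L2 term `w**2 / (2 * sigma2)`). The feature encoding and all names are hypothetical illustrations, not the paper's actual feature set or implementation.

```python
import math

def train_maxent(xs, ys, n_features, sigma2=1.0, lr=0.1, epochs=300):
    """Train a binary maximum entropy classifier by gradient ascent.

    The penalty term -w_i / sigma2 in the gradient corresponds to a
    zero-mean Gaussian prior with variance sigma2 on each weight,
    which discourages the categorical (all-or-nothing) treatment of
    features that the abstract reports for the unregularised model.
    """
    w = [0.0] * n_features
    for _ in range(epochs):
        # Gradient of the Gaussian prior (L2 penalty) on the weights.
        grad = [-wi / sigma2 for wi in w]
        for x, y in zip(xs, ys):
            z = sum(w[i] * x[i] for i in range(n_features))
            p = 1.0 / (1.0 + math.exp(-z))        # P(extract | sentence)
            for i in range(n_features):
                grad[i] += (y - p) * x[i]         # log-likelihood gradient
        for i in range(n_features):
            w[i] += lr * grad[i]
    return w

def predict(w, x):
    """Probability that a sentence with feature vector x is extracted."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy sentences with two hypothetical binary features, e.g.
# [appears_in_first_paragraph, contains_cue_phrase].
xs = [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.0, 0.0]]
ys = [1, 1, 0, 0]  # 1 = sentence belongs in the extract
w = train_maxent(xs, ys, n_features=2)
```

Shrinking `sigma2` pulls the weights toward zero (a stronger prior); a very large `sigma2` recovers the unregularised maximum entropy model whose behaviour the abstract contrasts with naive Bayes.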