A task-based comparison of information extraction pattern models
DeepLP '07 Proceedings of the Workshop on Deep Linguistic Processing
Several recently reported techniques for the automatic acquisition of Information Extraction (IE) systems use dependency trees as the basis of their extraction pattern representation. These approaches employ a variety of pattern models: schemes for representing IE patterns based on particular parts of the dependency analysis. An appropriate model should be expressive enough to represent the information to be extracted from text without being overly complicated. Four previously reported pattern models are evaluated using existing IE evaluation corpora and three dependency parsers. One model, linked chains, was found to represent around 95% of the information of interest without generating an unwieldy number of possible patterns.
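As a rough illustration of what a dependency-based pattern model looks like, the sketch below enumerates patterns in the spirit of the linked-chains model: a *chain* is a root-to-node path in the dependency tree of a sentence, and a *linked chain* pairs two chains that share the root verb. The sentence, token names, and relation labels are invented for the example; a real system would take them from a dependency parser's output rather than a hand-built table.

```python
from itertools import combinations

# Toy dependency analysis for "Acme hired Smith as chairman".
# Index 0 is the root verb; HEADS maps each child token to its
# (head index, dependency relation). Labels are illustrative only.
TOKENS = ["hired", "Acme", "Smith", "chairman"]
HEADS = {1: (0, "subj"), 2: (0, "obj"), 3: (0, "as")}

def chains(node=0, prefix=()):
    """Enumerate all root-to-node paths (chain patterns) below `node`."""
    out = []
    for child, (head, rel) in HEADS.items():
        if head == node:
            path = prefix + ((rel, TOKENS[child]),)
            out.append(path)
            out.extend(chains(child, path))  # extend through descendants
    return out

def linked_chains():
    """Pair chains sharing the root verb: the linked-chains model."""
    return [(TOKENS[0], a, b) for a, b in combinations(chains(), 2)]

for pattern in linked_chains():
    print(pattern)
```

With three dependents of the verb, this yields three chains and three linked-chain patterns (e.g. the verb with its `subj` and `obj` chains), showing why the model can capture relations between two arguments of the same verb without enumerating arbitrary subtrees.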