Natural Language Information Processing: A Computer Grammar of English and Its Applications
PathwayFinder: paving the way towards automatic pathway extraction
APBC '04 Proceedings of the second conference on Asia-Pacific bioinformatics - Volume 29
Term identification in the biomedical literature
Journal of Biomedical Informatics - Special issue: Named entity recognition in biomedicine
Towards a base noun phrase parser using Web counts
Journal of Computing Sciences in Colleges
Towards applying text mining and natural language processing for biomedical ontology acquisition
TMBIO '06 Proceedings of the 1st international workshop on Text mining in bioinformatics
Extracting regulatory gene expression networks from PubMed
ACL '04 Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics
Unsupervised Method for Parsing Coordinated Base Noun Phrases
CICLing '07 Proceedings of the 8th International Conference on Computational Linguistics and Intelligent Text Processing
The extraction of enriched protein-protein interactions from biomedical text
BioNLP '07 Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing
Domain adaptation for statistical classifiers
Journal of Artificial Intelligence Research
Leveraging natural language processing of clinical narratives for phenotype modeling
PIKM '10 Proceedings of the 3rd workshop on Ph.D. students in information and knowledge management
Proceedings of the 2nd international workshop on Managing interoperability and compleXity in health systems
Comparative study of classification techniques on biomedical data from hypertext documents
International Journal of Knowledge Engineering and Soft Data Paradigms
Information extraction is the process of scanning text for information relevant to some interest, including extracting entities, relations, and events. It requires deeper analysis than keyword search, but its aims fall short of the very hard, long-term problem of full text understanding; information extraction occupies a midpoint on this spectrum, capturing structured information without sacrificing feasibility. A key idea in this technology is to separate processing into several stages, organized as cascaded finite-state transducers. The earlier stages recognize smaller linguistic objects and work in a largely domain-independent fashion; the later stages take these linguistic objects as input and find domain-dependent patterns among them. There are now initial efforts to apply this technology to biomedical text. In other domains, the technology has plateaued at about 60% recall and precision. Even if applications to biomedical text do no better than this, they could still prove to be of immense help to curatorial activities.
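The cascade described above can be sketched in miniature: a first, largely domain-independent stage that tags small linguistic objects, followed by a domain-dependent stage that matches patterns over the tagged stream. The toy lexicon, protein names, and the "interacts with" pattern below are all illustrative assumptions, not part of any particular system; a real extractor would use far richer finite-state rules at each stage.

```python
# Stage 1 (domain-independent in spirit): recognize small linguistic objects.
# Here a toy lexicon marks entity tokens; real systems use broader
# morphological and lexical finite-state rules. Names are illustrative.
PROTEINS = {"RAD51", "BRCA2", "TP53"}

def tag_entities(tokens):
    """Label each token as a typed linguistic object."""
    return [("PROTEIN", t) if t in PROTEINS else ("WORD", t) for t in tokens]

# Stage 2 (domain-dependent): find patterns over the output of stage 1.
def extract_interactions(tagged):
    """Match the pattern PROTEIN ... 'interacts' ... 'with' ... PROTEIN."""
    results = []
    for i, (kind, tok) in enumerate(tagged):
        if kind != "PROTEIN":
            continue
        for j in range(i + 1, len(tagged)):
            kind2, tok2 = tagged[j]
            if kind2 == "PROTEIN":
                between = [t for _, t in tagged[i + 1:j]]
                if "interacts" in between and "with" in between:
                    results.append((tok, tok2))
                break  # only pair with the nearest following entity
    return results

sentence = "RAD51 interacts directly with BRCA2 in repair".split()
print(extract_interactions(tag_entities(sentence)))  # → [('RAD51', 'BRCA2')]
```

The separation matters for the reuse argument in the abstract: stage 1 can be kept fixed across domains while stage 2's patterns are rewritten for each new extraction task.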