This paper reports on the results of research based on the 2009 criteria and corpus of "The Obesity Challenge", defined by Informatics for Integrating Biology and the Bedside (i2b2), a National Center for Biomedical Computing. In the original task, i2b2 asked participants to build software systems that could process a corpus of noisy clinical discharge summaries and report on each patient's condition. The ultimate aim was to compare physicians' judgments of patient condition against machine performance over such a corpus. The authors used a collection of resources to characterize, lexically and semantically, the diseases and their associated signs and symptoms. Their approach combined dictionary look-up, rule-based, and machine-learning methods, together with a special internal redundancy algorithm that reduces reliance on customized rules and increases the consistency of performance across various types of noisy corpora. Performance was further strengthened by information extracted from the patient notes via the internal redundancy module, which helps overcome false positives (FPs) and false negatives (FNs) arising from the noisy nature of the corpus. The methods were applied to a collection of 507 previously unseen noisy patient discharge summaries, and the judgments were evaluated against a manually produced gold standard. The overall ranking of the participating research groups was based primarily on the macro-averaged F-measure over 16 classes of diseases. The implemented method achieved a micro-averaged F-measure of 96.9% (ranked within the top 7 of 28 research groups), with no statistically significant difference among the top 7 teams in micro-averaged F-measure; the highest F-measure was 97.2%. The performance achieved was in line with the agreement between human annotators, indicating the potential of text mining for accurate and efficient prediction of disease status from clinical discharge summaries.
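The ranking metrics mentioned above, micro- and macro-averaged F-measure, differ in how they aggregate per-class results. A minimal sketch of both computations follows; the per-class counts are illustrative placeholders, not the i2b2 challenge figures.

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-class confusion counts: {disease: (TP, FP, FN)}
counts = {
    "asthma":  (90, 5, 5),
    "obesity": (40, 10, 10),
}

# Macro-average: compute F per class, then take the mean,
# so rare disease classes weigh as much as common ones.
macro_f = sum(f_measure(*c) for c in counts.values()) / len(counts)

# Micro-average: pool all counts first, then compute a single F,
# so frequent classes dominate the score.
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro_f = f_measure(tp, fp, fn)
```

Macro-averaging explains why a challenge ranked on it rewards systems that handle rare disease classes well, even when micro-averaged scores of the top teams are statistically indistinguishable.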
Comparison of the results of this approach with those of other submitted classical approaches also showed that adopting the internal redundancy algorithm in clinical domains can boost classifier accuracy without extensive rule writing and customization, and therefore offers the potential for more consistent performance and more efficient processing over various types of noisy corpora.
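The paper does not spell out the internal redundancy algorithm in this abstract, but the underlying intuition, that a single mention of a disease term in a noisy note may be an error while repeated non-negated mentions are stronger evidence, can be sketched as follows. The function name, negation cue list, and threshold are all hypothetical illustrations, not the authors' implementation.

```python
import re

# Illustrative negation cues; a real system would use a fuller
# negation detector (e.g. NegEx-style rules).
NEGATION_CUES = ("no ", "denies ", "negative for ", "without ")

def redundancy_vote(note: str, term: str, min_mentions: int = 2) -> str:
    """Label a note for one disease term using mention redundancy:
    require multiple non-negated mentions before calling it present."""
    positive = negated = 0
    for sentence in re.split(r"[.\n]", note.lower()):
        if term in sentence:
            if any(cue in sentence for cue in NEGATION_CUES):
                negated += 1
            else:
                positive += 1
    if positive >= min_mentions and positive > negated:
        return "present"
    if negated > positive:
        return "absent"
    return "questionable"

note = ("Patient with asthma since childhood. "
        "Asthma controlled with albuterol. "
        "No diabetes.")
```

Counting corroborating mentions across a note, rather than trusting any single rule firing, is one way such a module could suppress isolated false positives in noisy text without adding more customized rules.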