A Methodology for Mapping Scores to Probabilities
IEEE Transactions on Pattern Analysis and Machine Intelligence
This paper describes the derivation of the probability of correctness from the scores assigned by most recognizers. The motivation for this research is threefold: (i) probability values can be used to rerank the output of any recognizer given a new set of training data; if the training data is sufficiently large and representative of the test data, recognition rates improve significantly; (ii) deriving probability values puts the outputs of different recognizers on the same scale, making comparison across recognizers trivial; and (iii) word recognition can be readily extended to phrase and sentence recognition because the integration of language models becomes straightforward. We have conducted an extensive set of experiments; the results show that reranking recognition choices by the derived probability values leads to enhanced performance.
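The abstract does not spell out the mapping itself, but the general idea of estimating a probability of correctness from recognizer scores can be illustrated with a simple histogram-binning calibration. The sketch below is an assumption for illustration, not the paper's actual methodology: scores from a labeled training set are grouped into quantile bins, and each bin's empirical fraction of correct answers serves as the estimated probability for new scores falling in that bin.

```python
import numpy as np

def bin_calibrate(scores, correct, n_bins=10):
    """Estimate P(correct | score) by quantile binning on training data.

    scores  : array of recognizer scores for training samples
    correct : array of 1/0 labels (1 if the recognizer's top choice was right)
    Returns bin edges and the per-bin empirical probability of correctness.
    """
    edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full score range
    idx = np.clip(np.searchsorted(edges, scores, side="right") - 1, 0, n_bins - 1)
    # Empirical fraction of correct answers in each bin (0.5 if a bin is empty)
    probs = np.array([correct[idx == b].mean() if np.any(idx == b) else 0.5
                      for b in range(n_bins)])
    return edges, probs

def score_to_prob(score, edges, probs):
    """Map a new recognizer score to an estimated probability of correctness."""
    b = np.clip(np.searchsorted(edges, score, side="right") - 1, 0, len(probs) - 1)
    return probs[b]
```

Because the output is a probability rather than a recognizer-specific score, candidates from different recognizers become directly comparable, and reranking reduces to sorting candidates by their calibrated probabilities.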