Although Stochastic Context-Free Grammars (SCFGs) appear promising for the recognition and threat assessment of complex radar emitters in radar Electronic Support (ES) systems, techniques for learning their production rule probabilities are computationally demanding and cannot efficiently reflect changes in operational environments. On-line learning techniques are needed in ES applications to adapt SCFG probabilities rapidly as new training data are collected in the field. In this paper, an efficient on-line version of the fast learning technique known as graphical Expectation-Maximization (gEM), called on-line gEM (ogEM), is proposed. A second technique, on-line gEM with discount factor (ogEM-df), extends ogEM to allow tuning of the learning rate. The performance of these new techniques is compared with that of HOLA, the only other fast on-line learning technique, from several perspectives (perplexity, error rate, complexity, and convergence time) using complex radar signals. The impact on performance of factors such as the size of new data blocks and the level of ambiguity of the grammars is also examined. Results indicate that on-line learning of new training data blocks with ogEM and ogEM-df provides the same level of accuracy as batch learning with gEM on all cumulative data from the start, even for small data blocks. As expected, on-line learning significantly reduces the overall time and memory complexity of updating probabilities with new data in an operational environment. Finally, while the computational complexity and memory requirements of ogEM and ogEM-df are greater than those of HOLA, both provide a significantly higher level of accuracy.
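To make the on-line update concrete, the sketch below shows one plausible shape of an ogEM-style step: expected rule counts (the E-step sufficient statistics) accumulated so far are combined with the counts computed from a new data block, and rule probabilities are obtained by renormalizing per left-hand-side nonterminal. A discount factor `gamma` down-weights old statistics, which is how ogEM-df tunes the learning rate; `gamma = 1.0` recovers plain accumulation as in ogEM. All names and the data layout here are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

def ogem_df_update(counts, new_block_counts, gamma=1.0):
    """One on-line update of SCFG rule probabilities (illustrative sketch).

    `counts` maps (lhs, rhs) production rules to accumulated expected
    counts; `new_block_counts` holds the expected counts computed from
    the latest training data block. `gamma` is the discount factor:
    gamma == 1.0 corresponds to plain on-line accumulation (ogEM-like),
    while gamma < 1.0 discounts old statistics (ogEM-df-like), letting
    the probabilities track a changing environment faster.
    """
    updated = defaultdict(float)
    for rule, c in counts.items():
        updated[rule] += gamma * c      # decay old sufficient statistics
    for rule, c in new_block_counts.items():
        updated[rule] += c              # add statistics from the new block

    # M-step: renormalize per left-hand-side nonterminal so that the
    # probabilities of all rules sharing an LHS sum to one.
    totals = defaultdict(float)
    for (lhs, _), c in updated.items():
        totals[lhs] += c
    probs = {rule: c / totals[rule[0]]
             for rule, c in updated.items() if totals[rule[0]] > 0}
    return dict(updated), probs

# Hypothetical usage with two competing rules for the nonterminal "S":
counts = {("S", ("NP", "VP")): 4.0, ("S", ("VP",)): 1.0}
new_block = {("S", ("NP", "VP")): 1.0, ("S", ("VP",)): 1.0}
updated, probs = ogem_df_update(counts, new_block, gamma=0.5)
# With gamma = 0.5: counts become 3.0 and 1.5, so probabilities 2/3 and 1/3.
```

Only the running sufficient statistics need to be stored between blocks, which reflects the abstract's point that on-line updating avoids reprocessing all cumulative data.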