We present a conditional distribution learning formulation for real-time signal processing with neural networks, based on an extension of maximum likelihood theory, partial likelihood (PL) estimation, which allows for (i) dependent observations and (ii) sequential processing. For a general neural network conditional distribution model, we establish a fundamental information-theoretic connection, the equivalence of maximum PL estimation and accumulated relative entropy (ARE) minimization, and obtain large-sample properties of PL for the general case of dependent observations. As an example, the binary case with the sigmoidal perceptron as the probability model is presented. It is shown that the single-layer and multilayer perceptron (MLP) models satisfy the conditions for the equivalence of the two cost functions, ARE and negative log partial likelihood. The practical issue of their gradient descent minimization is then studied within the framework of well-formed cost functions. It is shown that these are well-formed cost functions for networks without hidden units; hence their gradient descent minimization is guaranteed to converge to a solution, if one exists, on such networks. The formulation is applied to adaptive channel equalization, and simulation results are presented to show the ability of the least relative entropy equalizer to realize complex decision boundaries and to recover during training from convergence at the wrong extreme in cases where the mean square error based MLP equalizer cannot.
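To make the binary case concrete, the sketch below trains a single-layer sigmoidal perceptron as a channel equalizer by stochastic gradient descent on the negative log partial likelihood, which for binary targets is the cross-entropy loss. The channel model, tap values, noise level, learning rate, and equalizer order are all illustrative assumptions for this sketch, not the settings used in the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel(bits, noise_std=0.1):
    # Illustrative two-tap FIR channel with additive Gaussian noise
    # (an assumption for this sketch): y[n] = s[n] + 0.5 s[n-1] + noise.
    s = 2.0 * bits - 1.0          # BPSK symbols in {-1, +1}
    y = s + 0.5 * np.roll(s, 1)
    return y + noise_std * rng.standard_normal(len(s))

def make_features(y, order=3):
    # Tapped-delay-line input vector: x[n] = (y[n], y[n-1], y[n-2]).
    X = np.column_stack([np.roll(y, k) for k in range(order)])
    X[:order] = 0.0               # discard samples wrapped by np.roll
    return X

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_lre_equalizer(X, bits, lr=0.1, epochs=5):
    # Online gradient descent on the negative log partial likelihood
    # (cross-entropy): L_t = -[b_t log p_t + (1 - b_t) log(1 - p_t)],
    # where p_t = sigmoid(w'x_t + b) is the perceptron's probability model.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, bits):
            p = sigmoid(x @ w + b)
            g = p - t             # gradient of cross-entropy w.r.t. the logit
            w -= lr * g * x
            b -= lr * g
    return w, b

bits = rng.integers(0, 2, size=2000).astype(float)
y = channel(bits)
X = make_features(y)
w, b = train_lre_equalizer(X, bits)
pred = (sigmoid(X @ w + b) > 0.5).astype(float)
acc = (pred == bits).mean()       # fraction of correctly equalized bits
```

Because the network has no hidden units, the cross-entropy cost here is well-formed in the sense discussed above, so gradient descent is guaranteed to reach a solution when one exists; a mean square error cost on the same architecture lacks this guarantee.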