Conditional distribution learning with neural networks and its application to channel equalization

  • Authors:
  • T. Adali; X. Liu; M. K. Sonmez

  • Affiliations:
  • Dept. of Comput. Sci., Maryland Univ., Baltimore, MD

  • Venue:
  • IEEE Transactions on Signal Processing
  • Year:
  • 1997

Abstract

We present a conditional distribution learning formulation for real-time signal processing with neural networks based on an extension of maximum likelihood theory, partial likelihood (PL) estimation, which allows for (i) dependent observations and (ii) sequential processing. For a general neural network conditional distribution model, we establish a fundamental information-theoretic connection, the equivalence of maximum PL estimation and accumulated relative entropy (ARE) minimization, and obtain large-sample properties of PL for the general case of dependent observations. As an example, the binary case with the sigmoidal perceptron as the probability model is presented. It is shown that the single-layer and multilayer perceptron (MLP) models satisfy the conditions for the equivalence of the two cost functions, ARE and negative log partial likelihood. The practical issue of their gradient descent minimization is then studied within the framework of well-formed cost functions. It is shown that these are well-formed cost functions for networks without hidden units; hence, their gradient descent minimization is guaranteed to converge to a solution, if one exists, on such networks. The formulation is applied to adaptive channel equalization, and simulation results are presented to show the ability of the least relative entropy equalizer to realize complex decision boundaries and to recover during training from convergence at the wrong extreme in cases where the mean square error-based MLP equalizer cannot.
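
For concreteness, the sketch below illustrates the binary case described in the abstract: a single-layer sigmoidal perceptron equalizer trained by gradient descent on the cross-entropy cost, i.e. the negative log partial likelihood for binary targets, over a simulated linear channel with BPSK symbols. The channel impulse response, noise level, tap count, and learning rate are illustrative assumptions rather than values from the paper; this is a minimal sketch of the cost and its gradient update, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated channel: BPSK symbols through a short FIR channel plus Gaussian noise
# (channel taps and noise level are illustrative assumptions).
n = 5000
symbols = rng.choice([-1.0, 1.0], size=n)
h = np.array([0.35, 0.85, 0.35])
received = np.convolve(symbols, h, mode="same") + 0.1 * rng.standard_normal(n)

# Tapped-delay-line inputs: one window of received samples per symbol decision.
taps = 5
delay = taps // 2
X = np.array([received[i - delay:i - delay + taps] for i in range(delay, n - delay)])
y = (symbols[delay:n - delay] > 0).astype(float)  # target bit (0/1) per window

# Single-layer sigmoidal perceptron trained by gradient descent on the
# cross-entropy cost (negative log partial likelihood for binary targets).
w = np.zeros(taps)
b = 0.0
lr = 0.05
for epoch in range(50):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # model estimate of P(bit = 1 | received window)
    grad = p - y                            # gradient of cross-entropy w.r.t. the pre-activation
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

decisions = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
print("training bit error rate:", np.mean(decisions != y.astype(bool)))
```

The paper itself goes further, applying the same relative-entropy cost to MLP equalizers that can realize nonlinear decision boundaries; the single-layer sketch is only meant to show the form of the cost and its gradient descent update.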