A General Probabilistic Formulation for Supervised Neural Classifiers

  • Authors:
  • Hongmei Ni, Tülay Adali, Bo Wang, Xiao Liu

  • Affiliations:
  • Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD 21250, USA (Hongmei Ni, Tülay Adali, Bo Wang); GlobeSpan Semiconductor Inc., 100 Schulz Drive, Red Bank, NJ 07701, USA (Xiao Liu)

  • Venue:
  • Journal of VLSI Signal Processing Systems
  • Year:
  • 2000


Abstract

We use partial likelihood (PL) theory to introduce a general probabilistic framework for the design and analysis of neural classifiers. The formulation allows the training samples used in the design to be correlated in time and accommodates a wide range of neural network probability models, including recurrent structures. We use PL theory to establish a fundamental information-theoretic connection: the equivalence of likelihood maximization and relative entropy minimization, shown without the common assumptions of independent training samples and knowledge of the true distribution. We use this result to construct the information geometry of partial likelihood and to derive the information-geometric e- and m-projection (em) algorithm for class-conditional density modeling by finite normal mixtures. We demonstrate the application of the algorithm with a channel equalization example and present simulation results that show the efficiency of the scheme.
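
The abstract does not reproduce the algorithm itself. As an illustration of the modeling setting it describes, the sketch below fits a finite normal (Gaussian) mixture as a class-conditional density using plain EM. It is not the paper's information-geometric e- and m-projection (em) algorithm; the function name, parameters, and initialization scheme are assumptions made for this example.

```python
# Minimal sketch: standard EM for a 1-D finite normal mixture used as a
# class-conditional density model. Illustrative only; not the paper's
# information-geometric em algorithm. All names here are hypothetical.
import numpy as np

def fit_gaussian_mixture(x, n_components=2, n_iter=100, seed=0):
    """Fit a 1-D Gaussian mixture to samples x with plain EM."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = x.shape[0]
    # Initialization: random means drawn from the data, shared variance,
    # uniform mixing weights.
    means = rng.choice(x, size=n_components, replace=False)
    variances = np.full(n_components, x.var() + 1e-6)
    weights = np.full(n_components, 1.0 / n_components)

    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample.
        log_dens = (-0.5 * (x[:, None] - means) ** 2 / variances
                    - 0.5 * np.log(2 * np.pi * variances)
                    + np.log(weights))
        log_norm = np.logaddexp.reduce(log_dens, axis=1, keepdims=True)
        resp = np.exp(log_dens - log_norm)

        # M-step: re-estimate weights, means, and variances from responsibilities.
        nk = resp.sum(axis=0) + 1e-12
        weights = nk / n
        means = (resp * x[:, None]).sum(axis=0) / nk
        variances = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk + 1e-6

    return weights, means, variances
```

A usage sketch for a two-class problem such as channel equalization: fit one mixture per class on the labeled samples, then classify a new sample by the larger product of class-conditional density and class prior.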