Mixture-of-experts algorithms can achieve a total loss close to that of the best expert over a sequence of examples. We apply mixture-of-experts algorithms to the signal processing problem of channel equalization, using them to track the best parameter settings for equalizers in the presence of noise or when the channel characteristics are unknown and possibly non-stationary. Our experiments demonstrate the use of expert algorithms in tracking the best step size for an LMS equalizer under additive noise and in prior selection for the approximate natural gradient (ANG) algorithm.
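The step-size tracking idea can be sketched as follows: run several LMS equalizers in parallel, one per candidate step size, and combine their predictions with an exponentially weighted mixture whose weights favor the experts with the smallest cumulative squared error. This is a minimal illustration, not the paper's experimental setup; the channel, noise level, delay, and candidate step sizes below are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: BPSK symbols through a short FIR channel with additive noise.
n, taps = 2000, 5
h = np.array([1.0, 0.5, 0.2])            # assumed channel impulse response
s = rng.choice([-1.0, 1.0], size=n)      # transmitted symbols
x = np.convolve(s, h)[:n] + 0.1 * rng.standard_normal(n)

step_sizes = [0.001, 0.01, 0.05]         # candidate LMS step sizes (the "experts")
W = np.zeros((len(step_sizes), taps))    # one equalizer filter per expert
log_w = np.zeros(len(step_sizes))        # log mixture weights
eta = 2.0                                # mixture learning rate (assumed)

for t in range(taps, n):
    u = x[t - taps:t][::-1]              # input window, most recent sample first
    preds = W @ u                        # each expert's prediction
    p = np.exp(log_w - log_w.max())
    p /= p.sum()                         # normalized mixture weights
    y_mix = p @ preds                    # mixture prediction
    d = s[t - 1]                         # desired symbol (assumed 1-sample delay)
    losses = (preds - d) ** 2
    log_w -= eta * losses                # exponential weighting update
    for i, mu in enumerate(step_sizes):  # independent LMS update per expert
        W[i] += mu * (d - preds[i]) * u

best = int(p.argmax())
print("mixture currently favors step size:", step_sizes[best])
```

The mixture weights concentrate on whichever step size accumulates the least squared error, so if the noise level or channel changed mid-stream, the weights would shift toward a different expert without any manual retuning.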