In this paper, the problem of multi-step ahead prediction of long-term deep fading in mobile networks is studied. We first briefly review the operating principle of the temporal difference (TD) method. A TD-based multi-step ahead prediction scheme using the modified Elman neural network (MENN) is then proposed. This approach supports on-line adaptation and a fast convergence rate. It is then applied to predicting the occurrence of long-term deep fading in mobile communication systems. Simulation experiments show that the proposed scheme can predict the likelihood of future deep fading. These prediction results form a solid basis for applying reinforcement learning to power control in cellular phone systems.
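As background for the TD method the abstract refers to, the following is a minimal sketch of the one-step TD(0) prediction update. It is illustrative only: the tabular value store, function name, and parameters here are assumptions for this sketch; the paper itself uses a modified Elman neural network as the function approximator rather than a lookup table.

```python
def td0_predict(episodes, alpha=0.1, gamma=0.9):
    """One-step temporal-difference (TD(0)) value prediction.

    episodes: list of trajectories, each a list of (state, reward) pairs,
              where the reward is the one received on entering that state.
    Returns a dict mapping state -> estimated value.
    """
    V = {}  # tabular value estimates (the paper would use an MENN here)
    for episode in episodes:
        for t in range(len(episode) - 1):
            s, _ = episode[t]
            s_next, r = episode[t + 1]
            v_s = V.get(s, 0.0)
            v_next = V.get(s_next, 0.0)
            # TD(0) update: move V(s) toward the bootstrapped
            # target r + gamma * V(s'), by step size alpha
            V[s] = v_s + alpha * (r + gamma * v_next - v_s)
    return V
```

Because the target bootstraps from the current estimate of the next state, the update can be applied on-line after every transition, which is the property the proposed prediction scheme exploits for on-line adaptation.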