Learning to solve problem-solving tasks is a hallmark of intelligence. Intelligent agents learn not only from their own experiences but also from the experiences of others. One would like a computerized agent to do the same: to exploit both its own experiences and those of other agents when learning to solve problem-solving tasks. To this end, we introduce a model of learner/trainer interaction that describes how a learning agent and a training agent work together to help the learning agent learn. The proposed model presents the learner as an agent that must make decisions about {\em how} it is going to learn. For example, when should the learning agent ask the trainer for help? The training agent must likewise make decisions about how it interacts with the learning agent. For example, when the learning agent requests help, should the trainer provide it or ignore the request? These decisions drive the interactions, which are the mechanisms by which the training agent provides knowledge to the learning agent. We propose to examine the issues that arise in implementing particular aspects of the learner/trainer model. We will present a restricted learner/trainer model, {\sc Interactive Training}, and discuss the requirements it places on the training agent and the learning agent. In our proposed research we will explore the trainer's ability to interact with the learner as well as the learner's ability to benefit from that interaction. Our goal is to develop a systematic method by which a training agent, human or automated, can train a learning agent effectively, allowing automated agents to be built more quickly. Furthermore, such a training method will allow agents to be built for problems that are currently deemed too difficult for automated learning agents to tackle.
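The decision structure described above can be sketched in code. The following is a minimal, hypothetical illustration of the interaction loop, assuming a confidence-based rule for when the learner asks for help; all class names, method names, and the threshold rule are our own illustrative assumptions, not interfaces defined by the model.

```python
import random

class Trainer:
    """Training agent: decides whether to answer a help request.

    Hypothetical sketch; the policy here is just a lookup table
    mapping a state to the action the trainer would recommend.
    """
    def __init__(self, policy):
        self.policy = policy

    def respond(self, state, help_requested):
        # The trainer may provide advice or ignore the request.
        # Here it answers whenever it knows the state; None means
        # "request ignored" (or no request was made).
        if help_requested:
            return self.policy.get(state)
        return None

class Learner:
    """Learning agent: decides *how* it learns, e.g. when to ask for help."""
    def __init__(self, confidence_threshold=0.5):
        self.best_action = {}   # state -> current best-known action
        self.confidence = {}    # state -> confidence in that estimate
        self.threshold = confidence_threshold

    def should_ask(self, state):
        # One possible rule (an assumption): ask the trainer when the
        # learner's own estimate for this state is unreliable.
        return self.confidence.get(state, 0.0) < self.threshold

    def act(self, state, trainer):
        advice = trainer.respond(state, self.should_ask(state))
        if advice is not None:
            # Knowledge flows from trainer to learner via the interaction.
            self.best_action[state] = advice
            self.confidence[state] = 1.0
            return advice
        # Otherwise fall back on the learner's own experience.
        return self.best_action.get(state, random.choice(["left", "right"]))

# Usage: the first time the learner sees "s0" it asks for help and the
# trainer advises; afterwards the learner acts on its own knowledge.
trainer = Trainer(policy={"s0": "left", "s1": "right"})
learner = Learner()
a1 = learner.act("s0", trainer)  # learner asks; trainer advises
a2 = learner.act("s0", trainer)  # learner now acts without asking
```

The point of the sketch is that both agents hold decision rules (`should_ask` on the learner's side, the answer-or-ignore choice inside `respond` on the trainer's side), and it is those rules, not the task itself, that govern how knowledge is transferred.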