Journal of Parallel and Distributed Computing
Task scheduling is essential to the proper functioning of parallel processor systems. The static scheduling of tasks onto networks of parallel processors is well defined and documented in the literature. In many practical situations, however, a priori information about the tasks to be scheduled is not available; tasks arrive dynamically, and scheduling must be performed on-line, or "on the fly." In this paper, we present a framework based on stochastic reinforcement learning, a technique commonly used to solve optimization problems in a simple and efficient way. Reinforcement learning reduces the dynamic scheduling problem to learning a stochastic approximation of an unknown average error surface. The main advantage of the proposed approach is that no prior information about the parallel processor system under consideration is required: the learning system develops an association between the best action (schedule) and the current state of the environment (parallel system). The performance of reinforcement learning is demonstrated on several dynamic scheduling problems, and the conditions under which it can be used to efficiently solve the dynamic scheduling problem are highlighted.
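The association between scheduling actions and reinforcement that the abstract describes can be sketched with a stochastic learning automaton that assigns arriving tasks to processors. This is a minimal hypothetical illustration, not the paper's implementation: the class name, the linear reward-inaction update rule, and the toy two-processor environment are all assumptions introduced here for concreteness.

```python
import random

class SchedulingAutomaton:
    """Stochastic learning automaton for on-line task-to-processor assignment.

    Maintains one action probability per processor and reinforces actions
    (schedules) that produce favourable outcomes, without any prior model
    of the parallel system.
    """

    def __init__(self, num_procs, lr=0.1, seed=0):
        self.probs = [1.0 / num_procs] * num_procs  # start with no preference
        self.lr = lr
        self.rng = random.Random(seed)

    def choose(self):
        # Sample a processor according to the current action probabilities.
        r, acc = self.rng.random(), 0.0
        for i, p in enumerate(self.probs):
            acc += p
            if r <= acc:
                return i
        return len(self.probs) - 1

    def reinforce(self, action, reward):
        # Linear reward update: a reward in [0, 1] shifts probability mass
        # toward the chosen action; the probabilities still sum to one.
        for i in range(len(self.probs)):
            if i == action:
                self.probs[i] += self.lr * reward * (1.0 - self.probs[i])
            else:
                self.probs[i] -= self.lr * reward * self.probs[i]

# Toy environment (an assumption for this sketch): processor 0 is twice as
# fast as processor 1, so scheduling a task there earns a higher reward.
speeds = [2.0, 1.0]
auto = SchedulingAutomaton(num_procs=2)
for _ in range(500):
    p = auto.choose()
    reward = speeds[p] / max(speeds)  # faster service -> reward closer to 1
    auto.reinforce(p, reward)
```

After enough arrivals, the automaton's probability vector concentrates on the faster processor, illustrating how a schedule/state association can be learned purely from observed rewards.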