Games such as Chess, Eight Queens, and the Tiles Puzzle have traditionally served as popular benchmarks for evaluating problem-solving strategies. Such benchmarks usually impose specific constraints that a technique is expected to address and solve, and these shared problems and constraints enable a fair, tangible comparison of different techniques. Many game benchmarks (e.g., Eight Queens and the Tiles Puzzle) are suitable for evaluating static scheduling techniques: the scheduling phase and the execution phase (i.e., when the schedule is executed to play the game) are disjoint, so the scheduling technique can compute a complete schedule before any move is played. Due to recent interest in on-line problem-solving techniques, there is a need for benchmarks that can evaluate the performance trade-offs of dynamic scheduling techniques. Many modern video and computer games are suitable candidates for such benchmarks, since they require on-line problem solving. These benchmarks and their system testbeds should be chosen and implemented so that they accurately reveal the important performance trade-offs of dynamic scheduling techniques. In this paper, we introduce a dynamic scheduling benchmark and its system testbed. The benchmark is based on an extended version of the Tetris computer game, whose rules and semantics were modified to lend themselves to the evaluation of discrete problem-solving and optimization techniques. The system testbed is implemented in a distributed, asynchronous fashion on a network of workstations, to expose the performance trade-offs between scheduling time, schedule quality, and problem constraints. We present the results of a set of experiments that evaluate the benchmark and its system testbed, and that assess the performance trade-offs of a set of scheduling techniques under different benchmark constraints.
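The core trade-off the abstract describes — spending more scheduling time to obtain a higher-quality schedule before a deadline forces execution — can be sketched with a minimal anytime scheduler. This is an illustrative sketch only, not the paper's actual technique or testbed: the task representation, the `evaluate` quality metric, and the random-swap refinement are all hypothetical stand-ins chosen to make the deadline/quality interaction concrete.

```python
import random
import time

def evaluate(schedule):
    """Hypothetical quality metric: 0 is optimal (schedule fully ordered),
    more negative means a worse ordering."""
    return -sum(abs(a - b) for a, b in zip(schedule, sorted(schedule)))

def anytime_schedule(tasks, deadline_s):
    """Refine a schedule by random pairwise swaps until the deadline expires,
    always retaining the best plan found so far (anytime behaviour): an
    answer is available at any moment, and quality never decreases with
    additional scheduling time."""
    best = list(tasks)
    best_q = evaluate(best)
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        cand = best[:]
        i = random.randrange(len(cand))
        j = random.randrange(len(cand))
        cand[i], cand[j] = cand[j], cand[i]
        q = evaluate(cand)
        if q > best_q:          # keep only improving candidates
            best, best_q = cand, q
    return best, best_q
```

Varying `deadline_s` then traces the scheduling-time versus schedule-quality curve: a tight deadline returns a rough plan immediately, while a looser one yields a plan closer to optimal, which is the kind of trade-off a dynamic scheduling benchmark is meant to measure.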