Hash Distributed A* (HDA*) is a parallel A* algorithm that has proven effective for optimal sequential planning with unit edge costs. HDA* uses the Zobrist hash function to distribute and schedule work almost uniformly among processors. This paper evaluates the performance of HDA* on optimal sequence alignment. We observe that, with a large number of CPU cores, HDA* suffers from increased search overhead caused by reexpansions of states already in the closed list, a consequence of the nonuniform edge costs in this domain. We therefore present a new work distribution strategy that restricts the set of processors among which work is distributed, thereby increasing the chance of detecting such duplicate search effort. We evaluate this approach on a cluster of multi-core machines and show that it scales well up to 384 CPU cores.
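As background on the mechanism the abstract refers to, Zobrist hashing assigns each (variable, value) feature of a state a fixed random bitstring; a state's hash is the XOR of the bitstrings of its features, and in HDA* the processor that owns the state is the hash modulo the number of processors. The sketch below illustrates this idea only; the state encoding, table sizes, and function names are illustrative assumptions, not taken from the paper:

```python
import random

# Illustrative sizes (assumptions, not from the paper).
NUM_PROCESSORS = 8
NUM_VARS, NUM_VALUES = 4, 10

# Precomputed table of random 64-bit strings, one per (variable, value) pair.
# A fixed seed keeps the table identical across all processors.
rng = random.Random(42)
ZOBRIST_TABLE = [[rng.getrandbits(64) for _ in range(NUM_VALUES)]
                 for _ in range(NUM_VARS)]

def zobrist_hash(state):
    """XOR the bitstrings of each variable assignment in the state."""
    h = 0
    for var, val in enumerate(state):
        h ^= ZOBRIST_TABLE[var][val]
    return h

def owner(state, num_processors=NUM_PROCESSORS):
    """Processor that owns this state, i.e. expands it and prunes duplicates."""
    return zobrist_hash(state) % num_processors

# Duplicates of the same state always hash to the same owner, so that
# processor's local closed list suffices to detect them.
state = (3, 1, 4, 1)
assert owner(state) == owner((3, 1, 4, 1))
```

One useful property of XOR-based hashing is that the hash of a successor state can be updated incrementally: changing one variable's value only requires XORing out the old entry and XORing in the new one, rather than rehashing the whole state.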