Characterizing performance is essential for optimizing programs and architectures. The open-source Adaptive Sampling Kit (ASK) measures performance trade-offs in large design spaces. Since exhaustively sampling every point is computationally intractable, ASK concentrates exploration in the most irregular regions of the design space through multiple adaptive sampling methods. The paper presents the ASK architecture and a set of adaptive sampling strategies, including a new approach: Hierarchical Variance Sampling. ASK's usage is demonstrated on two performance characterization problems: memory stride accesses and stencil codes. ASK builds precise performance models from a small number of measurements, considerably reducing the cost of performance exploration. For instance, the stencil code design space, which contains more than 31×10^8 points, is accurately predicted using only 1,500 of them.
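The idea behind variance-driven adaptive sampling can be illustrated with a minimal sketch. The code below is not ASK's actual implementation (ASK binds samples using statistical confidence intervals and works on multidimensional spaces); it is a hypothetical 1-D toy that captures the core loop: estimate the variance of measurements inside each region, and repeatedly split and resample the region whose measurements vary most, so that the sampling budget concentrates in the irregular parts of the design space.

```python
import math
import random
import statistics


def region_variance(points):
    """Population variance of the measured values in one region."""
    ys = [y for _, y in points]
    return statistics.pvariance(ys) if len(ys) > 1 else 0.0


def sample(a, b, n, f):
    """Draw n random design points in [a, b] and measure f on them."""
    return [(x, f(x)) for x in (random.uniform(a, b) for _ in range(n))]


def hierarchical_variance_sampling(f, lo, hi, budget, pilot=4):
    """Toy variance-guided sampler (illustrative, not ASK's algorithm):
    greedily subdivide the region with the highest measurement variance
    until the sampling budget is exhausted."""
    regions = {(lo, hi): sample(lo, hi, pilot, f)}
    spent = pilot
    while spent + 2 * pilot <= budget:
        # Pick the region whose measurements are most irregular.
        a, b = max(regions, key=lambda r: region_variance(regions[r]))
        old = regions.pop((a, b))
        mid = (a + b) / 2
        for sub_lo, sub_hi in ((a, mid), (mid, b)):
            # Keep already-measured points that fall in the sub-region,
            # then add a fresh pilot batch there.
            kept = [(x, y) for x, y in old if sub_lo <= x < sub_hi]
            regions[(sub_lo, sub_hi)] = kept + sample(sub_lo, sub_hi, pilot, f)
        spent += 2 * pilot
    return regions
```

Running it on a function that is flat on one half of the space and oscillating on the other shows the expected behavior: zero-variance regions are never split again, so nearly all of the budget lands in the irregular half.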