Job scheduling in Hadoop is an active research topic; however, current work focuses mainly on optimizing execution time. With the trend of offering Hadoop as a service to the public or to specific groups, more factors must be considered, such as both time and cost. To address this problem, we present a utility-driven share scheduling algorithm. Taking both time and cost into account, the algorithm produces a globally optimized scheduling scheme according to the workload of each job. Furthermore, we present a model that estimates a job's execution time as a function of cost. Finally, we implement the algorithm and evaluate it on a Hadoop cluster.
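The abstract does not spell out the utility model, so as an illustration only, here is a minimal Python sketch in which a job's utility is a weighted negative sum of its estimated time and cost, and jobs are ordered greedily by utility. The weights, the `Job` fields, and the greedy ordering are all assumptions for illustration, not the paper's actual method:

```python
# Hypothetical sketch of utility-driven scheduling: the utility function,
# weights, and greedy ordering below are assumptions, not the paper's model.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    est_time: float   # estimated execution time (hours), assumed given
    est_cost: float   # estimated monetary cost (dollars), assumed given

def utility(job: Job, w_time: float = 0.5, w_cost: float = 0.5) -> float:
    """Lower time and cost yield higher utility (weights are assumptions)."""
    return -(w_time * job.est_time + w_cost * job.est_cost)

def schedule(jobs: list[Job]) -> list[Job]:
    """Order jobs by descending utility -- a greedy stand-in for the
    global optimization the abstract mentions but does not specify."""
    return sorted(jobs, key=utility, reverse=True)

jobs = [Job("A", est_time=2.0, est_cost=1.0),
        Job("B", est_time=1.0, est_cost=0.5),
        Job("C", est_time=3.0, est_cost=4.0)]
print([j.name for j in schedule(jobs)])  # B first: cheapest and fastest
```

In a real Hadoop-as-a-service setting the cost term would come from a pricing model and the time term from a job-profiling estimator; here both are simply taken as inputs.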