MapReduce: simplified data processing on large clusters
OSDI'04 Proceedings of the 6th conference on Symposium on Operating Systems Design & Implementation - Volume 6
Evaluating MapReduce for Multi-core and Multiprocessor Systems
HPCA '07 Proceedings of the 2007 IEEE 13th International Symposium on High Performance Computer Architecture
MapReduce: simplified data processing on large clusters
Communications of the ACM - 50th anniversary issue: 1958 - 2008
Mars: a MapReduce framework on graphics processors
Proceedings of the 17th international conference on Parallel architectures and compilation techniques
A comparison of join algorithms for log processing in MapReduce
Proceedings of the 2010 ACM SIGMOD International Conference on Management of data
Twister: a runtime for iterative MapReduce
Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing
Rapid parallel genome indexing with MapReduce
Proceedings of the second international workshop on MapReduce and its applications
Fast clustering using MapReduce
Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining
Parallel rough set based knowledge acquisition using MapReduce from big data
Proceedings of the 1st International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications
Data assimilation using sequential monte carlo methods in wildfire spread simulation
ACM Transactions on Modeling and Computer Simulation (TOMACS)
Cloud MapReduce for Monte Carlo bootstrap applied to Metabolic Flux Analysis
Future Generation Computer Systems
MapReduce is a domain-independent programming model for processing data in a highly parallel fashion. With MapReduce, parallel computation can be carried out automatically on large clusters of commodity machines. This paper presents a method that utilizes the parallel and distributed processing capability of Hadoop MapReduce for particle filter-based data assimilation in wildfire spread simulation. We parallelize the sampling and weight computation steps of the particle filtering algorithm based on the MapReduce programming model. Experimental results show that our approach significantly improves the performance of particle filter-based data assimilation.
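The parallelization described in the abstract (sampling and weight computation as map tasks, weight aggregation as a reduce task) can be sketched in plain Python. This is a hypothetical minimal illustration, not the paper's Hadoop implementation: the state-transition model, the Gaussian likelihood, and the function names are all assumptions made for the example.

```python
import math
import random

def map_particle(particle, observation, noise_std=1.0):
    """Map task: propagate one particle (sampling) and compute its weight."""
    # Toy random-walk transition model (assumed for illustration).
    new_state = particle + random.gauss(0.0, noise_std)
    # Unnormalized Gaussian likelihood of the observation given the new state.
    weight = math.exp(-0.5 * ((observation - new_state) / noise_std) ** 2)
    return new_state, weight

def reduce_weights(mapped):
    """Reduce task: normalize the weights across all particles."""
    total = sum(w for _, w in mapped)
    return [(state, w / total) for state, w in mapped]

# Each map_particle call is independent, so in a real MapReduce job these
# calls would run in parallel across the cluster; the reduce step then
# gathers the weights for normalization (and subsequent resampling).
particles = [random.gauss(0.0, 1.0) for _ in range(1000)]
observation = 0.5
mapped = [map_particle(p, observation) for p in particles]
posterior = reduce_weights(mapped)
```

Because each particle's sampling and weighting depend only on that particle and the shared observation, the map phase is embarrassingly parallel, which is what makes particle filtering a natural fit for the MapReduce model.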