Abstract
MapReduce (MR), a widely used programming model for processing large data sets, has become indispensable in data clusters and grids, e.g. Hadoop environments. Load balancing, a key factor in the performance of map resource distribution, has recently attracted considerable optimization effort. Current MR implementations distribute tasks to cluster nodes using hashing with simple modulo operations, which can produce uneven data distribution and skewed loads, thereby degrading the performance of the entire system. In this paper, a virtual partition consistent hashing (VPCH) algorithm is proposed for the reduce stage of MR in order to achieve a better trade-off in job allocation. In addition, choosing the number of reducers for the reduce phase normally requires experienced programmers, so the quality of MR scripts varies widely. An extreme learning machine is therefore employed to recommend the number of reducers a mapped task needs, and execution time is also predicted so that users can better schedule their tasks. According to the results, VPCH leads to balanced loads, and our prediction model predicts faster than SVM while maintaining similar accuracy.
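To illustrate the idea behind consistent hashing with virtual partitions, the sketch below builds a hash ring in which each reducer owns many virtual positions, and each intermediate key is assigned to the first reducer clockwise from its hash point. This is a generic textbook sketch, not the paper's VPCH algorithm; the node names, virtual-node count, and key format are illustrative assumptions.

```python
import bisect
import hashlib

def _hash(value: str) -> int:
    # Map a string to a stable point on the ring via MD5.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Generic consistent hash ring with virtual nodes (illustrative sketch,
    not the paper's VPCH algorithm itself)."""

    def __init__(self, nodes, vnodes_per_node=100):
        # Each physical node is placed at many virtual points on the ring,
        # which smooths out the otherwise uneven spacing of node positions.
        self._ring = sorted(
            (_hash(f"{node}#vn{i}"), node)
            for node in nodes
            for i in range(vnodes_per_node)
        )
        self._points = [point for point, _ in self._ring]

    def get_node(self, key: str) -> str:
        # First virtual node clockwise from the key's hash point
        # (wrapping around the end of the ring).
        idx = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

# Distribute 10,000 synthetic intermediate keys over 4 hypothetical reducers.
ring = ConsistentHashRing([f"reducer-{i}" for i in range(4)])
counts = {}
for k in range(10_000):
    node = ring.get_node(f"key-{k}")
    counts[node] = counts.get(node, 0) + 1
```

With enough virtual nodes per reducer, the key counts come out roughly even, whereas placing each reducer at a single ring position (or using plain modulo hashing on skewed key sets) can leave one reducer with a disproportionate share of the load.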
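The prediction side of the paper builds on the extreme learning machine. A minimal ELM regressor fixes random input weights and biases and solves the output weights in closed form via the Moore-Penrose pseudoinverse; the sketch below shows this basic scheme only, not the paper's NO-ELM variant, which additionally optimizes the number of hidden neurons. The toy target (sum of features, standing in for execution time) and all sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    # Random input weights W and biases b stay fixed; only the output
    # weights beta are learned, by least squares on the hidden activations.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer activation matrix
    beta = np.linalg.pinv(H) @ y      # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression task standing in for execution-time prediction.
X = rng.uniform(size=(200, 3))
y = X.sum(axis=1)
W, b, beta = elm_fit(X, y)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Because training reduces to a single pseudoinverse rather than iterative optimization, ELM training is typically much faster than kernel methods such as SVM, which matches the speed comparison reported in the abstract.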
Contents
1. Introduction
2. Related Work
3. Virtual Partition Consistent Hashing
3.1. Generation of VPCH Hash Circle
3.2. Allocation of Mapped Data Splits
4. A Prediction Model based on NO-ELM
4.1. Number of Hidden Neurons Optimized ELM (NO-ELM)
4.2. The Process to Build the Prediction Model based on NO-ELM
5. Experiment and Analysis
5.1. Evaluation of VPCH
5.2. Evaluation of NO-ELM
6. Conclusion
Acknowledgments
References