
Efficient Data Replication Scheme based on Hadoop Distributed File System

Abstract (English)

Hadoop distributed file system (HDFS) is designed to store huge data sets reliably and has been widely used to process massive-scale data in parallel. In HDFS, the data locality problem is one of the critical problems that degrade file system performance. To solve the data locality problem, we propose an efficient data replication scheme based on access count prediction in the Hadoop framework. From the previous data access counts, the proposed data replication scheme predicts the next access count of a data file using Lagrange's interpolation. It then determines the replication factor from the predicted access count, selectively deciding whether to generate a new replica or to use the loaded data as a cache. As a result, the proposed scheme improves data locality. In the performance evaluation, the proposed data replication scheme is compared with the default data replication setting of Hadoop and reduces the task completion time in the map phase by 8.9% on average. Regarding data locality, the proposed scheme increases node locality by 6.6% and decreases rack and rack-off locality by 38.9% and 56.5%, respectively.
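Since the abstract only sketches the prediction and replication-factor steps, the following is a minimal Java sketch of how an access-count prediction based on Lagrange's interpolation and a replication-factor decision might look. All class and method names, the thresholds, and the scaling constant are illustrative assumptions and are not taken from the paper or from the Hadoop API.

// Sketch: predict the next access count of a data file from its recent
// access history via Lagrange interpolation, then derive a replication
// factor from the prediction. Names and constants are assumptions.
public class AccessCountPredictor {

    /** Lagrange interpolation at point x over sample points (xs[i], ys[i]). */
    static double lagrange(double[] xs, double[] ys, double x) {
        double result = 0.0;
        for (int i = 0; i < xs.length; i++) {
            double term = ys[i];
            for (int j = 0; j < xs.length; j++) {
                if (j != i) {
                    term *= (x - xs[j]) / (xs[i] - xs[j]);
                }
            }
            result += term;
        }
        return result;
    }

    /** Predict the access count in the next time window from past windows. */
    static double predictNextAccessCount(double[] pastCounts) {
        double[] windows = new double[pastCounts.length];
        for (int i = 0; i < pastCounts.length; i++) {
            windows[i] = i + 1;                 // time windows 1..n
        }
        return lagrange(windows, pastCounts, pastCounts.length + 1);
    }

    /** Map predicted demand to a replication factor (assumed scaling/limits). */
    static int replicationFactor(double predictedCount, int defaultFactor, int maxFactor) {
        int extra = (int) Math.ceil(predictedCount / 100.0);   // assumed scaling constant
        return Math.min(maxFactor, Math.max(defaultFactor, defaultFactor + extra - 1));
    }

    public static void main(String[] args) {
        double[] history = {12, 18, 27, 40};    // accesses per past window
        double predicted = predictNextAccessCount(history);
        System.out.println("Predicted next access count: " + predicted);
        System.out.println("Replication factor: " + replicationFactor(predicted, 3, 10));
    }
}

A scheme along these lines would raise the replication factor only for files whose predicted demand exceeds the default capacity, and otherwise serve the demand from data already loaded in memory, which is the cache-versus-replica choice described in the abstract.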

Table of Contents

Abstract
 1. Introduction
 2. Related Works
  2.1. Previous Works
  2.2. Data Locality Problem
 3. Efficient Data Replication Scheme
  3.1. Access Count Prediction
  3.2. Efficient Data Replication and Replica Placement
 4. Performance Evaluation
  4.1. Evaluation Environment
  4.2. Performance Results
 5. Conclusion
 References

Author Information

  • Jungha Lee, Division of Supercomputing, Korea Institute of Science and Technology Information, Korea
  • Jaehwa Chung, Dept. of Computer Science, Korea National Open University, Korea
  • Daewon Lee, Division of General Education, Seokyeong University, Korea
