
GPU-accelerated Large Scale Analytics using MapReduce Model

Abstract


Analysis and clustering of very large-scale data sets is a complex problem. As the volume of data and the number of feature dimensions grow, it becomes increasingly difficult to compute results in a reasonable amount of time. The GPU (graphics processing unit) has attracted attention in the last few years for its ability to solve highly parallel and semi-parallel problems far faster than any traditional sequential processor. This paper explores the capability of the GPU combined with the MapReduce model. This highly scalable model for distributed programming can be scaled up to thousands of machines. It was developed by Google’s engineers Jeffrey Dean and Sanjay Ghemawat and has been implemented in many programming languages and frameworks such as Apache Hadoop, Hive, and Pig. In this paper we focus mainly on the Hadoop framework. The first two sections present the introduction and background. The working mechanism of this combination is shown in Section 3. We then explore existing frameworks for implementing MapReduce on the GPU. In Section 5, a comparative experiment implementing the MapReduce model on both GPU and CPU is presented. The paper ends with a conclusion.
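As a minimal illustration of the MapReduce model the abstract refers to (a hypothetical, framework-free sketch, not code from the paper or from Hadoop): the map phase emits key–value pairs, a shuffle groups values by key, and the reduce phase aggregates each group. The word-count example is the model's canonical demonstration.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the list of values for each key.
    return {key: sum(values) for key, values in groups.items()}

docs = ["map reduce", "map gpu map"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'map': 3, 'reduce': 1, 'gpu': 1}
```

In a distributed Hadoop deployment the map and reduce calls run on many machines and the shuffle happens over the network; on a GPU, the same per-record independence of the map phase is what allows thousands of threads to execute it in parallel.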

Table of Contents

Abstract
 1. Introduction
 2. Background
 3. Working Mechanism
 4. GPU-Hadoop Frameworks
 5. Results & Simulation Analysis
 6. Conclusion
 Reference

Author Information

  • RadhaKishan Yadav, Research Assistant, Indian Institute of Technology, Indore
  • Robin Singh Bhadoria, Research Scholar, Indian Institute of Technology, Indore
  • Amit Suri, Senior Business Intelligence Consultant, Microsoft Inc., Redmond, Washington, USA

