
Shuffle Reduction Based Sparse Matrix-Vector Multiplication on Kepler GPU

Abstract

The GPU is well suited to accelerating compute-intensive applications and to raising throughput in High Performance Computing (HPC). Sparse Matrix-Vector Multiplication (SpMV) is a core HPC kernel, so SpMV throughput on the GPU directly affects the throughput of the whole HPC platform. In this paper, we focus on the latency of the reduction routine in the SpMV implementation included in CUSP, namely shared memory accesses and the bank conflicts that arise when multiple threads access the same bank simultaneously. We use a shuffle method to reduce the partial results instead of reducing them in shared memory, in order to improve the throughput of SpMV on Kepler GPUs. Experiments show that the shuffle method improves throughput over the original SpMV routine in CUSP by up to 9% on average.
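The reduction step described above can be illustrated with a warp-level shuffle reduction over a CSR row. The sketch below is a minimal illustration, assuming a CUSP-style "vector" kernel layout with one warp per row; the kernel name csr_spmv_warp_per_row, the WARP_SIZE macro, and the use of __shfl_down_sync (the CUDA 9+ form of Kepler's __shfl_down) are illustrative assumptions, not the paper's actual code.

    // Minimal sketch: CSR SpMV with one warp per row, reducing partial sums
    // with warp shuffle instructions instead of shared memory. Illustrative
    // only; names and launch configuration are assumptions, not the paper's code.
    #include <cuda_runtime.h>

    #define WARP_SIZE 32
    #define FULL_MASK 0xffffffffu

    __global__ void csr_spmv_warp_per_row(int num_rows,
                                          const int   *row_ptr,
                                          const int   *col_idx,
                                          const float *vals,
                                          const float *x,
                                          float       *y)
    {
        int lane = threadIdx.x & (WARP_SIZE - 1);
        int row  = (blockIdx.x * blockDim.x + threadIdx.x) / WARP_SIZE;
        if (row >= num_rows) return;

        // Each lane accumulates a strided partial sum over the row's nonzeros.
        float sum = 0.0f;
        for (int j = row_ptr[row] + lane; j < row_ptr[row + 1]; j += WARP_SIZE)
            sum += vals[j] * x[col_idx[j]];

        // Reduce the 32 partial sums inside the warp via register shuffles
        // (__shfl_down on Kepler-era CUDA, __shfl_down_sync since CUDA 9),
        // avoiding shared memory traffic and bank conflicts.
        for (int offset = WARP_SIZE / 2; offset > 0; offset >>= 1)
            sum += __shfl_down_sync(FULL_MASK, sum, offset);

        if (lane == 0)
            y[row] = sum;
    }

Because each partial sum stays in registers and moves between lanes through the shuffle instruction, the per-row reduction never touches shared memory, which is the source of the latency and bank-conflict overhead the abstract identifies in the original CUSP routine.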

Table of Contents

Abstract
 1. Introduction
 2. Preliminaries
  2.1. General Purpose Computing with GPU
  2.2. Compressed Sparse Row
  2.3. Shared Memory Reducing Based SpMV
 3. Shuffle Reduction Based CSR’s SpMV on GPU
 4. Experimental Results and Discussion
  4.1. Experimental Setup
  4.2. Experimental Results and Discussion
 5. Conclusion
 References

Author Information

  • Yuan Tao, College of Mathematics, Jilin Normal University, Siping, Jilin, China
  • Huang Zhi-Bin, Beijing Key Lab of Intelligent Telecommunication Software and Multimedia, Beijing University of Posts and Telecommunications, Beijing, China
