Workshop Session_KETI

Energy-Efficient SNN Implementation Method Using Zero-spike Prediction

Abstract

Spiking neural networks (SNNs), which employ event-based spike computation, can be implemented in hardware that supports on-chip learning and inference in a power- and area-efficient manner. Although many SNN hardware designs have been proposed for energy efficiency using relatively shallow networks, SNN algorithms that support multi-layer learning need to be implemented in hardware to handle more complex datasets. However, multi-layer learning requires more complicated functions such as softmax activation, which makes energy-efficient hardware design difficult. In this paper, we present a zero-spike prediction method to skip this complicated function in the convolution layer. By decomposing the original algorithm, the proposed method skips at least 76.90% of softmax activation operations without degrading classification accuracy.
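The abstract does not give the details of the prediction rule, so the sketch below only illustrates the general idea under stated assumptions: a cheap per-timestep check decides that a convolution-layer neuron group will emit no spike, in which case the expensive softmax activation is skipped. The predictor (a threshold comparison on membrane potentials), the winner-take-all firing rule, and all function names are hypothetical stand-ins, not the paper's actual algorithm.

```python
# Illustrative sketch of zero-spike prediction (assumptions, not the paper's method):
# skip the costly softmax-based update whenever a cheap check predicts no spike.
import numpy as np


def softmax(x):
    """Numerically stable softmax over a 1-D array (the expensive operation)."""
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()


def predict_zero_spike(membrane, threshold):
    """Hypothetical cheap predictor: if no membrane potential can reach the
    firing threshold, assume no spike will be emitted this time step."""
    return np.max(membrane) < threshold


def layer_step(membrane, threshold=1.0):
    """One convolution-layer time step with the skip applied."""
    if predict_zero_spike(membrane, threshold):
        # Zero spikes predicted: return no spikes and skip softmax entirely.
        return np.zeros_like(membrane), True
    probs = softmax(membrane)                       # expensive path, taken only when needed
    spikes = (probs >= probs.max()).astype(float)   # illustrative winner-take-all firing rule
    return spikes, False


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    steps, skipped = 1000, 0
    for _ in range(steps):
        membrane = rng.normal(0.2, 0.4, size=64)    # synthetic membrane potentials
        _, was_skipped = layer_step(membrane)
        skipped += was_skipped
    print(f"softmax skipped in {skipped}/{steps} time steps")
```

The skip ratio reported by this toy loop depends entirely on the synthetic input statistics; the 76.90% figure in the abstract comes from the paper's own experiments, not from this sketch.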

Table of Contents

Abstract
I. INTRODUCTION
II. PRELIMINARIES
A. SNN
B. STDP-Based Multi-layer Learning Algorithm
III. PROPOSED ZERO-SPIKE PREDICTION
IV. EXPERIMENTAL RESULT
V. CONCLUSION
ACKNOWLEDGMENT
REFERENCES

Authors

  • Hyeonseong Kim, SoC Platform Research Center, Korea Electronics Technology Institute
  • Byung-Soo Kim, SoC Platform Research Center, Korea Electronics Technology Institute
  • Taeho Hwang, SoC Platform Research Center, Korea Electronics Technology Institute
