Article Information
Abstract
English
Spiking neural networks (SNNs), which employ event-based spike computation, can be implemented in hardware that supports on-chip learning and inference in a power- and area-efficient manner. Although many SNN hardware designs have been proposed for energy efficiency using relatively shallow networks, SNN algorithms that support multi-layer learning must be implemented in hardware to handle more complex datasets. However, multi-layer learning requires more complicated functions, such as softmax activation, which makes energy-efficient hardware design difficult. In this paper, we present a zero-spike prediction method that skips this complicated function in the convolution layer. By decomposing the original algorithm, the proposed method skips at least 76.90% of softmax activation operations without degrading classification accuracy.
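The abstract does not spell out how zero-spike prediction works, but the general idea of skipping an expensive activation for neurons predicted to emit no spike can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the predictor here (a hypothetical rule that a row whose peak membrane potential stays below the firing threshold produces no spike this timestep) and the function name `forward_with_zero_spike_skip` are assumptions for demonstration only.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward_with_zero_spike_skip(potentials, spike_threshold):
    """Toy sketch: apply the costly softmax only to rows that could
    spike; rows predicted to be zero-spike are skipped entirely.
    The prediction rule (peak potential < threshold => no spike)
    is a hypothetical stand-in for the paper's method."""
    out = np.zeros_like(potentials)
    # Zero-spike prediction: mark rows whose peak potential reaches threshold.
    active = potentials.max(axis=-1) >= spike_threshold
    if active.any():
        out[active] = softmax(potentials[active])
    # Fraction of softmax evaluations avoided.
    skipped_ratio = 1.0 - active.mean()
    return out, skipped_ratio
```

On a batch where half the rows stay below threshold, the sketch skips half of the softmax evaluations while leaving the active rows' outputs unchanged, which mirrors the kind of operation-count saving the abstract reports.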
Table of Contents
I. INTRODUCTION
II. PRELIMINARIES
A. SNN
B. STDP-Based Multi-layer Learning Algorithm
III. PROPOSED ZERO-SPIKE PREDICTION
IV. EXPERIMENTAL RESULT
V. CONCLUSION
ACKNOWLEDGMENT
REFERENCES