Abstract
Resistive random-access memory (RRAM), one of the most promising candidates for synaptic devices, has been studied steadily for many years. However, its destructive switching behavior and process variations have made it difficult to apply to neuromorphic systems. In particular, in on-chip training, breakdown of the switching layer may occur before training is sufficiently completed if endurance is not secured. In this work, we propose a hardware-friendly learning algorithm for binary neural networks (BNNs) to overcome this issue through a system-level study. A BNN can reach saturation of the recognition rate faster because every weight state is defined by a single switching event. In addition, tolerance to device variation can be improved by using the maximum/minimum current levels of the memristors. However, conventional BNNs have the disadvantage that batch normalization and real-valued weights must be used together during training. In this paper, we verify a method for training a BNN using boundary values.
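To make the contrast concrete, the following is a minimal sketch of the conventional BNN training scheme the abstract refers to (BinaryConnect-style): a latent real-valued weight is kept per synapse, binarized to two states for the forward pass, and updated with a straight-through gradient. This is an illustration of the conventional baseline, not the boundary-value method proposed in the paper; the layer sizes, learning rate, and loss are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent real-valued weights (the storage overhead the paper's
# boundary-value method aims to avoid). Sizes are arbitrary.
W_real = rng.uniform(-1.0, 1.0, size=(4, 3))

def binarize(w):
    # Each weight takes one of two states, matching a single RRAM
    # switching event (e.g. max/min current level of the memristor).
    return np.where(w >= 0.0, 1.0, -1.0)

x = rng.uniform(-1.0, 1.0, size=4)        # toy input
y_target = np.array([1.0, -1.0, 1.0])     # toy target

for _ in range(100):
    W_bin = binarize(W_real)              # forward pass uses binary weights
    y = x @ W_bin
    grad_y = y - y_target                 # gradient of 0.5 * ||y - y_target||^2
    grad_W = np.outer(x, grad_y)          # straight-through: gradient passes sign()
    W_real = np.clip(W_real - 0.1 * grad_W, -1.0, 1.0)
```

Note that only `W_bin` would ever be written to the memristor array; `W_real` exists purely in the training hardware, which is why eliminating it (as the boundary-value approach sets out to do) is attractive for on-chip training.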
Table of Contents
I. INTRODUCTION
II. RESULTS AND DISCUSSION
III. CONCLUSION
ACKNOWLEDGMENT
REFERENCES