Abstract
Neural networks are increasingly being used in cloud-based applications, which require users to upload their sensitive data to the cloud server. However, data privacy may be compromised when the server trains or runs inference on a neural network model using the plaintext data. To address this privacy issue, many studies have developed privacy-preserving neural networks. Recently, FENet, a privacy-preserving neural network framework using functional encryption, was proposed by Panzade and Takabi. In this paper, we propose a method to accelerate the secure activation function of FENet. We adopt a precomputation approach to reduce the computational overhead of privacy-preserving matrix multiplication, which is the dominant operation in the secure activation function of FENet. According to our performance analysis, the privacy-preserving matrix multiplication can be performed 3.77 times faster than in FENet, at the cost of an additional 3.49 MB of memory. Since the secure activation function of FENet applies to both the training and inference phases, the proposed method is expected to accelerate both phases.
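The abstract's memory-for-speed trade-off can be illustrated with a generic precomputation sketch. The snippet below is not the paper's actual construction; it assumes, for illustration only, that the dominant cost is fixed-base group exponentiation (the kind of operation that typically dominates inner-product-encryption-based matrix multiplication) and shows how a precomputed table of squarings turns each online exponentiation into a handful of multiplications. The modulus, base, and table size are hypothetical toy parameters.

```python
# Illustrative sketch only: FENet's actual precomputation is not shown here.
# This demonstrates the generic memory-for-speed trade-off behind a
# precomputation approach, using fixed-base modular exponentiation.

P = 2_147_483_647  # toy modulus (Mersenne prime 2^31 - 1), hypothetical
G = 7              # hypothetical fixed base

# Offline phase: precompute G^(2^i) mod P for every bit position.
# Memory grows linearly with the exponent bit-length; online work shrinks,
# since each exponentiation becomes only modular multiplications.
TABLE = []
x = G % P
for _ in range(64):
    TABLE.append(x)
    x = (x * x) % P

def fixed_base_pow(exponent: int) -> int:
    """Online phase: compute G^exponent mod P using the precomputed table."""
    result = 1
    i = 0
    while exponent:
        if exponent & 1:
            result = (result * TABLE[i]) % P  # multiply in G^(2^i) for set bits
        exponent >>= 1
        i += 1
    return result
```

Amortized over many matrix entries that share the same base, the one-time table cost is repaid on every online exponentiation, mirroring the paper's reported trade of 3.49 MB of memory for a 3.77x speedup.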
Table of Contents
I. INTRODUCTION
II. PRELIMINARIES
A. Function-Hiding Inner Product Encryption (FHIPE)
B. Privacy-Preserving Matrix Multiplication using Π
III. EXISTING METHOD: FENET
IV. PROPOSED METHOD
V. CONCLUSION
ACKNOWLEDGMENT
REFERENCES