Source Information
Abstract
English
In this paper, we propose a PTQ (Post-Training Quantization) static with QO (Quantization Only) technique for efficiently deploying and executing deep learning models. A comparative performance evaluation of PTQ static with QDQ (Quantize and DeQuantize) and the proposed quantization method was conducted on the MNIST (Modified National Institute of Standards and Technology) dataset using 8-bit quantization. Experimental results indicate that the PTQ static with QO method reduces model size by approximately 33%, increases inference speed by a factor of 1.5, and keeps accuracy loss minimal, comparable to the PTQ static with QDQ method. The proposed PTQ static with QO method offers a meaningful technical enhancement for the efficient deployment and execution of AI (Artificial Intelligence) models through the quantization of deep learning models. We have shown that the PTQ static with QO method is a beneficial and efficient approach to reducing the size and computational cost of deep learning models. This study makes novel contributions to the quantization of deep learning models. The practical potential of the PTQ static with QO method lies in its suitability for deployment on AI hardware.
Table of Contents
I. INTRODUCTION
II. QUANTIZATION METHOD
A. Quantize-Dequantize
B. Quantization-Only
III. PERFORMANCE EVALUATION
IV. CONCLUSION
ACKNOWLEDGMENT
REFERENCES