
Workshop Session_KETI

Efficient Deployment and Execution of AI Models: A Comparative Study on Post Training Quantization Techniques with Emphasis on the Quantization Only Method

Abstract

In this paper, we propose a PTQ (Post-Training Quantization) static method with QO (Quantization Only) for the efficient deployment and execution of deep learning models. A comparative performance evaluation of PTQ static with QDQ (Quantize and DeQuantize) and the proposed method was conducted on the MNIST (Modified National Institute of Standards and Technology) dataset using 8-bit quantization. Experimental results indicate that the PTQ static with QO method reduces model size by approximately 33% and increases inference speed by 1.5 times while keeping accuracy loss minimal, comparable to the PTQ static with QDQ method. The proposed PTQ static with QO method thus offers a significant technical enhancement for the efficient deployment and execution of AI (Artificial Intelligence) models through quantization. We show that the PTQ static with QO method is an effective approach to reducing the size and computational cost of deep learning models. This study makes novel contributions to deep learning model quantization, and the practical value of the PTQ static with QO method lies in its suitability for deployment on AI hardware.
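The distinction the abstract draws can be illustrated with a minimal sketch of symmetric per-tensor 8-bit quantization: a QO-style flow keeps integer weights plus a scale for integer execution, whereas a QDQ-style flow dequantizes back to floating point. The function names, the per-tensor symmetric scheme, and the toy tensor below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def quantize_only(w, num_bits=8):
    # QO-style: map the max |w| onto the signed integer range and
    # keep (int8 weights, scale) for integer-arithmetic inference.
    qmax = 2 ** (num_bits - 1) - 1                    # 127 for 8-bit
    scale = float(np.max(np.abs(w))) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # QDQ-style: restore approximate float weights from the integers.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize_only(w)           # int8 storage: ~4x smaller than float32
w_hat = dequantize(q, s)
# Rounding error per element is bounded by half the quantization step.
print(np.max(np.abs(w - w_hat)) <= s / 2)  # → True
```

Storing `q` and `s` instead of `w` is what shrinks the model; running layers directly on `q` without the dequantize step is what a QO-style pipeline additionally avoids at inference time.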

Table of Contents

Abstract
I. INTRODUCTION
II. QUANTIZATION METHOD
A. Quantize-Dequantize
B. Quantization-Only
III. PERFORMANCE EVALUATION
IV. CONCLUSION
ACKNOWLEDGMENT
REFERENCES

Author Information

  • Seokhun Jeon SoC Platform Research Center Korea Electronics Technology Institute Seongnam, Korea
  • Kyu Hyun Choi SoC Platform Research Center Korea Electronics Technology Institute

