Human-Machine Interaction Technology (HIT)

Analysis of Small Large Language Models(LLMs)

Abstract

Small Large Language Models (LLMs) have advanced significantly in recent years. Lightweight LLMs are designed to run efficiently on mobile devices and in edge computing environments, performing well even with limited resources. These models can be optimized for specific domains and deliver results that meet the needs of particular industries. In addition, their user-friendly interfaces make them accessible to non-developers, and they are applied in a wide variety of fields. The purpose of this paper is to analyze the performance, functionality, and usability of small LLMs in order to understand how they can be used effectively across various natural language processing (NLP) tasks. In particular, the key goal is to evaluate the advantages and disadvantages of small models relative to large ones, and whether they can be optimized for specific tasks. Through this analysis, we aim to provide developers and researchers with useful insights for selecting and utilizing LLMs.
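The abstract's claim that lightweight LLMs run under limited resources can be made concrete with a rough back-of-the-envelope estimate of weight memory. The sketch below is an assumption-laden illustration, not part of the paper's analysis: parameter counts are approximate public figures for the models discussed later, and it counts weights only, ignoring activations, KV cache, and runtime overhead.

```python
# Rough memory footprint of model weights alone, in GiB.
# bytes_per_param: 2 for fp16, 0.5 for 4-bit quantized weights.
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    return num_params * bytes_per_param / (1024 ** 3)

# Approximate published parameter counts (treated here as assumptions).
models = {
    "DistilBERT": 66e6,
    "TinyLlama": 1.1e9,
    "Phi-3-mini": 3.8e9,
    "Mistral 7B": 7.3e9,
}

for name, params in models.items():
    fp16 = weight_memory_gb(params, 2)    # half-precision weights
    int4 = weight_memory_gb(params, 0.5)  # 4-bit quantized weights
    print(f"{name}: {fp16:.2f} GiB (fp16), {int4:.2f} GiB (int4)")
```

By this estimate, a 7B-class model quantized to 4 bits needs only a few GiB for weights, which is why such models are plausible targets for edge devices, while DistilBERT-scale models fit comfortably even on mobile hardware.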

Table of Contents

Abstract
1. Introduction
2. LLM vs. Small LLM
3. Small LLMs
3.1 Phi-3-mini, Phi-3-small, Phi-3-medium
3.2 Tiny Llama
3.3 Gemma 7B
3.4 Mistral 7B
3.5 DistilBERT
3.6 MobileBERT
4. Analysis of Small LLMs
5. Conclusion
References

Author Information

  • Yo-Seob Lee, Professor, Dept. of Smart Contents, Pyeongtaek University

