Abstract
This study quantitatively measured object-recognition inference performance, a representative workload in edge computing environments. For this purpose, YOLOv8n was executed under PyTorch versions 1.11, 2.0, 2.4, and 2.7 on Raspberry Pi 4, Raspberry Pi 5, LattePanda 3 Delta, Jetson Nano, and Jetson AGX Orin boards, as well as an x86 legacy server. Experimental results showed that GPU-equipped boards achieved faster inference than CPU-only boards. Notably, despite its weaker CPU, the Jetson Nano leveraged GPU acceleration to reduce processing time by approximately 29% compared to the LattePanda 3 Delta. In particular, the Jetson AGX Orin was the fastest in both settings, recording 14 ms on the GPU and 170 ms on the CPU. It was also confirmed that differences in PyTorch version can affect performance. Finally, suitable edge computing application areas were suggested for each board based on the observed performance.
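The abstract does not specify how per-inference latency was measured; a common approach is to discard warm-up iterations and report a robust statistic over many timed runs. The sketch below is a generic, stdlib-only timing harness under that assumption (the workload shown is a stand-in; on a real board `infer` would wrap a YOLOv8n forward pass):

```python
import time
import statistics

def benchmark(infer, n_warmup=10, n_runs=100):
    """Return the median latency (ms) of a single-inference callable.

    Warm-up runs let caches and lazy initialization settle; the
    median of the timed runs is less sensitive to outliers than
    the mean.
    """
    for _ in range(n_warmup):
        infer()
    times_ms = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        infer()
        times_ms.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(times_ms)

# Stand-in CPU workload; replace with e.g. `lambda: model(image)`
# when benchmarking an actual detection model.
median_ms = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"median latency: {median_ms:.2f} ms")
```

On GPU-backed boards such as the Jetsons, a device synchronization call would be needed before reading the clock, since GPU kernels launch asynchronously.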
Table of Contents
1. INTRODUCTION
2. EMBEDDED BOARDS
3. YOLOv8 MODEL AND PYTORCH FRAMEWORK
4. EXPERIMENTS AND RESULTS
5. CONCLUSION
Acknowledgement
REFERENCES
