Nano Information Technology (NIT)

Study on Accelerating Distributed ML Training in Orchestration

Abstract

As the size of the data and models used in machine learning continues to grow, training on a single server is becoming increasingly challenging. Consequently, distributed machine learning, which spreads the computational load across multiple machines, is growing in importance. However, several issues still limit the performance of distributed machine learning, including communication overhead, inter-node synchronization, data imbalance and bias, and resource management and scheduling. In this paper, we propose ParamHub, which uses orchestration to accelerate training. The system monitors the performance of each node after the first iteration and reallocates resources to slow nodes, thereby speeding up the training process. This approach directs resources to the nodes that need them, maximizing the overall efficiency of resource utilization and allowing all nodes to progress at a uniform pace, which results in faster training overall. Furthermore, the method improves the system's scalability and flexibility, allowing it to be applied effectively to clusters of various sizes.
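
The resource-reallocation idea described in the abstract can be sketched as follows. The Python snippet below is a minimal illustration under stated assumptions, not the ParamHub implementation: names such as NodeStats, find_stragglers, and reallocate, the 1.2x slowdown threshold, and the fixed 10% resource shift are all hypothetical choices made for this example. It records each worker's first-iteration time, flags nodes that are noticeably slower than the cluster average, and shifts a small share of resources from the fast nodes to the stragglers.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class NodeStats:
    node_id: str
    iter_time_s: float   # wall-clock time of the node's first training iteration
    cpu_share: float     # fraction of cluster CPU currently assigned to the node


def find_stragglers(stats: List[NodeStats], tolerance: float = 1.2) -> List[NodeStats]:
    # Flag nodes whose first-iteration time exceeds the cluster mean by `tolerance`x.
    mean_time = sum(s.iter_time_s for s in stats) / len(stats)
    return [s for s in stats if s.iter_time_s > tolerance * mean_time]


def reallocate(stats: List[NodeStats], boost: float = 0.10) -> Dict[str, float]:
    # Shift a fixed total share of CPU (`boost`) from fast nodes to stragglers.
    # A real orchestrator would translate the returned shares into container
    # resource requests/limits; here we only compute the proposed shares.
    stragglers = find_stragglers(stats)
    if not stragglers:
        return {s.node_id: s.cpu_share for s in stats}

    fast = [s for s in stats if s not in stragglers]
    taken_per_fast = boost / len(fast)        # share each fast node gives up
    given_per_slow = boost / len(stragglers)  # share each straggler receives

    new_shares = {}
    for s in stats:
        if s in stragglers:
            new_shares[s.node_id] = s.cpu_share + given_per_slow
        else:
            new_shares[s.node_id] = max(0.0, s.cpu_share - taken_per_fast)
    return new_shares


if __name__ == "__main__":
    # Hypothetical measurements taken after the first iteration.
    observed = [
        NodeStats("worker-0", iter_time_s=12.1, cpu_share=0.25),
        NodeStats("worker-1", iter_time_s=11.8, cpu_share=0.25),
        NodeStats("worker-2", iter_time_s=19.4, cpu_share=0.25),  # straggler
        NodeStats("worker-3", iter_time_s=12.5, cpu_share=0.25),
    ]
    print(reallocate(observed))

Rebalancing once, after the first iteration, reflects the behavior described in the abstract: every subsequent iteration then proceeds with more evenly matched per-node step times.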

Contents

Abstract
1. INTRODUCTION
2. PROPOSED SYSTEM
2.1. System Overview
2.2. System Component
3. COMPARATIVE ANALYSIS
4. CONCLUSION
ACKNOWLEDGMENT
REFERENCES

Author Information

  • Su-Yeon Kim Master’s student, Graduate School of Smart Convergence, Kwangwoon University, Seoul, Korea
  • Seok-Jae Moon Professor, Graduate School of Smart Convergence, Kwangwoon University, Seoul, Korea

