
Telecommunication Information Technology (TIT)

A reinforcement learning-based network path planning scheme for SDN in multi-access edge computing

Abstract

With the growing relevance of next-generation integrated networking environments, the need to utilize advanced networking techniques effectively also increases. In particular, integrating Software-Defined Networking (SDN) with Multi-access Edge Computing (MEC) is critical for enhancing network flexibility and addressing challenges such as security vulnerabilities and complex network management. SDN improves operational flexibility by separating the control and data planes, but this separation also introduces management complexity. This paper proposes a reinforcement learning-based network path optimization scheme for SDN environments that maximizes performance, minimizes latency, and optimizes resource usage in MEC settings. The proposed Enhanced Proximal Policy Optimization (PPO)-based scheme effectively selects optimal routing paths under dynamic conditions, reducing the average delay to about 60 ms and lowering energy consumption. As the proposed method outperforms conventional schemes, it holds significant potential for practical application.
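The abstract describes the approach only at a high level, so the following Python sketch is an illustration of the general idea rather than the paper's Enhanced PPO scheme: a tabular policy chooses next hops on a toy SDN topology and is trained with a simplified PPO-style clipped update, using negative end-to-end delay (relative to a moving baseline) as the advantage signal. The topology, delay values, hyperparameters, and all names (`GRAPH`, `TabularPPO`, and so on) are assumptions made for this sketch.

```python
import math
import random

random.seed(0)

# Hypothetical topology: per-link delays (ms) between SDN switches.
# Node names and delay values are illustrative, not from the paper.
GRAPH = {
    "s1": {"s2": 10.0, "s3": 40.0},
    "s2": {"s3": 5.0, "s4": 15.0},
    "s3": {"s4": 20.0},
    "s4": {},  # destination (MEC server attachment point)
}
SRC, DEST = "s1", "s4"


def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]


class TabularPPO:
    """Tabular policy over next-hop choices, trained with a simplified
    PPO-style clipped update: state = current switch, action = next hop,
    advantage = moving delay baseline minus the observed path delay."""

    def __init__(self, graph, clip_eps=0.2, lr=0.01):
        self.graph = graph
        self.clip_eps = clip_eps  # PPO trust-region clip range
        self.lr = lr
        # one preference value per (switch, next-hop) pair
        self.pref = {n: {nb: 0.0 for nb in nbrs}
                     for n, nbrs in graph.items() if nbrs}

    def policy(self, node):
        nbrs = list(self.pref[node])
        return nbrs, softmax([self.pref[node][nb] for nb in nbrs])

    def rollout(self):
        """Sample one path from SRC to DEST; return transitions and delay."""
        node, trans, delay = SRC, [], 0.0
        while node != DEST:
            nbrs, probs = self.policy(node)
            i = random.choices(range(len(nbrs)), probs)[0]
            trans.append((node, nbrs[i], probs[i]))  # store old action prob
            delay += self.graph[node][nbrs[i]]
            node = nbrs[i]
        return trans, delay

    def train(self, episodes=800, batch=8, epochs=4):
        baseline = 45.0  # moving average of observed path delay (ms)
        for _ in range(episodes // batch):
            samples = []
            for _ in range(batch):
                trans, delay = self.rollout()
                adv = baseline - delay  # below-average delay => positive advantage
                baseline = 0.9 * baseline + 0.1 * delay
                samples += [(n, a, p, adv) for n, a, p in trans]
            for _ in range(epochs):
                for node, act, old_p, adv in samples:
                    nbrs, probs = self.policy(node)
                    new_p = probs[nbrs.index(act)]
                    ratio = new_p / old_p
                    # clipped surrogate: drop the gradient once the policy
                    # has moved outside the trust region for this sample
                    if (adv > 0 and ratio > 1 + self.clip_eps) or \
                       (adv < 0 and ratio < 1 - self.clip_eps):
                        continue
                    # gradient of log softmax prob w.r.t. its own preference
                    self.pref[node][act] += self.lr * adv * (1 - new_p)

    def best_path(self):
        """Greedy (argmax) path under the learned policy."""
        node, path, delay = SRC, [SRC], 0.0
        while node != DEST:
            nbrs, probs = self.policy(node)
            node = nbrs[probs.index(max(probs))]
            delay += self.graph[path[-1]][node]
            path.append(node)
        return path, delay


agent = TabularPPO(GRAPH)
agent.train()
path, delay = agent.best_path()
```

The clip range bounds how far each batch can move the policy away from the probabilities under which the paths were sampled, which is the trust-region property PPO relies on; a full implementation would use a neural policy over richer state (link loads, queue lengths) rather than a table.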

Table of Contents

Abstract
1. Introduction
2. Related Work
2.1 Software-Defined Networking (SDN)
2.2 Multi-access Edge Computing (MEC)
2.3 6G Communication
2.4 Reinforcement Learning
2.5 Integration of SDN, MEC, and 6G through Reinforcement Learning
3. System Model
3.1 Local Computing
3.2 Remote Computing
3.3 Optimization of path planning
4. Proposed Scheme
4.1 State, Action, and Reward
4.2 Proposed Scheme
5. Performance Evaluation
6. Conclusion
Acknowledgement
References

Author Information

  • MinJung Kim, Master's Degree, Department of Computer Software, Hanyang University, Korea
  • Ducsun Lim, Postdoctoral Researcher, Department of Computer Software, Hanyang University, Korea

