Abstract
With the growing relevance of next-generation integrated networking environments, the need to use advanced networking techniques effectively also grows. In particular, integrating Software-Defined Networking (SDN) with Multi-access Edge Computing (MEC) is critical for enhancing network flexibility and for addressing challenges such as security vulnerabilities and complex network management. SDN improves operational flexibility by separating the control and data planes, but this separation also introduces management complexity. This paper proposes a reinforcement learning-based network path optimization strategy for SDN environments that maximizes performance, minimizes latency, and optimizes resource usage in MEC settings. The proposed Enhanced Proximal Policy Optimization (PPO)-based scheme effectively selects optimal routing paths under dynamic conditions, reducing the average delay to about 60 ms and lowering energy consumption. Because the proposed method outperforms conventional schemes, it has significant potential for practical application.
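The kind of PPO-based path selection the abstract describes can be sketched as a minimal toy in plain Python (no RL library): a softmax policy over candidate routing paths is trained with PPO's clipped surrogate objective to prefer the lowest-delay path. The topology, delay values, and hyperparameters below are illustrative assumptions for this sketch, not the paper's actual system or results.

```python
import math, random

random.seed(0)

# Toy MEC scenario: an SDN controller picks one of three candidate routing
# paths. Mean per-path delays in ms are illustrative assumptions only.
MEAN_DELAY = [90.0, 60.0, 120.0]
CLIP_EPS, LR, EPOCHS, BATCH, ITERS = 0.2, 0.05, 4, 16, 200

def softmax(logits):
    m = max(logits)
    e = [math.exp(x - m) for x in logits]
    s = sum(e)
    return [x / s for x in e]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

def ppo_step(logits, old_probs, batch):
    """One clipped-surrogate gradient step over (action, advantage) pairs."""
    probs = softmax(logits)
    grad = [0.0] * len(logits)
    for a, adv in batch:
        ratio = probs[a] / old_probs[a]
        # PPO clipping: no gradient once the probability ratio leaves
        # [1 - eps, 1 + eps] in the direction the advantage pushes it.
        if (adv > 0 and ratio > 1 + CLIP_EPS) or (adv < 0 and ratio < 1 - CLIP_EPS):
            continue
        for k in range(len(logits)):
            grad[k] += ratio * adv * ((1.0 if k == a else 0.0) - probs[k])
    return [l + LR * g / len(batch) for l, g in zip(logits, grad)]

logits = [0.0] * len(MEAN_DELAY)
for _ in range(ITERS):
    old_probs = softmax(logits)
    episodes = []
    for _ in range(BATCH):
        a = sample(old_probs)
        delay = random.gauss(MEAN_DELAY[a], 5.0)  # simulated delay measurement
        episodes.append((a, delay))
    baseline = sum(d for _, d in episodes) / len(episodes)
    # Reward is negative delay, so advantage = baseline - observed delay.
    batch = [(a, baseline - d) for a, d in episodes]
    for _ in range(EPOCHS):  # several epochs per batch, as in standard PPO
        logits = ppo_step(logits, old_probs, batch)

probs = softmax(logits)
best = max(range(len(probs)), key=probs.__getitem__)
print("learned path probabilities:", [round(p, 3) for p in probs])
print("selected path:", best)
```

Under these assumptions the policy concentrates on the lowest-delay path (index 1); the clipping keeps each batch's policy shift bounded, which is the property that distinguishes PPO from a plain policy-gradient update.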
Table of Contents
1. Introduction
2. Related Work
2.1 Software-Defined Networking (SDN)
2.2 Multi-access Edge Computing (MEC)
2.3 6G Communication
2.4 Reinforcement Learning
2.5 Integration of SDN, MEC, and 6G through Reinforcement Learning
3. System Model
3.1 Local Computing
3.2 Remote Computing
3.3 Optimization of path planning
4. Proposed Scheme
4.1 State, Action, and Reward
4.2 Proposed Scheme
5. Performance Evaluation
6. Conclusion
Acknowledgement
References