Subgoal Discovery in Reinforcement Learning Using Local Graph Clustering

Abstract

English

Reinforcement learning is an area of machine learning that studies sequential decision-making problems, in which an agent must learn behavior through trial-and-error interaction with a dynamic environment. Learning efficiently in large-scale problems and complex tasks demands a decomposition of the original complex task into smaller and simpler subtasks. In this paper, we present a subgoal-based method for automatically creating useful skills in reinforcement learning. Our method identifies subgoals using a local graph clustering algorithm. The main advantage of the proposed algorithm is that only local information of the graph is considered when clustering the agent's state space; clustering of the transition graphs corresponding to MDPs can therefore be performed in linear time. The subgoals discovered by the algorithm are then used to generate skills within the option framework. Experimental results show that the proposed subgoal discovery algorithm has a dramatic effect on learning performance.
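
The abstract describes the approach only at a high level. As a rough illustration of the general idea (not the paper's algorithm), the Python sketch below builds an undirected state-transition graph from observed transitions, greedily grows a cluster around a seed state while conductance keeps decreasing, and marks the cluster's border states (such as doorways between rooms in a gridworld) as candidate subgoals that could serve as option termination states. The function names, the conductance-based growth rule, and the toy two-room example are assumptions made for illustration and are not taken from the paper.

from collections import defaultdict

def build_transition_graph(transitions):
    """Build an undirected adjacency map from observed (s, s') transitions."""
    graph = defaultdict(set)
    for s, s_next in transitions:
        if s != s_next:
            graph[s].add(s_next)
            graph[s_next].add(s)
    return graph

def conductance(graph, cluster):
    """Fraction of the cluster's edge endpoints whose edges leave the cluster."""
    cut = sum(1 for u in cluster for v in graph[u] if v not in cluster)
    volume = sum(len(graph[u]) for u in cluster)
    return cut / volume if volume else 1.0

def local_cluster(graph, seed, max_size=50):
    """Greedily grow a cluster around `seed`, each step adding the neighboring
    state that lowers conductance the most; stop when no neighbor improves it.
    Only states adjacent to the current cluster are ever inspected."""
    cluster = {seed}
    while len(cluster) < max_size:
        frontier = {v for u in cluster for v in graph[u]} - cluster
        if not frontier:
            break
        best = min(frontier, key=lambda v: conductance(graph, cluster | {v}))
        if conductance(graph, cluster | {best}) >= conductance(graph, cluster):
            break
        cluster.add(best)
    return cluster

def candidate_subgoals(graph, cluster):
    """States inside the cluster that border states outside it
    (e.g., doorways between rooms in a gridworld)."""
    return {u for u in cluster if any(v not in cluster for v in graph[u])}

if __name__ == "__main__":
    # Two small "rooms" (states 0-3 and 4-7) joined by a single doorway state "D".
    transitions = [(0, 1), (0, 2), (1, 3), (2, 3), (3, "D"), ("D", 4),
                   (4, 5), (4, 6), (5, 7), (6, 7)]
    graph = build_transition_graph(transitions)
    cluster = local_cluster(graph, seed=0)
    print("local cluster around state 0:", cluster)
    print("candidate subgoals (option termination states):", candidate_subgoals(graph, cluster))

In this toy example the doorway state "D" is the only border state of the cluster grown from state 0, so it is the single candidate subgoal; an option targeting it would carry the agent from the first room to the second.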

Table of Contents

Abstract
 1. Introduction
 2. Reinforcement Learning With Option
 3. Proposed Method
 4. Complexity Analysis
 5. Experimental Results
  5.1. Six-room Gridworld
  5.2. Soccer Simulation Test Bed
  5.3. Results
 6. Conclusion
 References

Author Information

  • Negin Entezari, Department of Computer Science, Amirkabir University of Technology, Tehran, Iran
  • Mohammad Ebrahim Shiri, Department of Computer Science, Amirkabir University of Technology, Tehran, Iran
  • Parham Moradi, Department of Electrical & Computer Engineering, University of Kurdistan, Sanandaj, Iran
