earticle


Reinforcement Learning-based Duty Cycle Interval Control in Wireless Sensor Networks

Abstract

English

One of the distinct features of Wireless Sensor Networks (WSNs) is the duty cycling mechanism, which is used to conserve energy and extend the network lifetime. A large duty cycle interval lowers energy consumption but lengthens the end-to-end (E2E) delay. In this paper, we introduce an energy consumption minimization problem for duty-cycled WSNs. We apply the Q-learning algorithm to obtain the maximum duty cycle interval that satisfies a given delay requirement and Delay Success Ratio (DSR), i.e., the required probability that packets arrive at the sink before the given delay bound. Our approach requires only the sink to compute Q-learning, which makes it practical to implement. In our proposed method, nodes in different groups have different duty cycle intervals, and nodes do not need any information about their neighboring nodes. Performance results show that our proposed scheme outperforms existing algorithms in terms of energy efficiency while assuring the required delay bound and DSR.
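The abstract's formulation (node groups as states, candidate duty cycle intervals as actions, and a reward that favors the longest interval still meeting the delay bound and DSR) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the interval set, the toy delay model in `simulate_dsr`, and the hyperparameters are all assumptions made for the example.

```python
import random

random.seed(0)

# Assumed parameters (not from the paper): candidate intervals in ms,
# E2E delay bound, required DSR, and Q-learning hyperparameters.
INTERVALS = [100, 200, 400, 800, 1600]   # actions: candidate duty cycle intervals (ms)
DELAY_BOUND = 2000                        # E2E delay bound (ms)
REQUIRED_DSR = 0.9                        # required delay success ratio
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
GROUPS = range(3)                         # states: node groups (e.g., by hop distance)

# Q-table held at the sink only, as in the proposed scheme.
Q = {(g, a): 0.0 for g in GROUPS for a in range(len(INTERVALS))}

def simulate_dsr(group, interval):
    """Toy stand-in for the observed DSR: deeper groups and longer
    intervals make the delay bound harder to meet."""
    mean_delay = (group + 1) * interval * 1.5
    return max(0.0, min(1.0, 1.0 - mean_delay / (2 * DELAY_BOUND)))

def reward(group, action):
    """Reward longer intervals (less energy) only while DSR holds."""
    interval = INTERVALS[action]
    if simulate_dsr(group, interval) >= REQUIRED_DSR:
        return interval / max(INTERVALS)
    return -1.0

for episode in range(2000):
    g = random.choice(list(GROUPS))
    # Epsilon-greedy action selection.
    if random.random() < EPSILON:
        a = random.randrange(len(INTERVALS))
    else:
        a = max(range(len(INTERVALS)), key=lambda x: Q[(g, x)])
    r = reward(g, a)
    best_next = max(Q[(g, x)] for x in range(len(INTERVALS)))
    Q[(g, a)] += ALPHA * (r + GAMMA * best_next - Q[(g, a)])

for g in GROUPS:
    best = max(range(len(INTERVALS)), key=lambda x: Q[(g, x)])
    print(f"group {g}: duty cycle interval {INTERVALS[best]} ms")
```

Because each group is a separate state, the learned policy assigns each group its own interval without any node-to-node information exchange, matching the per-group design described in the abstract.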

Table of Contents

Abstract
1. Introduction
2. Network Model
3. Duty Cycle Interval Controller Based on Q-Learning
3.1 Background on Reinforcement Learning
3.2 Q-learning based duty cycle interval Control
4. Result and Analysis
5. Conclusion
Acknowledgement
References

Author Information

  • Shathee Akter Department of Electrical and Computer Engineering, University of Ulsan, Korea
  • Seokhoon Yoon Department of Electrical and Computer Engineering, University of Ulsan, Korea
