Source Information
Abstract (English)
One of the distinctive features of Wireless Sensor Networks (WSNs) is the duty cycling mechanism, which is used to conserve energy and extend the network lifetime. A larger duty cycle interval lowers energy consumption but lengthens end-to-end (E2E) delay. In this paper, we introduce an energy consumption minimization problem for duty-cycled WSNs. We apply the Q-learning algorithm to obtain the maximum duty cycle interval that satisfies various delay requirements and a given Delay Success Ratio (DSR), i.e., the required probability that packets arrive at the sink before a given delay bound. Our approach requires only the sink to compute the Q-learning updates, which makes it practical to implement. In our proposed method, nodes in different groups have different duty cycle intervals, and nodes do not need any information about their neighboring nodes. Performance results show that our proposed scheme outperforms existing algorithms in terms of energy efficiency while assuring the required delay bound and DSR.
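The sketch below illustrates how a sink-side Q-learning controller of this kind could be organized: it is a minimal illustration, not the paper's exact formulation. The state space (node groups), candidate intervals, reward shape, and all names (INTERVALS, choose_interval, reward, update) are assumptions made for clarity.

```python
import random
from collections import defaultdict

# Hypothetical sketch: the sink learns, per node group, the largest duty
# cycle interval that still meets the delay bound with the required DSR.
INTERVALS = [0.1, 0.2, 0.5, 1.0, 2.0]   # candidate duty cycle intervals (s), assumed
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

Q = defaultdict(float)                   # Q[(group, action_index)]

def choose_interval(group):
    """Epsilon-greedy selection of a duty cycle interval for one node group."""
    if random.random() < EPSILON:
        return random.randrange(len(INTERVALS))
    return max(range(len(INTERVALS)), key=lambda a: Q[(group, a)])

def reward(interval, measured_dsr, required_dsr):
    """Favor larger intervals (less energy) only while the DSR is satisfied."""
    if measured_dsr < required_dsr:
        return -1.0                      # too many packets missed the delay bound
    return interval                      # larger interval implies lower energy use

def update(group, action, r, next_group):
    """Standard one-step Q-learning update, computed only at the sink."""
    best_next = max(Q[(next_group, a)] for a in range(len(INTERVALS)))
    Q[(group, action)] += ALPHA * (r + GAMMA * best_next - Q[(group, action)])
```

Because the update runs only at the sink, sensor nodes simply apply the interval assigned to their group and report packet arrival statistics, which matches the abstract's claim that nodes need no neighbor information.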
Table of Contents
1. Introduction
2. Network Model
3. Duty Cycle Interval Controller Based on Q-Learning
3.1 Background on Reinforcement Learning
3.2 Q-learning based duty cycle interval Control
4. Result and Analysis
5. Conclusion
Acknowledgement
References