
A Reinforcement Learning-Based ToD Provisioning Dynamic Power Management for Sustainable Operation of Energy Harvesting Wireless Sensor Node

Roy Chaoming Hsu (Dept. of Electrical Engineering, National Chiayi University, Chiayi, Taiwan); Cheng-Ting Liu; Hao-Li Wang

In this paper, a reinforcement learning-based throughput on demand (ToD) provisioning dynamic power management method (RLTDPM) is proposed for sustaining perpetual operation and satisfying the ToD requirements of today's energy harvesting wireless sensor nodes (EHWSNs). The RLTDPM monitors the environmental state of the EHWSN and adjusts its operational duty cycle under the criterion of energy neutrality to meet the demanded throughput. Outcomes of these observation-adjustment interactions are then evaluated by a feedback/reward that represents how well the ToD requests are met; subsequently, the observation-adjustment-evaluation process, so-called reinforcement learning, continues. After the learning process, the RLTDPM is able to autonomously adjust the duty cycle to satisfy the ToD requirement and, in doing so, sustain the perpetual operation of the EHWSN. Simulations of the proposed RLTDPM were performed on a wireless sensor node powered by a battery and solar cell for image sensing tasks. Experimental results demonstrate that the achieved demanded throughput is improved by 10.7% for the most stringent ToD requirement, while the residual battery energy of the RLTDPM is improved by 7.4%, compared with an existing DPM algorithm for EHWSNs with image sensing purposes.
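The observation-adjustment-evaluation loop described in the abstract can be sketched with tabular Q-learning over discretized battery states and candidate duty cycles. This is a minimal illustrative sketch, not the paper's actual method: the state/action granularity, reward weights, and the toy solar-harvest and consumption models below are all assumptions introduced for illustration.

```python
import random

DUTY_CYCLES = [0.2, 0.4, 0.6, 0.8, 1.0]   # candidate operational duty cycles
BATTERY_LEVELS = 10                        # discretized residual-energy states

def reward(throughput, demanded, battery_delta):
    """Reward how well the ToD request is met while staying energy-neutral
    (illustrative form; the paper's reward is not specified in the abstract)."""
    tod_term = -abs(throughput - demanded)       # penalize throughput gap
    neutral_term = -max(0.0, -battery_delta)     # penalize net energy drain
    return tod_term + 2.0 * neutral_term

def train(episodes=200, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(DUTY_CYCLES) for _ in range(BATTERY_LEVELS)]
    for _ in range(episodes):
        state = BATTERY_LEVELS // 2
        for _ in range(50):
            # epsilon-greedy selection of a duty cycle (the "adjustment" step)
            if rng.random() < epsilon:
                action = rng.randrange(len(DUTY_CYCLES))
            else:
                action = max(range(len(DUTY_CYCLES)), key=lambda a: q[state][a])
            duty = DUTY_CYCLES[action]
            harvest = rng.uniform(0.0, 0.6)       # toy solar-harvest model
            battery_delta = harvest - duty * 0.5  # toy consumption model
            throughput = duty                     # throughput tracks duty cycle
            r = reward(throughput, demanded=0.6, battery_delta=battery_delta)
            next_state = min(BATTERY_LEVELS - 1,
                             max(0, state + (1 if battery_delta > 0 else -1)))
            # standard Q-learning update (the "evaluation" step)
            q[state][action] += alpha * (
                r + gamma * max(q[next_state]) - q[state][action])
            state = next_state
    return q

q_table = train()
# After learning, the greedy policy maps each battery state to a duty cycle.
policy = [DUTY_CYCLES[max(range(len(DUTY_CYCLES)), key=lambda a: q[a])]
          for q in q_table]
```

After training, `policy` autonomously selects a duty cycle per residual-energy state, mirroring the abstract's claim that the learned controller adjusts the duty cycle without further supervision.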

Published in:

IEEE Transactions on Emerging Topics in Computing (Volume: 2, Issue: 2)