Q-Learning Based Energy Management Policies for a Single Sensor Node with Finite Buffer

3 Author(s)
Prabuchandran, K.J.; Meena, Sunil Kumar; Bhatnagar, Shalabh — Department of Computer Science and Automation, Indian Institute of Science, Bangalore-560012, India

In this paper, we consider the problem of finding optimal energy management policies, in the presence of energy harvesting sources, that maximize network performance. We formulate this problem in the discounted cost Markov decision process framework and apply two reinforcement learning algorithms. Prior work obtains the optimal policy when the conversion function mapping energy to data transmitted is linear, and provides only heuristic policies when it is nonlinear. Our algorithms, in contrast, provide optimal policies regardless of the form of the conversion function. Through simulations, our policies are seen to outperform those of the prior work in the nonlinear case.
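The discounted cost MDP formulation described in the abstract can be illustrated with a tabular Q-learning loop. The sketch below is an assumption-laden toy model, not the paper's exact formulation: the state is a (battery energy, data buffer) pair, the action is the number of energy units spent per slot, the nonlinear conversion function, the arrival processes, and the cost (buffer occupancy) are all invented for illustration.

```python
import random

random.seed(0)

E_MAX, B_MAX = 4, 4                 # battery and data-buffer capacities (assumed)
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.2   # discount, step size, exploration rate

def g(e):
    """Nonlinear energy-to-data conversion (illustrative: diminishing returns)."""
    return int(e ** 0.5)

# Tabular Q-values: Q[(energy, buffer)][action], action = energy units spent.
Q = {(e, b): [0.0] * (E_MAX + 1)
     for e in range(E_MAX + 1) for b in range(B_MAX + 1)}

def step(state, a):
    """One slot: transmit, then random energy harvest and data arrival (assumed)."""
    e, b = state
    sent = min(g(a), b)
    harvest = random.choice([0, 1])
    arrival = random.choice([0, 1])
    e2 = min(e - a + harvest, E_MAX)
    b2 = min(b - sent + arrival, B_MAX)
    cost = b2                        # per-stage cost: residual buffer occupancy
    return (e2, b2), cost

state = (E_MAX, 0)
for _ in range(20000):
    e, _ = state
    feasible = list(range(e + 1))    # cannot spend more energy than stored
    if random.random() < EPS:
        a = random.choice(feasible)  # explore
    else:
        a = min(feasible, key=lambda x: Q[state][x])  # greedy (cost-minimizing)
    nxt, cost = step(state, a)
    best_next = min(Q[nxt][x] for x in range(nxt[0] + 1))
    Q[state][a] += ALPHA * (cost + GAMMA * best_next - Q[state][a])
    state = nxt

# Greedy policy learned from the Q-table.
policy = {s: min(range(s[0] + 1), key=lambda x: Q[s][x]) for s in Q}
```

After training, `policy` maps each (energy, buffer) state to the energy expenditure with the lowest estimated discounted cost; convergence of such updates to the optimal Q-values is the standard guarantee the algorithms in the paper build on.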

Published in:

IEEE Wireless Communications Letters (Volume: 2, Issue: 1)