Multi-Agent Q-Learning for Competitive Spectrum Access in Cognitive Radio Systems

Author: Husheng Li, Dept. of Electrical Engineering & Computer Science, University of Tennessee, Knoxville, TN, USA

Resource allocation is an important issue in cognitive radio systems and can be carried out through negotiation among secondary users. However, negotiation may incur significant overhead, since it must be repeated frequently as the primary users' activity changes rapidly. In this paper, an Aloha-like spectrum access scheme without negotiation is considered for multi-user, multi-channel cognitive radio systems. To avoid collisions caused by the lack of coordination, each secondary user learns how to select channels from its own experience. Multi-agent reinforcement learning (MARL) is applied within the framework of Q-learning by treating the other secondary users as part of the environment. A rigorous proof of the convergence of Q-learning is provided via its similarity to the Robbins-Monro algorithm, together with an analysis of the corresponding ordinary differential equation (via a Lyapunov function). The performance of learning (convergence speed and gain in utility) is evaluated by numerical simulations.
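The scheme described above can be sketched in a toy simulation: each secondary user keeps an independent Q-value per channel, picks a channel epsilon-greedily each slot without any negotiation, and is rewarded only when it transmits alone on its channel, so the other users act as part of the environment. All parameter values and names below (`alpha`, `eps`, the collision/reward model) are illustrative assumptions, not the paper's exact setup.

```python
import random

def simulate(num_users=2, num_channels=2, slots=5000,
             alpha=0.1, eps=0.1, seed=0):
    """Sketch of negotiation-free spectrum access via multi-agent
    Q-learning (illustrative model, not the paper's exact one)."""
    rng = random.Random(seed)
    # one stateless Q-table (Q-value per channel) for each secondary user
    Q = [[0.0] * num_channels for _ in range(num_users)]
    successes = 0
    for _ in range(slots):
        # epsilon-greedy channel choice for every user, independently
        choices = []
        for u in range(num_users):
            if rng.random() < eps:
                c = rng.randrange(num_channels)
            else:
                c = max(range(num_channels), key=lambda ch: Q[u][ch])
            choices.append(c)
        for u, c in enumerate(choices):
            # reward 1 only if no other user collided on the same channel
            r = 1.0 if choices.count(c) == 1 else 0.0
            # stateless Q-update (myopic: discount factor taken as 0)
            Q[u][c] += alpha * (r - Q[u][c])
            successes += r
    return successes / (slots * num_users), Q

rate, Q = simulate()
print(f"success rate: {rate:.3f}")
```

With two users and two channels, the users start out colliding, but exploration eventually lets each one discover a channel the other avoids; the greedy channels then separate and the success rate rises well above the 0.5 achieved by uniform random selection, illustrating the "learning gain" the paper measures.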

Published in:

2010 Fifth IEEE Workshop on Networking Technologies for Software Defined Radio (SDR) Networks

Date of Conference:

21 June 2010