
Anti-jamming in cognitive radio networks using reinforcement learning algorithms


2 Author(s):
Singh, S.; Trivedi, A. (ABV-IIITM, Gwalior, India)

Cognitive radio technology is a promising approach to enhance spectrum utilization. As the cognitive radio network (CRN) is prone to random attackers, security becomes an important issue for the successful deployment of CRN. In a CRN, the dynamic spectrum characteristics of the channel change very rapidly, and the further inclusion of a random jammer makes the scenario even more challenging to model. This scenario is modeled using a stochastic zero-sum game and the Markov decision process (MDP) framework. Both the time-varying characteristics of the channel and the jammer's random strategy can be learnt by the secondary user using reinforcement learning (RL) algorithms. In this paper, we propose using the QV and State-Action-Reward-State-Action (SARSA) RL algorithms in place of the previously proposed Minimax-Q learning. Although Minimax-Q learning aims for the optimal solution, in the anti-jamming scenario the optimal solution may not be the best choice, since maximizing the gain is not the primary concern. Minimax-Q learning is an off-policy, greedy algorithm, whereas QV and SARSA are on-policy algorithms. QV learning performs even better than SARSA because QV updates both the Q-values and the V-values of the game. Simulation results also show an improvement in the learning probability of the secondary user when the SARSA and QV learning algorithms are used instead of the Minimax-Q learning algorithm.
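To make the contrast between the on-policy updates concrete, the following is a minimal sketch of tabular QV learning (with the SARSA variant shown in a comment) on a hypothetical channel-selection setup: the secondary user picks one of a few channels, a random jammer picks a channel, and the reward depends on whether they collide. The environment, state abstraction, and all parameter values here are illustrative assumptions, not the paper's actual simulation model.

```python
import random

# Hypothetical toy environment (NOT the paper's simulation): the secondary
# user picks one of N channels each step; a random jammer independently
# picks a channel. Reward is +1 on a successful transmission, -1 if jammed.
# The state is taken to be the most recently jammed channel, as a simple
# Markov abstraction of the channel dynamics.
N = 4                   # number of channels (assumed)
ALPHA, BETA = 0.1, 0.1  # learning rates for the Q- and V-tables (assumed)
GAMMA = 0.9             # discount factor
EPS = 0.1               # epsilon-greedy exploration rate

random.seed(0)
Q = [[0.0] * N for _ in range(N)]  # Q[state][action]
V = [0.0] * N                      # state values, updated only by QV learning

def policy(s):
    """Epsilon-greedy w.r.t. Q. The same policy is both executed and
    evaluated, which is what makes SARSA and QV on-policy methods."""
    if random.random() < EPS:
        return random.randrange(N)
    return max(range(N), key=lambda a: Q[s][a])

s = 0
a = policy(s)
for _ in range(5000):
    jam = random.randrange(N)       # random jammer strategy
    r = 1.0 if a != jam else -1.0
    s2 = jam                        # next state: last jammed channel
    a2 = policy(s2)                 # on-policy: next action from the same policy
    # SARSA would bootstrap on the action actually taken next:
    #   Q[s][a] += ALPHA * (r + GAMMA * Q[s2][a2] - Q[s][a])
    # QV learning instead bootstraps the Q-update on the state value V,
    # and keeps a separate TD update for V itself:
    Q[s][a] += ALPHA * (r + GAMMA * V[s2] - Q[s][a])
    V[s] += BETA * (r + GAMMA * V[s2] - V[s])
    s, a = s2, a2
```

The single commented-out line is the only difference between the two on-policy updates: SARSA's target uses Q(s', a') for the action the policy actually selects next, while QV learning's target uses the separately learned V(s'), which averages over the policy's action choices and is what the abstract refers to as updating "both Q- as well as V-values".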

Published in:

2012 Ninth International Conference on Wireless and Optical Communications Networks (WOCN)

Date of Conference:

20-22 Sept. 2012