Cognitive radio technology is a promising approach to enhancing spectrum utilization. Because the cognitive radio network (CRN) is prone to random attackers, security becomes an important issue for the successful deployment of a CRN. In a CRN, the dynamic spectrum characteristics of the channel change very rapidly, and the further inclusion of a random jammer makes the scenario even more challenging to model. This scenario is modeled using a stochastic zero-sum game and a Markov decision process (MDP) framework. The time-varying characteristics of the channel, as well as the jammer's random strategy, can be learnt by the secondary user through reinforcement learning (RL) algorithms. In this paper, we propose using the QV and State-action-reward-state-action (SARSA) RL algorithms in place of the previously proposed Minimax-Q learning. Although Minimax-Q learning tries to achieve the optimal solution, in the anti-jamming scenario pursuing the optimal solution may not be the best choice, since maximizing the gain is not the primary concern in anti-jamming. Minimax-Q learning is an off-policy, greedy algorithm, whereas QV and SARSA are on-policy algorithms. QV learning performs even better than SARSA because QV updates both the Q-values and the V-values of the game. Simulation results also show an improvement in the learning probability of the secondary user when the SARSA and QV learning algorithms are used in place of the Minimax-Q learning algorithm.
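The two on-policy update rules named above can be sketched in tabular form. This is a minimal illustration, not the paper's implementation: the state/action encoding, the learning rates, and the QV formulation (a TD(0) update for V followed by a Q update that bootstraps from V, as in Wiering's QV-learning) are illustrative assumptions.

```python
# Illustrative tabular sketch of the SARSA and QV-learning updates.
# States, actions, and hyperparameters are hypothetical placeholders;
# in the anti-jamming setting, states would encode channel/jammer
# conditions and actions the secondary user's channel choices.

ALPHA = 0.1   # learning rate for Q-values (assumed)
BETA = 0.1    # learning rate for V-values in QV-learning (assumed)
GAMMA = 0.9   # discount factor (assumed)

def sarsa_update(Q, s, a, r, s_next, a_next):
    """On-policy SARSA: bootstrap from the action actually taken next."""
    td_target = r + GAMMA * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (td_target - Q.get((s, a), 0.0))

def qv_update(Q, V, s, a, r, s_next):
    """QV-learning: V is updated by TD(0); Q then bootstraps from V,
    so both the Q- and V-values of the game are updated each step."""
    V[s] = V.get(s, 0.0) + BETA * (r + GAMMA * V.get(s_next, 0.0) - V.get(s, 0.0))
    td_target = r + GAMMA * V.get(s_next, 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (td_target - Q.get((s, a), 0.0))
```

Because both rules evaluate the policy actually being followed (rather than a greedy/minimax opponent model), they match the on-policy character attributed to SARSA and QV above.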