This study introduces two novel approaches, pre-partitioning and weight-driven exploration, that make the learning process of reinforcement learning-based cognitive radio more efficient. Learning efficiency is crucial when applying reinforcement learning to cognitive radio, because cognitive radio users cause a higher level of disturbance to other users during the exploration phase. How to carefully control the tradeoff between exploration and exploitation, so that a learning-enabled cognitive radio can learn efficiently from its interactions with a dynamic radio environment, is investigated. In the pre-partitioning scheme, the action space of each cognitive radio is reduced by initially assigning it a random partition of the spectrum; cognitive radios are therefore able to finish their exploration stage faster than under more basic reinforcement learning-based schemes. In the weight-driven exploration scheme, exploitation is merged into exploration: the knowledge gained during exploration influences subsequent action selection, yielding a more efficient exploration phase. Learning efficiency in a cognitive radio scenario is defined, and the learning efficiency of the proposed schemes is analyzed. Simulation results show that pre-partitioning and weight-driven exploration make the exploration of cognitive radio more efficient and improve system performance accordingly.
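The abstract does not spell out the weight-driven exploration algorithm, so the following is only a minimal illustrative sketch of the general idea: channels are selected with probability proportional to weights, and each weight is updated from the rewards observed on that channel, so knowledge gained during exploration biases later action selection. The function names (`select_channel`, `update_weight`), the learning rate, and the reward model are all assumptions for illustration, not the paper's actual scheme.

```python
import random

def select_channel(weights, rng=random):
    """Pick a channel index with probability proportional to its weight,
    so that channels that have paid off so far are explored more often."""
    total = sum(weights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1  # guard against floating-point round-off

def update_weight(weights, channel, reward, lr=0.1):
    """Blend the observed reward into the chosen channel's weight
    (exponential moving average; lr is an assumed learning rate)."""
    weights[channel] = (1.0 - lr) * weights[channel] + lr * reward
```

Usage: start with equal weights (uniform random exploration) and let observed rewards reshape the selection distribution over time, merging exploitation into the exploration phase rather than keeping the two strictly separate.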