Next-generation wireless networks will integrate multiple wireless access technologies to provide mobile users with seamless mobility and high-speed wireless connectivity. This gives rise to a heterogeneous wireless access environment in which network selection becomes crucial for load balancing, so as to avoid network congestion and performance degradation. We study the dynamics of network selection in a heterogeneous wireless network using the theory of evolutionary games. The competition among groups of users in different service areas to share the limited bandwidth of the available wireless access networks is formulated as a dynamic evolutionary game, and the evolutionary equilibrium is taken as the solution to this game. We present two network-selection algorithms, namely, a population-evolution algorithm and a reinforcement-learning algorithm. Although the population-evolution algorithm reaches the evolutionary equilibrium faster, it requires a centralized controller to gather, process, and broadcast information about the users in the corresponding service area. In contrast, with reinforcement learning, a user can gradually learn, by interacting with the service provider, and adapt its network-selection decision to reach the evolutionary equilibrium without any interaction with other users. The performance of the dynamic-evolutionary-game-based network-selection algorithms is investigated empirically, and the accuracy of the numerical results obtained from the game model is evaluated through simulation.
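To make the population-evolution idea concrete, the following is a minimal sketch of discrete-time replicator dynamics for network selection. It is an illustrative assumption, not the paper's exact model: we assume each network's payoff is its bandwidth capacity shared equally among the users that selected it, and the capacities, user count, and step size are hypothetical values chosen for the example.

```python
def replicator_step(shares, capacities, n_users, delta=0.1):
    """One replicator-dynamics update of the population shares.

    Assumed payoff model: the per-user payoff of network i is its
    capacity divided by the number of users currently selecting it.
    A network whose payoff exceeds the population average gains share.
    """
    payoffs = [c / max(s * n_users, 1e-9) for s, c in zip(shares, capacities)]
    avg = sum(s * p for s, p in zip(shares, payoffs))
    new = [s + delta * s * (p - avg) for s, p in zip(shares, payoffs)]
    total = sum(new)  # renormalize (guards against numerical drift)
    return [s / total for s in new]

def evolve(shares, capacities, n_users, tol=1e-8, max_iter=100000):
    """Iterate until the shares stop changing: the evolutionary equilibrium."""
    for _ in range(max_iter):
        nxt = replicator_step(shares, capacities, n_users)
        if max(abs(a - b) for a, b in zip(nxt, shares)) < tol:
            return nxt
        shares = nxt
    return shares

# Hypothetical example: 30 users, two networks with 10 and 20 units of capacity.
eq = evolve([0.5, 0.5], [10.0, 20.0], 30)
```

At the equilibrium of this payoff model, per-user payoffs equalize across networks, so the population splits in proportion to capacity (here, roughly one third on the first network and two thirds on the second). This is the quantity a centralized controller would compute and broadcast in the population-evolution approach.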