In this paper, we systematically formulate the problem of multi-user wireless video transmission as a multi-user Markov decision process (MUMDP) by explicitly considering the users' heterogeneous video traffic characteristics, time-varying network conditions, and, importantly, the dynamic coupling among the users' resource allocations across time, which is often ignored in existing multi-user video transmission solutions. To comply with the decentralized architecture of wireless networks, we propose to decompose the MUMDP into multiple local MDPs using Lagrangian relaxation. Unlike conventional multi-user video transmission solutions stemming from the network utility maximization framework, the proposed decomposition enables each wireless user to individually solve its own local MDP (i.e., dynamic single-user cross-layer optimization) and the network coordinator to update the Lagrangian multipliers (i.e., resource prices) based not only on the current, but also the future, resource needs of all users, such that the long-term video quality of all users is maximized. This MUMDP solution provides the necessary foundation and structure for solving multi-user video communication problems. However, implementing this framework in practice requires statistical knowledge of the experienced environment dynamics, which is often unavailable before transmission time. To overcome this obstacle, we propose a novel online learning algorithm that allows the wireless users to simultaneously update their policies at multiple states during each time slot. This differs from conventional learning solutions, which typically update only the currently visited state in each time slot. The proposed learning algorithm can significantly improve the learning performance, thereby dramatically improving the video quality experienced by the wireless users over time.
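The Lagrangian decomposition described above can be illustrated with a minimal sketch: each user runs value iteration on its own local MDP given the current resource price, and the coordinator adjusts the price by projected subgradient ascent on the gap between aggregate demand and capacity. All concrete numbers, the deterministic toy transition model, and the function names (`solve_local_mdp`, `coordinator`) are hypothetical illustrations, not the paper's actual formulation.

```python
import numpy as np

def solve_local_mdp(reward, resource, price, gamma=0.9, iters=300):
    """Value-iteration sketch for one user's local (priced) MDP.

    reward[s, a]   : immediate video-quality gain (hypothetical numbers)
    resource[s, a] : resource units consumed by action a in state s
    Toy deterministic transitions: action a in state s moves to (s + a) % S.
    """
    S, A = reward.shape
    nxt = (np.arange(S)[:, None] + np.arange(A)[None, :]) % S  # next-state table
    V = np.zeros(S)
    for _ in range(iters):
        # Resource usage enters the objective through the price (the multiplier).
        Q = reward - price * resource + gamma * V[nxt]
        V = Q.max(axis=1)
    policy = Q.argmax(axis=1)
    return policy, V

def coordinator(rewards, resources, capacity, steps=50, lr=0.05):
    """Projected-subgradient update of the Lagrange multiplier (resource price).

    Each iteration: every user solves its local MDP for the current price,
    then the price moves with (aggregate demand - capacity), clipped at zero.
    """
    price = 0.0
    for _ in range(steps):
        demand = 0.0
        for reward, resource in zip(rewards, resources):
            policy, _ = solve_local_mdp(reward, resource, price)
            S = reward.shape[0]
            demand += resource[np.arange(S), policy].mean()  # avg per-slot use
        price = max(0.0, price + lr * (demand - capacity))   # projected ascent
    return price
```

The key structural point the sketch conveys: users never exchange state information; they are coupled only through the scalar price, which is exactly what makes the decomposition compatible with a decentralized network architecture.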
Our simulation results demonstrate the efficiency of the proposed MUMDP framework compared to conventional multi-user video transmission solutions.
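The "update policies at multiple states per time slot" idea can also be sketched. Assuming (purely for illustration) that the state factors into a buffer level and a channel condition, and that the channel evolves independently of the user's action, one observed channel transition yields valid hypothetical experience at every buffer level, so the learner can update many Q-values from a single slot. The factorization, the toy buffer dynamics, and the function name `multi_state_update` are assumptions of this sketch, not the paper's algorithm.

```python
import numpy as np

def multi_state_update(Q, reward, resource, price, obs_channel, gamma=0.9, alpha=0.1):
    """One slot of a multi-state online learning update (hedged sketch).

    Q has shape (B, C, A): buffer levels x channel conditions x actions.
    obs_channel = (c, c_next) is the single observed channel transition;
    because the channel is action-independent, it is reused at all buffer
    levels b, instead of updating only the one visited state.
    """
    B, C, A = Q.shape
    c, c_next = obs_channel
    for b in range(B):                # sweep every buffer state this slot
        for a in range(A):
            b_next = min(B - 1, max(0, b + a - 1))  # toy buffer dynamics
            target = (reward[b, a] - price * resource[c, a]
                      + gamma * Q[b_next, c_next].max())
            Q[b, c, a] += alpha * (target - Q[b, c, a])
    return Q
```

Relative to a conventional learner that touches one (b, c) pair per slot, this sweep propagates each channel observation across the whole buffer dimension, which is the mechanism behind the faster learning claimed in the abstract.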