Remote rendering of video games for 3DTV has become a hot topic with the emergence of 3D-enabled mobile devices and cloud-based services. It is, however, a very challenging task: it requires live encoding at very low latency to preserve user interactivity, as well as near-optimal encoding decisions for an acceptable quality of experience (QoE). One key aspect is that most video games use a 3D engine, typically accelerated on a GPU, which holds information on the composition of the 3D scene, its objects, and their motion. In this paper, we explore how to extract this information from the GPU and how to exploit it to offload the most time-consuming tasks of a multiview video encoder. We show that near-optimal encoding decisions can be made while minimizing both the encoder's computational complexity and the total delay.
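As a rough illustration of the idea sketched in the abstract, the following is a minimal reprojection example showing how per-pixel depth and camera matrices available from a 3D engine could yield candidate motion vectors, sparing the encoder a full motion search. This is not the paper's implementation; all function names, the matrix conventions, and the pixel/NDC mapping are assumptions for the sketch.

```python
import numpy as np

def unproject(u, v, depth, inv_view_proj, width, height):
    """Map a pixel (u, v) with NDC depth back to a world-space point.
    (Illustrative convention; real engines differ in NDC ranges.)"""
    ndc = np.array([2.0 * u / width - 1.0,
                    1.0 - 2.0 * v / height,
                    depth,
                    1.0])
    world = inv_view_proj @ ndc
    return world / world[3]

def project(world, view_proj, width, height):
    """Map a world-space point to pixel coordinates."""
    clip = view_proj @ world
    ndc = clip / clip[3]
    return np.array([(ndc[0] + 1.0) * 0.5 * width,
                     (1.0 - ndc[1]) * 0.5 * height])

def motion_vector(u, v, depth, vp_curr, vp_prev, width, height):
    """Reproject a current-frame pixel into the previous frame;
    the displacement is a candidate motion vector for the encoder."""
    world = unproject(u, v, depth, np.linalg.inv(vp_curr), width, height)
    prev_px = project(world, vp_prev, width, height)
    return prev_px - np.array([float(u), float(v)])
```

For a static camera the predicted motion vector is zero; for a moving camera the reprojection yields a geometry-derived displacement that an encoder could refine locally instead of searching from scratch. The same reprojection between the two camera viewpoints of a stereo pair gives disparity candidates for the multiview case.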