I. Introduction
The recent diffusion of Virtual and Augmented Reality (VR and AR) applications on mobile and wearable devices has posed new and challenging problems to designers and app developers. Indeed, providing an immersive experience to the user implies a smooth rendering of 3D models, as well as an accurate registration of the displayed view with respect to the object location and the viewer pose and orientation [1]. Here the viewer denotes the observer's viewpoint: it corresponds either to the user or to the camera, depending on whether the rendering device is a head-mounted display or a mobile device. Such requirements have significant implications on the computational effort of the devices [2], on the required transmission bandwidth (when the 3D model is streamed) [3], and on the resulting quality perceived by the user [4].

As a matter of fact, 3D model simplification and formatting [5] significantly affect this task, since they allow reducing the total number of triangles or 3D points with negligible quality loss with respect to the original 3D model [6]. To this purpose, several shape and appearance simplification solutions have been adopted [7] to adapt the Levels of Detail (LODs) over time according to users' proximity and interaction. Most of the previous works focus on adapting the cognitive load [8], predicting users' actions in order to optimize the training experience [9], or minimizing the amount of transmitted information [10].
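As a minimal sketch of the proximity-driven LOD adaptation mentioned above (illustrative only, not the scheme of any cited work), a renderer can map the viewer-to-object distance to a discrete LOD index through a set of distance thresholds; the threshold values below are hypothetical:

```python
# Illustrative sketch: choosing a mesh level of detail from the
# viewer's distance to the object. Thresholds are hypothetical;
# real systems also weigh screen-space error and bandwidth.

def select_lod(distance: float, thresholds=(2.0, 5.0, 10.0)) -> int:
    """Return an LOD index: 0 = full detail, len(thresholds) = coarsest."""
    for lod, limit in enumerate(thresholds):
        if distance <= limit:
            return lod
    return len(thresholds)

# A nearby object is rendered at full detail, a distant one coarsely.
near_lod = select_lod(1.5)   # within the first threshold
far_lod = select_lod(25.0)   # beyond all thresholds
```

In practice the thresholds would be tuned per model so that switching between LODs stays below the user's perceptual threshold.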