In this paper we present a new and original application of saliency maps, intended to simulate the visual perception of a synthetic actor. Within the field of computer graphics, simulating virtual humans has become a challenging task. Animating such an autonomous actor in a virtual environment typically requires modeling a perception-decision-action cycle. To model part of the perception process, we have designed a new saliency map model, based on geometric and depth information, that allows our synthetic humanoid to perceive its environment in a biologically plausible way.
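As a rough illustration of the idea, a saliency map built from geometric and depth cues can be sketched as a weighted combination of per-pixel feature maps. This is a minimal sketch under assumed conventions (min-max normalization, equal weights, closer objects treated as more salient), not the model described in the paper; the function and parameter names are hypothetical.

```python
import numpy as np

def saliency_map(depth, curvature, w_depth=0.5, w_geom=0.5):
    """Combine a depth map and a geometric feature map (e.g. surface
    curvature) into a single saliency map. Weights and normalization
    are illustrative assumptions, not the paper's actual model."""
    def normalize(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    # Assumption: closer regions (smaller depth values) are more salient.
    depth_feat = 1.0 - normalize(depth)
    geom_feat = normalize(curvature)
    # Linear combination of the two cues, renormalized to [0, 1].
    return normalize(w_depth * depth_feat + w_geom * geom_feat)

# Toy 2x2 scene: the top-left region is both nearest and most curved.
depth = np.array([[1.0, 4.0], [2.0, 3.0]])
curv = np.array([[0.8, 0.1], [0.5, 0.2]])
s = saliency_map(depth, curv)
```

In this toy example the most salient location is the region that is simultaneously closest and most geometrically complex, which is the intuition behind combining the two cues.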