I. Introduction
Since the dawn of mobile robotics, much research has aimed to increase robot autonomy. Classically, it is assumed that the sensors at the robot's disposal are not malicious, and that the only source of uncertainty is random noise in the sensor readings. Motivated by safety concerns arising in autonomous-vehicle applications, above all self-driving cars, recent research has shown that an attacker can easily tamper with a robot's perception pipeline [1]. For example, attackers might make the robot collide with obstacles by setting all of its lidar readings to very large values, or they might trap the robot in place by surrounding it with fake obstacles.