In this paper we describe how a humanoid robot can learn a representation of its own reachable space from motor experience: a Reachable Space Map. The map provides information about the reachability of a visually detected object (i.e., a 3D point in space). We propose a bio-inspired solution in which the map is built in a gaze-centered reference frame: the position of a point in space is encoded by the motor configuration of the robot's head and eyes that brings that point into fixation. We present experimental results in which a simulated humanoid robot learns this map autonomously, and we discuss how the map can be used for planning whole-body and bimanual reaching.