This article presents our attempts to direct an autonomous robot using efficient and universal topological instructions that a moving robot can interpret incrementally without an initial map. Many real-world experiments are included, featuring autonomous exploration and mapping. Surprisingly, we conclude and show that for this type of navigation, object-recognition ability matters more than mapping accuracy. The article describes a GVD-derived topology of spatial affordances, in which junctions are defined by the physical capabilities of the navigating robot. Like the extended GVD, our topology follows walls in open spaces to ensure robust edge transitions, so that all features can be modeled egocentrically. The specified wall-following distance is calculated to maximize the stability of the egocentrically modeled topology even when obstacle detection is intermittent.
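To make the wall-following idea concrete, the sketch below shows a minimal lateral controller that holds a specified distance to a wall and tolerates intermittent obstacle detection by reusing the last valid range reading. All names (`wall_follow_steer`, the gain, the clamp limits) are illustrative assumptions, not the article's actual implementation.

```python
def wall_follow_steer(measured_dist, target_dist, last_valid, gain=1.0):
    """Return (steer, new_last_valid).

    measured_dist: lateral distance to the wall, or None when obstacle
                   detection drops out (the intermittent case).
    target_dist:   the specified wall-following distance.
    last_valid:    last successful measurement, reused during dropouts
                   so the egocentric model stays stable.
    steer > 0 turns toward the wall, steer < 0 turns away from it.
    """
    if measured_dist is None:
        # Sensor dropout: coast on the remembered distance instead of
        # reacting to a missing reading, which would destabilize edges.
        measured_dist = last_valid
    else:
        last_valid = measured_dist
    error = measured_dist - target_dist
    # Proportional steering, clamped to a bounded command range.
    steer = max(-1.0, min(1.0, gain * error))
    return steer, last_valid
```

A usage pattern would call this at each control step, feeding `None` whenever the detector misses the wall; the held estimate keeps the robot on its edge until detection resumes.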