A major hurdle in the development of intelligent robots is that we still lack efficient computational and representational methodologies for emulating the knowledge- and expectation-driven behavior so basic to human cognition and problem solving. Even when techniques such as geometric modeling are used to represent objects in the robot's world, we still lack methods for linking such representations with sensory feedback. In this paper, we propose the use of intermediate representations, which we call sensor-tuned representations, for linking CSG-based solid modeling with sensory information. We also discuss how sensor-tuned representations are constructed from range data and how object recognition can be performed with them. Finally, we show results of manipulation experiments produced by the current implementation of the system.
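To make the notion of CSG-based solid modeling concrete, the following is a minimal illustrative sketch, not taken from the paper: a CSG model is a tree whose leaves are primitive solids and whose internal nodes are Boolean combinators, and a point-membership query is answered by recursing over the tree. All class and variable names here are hypothetical stand-ins for a full solid modeler.

```python
# Minimal CSG tree sketch with point-membership classification.
# Hypothetical illustration; not the representation used in the paper.
from dataclasses import dataclass
from typing import Tuple

Point = Tuple[float, float, float]

@dataclass
class Sphere:
    center: Point
    radius: float
    def contains(self, p: Point) -> bool:
        # Inside iff squared distance to center is within radius^2.
        return sum((a - b) ** 2 for a, b in zip(p, self.center)) <= self.radius ** 2

@dataclass
class Box:
    lo: Point
    hi: Point
    def contains(self, p: Point) -> bool:
        # Axis-aligned box: inside iff within bounds on every axis.
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

@dataclass
class Union:
    left: object
    right: object
    def contains(self, p: Point) -> bool:
        return self.left.contains(p) or self.right.contains(p)

@dataclass
class Difference:
    left: object
    right: object
    def contains(self, p: Point) -> bool:
        return self.left.contains(p) and not self.right.contains(p)

# Example: a block with a spherical cavity, expressed as Box minus Sphere.
solid = Difference(Box((0, 0, 0), (2, 2, 2)), Sphere((1, 1, 1), 0.5))
print(solid.contains((0.1, 0.1, 0.1)))  # True: inside the box, outside the cavity
print(solid.contains((1.0, 1.0, 1.0)))  # False: inside the subtracted sphere
```

Such a tree captures object geometry exactly, but it says nothing about how the object appears to a range sensor; bridging that gap is what the intermediate, sensor-tuned representations discussed above are for.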