Machine learning for real-world applications is a complex task because of the huge state and action sets involved and the a priori unknown dynamics of the environment. Reinforcement learning offers efficient model-free methods, which are often combined with approximation architectures to overcome these problems. We present a Q-learning implementation that uses a new adaptive clustering method to approximate the state and action sets. Experimental results are given for an obstacle-avoidance behavior with the mobile robot Khepera.
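To make the idea concrete, the following is a minimal sketch (not the paper's exact method) of tabular Q-learning in which continuous sensor readings are mapped to a discrete state via nearest-prototype clustering; the prototype set, action names, reward, and all parameters are illustrative assumptions.

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["forward", "left", "right"]  # hypothetical action set

# Hypothetical state prototypes (cluster centres in a 2-D sensor space);
# an adaptive scheme would add or move these online as data arrive.
prototypes = [(0.1, 0.1), (0.8, 0.1), (0.1, 0.8), (0.8, 0.8)]
Q = {(s, a): 0.0 for s in range(len(prototypes)) for a in ACTIONS}

def cluster(sensors):
    """Map a raw sensor vector to the index of the nearest prototype."""
    return min(range(len(prototypes)),
               key=lambda i: sum((x - p) ** 2
                                 for x, p in zip(sensors, prototypes[i])))

def choose_action(state):
    """Epsilon-greedy action selection over the discrete action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update: Q <- Q + alpha*(r + gamma*max_a' Q' - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])

# One learning step: sensors -> state, act, observe reward, update.
s = cluster((0.15, 0.12))
a = choose_action(s)
s2 = cluster((0.75, 0.20))
update(s, a, -1.0, s2)  # illustrative negative reward, e.g. near an obstacle
```

Clustering the sensor space this way keeps the Q-table small while still covering a continuous input space; the paper's contribution is an adaptive version of this discretization rather than a fixed prototype set.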