Intrinsically motivated reinforcement learning (IMRL) has been proposed as a framework in which agents exploit "internal reinforcement" to acquire general-purpose building-block behaviors ("skills") that can later be combined to solve several specific tasks. The architectures proposed so far within this framework are limited in two ways: (1) they rely on hardwired "salient events" to form and train skills, which restricts the agents' autonomy; (2) they are applicable only to problems with abstract states and actions, such as grid-world problems. This paper proposes solutions to these problems in the form of a hierarchical reinforcement-learning architecture that: (1) exploits evolutionary robotics techniques to allow the system to autonomously discover "salient events"; (2) uses neural networks to allow the system to cope with continuous states and noisy environments. The viability of the proposed approach is demonstrated in a simulated robotic scenario.
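To illustrate the core IMRL idea referenced above, the following is a minimal sketch, not the paper's actual architecture: intrinsic reward for a salient event is taken to be the agent's prediction error for that event, so the reward is high while the event is surprising and fades as the corresponding skill is mastered. The class name, learning rate, and update rule here are illustrative assumptions, not details from the paper.

```python
class SalientEventModel:
    """Illustrative predictor of a single salient event (hypothetical names).

    Intrinsic reward = prediction error, so reward decays as the
    event becomes predictable and the skill is learned.
    """

    def __init__(self, lr=0.2):
        self.p = 0.0   # predicted probability that the event occurs
        self.lr = lr   # learning rate of the predictor

    def intrinsic_reward(self, occurred: bool) -> float:
        """Return the prediction error, then update toward the outcome."""
        target = 1.0 if occurred else 0.0
        error = abs(target - self.p)
        self.p += self.lr * (target - self.p)
        return error


model = SalientEventModel()
# The event occurs on every step: intrinsic reward starts at 1.0 and decays.
rewards = [model.intrinsic_reward(occurred=True) for _ in range(10)]
print(round(rewards[0], 3), round(rewards[-1], 3))
```

In an IMRL architecture, a reward signal of this kind would drive the training of a skill for achieving the event, after which the vanishing intrinsic reward frees the agent to pursue other events.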