How sequences of actions are learned, remembered, and generated is a core problem of cognition. Despite considerable theoretical work on serial order, it typically remains unexamined how physical agents may direct sequential actions at the environment within which they are embedded. Situated physical agents face a key problem: they must accommodate the variable amount of time it takes to terminate each individual action in the sequence. Here we examine how Dynamic Field Theory (DFT), a neuronally grounded dynamical systems approach to embodied cognition, may address sequence learning and sequence generation. To demonstrate that the proposed DFT solution works with real, potentially noisy sensory systems as well as with real physical action systems, we implement the approach on a simple autonomous robot. We show how the robot acquires sequences by experiencing the associated sensory information, and how it generates sequences based on low-level visual features extracted from its environment.
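The abstract does not state the model equations, but the core building block of DFT is the Amari-style dynamic neural field, in which localized input drives the field through a detection instability to form a self-stabilized activation peak. The sketch below simulates a 1D field with Euler integration; all parameter values (resting level, time constant, kernel amplitudes and widths, stimulus shape) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_field(n=101, steps=300, dt=1.0, tau=10.0, h=-5.0, beta=1.5):
    """Euler integration of a 1D Amari-style dynamic neural field:

        tau * du/dt = -u + h + S(x) + sum_x' w(x - x') * sigmoid(u(x'))

    A localized stimulus S pushes the field past the detection
    instability, so a single supra-threshold peak forms and is
    stabilized by local excitation and surround inhibition.
    """
    x = np.arange(n)
    u = np.full(n, h, dtype=float)                 # field rests at level h < 0
    stim = 6.0 * np.exp(-0.5 * ((x - n // 2) / 5.0) ** 2)  # localized input
    # lateral interaction kernel: narrow excitation, broader inhibition
    d = np.abs(x[:, None] - x[None, :])
    w = 1.0 * np.exp(-0.5 * (d / 3.0) ** 2) - 0.5 * np.exp(-0.5 * (d / 9.0) ** 2)
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-beta * u))        # sigmoidal output rate
        u = u + (dt / tau) * (-u + h + stim + w @ f)
    return u

u = simulate_field()
# a single peak above zero forms at the stimulus location;
# activation far from the stimulus stays below the resting threshold
```

In the full architecture, fields like this would be coupled to the robot's sensors and motors; here the point is only the peak-forming dynamics that DFT uses to represent and stabilize decisions.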