We present a computational model of approach learning in a T-maze environment. We show that our model learns the correct sequence of six decisions that lead to the location of positive reinforcement, and does so in a manner consistent with experimental observations. Our model exhibits many properties that are characteristic of animal learning in maze environments, including delay conditioning, secondary conditioning, and backward chaining. Our model incorporates a comprehensive definition of drive that consists of a primary drive (food) with its deficit-related signal (hunger), and an acquired drive (the learned expectation of future reward or punishment). In the T-maze environment, the deficit-related drive of hunger motivates the learning system to search for food. After several trials in the T-maze, the acquired drive (learned expectation) shapes the learning system's behavior and allows it to consistently find the food. We propose that changes in drive level, not merely the level of the drive, lead to learning: positive changes in drive level result in enhanced behavior, and negative changes result in depressed behavior. Our comprehensive definition of drive allows us to explain learning in a biologically plausible manner and is supported by results from hypertension, obesity, and Parkinson's disease research.
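The drive-change learning rule described above can be sketched in a minimal form. The code below is an illustrative reconstruction, not the paper's implementation: it assumes a fixed six-decision T-maze path, a tabular acquired-drive expectation for each decision, and a learning rate `alpha`, all of which are hypothetical choices. Trial generation and exploration are omitted; rewarded traversals of the correct path are fed to the update directly, to show how changes in drive level propagate expectations backward along the decision chain (backward chaining and secondary conditioning).

```python
# Illustrative sketch of drive-change learning in a six-decision T-maze.
# CORRECT_PATH, alpha, and the tabular `value` store are assumptions for
# this example, not details from the paper.

CORRECT_PATH = [0, 1, 1, 0, 1, 0]   # assumed goal sequence of six turns
N = len(CORRECT_PATH)
alpha = 0.5                          # assumed learning rate

# value[d][a]: acquired drive (learned expectation of reward) for taking
# action a at decision point d.
value = [[0.0, 0.0] for _ in range(N)]

def update(path, reward, hunger=1.0):
    """Adjust expectations by the change in drive level along one trial.

    A positive change (delta > 0) enhances the chosen behavior; a negative
    change depresses it. Each earlier decision learns from the expectation
    at the next decision, so reward propagates backward over trials.
    """
    target = reward * hunger          # primary drive scaled by deficit signal
    for d in reversed(range(N)):
        a = path[d]
        delta = target - value[d][a]  # change in drive level at this step
        value[d][a] += alpha * delta
        target = value[d][a]          # secondary conditioning: earlier steps
                                      # learn from later learned expectations

# Several rewarded traversals of the correct path (as when the hungry
# animal eventually finds food):
for _ in range(10):
    update(CORRECT_PATH, reward=1.0)

# Greedy read-out of the learned expectations recovers the sequence.
greedy = [0 if value[d][0] > value[d][1] else 1 for d in range(N)]
print(greedy == CORRECT_PATH)  # True
```

Because each decision's target is the (still imperfect) expectation of the following decision, expectations are strongest near the food and weaken toward the maze entrance after a finite number of trials, consistent with reward propagating backward through the chain.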