We study large-population leader-follower stochastic multi-agent systems in which the agents have linear stochastic dynamics and are coupled through their quadratic cost functions. The cost of each leader trades off tracking a certain reference trajectory, which is unknown to the followers, against staying near the leaders' own centroid. Followers, in turn, react by tracking a convex combination of their own centroid and the centroid of the leaders. We approach this large-population dynamic game problem via so-called Mean Field (MF) linear-quadratic-Gaussian (LQG) stochastic control theory. In this model, followers are adaptive in the sense that they use a likelihood ratio estimator (on a sample population of the leaders' trajectories) to identify which member of a given finite class of models is generating the reference trajectory of the leaders. Under appropriate conditions, it is shown that each follower identifies the true reference trajectory model in finite time with probability one as the leaders' population goes to infinity. Furthermore, we show that the resulting sets of mean field control laws for both leaders and adaptive followers possess an almost sure εN-Nash equilibrium property for a system with population N, where εN goes to zero as N goes to infinity. Numerical experiments illustrating the results are presented.
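A minimal sketch of the identification idea, not the paper's exact estimator: suppose a follower observes noisy samples of the leaders' reference trajectory and compares the cumulative log-likelihood of each candidate in a finite model class under an assumed Gaussian noise model. The candidate trajectories, noise level, and sample size below are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite class of candidate reference trajectories.
T = 200
t = np.arange(T)
models = {
    "ramp": 0.05 * t,
    "sine": np.sin(0.1 * t),
    "const": np.full(T, 1.0),
}
true_model = "sine"
sigma = 0.5  # assumed observation noise standard deviation

# Noisy observations of the true reference trajectory.
obs = models[true_model] + sigma * rng.normal(size=T)

def cum_loglik(y, ref, sigma):
    # Cumulative Gaussian log-likelihood of candidate `ref` (up to a
    # constant common to all candidates, which cancels in ratios).
    return np.cumsum(-0.5 * ((y - ref) / sigma) ** 2)

ll = {name: cum_loglik(obs, ref, sigma) for name, ref in models.items()}

# The estimator selects the likelihood-maximizing candidate. For distinct
# models, the log-likelihood ratio of the true model against any other
# grows without bound in the sample size, so the selection stabilizes
# after finitely many samples, mirroring the finite-time identification
# property described in the abstract.
selected = max(ll, key=lambda name: ll[name][-1])
print(selected)
```

With the seed fixed above, the maximizer is the true model; the same comparison run online (one sample at a time) shows the selection locking in once enough samples have accumulated.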