This article presents consensus + innovations inference algorithms, which intertwine consensus (local averaging among agents) and innovations (sensing and assimilation of new observations). These algorithms matter in scenarios that involve cooperation and interaction among a large number of agents with no centralized coordination: the agents communicate only locally over sparse topologies and sense new observations at the same rate at which they communicate. This stands in sharp contrast with other distributed inference approaches, which assume that interagent communication occurs at a much faster rate than agents can sense (sample) the environment, so that, between measurements, agents may iterate enough times to reach a decision consensus before a new measurement is made and assimilated.

While the optimal design of distributed inference algorithms in stochastic time-varying scenarios is a hard (often intractable) problem, this article emphasizes the design of asymptotically (in time) optimal distributed inference approaches, i.e., distributed algorithms that achieve the asymptotic performance of the corresponding optimal centralized approach (one with instantaneous access to the entire network's sensed information at all times). Consensus + innovations algorithms extend consensus in nontrivial ways to mixed time-scale stochastic approximation algorithms, in which the time scales (or weightings) of the consensus potential (the potential for distributed agent collaboration) and of the innovations potential (the potential for local innovations) are suitably traded off for optimal performance. The article shows why this trade-off is needed and what its implications are, and it points the reader to new methodologies that are useful both in their own right and in many other contexts.
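To make the structure of such an update concrete, the sketch below simulates a generic consensus + innovations parameter estimator of the kind described above. Everything specific here is an illustrative assumption, not taken from the article: the ring network, the observation model (each agent senses only one component of the unknown parameter, so no agent is locally observable but the network jointly is), and the particular step-size constants. The key feature it demonstrates is the mixed time scale: the innovations weight decays like O(1/t) while the consensus weight decays more slowly, so interagent agreement dominates asymptotically.

```python
import numpy as np

# Illustrative sketch of a consensus + innovations estimator. The network,
# observation model, and step-size constants are assumptions for this demo,
# not specifications from the article.
rng = np.random.default_rng(0)

N, d, T = 10, 2, 20000
theta = np.array([1.0, -2.0])       # unknown parameter to be estimated
obs = np.arange(N) % 2              # agent i senses only component i % 2:
                                    # individually unobservable, jointly observable

# Ring communication topology, encoded by its graph Laplacian L
I = np.eye(N)
L = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)

x = np.zeros((N, d))                # row i: agent i's local estimate
for t in range(T):
    alpha = 1.0 / (t + 1)           # innovations weight, O(1/t)
    beta = 0.3 / (t + 1) ** 0.5     # consensus weight decays more slowly,
                                    # so collaboration dominates asymptotically
    # each agent senses one new noisy observation per communication round
    y = theta[obs] + rng.normal(scale=0.3, size=N)
    innov = np.zeros((N, d))
    innov[np.arange(N), obs] = y - x[np.arange(N), obs]
    # consensus + innovations update: average with neighbors, assimilate data
    x = x - beta * (L @ x) + alpha * innov

err = float(np.max(np.linalg.norm(x - theta, axis=1)))
print(f"worst agent error: {err:.3f}")
```

Under these (assumed) decaying weights, every agent's estimate approaches the true parameter even though no agent could identify it from its own measurements alone, which is the qualitative behavior the trade-off between the consensus and innovations potentials is designed to achieve.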