We study agents in a social network who learn by observing the actions of their neighbors. The agents iteratively estimate an unknown "state of the world" s from their initial private signals and from the past actions of their neighbors in the social network. First, we consider a set of Bayesian agents and investigate the computational problem the agents face in implementing the (myopic) Bayesian decision rule. When private signals are independent conditioned on s, and when the social network graph is a tree, we provide a new "dynamic cavity algorithm" for the agents' calculations, with computational effort that is exponentially lower than previously known. We use our algorithm to perform the first numerical simulations of interacting Bayesian agents on networks with hundreds of nodes. Second, we investigate a different model of social learning, with naive agents who practice "majority dynamics", i.e., who at each round adopt the majority opinion of their neighbors. Under mild conditions, we show that agents practicing majority dynamics learn s with probability 1-ϵ in O(log log(1/ϵ)) rounds. We conjecture that on d-regular trees, myopic Bayesian agents learn s as quickly as agents who practice majority dynamics. Combined with our algorithm for Bayesian agents, this conjecture implies that the computational effort required of Bayesian agents to learn s on d-regular trees is only polylogarithmic in 1/ϵ. Thus, our results challenge the belief that iterative Bayesian learning is computationally intractable.
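To make the "majority dynamics" model concrete, here is a minimal sketch in Python of the synchronous update described above. The graph, the tie-breaking rule (keep the current opinion on a tie), and the private-signal model used in the example are illustrative assumptions, not taken from the paper; the function and variable names are likewise hypothetical.

```python
import random


def majority_dynamics(neighbors, initial_opinions, rounds):
    """Simulate synchronous majority dynamics.

    neighbors: dict mapping each agent to a list of its neighbors.
    initial_opinions: dict mapping each agent to an opinion in {-1, +1}
        (e.g., the sign of its private signal about the state s).
    rounds: number of synchronous update rounds.

    In each round every agent adopts the majority opinion among its
    neighbors; ties keep the agent's current opinion (an assumption).
    """
    opinions = dict(initial_opinions)
    for _ in range(rounds):
        new_opinions = {}
        for agent, nbrs in neighbors.items():
            total = sum(opinions[n] for n in nbrs)
            if total > 0:
                new_opinions[agent] = +1
            elif total < 0:
                new_opinions[agent] = -1
            else:
                new_opinions[agent] = opinions[agent]  # tie: no change
        opinions = new_opinions
    return opinions


if __name__ == "__main__":
    # Toy 3-regular graph: a ring where each agent also sees the agent
    # directly across the ring.
    n = 12
    neighbors = {i: [(i - 1) % n, (i + 1) % n, (i + n // 2) % n]
                 for i in range(n)}

    # True state s = +1; each private signal is correct with probability 0.7
    # (an illustrative signal model).
    s = +1
    random.seed(0)
    signals = {i: s if random.random() < 0.7 else -s for i in range(n)}

    final = majority_dynamics(neighbors, signals, rounds=5)
    print("fraction agreeing with s:", sum(o == s for o in final.values()) / n)
```

Running the toy example shows the naive update converging to broad agreement with s after a handful of rounds, which is the qualitative behavior the O(log log(1/ϵ)) bound quantifies on suitable graphs.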