The loopy belief propagation (LBP) algorithm is known to perform extremely well in many practical problems of probabilistic inference and learning on graphical models, even in the presence of multiple loops. Although general necessary conditions for the convergence of LBP to a unique fixed-point solution are still unknown, various techniques have been explored to understand how errors propagate when LBP fails to converge. In this paper, we rely on the contractive mapping of message errors to derive novel distance bounds between multiple fixed-point solutions when LBP does not converge. We give examples of networks where our bounds are tighter than existing ones.
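To make the setting concrete, the sketch below runs synchronous sum-product LBP on a small pairwise binary MRF and tracks the sup-norm of the log-message change between iterations, one common way to measure how message errors propagate. The 3-node cycle, the random potentials, and this particular error measure are illustrative assumptions for the sketch, not the construction or the bounds developed in the paper.

```python
# A minimal sketch of synchronous loopy belief propagation on a
# 3-node binary cycle. Illustrative only: the graph, potentials,
# and sup-norm error measure are assumptions, not the paper's setup.
import numpy as np

n = 3
edges = [(0, 1), (1, 2), (2, 0)]
rng = np.random.default_rng(0)
phi = rng.uniform(0.5, 1.5, size=(n, 2))                      # unary potentials
psi = {e: rng.uniform(0.5, 1.5, size=(2, 2)) for e in edges}  # pairwise potentials

# m[(i, j)] is the (normalized) message sent from node i to node j.
directed = edges + [(j, i) for (i, j) in edges]
m = {d: np.ones(2) / 2 for d in directed}

def neighbors(i):
    return [j for (a, j) in directed if a == i]

def pot(i, j):
    # psi is stored once per undirected edge; transpose when reversed.
    return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

for t in range(100):
    new_m = {}
    for (i, j) in directed:
        # Standard sum-product update: multiply node i's unary potential
        # by all incoming messages except the one from j, then sum out x_i.
        pre = phi[i].copy()
        for k in neighbors(i):
            if k != j:
                pre *= m[(k, i)]
        msg = pot(i, j).T @ pre
        new_m[(i, j)] = msg / msg.sum()
    # Sup-norm of the log-message change: tracks how message errors
    # shrink (or fail to shrink) from one iteration to the next.
    err = max(np.abs(np.log(new_m[d]) - np.log(m[d])).max() for d in directed)
    m = new_m
    if err < 1e-10:
        break

# Beliefs (approximate marginals) at the resulting fixed point. On loopy
# graphs, different initial messages may reach different fixed points.
for i in range(n):
    b = phi[i].copy()
    for k in neighbors(i):
        b *= m[(k, i)]
    print(f"node {i}: belief ~ {b / b.sum()}")
```

Rerunning the loop from several random initial messages and comparing the resulting belief vectors gives an empirical feel for the quantity the paper bounds analytically: the distance between distinct fixed-point solutions when LBP does not converge to a unique one.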