This article explores the performance gains achievable by applying the multiview video coding paradigm in wireless multimedia sensor networks (WMSNs). Recent studies have shown that significant gains in energy savings, and consequently in network lifetime, can be obtained by leveraging the spatial correlation among the partially overlapping fields of view of multiple video cameras observing the same scene. A crucial challenge is therefore how to describe the correlation among different views of the same scene with metrics that are accurate yet simple. In this article, we first experimentally assess the performance gains of multiview video coding as a function of metrics that capture this inter-view correlation. We then compare the effectiveness of these metrics in predicting the correlation among different views and, on that basis, assess the potential performance gains of multiview video coding in WMSNs. In particular, we show that, in addition to geometric information, occlusions and movement must be considered to take full advantage of multiview video coding.