We formulate coverage optimization for mobile visual sensor networks as a repeated multi-player game in which each visual sensor seeks to optimize its own coverage while minimizing its processing cost. The sensing rewards are not known a priori to the agents. We present an asynchronous distributed learning algorithm in which each sensor remembers only the utility values obtained by itself and its neighbors, and the actions it played during the last two time steps in which it was active. We show that this algorithm converges in probability to the set of global optima of a certain coverage performance metric.
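To make the flavor of such a payoff-based scheme concrete, the following is a minimal toy sketch in Python, not the paper's actual algorithm: an active sensor compares the utilities of its two most recently played actions and keeps the better one with high probability, occasionally exploring. The utility function, the 1-D position space, the exploration rate, and the temperature parameter are all illustrative assumptions.

```python
import math
import random

def coverage_utility(position, targets, radius, cost_per_radius=0.1):
    """Toy utility (illustrative, not the paper's metric): number of
    targets within sensing range minus a processing cost that grows
    with the sensing radius."""
    covered = sum(1 for t in targets if abs(t - position) <= radius)
    return covered - cost_per_radius * radius

def learning_step(agent, targets, temperature=0.1, explore_prob=0.05):
    """One asynchronous update for the currently active agent.

    The agent remembers only its current and previous actions (the
    'last two time steps' memory) and the utilities they earned, and
    chooses between them via a Boltzmann (softmax) rule; with small
    probability it explores a fresh action instead."""
    u_curr = coverage_utility(agent["action"], targets, agent["radius"])
    u_prev = coverage_utility(agent["prev_action"], targets, agent["radius"])
    # Softmax choice between the two remembered actions.
    p_curr = math.exp(u_curr / temperature) / (
        math.exp(u_curr / temperature) + math.exp(u_prev / temperature))
    chosen = agent["action"] if random.random() < p_curr else agent["prev_action"]
    # Occasional exploration of a new position in a hypothetical [0, 10] field.
    if random.random() < explore_prob:
        chosen = random.uniform(0.0, 10.0)
    # Shift the two-step memory window.
    agent["prev_action"], agent["action"] = agent["action"], chosen
    return chosen
```

In the actual algorithm, agents also use their neighbors' utility values and activate asynchronously; this sketch shows only the per-agent memory and noisy best-of-memory choice that drive convergence in probability toward high-utility configurations.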