Support for intelligent, autonomous, adaptive, and distributed resource management is key to the success of scalable, dynamic wireless sensor network applications. Distributed independent reinforcement learning (DIRL) is a micro-learning framework that enables distributed, adaptive resource management using only local information at individual sensor nodes. In this paper we propose COllective INtelligence (COIN), a macro-learning paradigm that specifically addresses the problem of designing utility functions for individual agents so as to achieve higher system-wide utility. We extend DIRL by combining it with the COIN macro-learning paradigm to steer the system toward global optimization, improving performance with minimal communication overhead. We present simulation results comparing our approach with existing approaches, namely team game and plain DIRL, on an example object-tracking application. The results demonstrate that the combination of micro- and macro-learners is twice as energy-efficient as micro-learners (DIRL) alone and four times as energy-efficient as macro-learners (team game) alone.
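The core idea, designing a private utility for each agent that stays aligned with the system-wide utility, can be illustrated with a minimal sketch. The sketch below uses a COIN-style difference utility (the "Wonderful Life Utility": the global utility minus the global utility with the agent's action clamped to a null action) on top of a simple per-node Q-learner. The task set, energy costs, and global utility here are illustrative assumptions, not the paper's actual formulation.

```python
import random

random.seed(0)

# Hypothetical per-node tasks and energy costs (illustrative assumptions).
ACTIONS = ["sleep", "sample", "transmit"]
ENERGY_COST = {"sleep": 0.1, "sample": 0.5, "transmit": 1.0}

def global_utility(joint_action):
    """Toy system-wide utility: reward active sensing, penalize total energy."""
    detections = sum(1 for a in joint_action if a in ("sample", "transmit"))
    energy = sum(ENERGY_COST[a] for a in joint_action)
    return detections - 0.8 * energy

def difference_utility(joint_action, i, null_action="sleep"):
    """COIN 'Wonderful Life' utility for node i:
    G(z) minus G(z) with node i's action clamped to a null action."""
    clamped = list(joint_action)
    clamped[i] = null_action
    return global_utility(joint_action) - global_utility(clamped)

class MicroLearner:
    """Single-state (bandit-style) Q-learner standing in for a DIRL node."""
    def __init__(self, alpha=0.1, epsilon=0.2):
        self.q = {a: 0.0 for a in ACTIONS}
        self.alpha, self.epsilon = alpha, epsilon

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)   # explore
        return max(self.q, key=self.q.get)  # exploit

    def update(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])

# Each node learns on its private difference utility; because that utility
# is aligned with the global one, selfish learning raises system utility.
nodes = [MicroLearner() for _ in range(5)]
for _ in range(2000):
    joint = [n.act() for n in nodes]
    for i, n in enumerate(nodes):
        n.update(joint[i], difference_utility(joint, i))
```

Under these toy costs, "sample" dominates, and every node's Q-values converge toward preferring it, exactly the alignment property the macro-learner's utility design is meant to provide.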