In wireless sensor networks, resource-constrained nodes are expected to operate in unattended, highly dynamic environments; the need for adaptive and autonomous resource and task management in such networks is therefore well recognized. We present distributed independent reinforcement learning (DIRL), a Q-learning-based framework that enables autonomous, self-learning, adaptive applications with inherent support for efficient resource and task management. DIRL learns the utility of performing various tasks over time using mostly local information at each node, and combines these utility values with application constraints to manage tasks while optimizing system-wide parameters such as total energy usage and network lifetime. We also present the design of an object tracking application based on DIRL to exemplify the framework. Finally, we report simulation studies that demonstrate the feasibility of our approach and compare its performance against existing approaches. For applications requiring autonomous adaptation, we show that DIRL is on average about 90% more efficient than traditional resource management schemes such as static scheduling, without any significant loss of accuracy or performance.
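The per-node learning loop described above can be sketched as a stateless Q-learning agent that maintains a utility estimate for each task and selects tasks epsilon-greedily. This is a minimal illustrative sketch only: the task names, reward values, and parameter settings below are assumptions for demonstration, not the paper's actual application-specific reward functions or constraint handling.

```python
import random

class DIRLNode:
    """Sketch of per-node independent Q-learning for task selection.

    Task names, rewards, and parameters are hypothetical; the actual
    DIRL reward functions are defined per application.
    """

    def __init__(self, tasks, alpha=0.5, epsilon=0.1):
        self.tasks = list(tasks)
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration probability
        # Utility (Q-value) estimate for each task, learned from
        # mostly local feedback at the node.
        self.q = {t: 0.0 for t in self.tasks}

    def select_task(self):
        # Epsilon-greedy: mostly exploit the highest-utility task,
        # occasionally explore so the node can adapt when the
        # environment changes.
        if random.random() < self.epsilon:
            return random.choice(self.tasks)
        return max(self.tasks, key=self.q.get)

    def update(self, task, reward):
        # Stateless Q-update: Q(t) <- Q(t) + alpha * (r - Q(t)),
        # an exponential moving average of observed rewards.
        self.q[task] += self.alpha * (reward - self.q[task])
```

A node would repeatedly call `select_task()`, execute the chosen task (e.g. sample, aggregate, transmit, or sleep), observe a locally computed reward, and call `update()`; over time the node converges to spending most of its cycles on high-utility tasks, which is how energy savings over static scheduling arise.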