There is a plethora of recent research on high-performance wireless communications using a cross-layer approach, in which adaptive modulation and coding (AMC) schemes at the wireless physical layer combat time-varying channel fading and enhance link throughput. However, in a wireless sensor network, transmitting packets over a deeply faded channel can incur excessive energy consumption due to the use of stronger forward error correction (FEC) codes or more robust modulation modes. To avoid such energy-inefficient transmission, a straightforward approach is to temporarily buffer packets while the channel is in deep fading, until the channel quality recovers. Unfortunately, packet buffering may introduce communication latency and buffer overflow, which, in turn, can severely degrade communication performance. Specifically, to improve the buffering approach, we need to address two challenging issues: (1) how long should we buffer the packets, and (2) how do we choose the optimal channel-quality threshold above which to transmit the buffered packets? In this paper, using a discrete-time queuing model, we analyze the effects of Rayleigh fading on AMC-based communications in a wireless sensor network. We then analytically derive the packet delivery rate and average delay. Guided by these numerical results, we can determine the most energy-efficient operation modes under different transmission environments. Extensive simulation results on NS-2 validate the analytical results and indicate that, under these modes, we can achieve as much as a 40% reduction in energy dissipation.
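The threshold-based buffering idea at the heart of the abstract can be sketched in a few lines. The snippet below is a minimal illustrative toy, not the paper's queuing model: the threshold value, buffer capacity, per-slot arrival counts, and SNR trace are all assumptions introduced here for demonstration. Per time slot, arriving packets are queued; the buffer is drained only when the channel SNR is above the transmission threshold, and packets arriving at a full buffer are dropped (overflow).

```python
from collections import deque

# Illustrative constants (assumed, not taken from the paper).
SNR_THRESHOLD_DB = 10.0   # transmit only when channel SNR is at or above this
BUFFER_CAPACITY = 8       # finite buffer; excess arrivals overflow

def simulate(snr_trace_db, arrivals):
    """Per time slot: enqueue arriving packets, then flush the buffer
    only if the channel SNR meets the transmission threshold."""
    buffer = deque()
    delivered, dropped = 0, 0
    for snr_db, n_arrivals in zip(snr_trace_db, arrivals):
        for _ in range(n_arrivals):
            if len(buffer) < BUFFER_CAPACITY:
                buffer.append(1)
            else:
                dropped += 1          # buffer overflow: packet lost
        if snr_db >= SNR_THRESHOLD_DB:
            delivered += len(buffer)  # good channel: send everything buffered
            buffer.clear()
    return delivered, dropped

# Example: a deep fade in slots 2-3 defers transmission until slot 4.
delivered, dropped = simulate(
    snr_trace_db=[12.0, 5.0, 4.0, 15.0],
    arrivals=[2, 3, 3, 1],
)
```

Raising `SNR_THRESHOLD_DB` saves transmission energy per packet but lengthens buffering delay and increases overflow risk, which is precisely the trade-off the paper's delivery-rate and average-delay analysis quantifies.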