Linear regression is used in a number of time synchronization protocols to establish a linear relationship between the clocks at different sensor nodes and thereby achieve energy-efficient time synchronization. These protocols are predictive in the sense that a node predicts a target clock from collected time stamp data. However, the use of linear regression for sensor network time synchronization has not been thoroughly studied in the literature. This paper attempts to close that gap by analyzing the impact of two parameters on the precision of linear regression based time synchronization protocols: (1) the frequency at which the time stamp data are collected, and (2) the window size, i.e., the number of time stamps used for linear regression. Through theoretical analysis, experiments, and simulations, we show a counter-intuitive result: for a given prediction interval, if the clock relationship varies slowly over time, more frequent synchronization results in worse synchronization precision. This result suggests that a linear regression based time synchronization protocol can achieve both high precision and good energy efficiency by operating at a low synchronization frequency. We also show that increasing the window size improves synchronization performance, but the synchronization uncertainty remains bounded away from zero.
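To make the prediction scheme concrete, the following is a minimal sketch (not taken from the paper; all names and the sample data are illustrative) of how a node might fit a linear clock model over a window of time stamps and then predict a target clock at a future local time:

```python
# Illustrative sketch of regression-based clock prediction.
# Assumption: the target clock relates to the local clock roughly as
# reference = skew * local + offset, and the node keeps a window of
# (local_time, reference_time) samples.

def fit_clock_model(local_times, reference_times):
    """Least-squares fit of reference = skew * local + offset."""
    n = len(local_times)
    mean_l = sum(local_times) / n
    mean_r = sum(reference_times) / n
    cov = sum((l - mean_l) * (r - mean_r)
              for l, r in zip(local_times, reference_times))
    var = sum((l - mean_l) ** 2 for l in local_times)
    skew = cov / var
    offset = mean_r - skew * mean_l
    return skew, offset

def predict_reference(local_now, window):
    """Predict the reference clock from the samples in the window."""
    locals_, refs = zip(*window)
    skew, offset = fit_clock_model(locals_, refs)
    return skew * local_now + offset

# Example: a noiseless clock with 50 ppm skew and a 3 ms offset,
# sampled every 10 s; the fit recovers the line exactly.
window = [(t, 1.00005 * t + 0.003) for t in range(0, 80, 10)]
print(predict_reference(100.0, window))
```

The window size here is simply `len(window)`, and the sampling step in the example plays the role of the synchronization frequency the paper analyzes; in practice the samples would be noisy time stamp exchanges rather than an exact line.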