There are many communication and control systems--in fact, an increasing number of them--in which at some stage a continuous data source is sampled "periodically," at the nominal rate of 2W samples a second, W being the highest frequency component of the data. Generally speaking, the sampling mechanism introduces two kinds of errors: errors in amplitude and errors in timing, or "time jitter." This paper is concerned with the latter. We assume a random model for the jitter. We begin with a study of the properties of the jittered samples for both deterministic and stochastic signals. Depending on the stochastic properties of the jitter, the presence of a discrete component in the signal may give rise to new discrete components as a result of jitter. Generally speaking, however, the effect of jitter is to produce a frequency-selective attenuation as well as a uniform spectral density component. The more correlated the jitter, the less the spectral distribution is affected. Various measures of the "error" due to jitter are estimated; the error may be the mean-square error in the jittered samples or in some linear or nonlinear operation thereof. Weighted mean-square errors are also considered, and general methods of estimating these errors are developed. The problem of optimal use of the jittered samples is considered next. Taking optimality in the mean-square sense, an explicit solution for the optimal linear operation is obtained. Moreover, for a wide class of signals it is shown that jitter does not affect the nature of the optimal operations; linear operations, for instance, remain linear, although with different weights. To illustrate the methods, an example drawn from telemetry is given, in which the timing is derived from the zero crossings of a sine wave and the time jitter is attributed to noise. The jitter is highly correlated, and the results involve some lengthy calculations.
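The attenuation of a discrete spectral component by jitter can be illustrated with a small numerical sketch. The paper treats correlated jitter; here, purely as an assumption for concreteness, the jitter is taken as i.i.d. Gaussian, in which case the spectral line of a sampled sinusoid is attenuated by the characteristic function of the jitter, exp(-(omega*sigma)^2/2), and the lost power reappears as a flat spectral density component. All parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

f = 5.0        # signal frequency in Hz (hypothetical)
T = 0.01       # nominal sampling interval: 100 Hz, well above 2W = 10 Hz
N = 4096       # number of samples
sigma = 0.002  # jitter standard deviation in seconds (i.i.d. Gaussian)

n = np.arange(N)
t_ideal = n * T
t_jittered = t_ideal + rng.normal(0.0, sigma, N)

# Unit-amplitude sinusoid sampled at the ideal and the jittered instants.
x_ideal = np.sin(2 * np.pi * f * t_ideal)
x_jittered = np.sin(2 * np.pi * f * t_jittered)

# Empirical mean-square error introduced by the timing jitter alone.
mse = np.mean((x_jittered - x_ideal) ** 2)

# For i.i.d. Gaussian jitter eps with std sigma, the discrete component
# at angular frequency w is attenuated by E[cos(w*eps)] = exp(-(w*sigma)^2/2).
w = 2 * np.pi * f
attenuation = np.exp(-((w * sigma) ** 2) / 2)

# Averaging over time, E[(sin(w(t+eps)) - sin(wt))^2] = 1 - E[cos(w*eps)],
# so the predicted mean-square error is:
predicted_mse = 1 - attenuation

print(f"attenuation of spectral line: {attenuation:.6f}")
print(f"empirical MSE:  {mse:.6f}")
print(f"predicted MSE:  {predicted_mse:.6f}")
```

For small jitter (w*sigma << 1) the predicted error reduces to roughly (w*sigma)^2/2, showing how the damage grows with both frequency and jitter variance; correlated jitter, as the abstract notes, perturbs the spectral distribution less.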