In a previous paper, we reported an algorithm that accurately measures sampling timing errors in a data acquisition system subject to nonuniform sampling. In this paper, we first study the sensitivity of the algorithm to input frequency inaccuracy. We then investigate the dependence of the algorithm's accuracy on the number of effective bits in an analog-to-digital (A/D) converter. We observe that, if the initial timing error is "reasonably large," the residual timing error decreases by one order of magnitude for every four additional effective bits. Finally, we propose the use of "alias sampling" to "magnify" the timing error, which greatly improves the algorithm's sensitivity and allows it to estimate a much smaller timing offset with only a modest number of effective bits in the A/D converter.
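The magnification effect of alias sampling can be illustrated with a short sketch. This is not the paper's algorithm; it is a minimal demonstration, with hypothetical parameter values, of why sampling a high-frequency sine at a rate just below the signal frequency folds it down to a slow alias on which a small timing offset appears as a much larger apparent time shift:

```python
import numpy as np

# Hypothetical illustration of the "alias sampling" idea (not the
# paper's algorithm).  All parameter values below are assumed.
f_sig = 1.0e6            # input sine frequency: 1 MHz
fs = 0.99e6              # sample rate chosen just below f_sig
f_alias = f_sig - fs     # aliased (apparent) frequency: 10 kHz
dt = 1.0e-9              # true timing offset: 1 ns

N = 990                  # exactly 10 alias cycles (fs / f_alias = 99)
t = np.arange(N) / fs
ref = np.sin(2 * np.pi * f_sig * t)         # ideally timed samples
off = np.sin(2 * np.pi * f_sig * (t + dt))  # samples taken dt late

# Demodulate both records at the alias frequency and compare phases.
lo = np.exp(-2j * np.pi * f_alias * t)
dphi = np.angle(np.sum(off * lo)) - np.angle(np.sum(ref * lo))

# The dt-late samples carry a phase error of 2*pi*f_sig*dt.  Read
# against the slow alias, that phase corresponds to an apparent time
# shift of dt * (f_sig / f_alias) -- a 100x magnification here.
apparent_dt = dphi / (2 * np.pi * f_alias)
print(apparent_dt / dt)   # ~100.0
```

The magnification factor is simply f_sig / f_alias, so the closer the sample rate is chosen to the signal frequency (or one of its submultiples), the larger a given timing offset appears in the aliased record, and the fewer effective bits are needed to resolve it.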