The sine wave test of an analog-to-digital converter (ADC) consists of exciting the ADC with a pure sine wave, fitting a sine wave to the output in the least squares (LS) sense, and analyzing the difference. The procedure is described in IEEE Standards 1241-2000 and 1057-1994. Least squares is the "best" fitting method most of us can imagine, and it indeed yields very good results. Its known properties are achieved when the error (the deviation of the samples from the true sine wave) is random and white (the error samples are all independent), with a zero-mean Gaussian distribution; the LS fit then coincides with the maximum likelihood estimate of the parameters. In sine wave testing of ADCs, however, these assumptions are far from true. The quantization error is partly deterministic, and the sample values are strongly interdependent. For sine waves covering fewer than, say, 20 quantum levels, this makes the sine wave fit worse than expected, and since small changes in the fitted sine wave affect the residuals significantly, especially close to the peaks, ADC error analysis may become misleading. Processing of the residuals [e.g., the calculation of the effective number of bits (ENOB)] can exhibit serious errors. This paper describes this phenomenon, analyzes its consequences, and suggests modified processing of the samples and residuals that reduces the errors to a negligible level.
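As background, the standard fit and the ENOB computation it feeds can be sketched in a few lines of NumPy. This is a minimal illustration of the three-parameter (known-frequency) LS fit of IEEE Std 1057 and the usual residual-based ENOB formula, not the modified processing proposed in this paper; the function names and signal parameters below are illustrative assumptions.

```python
import numpy as np

def three_param_sine_fit(t, y, freq):
    """Three-parameter sine fit with known frequency (IEEE Std 1057).

    Solves y ~ A*cos(w*t) + B*sin(w*t) + C in the least-squares sense
    and returns the parameters together with the fit residuals.
    """
    w = 2.0 * np.pi * freq
    D = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    params, *_ = np.linalg.lstsq(D, y, rcond=None)
    residuals = y - D @ params
    return params, residuals

def enob(residuals, n_bits, full_scale):
    """Effective number of bits from the fit residuals.

    Compares the residual RMS with the RMS q/sqrt(12) of ideal
    quantization noise for an n_bits converter over full_scale.
    """
    q = full_scale / 2.0 ** n_bits          # ideal code width
    rms_ideal = q / np.sqrt(12.0)           # ideal quantization noise RMS
    rms_actual = np.sqrt(np.mean(residuals ** 2))
    return n_bits - np.log2(rms_actual / rms_ideal)

# Usage sketch: an ideal 8-bit quantizer exercised by a sine wave.
t = np.arange(2048) / 2048.0
y = 0.95 * np.sin(2.0 * np.pi * 13.37 * t + 0.4)
q = 2.0 / 2 ** 8
yq = np.round(y / q) * q                    # ideal uniform quantization
params, res = three_param_sine_fit(t, yq, 13.37)
print("ENOB:", enob(res, 8, 2.0))
```

For a sine wave spanning many quantum levels the residual behaves nearly like white uniform noise and the reported ENOB is close to the nominal resolution; the point of the paper is that for sine waves covering only a few quantum levels this simple processing becomes unreliable.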