Using the calibration of a phase meter with a nominally linear response as an example, a statistical approach is discussed for predicting worst-case offsets of the meter response characteristic from the value of the reference standard. A linear calibration curve is used to model the meter response, and statistical tests are described that assess the appropriateness of the model and whether the calculated calibration curve differs significantly from the ideal. Various levels of correction to be applied can then be determined on the basis of these tests, and limits on the offsets are calculated for each level. By extending this approach, it is possible to predict limits of uncertainty when the calibrated meter is used to make measurements.
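The core of the approach described above can be sketched in code: fit a straight line to the (reference, indicated) calibration pairs, then t-test whether the intercept differs from 0 and the slope from 1 (the ideal characteristic). This is a minimal illustration, not the paper's exact procedure; the function name `fit_calibration` and the textbook least-squares standard-error formulas are assumptions introduced here.

```python
import numpy as np
from scipy import stats

def fit_calibration(ref, meas):
    """Fit meas = a + b*ref by least squares and test against the
    ideal characteristic (intercept a = 0, slope b = 1).

    Returns (a, b, p_a, p_b) where p_a and p_b are two-sided p-values
    for the hypotheses a = 0 and b = 1 respectively.
    """
    ref = np.asarray(ref, dtype=float)
    meas = np.asarray(meas, dtype=float)
    n = len(ref)
    b, a = np.polyfit(ref, meas, 1)            # slope, intercept
    resid = meas - (a + b * ref)
    s2 = resid @ resid / (n - 2)               # residual variance
    sxx = np.sum((ref - ref.mean()) ** 2)
    se_b = np.sqrt(s2 / sxx)                   # std. error of slope
    se_a = np.sqrt(s2 * (1.0 / n + ref.mean() ** 2 / sxx))  # std. error of intercept
    t_a = a / se_a                             # intercept vs. 0
    t_b = (b - 1.0) / se_b                     # slope vs. 1
    p_a = 2.0 * stats.t.sf(abs(t_a), n - 2)
    p_b = 2.0 * stats.t.sf(abs(t_b), n - 2)
    return a, b, p_a, p_b
```

If neither test rejects, the meter can be used uncorrected within the corresponding offset limits; if only the intercept test rejects, a constant correction suffices; if the slope test also rejects, the full linear correction is applied. The residuals from the fit would additionally feed a lack-of-fit test of the linear model itself, which this sketch omits.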