The recommended length of the calibration intervals of measurement instrumentation can be determined by several techniques. In this paper, three different methods for establishing optimal calibration intervals of atomic clocks are compared. The first is based on a stochastic model and can estimate the calibration interval even during transient conditions, while the others belong to the class of so-called reactive methods, which determine the optimal interval on the basis of the most recent calibration outcomes. The algorithms have been applied to experimental data, and the results have been compared in order to identify the most effective technique. Since the analyzed reactive methods exhibit a long transient time, a new algorithm is proposed and applied to the available data.
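To make the reactive approach concrete, the sketch below implements a simple interval-adjustment rule of the kind reactive methods use: the interval is lengthened after an in-tolerance calibration and shortened after an out-of-tolerance one. The `CalibrationResult` type, the adjustment factors, and the bounds are illustrative assumptions, not the specific algorithms compared in the paper.

```python
from dataclasses import dataclass


@dataclass
class CalibrationResult:
    """Outcome of a single calibration: was the clock still in tolerance?"""
    in_tolerance: bool


def next_interval(current_days: float,
                  result: CalibrationResult,
                  lengthen: float = 1.2,
                  shorten: float = 0.7,
                  min_days: float = 7.0,
                  max_days: float = 365.0) -> float:
    """Reactive update: grow the interval after an in-tolerance calibration,
    shrink it after an out-of-tolerance one. All factors and bounds here are
    assumed values for illustration only."""
    factor = lengthen if result.in_tolerance else shorten
    return min(max_days, max(min_days, current_days * factor))


# Example: a sequence of calibration outcomes drives the interval.
interval = 30.0  # assumed starting interval, in days
for outcome in [True, True, False, True]:
    interval = next_interval(interval, CalibrationResult(outcome))
    print(f"next calibration in {interval:.1f} days")
```

Because each update depends only on the latest outcome, such a rule converges slowly toward the optimal interval, which illustrates the long transient time that motivates the new algorithm proposed in the paper.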