The times between failures of a large software system, between any two program fixes, need not be s-independently and exponentially distributed, as has often been assumed. In several instances such times between failures occur in clusters, often systematically, and we present some data to substantiate this claim. We propose an empirically developed Fourier series model which can adequately describe our data, and which under certain circumstances can be used to predict future failures. Much of the analysis is informal, and the key tool that we use to develop our approach is a spectrogram of the data. We emphasize data analysis rather than statistical inference in this paper, and thus we have not performed any formal tests (such as analysis of variance, s-significance testing, or goodness of fit) for assessing the adequacy of the fitted models. A reason for not doing this is that the available tests are frequentist in nature, and to these one of us (NDS) has philosophic objections.
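The abstract's approach, in outline, is to inspect the spectrum of the interfailure-time series for dominant periodicities and then fit a Fourier series at those frequencies. The following is a minimal sketch of that idea, not the authors' actual procedure; the data are simulated (a hypothetical periodic trend plus noise standing in for clustered times between failures), and the function names are our own.

```python
import numpy as np

def periodogram(x):
    """Raw periodogram I(f_k) = |DFT(x - mean)|^2 / n at the Fourier
    frequencies f_k = k/n, with the zero frequency dropped."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    dft = np.fft.rfft(x - x.mean())
    freqs = np.fft.rfftfreq(n)
    power = np.abs(dft) ** 2 / n
    return freqs[1:], power[1:]

def fit_fourier(x, freqs):
    """Least-squares fit of x_t ~ a0 + sum_j [a_j cos(2*pi*f_j*t)
    + b_j sin(2*pi*f_j*t)] at the chosen frequencies."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    cols = [np.ones_like(t, dtype=float)]
    for f in freqs:
        cols.append(np.cos(2 * np.pi * f * t))
        cols.append(np.sin(2 * np.pi * f * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coef, A @ coef

# Simulated stand-in for clustered interfailure times:
# a cycle of period 16 plus white noise (purely illustrative).
rng = np.random.default_rng(0)
n = 128
t = np.arange(n)
x = 10 + 4 * np.cos(2 * np.pi * t / 16) + rng.normal(0, 1, n)

f, p = periodogram(x)
f_peak = f[np.argmax(p)]              # dominant frequency, near 1/16
coef, fitted = fit_fourier(x, [f_peak])
```

In practice one would read the dominant frequencies off the spectrogram by eye (as the abstract's informal analysis suggests) rather than taking a single automatic peak, and forecast by extrapolating the fitted sinusoids to later failure indices.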