Software producers can better manage the quality of their deployed software products using estimates of quality. Current best practices for making estimates are to use software reliability growth modeling (SRGM), which assumes that testing environments approximate deployment environments. This important assumption does not hold for widely used software products, which are operated in a wide variety of configurations under many different usage scenarios. However, the literature contains little empirical data on the impact of this violation of assumptions on the accuracy and the usefulness of predictions. In this paper, we report results and experiences using SRGM on an IBM federated database project. We examine defect data from 3 releases spanning approximately 9 years. We find SRGM to be of limited use to the project: absolute relative errors are at least 34%, and predictions are, at times, implausible. We discuss alternative approaches for estimating quality of widely used software products.
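To make the SRGM workflow and the "absolute relative error" metric concrete, the sketch below fits a Goel-Okumoto model (a common SRGM; the paper does not specify which models were used) to hypothetical weekly cumulative defect counts and compares the fitted total-defect estimate against an assumed post-release actual. All numbers are illustrative, not data from the IBM project.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical weekly cumulative defect counts from a test campaign
# (illustrative data only, not from the paper's federated database project).
weeks = np.arange(1, 13)
cum_defects = np.array([12, 22, 30, 37, 42, 46, 49, 52, 54, 55, 56, 57])

def goel_okumoto(t, a, b):
    """Mean value function of the Goel-Okumoto NHPP SRGM:
    expected cumulative defects detected by time t, where
    a = eventual total defects and b = per-defect detection rate."""
    return a * (1.0 - np.exp(-b * t))

# Fit a and b to the observed growth curve.
(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_defects,
                              p0=[60.0, 0.1], maxfev=10000)

# Suppose field experience later reveals 80 total defects (hypothetical);
# if deployment differs from testing, as the paper argues for widely used
# products, the absolute relative error (ARE) of the estimate can be large.
actual_total = 80.0
are = abs(a_hat - actual_total) / actual_total
print(f"estimated total defects: {a_hat:.1f}, ARE: {are:.0%}")
```

Because the test data flatten out near 57 defects while the assumed field total is 80, the fitted asymptote underestimates the actual count, illustrating how a testing environment that fails to approximate deployment can drive the kind of error the paper reports.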