The Linear Software Reliability Model and Uniform Testing

Author: Martin Trachtenberg, RCA MSR, Bldg. 108-127, Moorestown, New Jersey 08057 USA

The Jelinski-Moranda, Shooman, and Musa software reliability models all predict that the error detection rate in a software system is a linear function of the number of detected errors. The basic difference among the models is that the error rates are expressed, respectively, in terms of calendar-time, manpower, and computer-time. The models are simple to use for estimating the number of errors still in the tested software. Published studies generally show that error rates during system testing correlate best with the Musa model, and progressively less with the Shooman and Jelinski-Moranda models. Simulation shows that, with respect to the number of detected errors, 1) testing the functions of a software system in a random or round-robin order gives linearly decaying system-error rates, 2) testing each function exhaustively one at a time gives flat system-error rates, 3) testing different functions at widely different frequencies gives exponentially decaying system-error rates, and 4) testing strategies which result in linearly decaying error rates tend to require the fewest tests to detect a given number of errors.
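The qualitative effect of test ordering on the system-error rate can be illustrated with a small Monte Carlo sketch. This is an assumed toy model, not the paper's actual simulation: each function hides a fixed number of errors, and one test of a function exposes one of its remaining errors with a fixed probability (`detect_prob`, `n_functions`, and the strategy names are all illustrative parameters).

```python
import random

def simulate_testing(strategy, n_functions=20, errors_per_function=5,
                     detect_prob=0.2, n_tests=3000, seed=1):
    """Toy simulation of test-ordering strategies (illustrative model only).

    Each of n_functions hides errors_per_function errors; a single test of
    a function detects one of its remaining errors with probability
    detect_prob. Returns the cumulative detected-error count per test.
    """
    rng = random.Random(seed)
    remaining = [errors_per_function] * n_functions
    cumulative = []
    detected = 0
    current = 0  # cursor for the "exhaustive" strategy
    for t in range(n_tests):
        if strategy == "random":
            f = rng.randrange(n_functions)
        elif strategy == "round_robin":
            f = t % n_functions
        elif strategy == "exhaustive":
            # test one function until its errors are exhausted, then move on
            while current < n_functions and remaining[current] == 0:
                current += 1
            if current == n_functions:      # nothing left to find
                cumulative.append(detected)
                continue
            f = current
        else:  # "skewed": some functions tested far more often than others
            f = min(int(rng.expovariate(0.5)), n_functions - 1)
        if remaining[f] > 0 and rng.random() < detect_prob:
            remaining[f] -= 1
            detected += 1
        cumulative.append(detected)
    return cumulative
```

Differencing the cumulative counts (and smoothing over a window) gives the per-test system-error rate: under this toy model the exhaustive strategy holds the rate roughly flat while errors remain, random and round-robin orders let functions deplete in parallel so the rate decays steadily, and the skewed strategy leaves a long tail of errors in rarely tested functions, echoing the paper's linear/flat/exponential contrast.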

Published in: IEEE Transactions on Reliability (Volume: R-34, Issue: 1)