An experiment to assess cost-benefits of inspection meetings and their alternatives: a pilot study

4 Author(s)
McCarthy, P.; Porter, A.; Siy, H.; Votta, L.G., Jr. (Dept. of Comput. Sci., Maryland Univ., College Park, MD, USA)

We hypothesize that inspection meetings are far less effective than many people believe and that meetingless inspections are equally effective. However, two of our previous industrial case studies contradict each other on this issue. Therefore, we are conducting a multi-trial, controlled experiment to assess the benefits of inspection meetings and to evaluate alternative procedures. The experiment manipulates four independent variables: the inspection method used (two methods involve meetings, one does not); the requirements specification to be inspected (there are two); the inspection round (each team participates in two inspections); and the presentation order (either specification can be inspected first). For each experiment we measure three dependent variables: the individual fault detection rate; the team fault detection rate; and the percentage of faults first discovered after the initial inspection phase (in which reviewers individually analyze the document). So far we have completed one run of the experiment with 21 graduate students in computer science at the University of Maryland as subjects, but we do not yet have enough data points to draw definite conclusions. Rather than presenting preliminary conclusions, we describe the experiment's design and the provocative hypotheses we are evaluating. We summarize our observations from the experiment's initial run, and discuss how we are using these observations to verify our data collection instruments and to refine future experimental runs.
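The three dependent variables above can be made concrete with a small sketch. The data below are purely hypothetical (the reviewer names, seeded-fault count, and findings are illustrative assumptions, not results from the study); the functions show one plausible way the rates might be computed from per-reviewer fault sets.

```python
# Hypothetical inspection data: faults are numbered 1..TOTAL_FAULTS.
TOTAL_FAULTS = 20  # assumed number of seeded faults in the specification

# Faults each reviewer found during the individual analysis phase (hypothetical).
individual_findings = {
    "reviewer_a": {1, 3, 5, 7},
    "reviewer_b": {2, 3, 8},
    "reviewer_c": {3, 5, 9, 11},
}

# Faults first discovered after the individual phase, e.g. at a meeting (hypothetical).
post_phase_findings = {4, 12}

def individual_rate(found, total=TOTAL_FAULTS):
    """Individual fault detection rate: fraction of all faults one reviewer found."""
    return len(found) / total

def team_rate(findings, extra, total=TOTAL_FAULTS):
    """Team fault detection rate: fraction of faults found by any source."""
    team_found = set().union(*findings.values()) | extra
    return len(team_found) / total

def post_phase_pct(findings, extra):
    """Percentage of the team's detected faults first found after individual analysis."""
    individually_found = set().union(*findings.values())
    team_found = individually_found | extra
    return 100.0 * len(extra - individually_found) / len(team_found)
```

With these numbers, the three reviewers jointly find 8 faults individually and 2 more afterward, so the team rate is 10/20 and 20% of the team's faults surface only after the individual phase.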

Published in:

Proceedings of the 3rd International Software Metrics Symposium, 1996

Date of Conference:

25-26 Mar 1996