Proceedings of the Fifth International Software Metrics Symposium (Metrics 1998)

Date: 20-21 Nov. 1998

Displaying results 1-25 of 34
  • Proceedings Fifth International Software Metrics Symposium. Metrics (Cat. No.98TB100262)

    Publication Year: 1998
  • Author index

    Publication Year: 1998, Page(s): 276
  • A metric suite for a team PSP

    Publication Year: 1998, Page(s): 89-92
    Cited by: Papers (2)

    The PSP (Personal Software Process) defined by Watts Humphrey (1997) is based on the definition of a personal process and its monitoring and improvement through a set of metrics. The process and the related metrics are designed to be used by a single person. We have modified the PSP so that it can be used in a consistent way by a team, introducing the personal and the team levels and defining their interactions. This paper presents the team PSP, focusing on its metrics and supporting tool.

  • Collecting metrics for CORBA-based distributed systems

    Publication Year: 1998, Page(s): 11-22
    Cited by: Patents (8)

    The Common Object Request Broker Architecture (CORBA) supports the creation of distributed systems that cross processor, language and paradigm boundaries. These systems can be large and complex entities that consume considerable resources in their creation and execution. Measurement of the characteristics of software systems is an important area of study in general and of particular interest for distributed systems. The work presented in this paper describes a specific technique for instrumenting components in a distributed system. The technique constructs a wrapper around the component being measured. Interactions with the ORB and other components are monitored and summarized. Each wrapper mimics the interface of the component that it is wrapping so that the remaining objects in the system do not need modification. Two approaches to wrapping the component are presented and contrasted. The result is an efficient and modular technique that can quickly be applied to a component.

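    A minimal sketch of the wrapping idea, assuming a plain Python object rather than a CORBA/IDL component (the class name and summary format are illustrative, not from the paper): the wrapper mimics the wrapped component's interface and summarizes call counts and elapsed time, so other objects in the system need no modification.

        import time
        from collections import defaultdict

        class MeasuringWrapper:
            """Proxy that mimics a component's interface and records its interactions."""

            def __init__(self, component):
                self._component = component
                self._calls = defaultdict(int)      # method name -> invocation count
                self._elapsed = defaultdict(float)  # method name -> total seconds spent

            def __getattr__(self, name):
                target = getattr(self._component, name)
                if not callable(target):
                    return target

                def instrumented(*args, **kwargs):
                    start = time.perf_counter()
                    try:
                        return target(*args, **kwargs)
                    finally:
                        self._calls[name] += 1
                        self._elapsed[name] += time.perf_counter() - start

                return instrumented

            def summary(self):
                # {method: (calls, total seconds)} for reporting per wrapped component
                return {m: (self._calls[m], self._elapsed[m]) for m in self._calls}

        # Usage: wrapped = MeasuringWrapper(real_component); callers use `wrapped`
        # exactly as before, and wrapped.summary() yields the interaction summary.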
  • Inferring change effort from configuration management databases

    Publication Year: 1998, Page(s): 267-273
    Cited by: Papers (18) | Patents (1)

    In this paper we describe a methodology and algorithm for historical analysis of the effort necessary for developers to make changes to software. The algorithm identifies factors which have historically increased the difficulty of changes. This methodology has implications for research into cost drivers. As an example of a research finding, we find that a system under study was “decaying” in that changes grew more difficult to implement at a rate of 20% per year. We also quantify the difference in costs between changes that fix faults and additions of new functionality: fixes require 80% more effort after accounting for size. Since our methodology adds no overhead to the development process, we also envision it being used as a project management tool: for example, developers can identify code modules which have grown more difficult to change than previously, and can match changes to developers with appropriate expertise. The methodology uses data from a change management system, supported by monthly time sheet data if available. The method's performance does not degrade much when the quality of the time sheet data is limited. We validate our results using a survey of the developers under study: the change efforts resulting from the algorithm match the developers' opinions. Our methodology includes a technique based on the jackknife to determine factors that contribute significantly to change effort.

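    The jackknife step mentioned at the end of the abstract can be sketched as follows; the regression form, the variable layout, and the names are assumptions for illustration, not the paper's actual model. Each change is left out in turn and the coefficient of interest re-estimated; a factor whose estimate is large relative to its jackknife standard error contributes significantly to change effort.

        import numpy as np

        def jackknife_coefficient(X, y, factor_index):
            # X: (changes x factors) design matrix, y: (log) effort per change.
            # Returns the full-sample least-squares estimate of one coefficient
            # and its jackknife (leave-one-change-out) standard error.
            n = len(y)
            full, *_ = np.linalg.lstsq(X, y, rcond=None)
            loo = []
            for i in range(n):
                keep = np.arange(n) != i
                coef, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
                loo.append(coef[factor_index])
            loo = np.array(loo)
            se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
            return full[factor_index], se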
  • Cost implications of interrater agreement for software process assessments

    Publication Year: 1998, Page(s): 38-51
    Cited by: Papers (2)

    Much empirical research has been done on evaluating and modeling interrater agreement in software process assessments. Interrater agreement is the extent to which assessors agree in their ratings of software process capabilities when presented with the same evidence and performing their ratings independently. This line of research was based on the premise that lack of interrater agreement can lead to erroneous decisions from process assessment scores. However, thus far we do not know the impact of interrater agreement on the cost of assessments. We report on a study that evaluates the relationship between interrater agreement and the cost of the consolidation activity in assessments. The study was conducted in the context of two assessments using the emerging international standard ISO/IEC 15504. Our results indicate that for organizational processes, the relationship is strong and in the expected direction. For project level processes no relationship was found. These results indicate that for assessments that include organizational processes in their scope, ensuring high interrater agreement could lead to a reduction in their costs.

  • Metric selection for effort assessment in multimedia systems development

    Publication Year: 1998, Page(s): 97-100
    Cited by: Papers (2)

    This paper describes ongoing research directed at formulating a set of appropriate metrics for assessing effort requirements for multimedia systems development. An exploratory investigation of the factors that are considered by industry to be influential in determining development effort is presented. This work incorporates the use of a GQM framework to assist the metric selection process from a literature basis, followed by an industry questionnaire. The results provide some useful insights into contemporary project management practices in relation to multimedia systems.

  • An integrated process and product model

    Publication Year: 1998, Page(s): 224-234
    Cited by: Papers (4)

    The relationship between product quality and process capability and maturity has been recognized as a major issue in software engineering based on the premise that improvements in process will lead to higher quality products. To this end, we have been investigating an important facet of process capability, stability, as defined and evaluated by trend, change, and shape metrics, across releases and within a release. Our integration of product and process measurement and evaluation serves the dual purpose of using metrics to assess and predict reliability and risk and concurrently using these metrics for process stability evaluation. We use the NASA Space Shuttle flight software to illustrate our approach.

  • On evidence supporting the FEAST hypothesis and the laws of software evolution

    Publication Year: 1998, Page(s): 84-88
    Cited by: Papers (10)

    As part of its study of the impact of feedback in the global software process on software product evolution, the FEAST/1 project has examined metric data relating to various systems in different application areas. High level similarities in the growth trends of the systems studied support the FEAST hypothesis. Inter alia, the results provide evidence compatible with the laws of software evolution, subject only to minor adjustments of the latter.

  • Business impact, benefit, and cost of applying GQM in industry: an in-depth, long-term investigation at Schlumberger RPS

    Publication Year: 1998, Page(s): 93-96
    Cited by: Papers (4)

    Many success stories have been reported on specific effects of measurement, but little is known about the multiple interactions of measurement programmes with the business environment of a software organisation. This paper summarises industrial experiences with the Goal/Question/Metric (GQM) approach to software engineering measurement. They are based on long-term observation and additional detailed investigations at Schlumberger RPS. The paper reports the business impact of GQM in terms of identified benefit, cost models, and factors for successful application of GQM.

  • Getting a handle on the fault injection process: validation of measurement tools

    Publication Year: 1998, Page(s): 133-141
    Cited by: Papers (2)

    In any manufacturing environment, the fault injection rate might be considered one of the most meaningful criteria for evaluating the goodness of the development process. In our field, estimates of such a rate are often oversimplified or misunderstood, generating unrealistic expectations about their predictive power. The computation of fault injection rates in software development requires accurate and consistent measurement, which translates into demanding parallel efforts for the development organization. This paper presents the techniques and mechanisms that can be implemented in a software development organization to provide a consistent method of anticipating fault content and structural evolution across multiple projects over time. The initial estimates of fault insertion rates can serve as a baseline against which future projects can be compared to determine whether progress is being made in reducing the fault insertion rate, and to identify those development techniques that seem to provide the greatest reduction.

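    A hedged illustration of the kind of baseline comparison the abstract describes; the measure below (faults attributed to an interval per thousand changed lines) is an assumed simplification, not the paper's definition of fault content or structural evolution.

        def fault_insertion_rate(faults_found, lines_changed):
            # Faults per thousand changed lines over a development interval.
            return 1000.0 * faults_found / lines_changed

        # Compare a current project against a baseline from earlier projects (made-up numbers).
        baseline = fault_insertion_rate(faults_found=120, lines_changed=48000)
        current = fault_insertion_rate(faults_found=35, lines_changed=22000)
        print(f"baseline {baseline:.2f} vs current {current:.2f} faults per KLOC changed")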
  • Directing software development projects with product metrics

    Publication Year: 1998, Page(s): 193-204
    Cited by: Papers (1)

    Software development managers are responsible for the timely completion of projects. They try to effectively focus team effort on the appropriate activities to complete projects on schedule and with high quality. In order to judge the status of projects so that teams can react accordingly, managers need project measurements which consist of both product and process metrics. We show the effective application of a small set of metrics, identified in a series of empirical studies, to assist managers in tracking and controlling software projects. In our studies, development team effort is directed by the metric curves which are driven to conform to “signatures of confidence” derived from successful projects with similar characteristics. Projects have been successfully managed by using this technique and ongoing studies in industry are showing positive results.

  • Coupling metrics for object-oriented design

    Publication Year: 1998, Page(s): 150-157
    Cited by: Papers (13)

    We describe and evaluate some recently innovated coupling metrics for object-oriented (OO) design. The Coupling Between Objects (CBO) metric of Chidamber and Kemerer (1991) is evaluated empirically using five OO systems, and compared with an alternative OO design metric called NAS, which measures the number of associations between a class and its peers. The NAS metric is directly collectible from design documents such as the Object Model of OMT. Results from all systems studied indicate a strong relationship between CBO and NAS, suggesting that they are not orthogonal. We hypothesised that coupling would be related to understandability, the number of errors and error density. No relationships were found for any of the systems between class understandability and coupling. Only limited evidence was found to support our hypothesis linking increased coupling to increased error density. The work described in this paper is part of the `Metrics for OO Programming Systems' (MOOPS) project, which aims to evaluate existing OO metrics, and to innovate and evaluate new OO analysis and design metrics, aimed specifically at the early stages of development.

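    A small sketch of how a CBO-style count could be derived from design-level data, assuming a simple map from each class to the classes it references; the class names and the map itself are hypothetical, and the paper's data collection from five OO systems is more involved.

        from collections import defaultdict

        def cbo(uses):
            # `uses` maps each class to the set of other classes whose methods or
            # attributes it references. CBO counts coupled classes in either
            # direction, excluding the class itself.
            coupled = defaultdict(set)
            for cls, referenced in uses.items():
                for other in referenced:
                    if other != cls:
                        coupled[cls].add(other)
                        coupled[other].add(cls)
            return {cls: len(peers) for cls, peers in coupled.items()}

        uses = {
            "Order": {"Customer", "Product"},
            "Customer": {"Order"},
            "Product": set(),
            "Invoice": {"Order", "Customer"},
        }
        print(cbo(uses))  # counts: Order 3, Customer 2, Product 1, Invoice 2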
  • Definition and experimental evaluation of function points for object-oriented systems

    Publication Year: 1998, Page(s): 167-178
    Cited by: Papers (13)

    We present a method for estimating the size, and consequently effort and duration, of object oriented software development projects. Different estimates may be made in different phases of the development process, according to the available information. We define an adaptation of traditional function points, called Object Oriented Function Points, to enable the measurement of object oriented analysis and design specifications. Tools have been constructed to automate the counting method. The novel aspect of our method is its flexibility. An organisation can experiment with different counting policies, to find the most accurate predictors of size, effort, etc. in its environment. The method and preliminary results of its application in an industrial environment are presented and discussed.

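    A hedged sketch of the kind of configurable counting policy the abstract describes; the weights, the complexity thresholds, and the inputs per class are placeholders an organisation would calibrate, not values defined in the paper.

        DEFAULT_WEIGHTS = {"simple": 7, "average": 10, "complex": 15}

        def classify(n_attributes, n_associations):
            # Toy complexity classification of a class from its design model.
            score = n_attributes + 2 * n_associations
            if score <= 5:
                return "simple"
            if score <= 12:
                return "average"
            return "complex"

        def object_oriented_function_points(classes, weights=DEFAULT_WEIGHTS):
            # `classes` is an iterable of (n_attributes, n_associations) pairs
            # taken from an analysis or design specification.
            return sum(weights[classify(a, s)] for a, s in classes)

        print(object_oriented_function_points([(3, 1), (8, 4), (2, 0)]))  # 7 + 15 + 7 = 29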
  • Reliability modeling of freely-available Internet-distributed software

    Publication Year: 1998, Page(s): 101-104
    Cited by: Papers (1)

    A wealth of software is freely available on the Internet; however, developers are wary of its reuse because it is assumed to be of poor quality. Reliability is one way that software quality can be measured, but it requires metrics data that are typically not maintained for freely-available software. A technique is presented which allows reliability data to be extracted from available data, and is validated by showing that the data can be used to fit a logarithmic reliability model. By modeling the reliability, estimates of overall quality, remaining faults, and release times can be predicted for the software.

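    The abstract does not spell out the model form; assuming the commonly used Musa-Okumoto logarithmic Poisson model and made-up failure counts reconstructed from, say, bug-report timestamps, a fit might look like this.

        import numpy as np
        from scipy.optimize import curve_fit

        def mu(t, lam0, theta):
            # Musa-Okumoto logarithmic mean-value function: expected cumulative
            # failures after usage time t, with initial intensity lam0 and decay theta.
            return np.log(lam0 * theta * t + 1.0) / theta

        # Hypothetical data: usage time (days since release) and cumulative failures.
        t = np.array([5, 10, 20, 40, 80, 160], dtype=float)
        failures = np.array([3, 5, 8, 11, 14, 16], dtype=float)

        (lam0, theta), _ = curve_fit(mu, t, failures, p0=(1.0, 0.1))
        print(f"fitted lam0={lam0:.3f}, theta={theta:.3f}")
        print(f"expected failures by t=320: {mu(320, lam0, theta):.1f}")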
  • The predictive validity criterion for evaluating binary classifiers

    Publication Year: 1998, Page(s): 235-244
    Cited by: Papers (2)

    The development of binary classifiers to identify highly error-prone or high maintenance cost components is increasing in the software engineering quality modeling literature and in practice. One approach for evaluating these classifiers is to determine their ability to predict the classes of unseen cases, i.e., predictive validity. A chi-square statistical test has been frequently used to evaluate predictive validity. We illustrate that this test has a number of disadvantages. The disadvantages include a difficulty in using the results of the test to determine whether a classifier is a good predictor, demonstrated through a number of examples, and a rather conservative Type I error rate, demonstrated through a Monte Carlo simulation. We present an alternative test that has been used in the social sciences for evaluating agreement with a “gold standard”. The use of this alternative test is illustrated in practice by developing a classification model to predict maintenance effort for an object oriented system, and evaluating its predictive validity on data from a second object-oriented system in the same environment.

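    The alternative test is not named in the abstract; Cohen's kappa is a common agreement-with-a-gold-standard statistic and is used below only to illustrate evaluating a binary classifier beyond a chi-square test (the confusion-matrix counts are invented).

        def cohens_kappa(tp, fp, fn, tn):
            # Chance-corrected agreement between predicted and actual classes.
            n = tp + fp + fn + tn
            observed = (tp + tn) / n
            expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
            return (observed - expected) / (1.0 - expected)

        # Binary "high maintenance cost" predictions against actual outcomes.
        print(round(cohens_kappa(tp=18, fp=7, fn=5, tn=70), 3))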
  • Applications of measurement in product-focused process improvement: a comparative industrial case study

    Publication Year: 1998, Page(s): 105-108
    Cited by: Papers (2)

    In ESPRIT project PROFES, measurement according to the Goal/Question/Metric (GQM) approach is conducted in industrial software projects at Drager Medical Technology, Ericsson Finland, and Schlumberger Retail Petroleum Systems. A comparative case study investigates three different ways of applying GQM in product-focused process improvement: long-term GQM measurement programmes at the application sites to better understand and improve software products and processes; GQM-based construction and validation of product/process dependency models, which describe the process impact on software quality; and cost/benefit investigation of the PROFES improvement methodology using GQM for (meta-) analysis of improvement programmes. This paper outlines how GQM is applied for these three purposes.

  • A comprehensive empirical validation of design measures for object-oriented systems

    Publication Year: 1998, Page(s): 246-257
    Cited by: Papers (21)

    This paper aims at empirically exploring the relationships between existing object-oriented coupling, cohesion, and inheritance measures and the probability of fault detection in system classes during testing. The underlying goal of such a study is to better understand the relationship between existing design measurement in OO systems and the quality of the software developed. Results show that many of the measures capture similar dimensions in the data set, thus reflecting the fact that many of them are based on similar principles and hypotheses. Besides the size of classes, the frequency of method invocations and the depth of inheritance hierarchies seem to be the main driving factors of fault-proneness.

  • A cohesion measure for classes in object-oriented systems

    Publication Year: 1998, Page(s): 158-166

    Classes are the fundamental concepts in the object-oriented paradigm. They are the basic units of object-oriented programs and serve as the units of encapsulation, which promotes their modifiability and reusability. In order to take full advantage of the desirable features provided by classes, such as data abstraction and encapsulation, classes should be designed to have good quality. Because object-oriented systems are developed by heavily reusing existing classes, classes of poor quality can be a serious obstacle to the development of systems. We define a new cohesion measure for assessing the quality of classes. Our approach is based on observations about the salient nature of classes that have not been considered in previous approaches. A Most Cohesive Component (MCC) is introduced as the most cohesive form of a class. We believe that the cohesion of a class depends on the connectivity of the class itself and of its constituent components. We propose the connectivity factor to indicate the degree of connectivity among the members of a class, and the structure factor to take into account the cohesiveness of its constituent components. The cohesion of a class is then defined as the product of the connectivity factor and the structure factor. This cohesion measure indicates how closely a class approaches MCC; the more closely a class approaches MCC, the greater its cohesion.

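    A minimal sketch of the product form stated in the abstract (cohesion = connectivity factor x structure factor); taking the structure factor to be the mean cohesion of the constituent components is an assumption for illustration, not the paper's definition.

        def class_cohesion(connectivity_factor, component_cohesions):
            # Cohesion of a class as connectivity factor times structure factor,
            # with the structure factor approximated by the mean component cohesion.
            structure_factor = sum(component_cohesions) / len(component_cohesions)
            return connectivity_factor * structure_factor

        # A class whose members are well connected (0.8) but whose constituent
        # components have mixed cohesion.
        print(class_cohesion(0.8, [1.0, 0.6, 0.7]))  # 0.8 * 0.7667 ~= 0.613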
  • A methodology for evaluating predictive metrics

    Publication Year: 1998

    For over thirty years, software engineers have been interested in the ability to accurately measure characteristics of software and its production which could lead to improvements in both. In that time, a large number of metrics have been proposed, some with attempts at empirical validation of their effectiveness. Unfortunately, many if not most of these laudable efforts at empirical validation have foundered on a lack of knowledge about the appropriate methods to use. For example, a central goal in software metrics is the prediction of software characteristics based on other metrics of the software or its production process. This prediction problem is a quintessentially statistical one, but the lack of statistical training in the typical crowded engineering curriculum leaves most engineers uncertain about how to proceed. The result has been many well-intentioned but poorly executed empirical studies. This paper addresses this problem by providing a simple methodology for the predictive evaluation of metrics.

  • The internal consistency of the ISO/IEC 15504 software process capability scale

    Publication Year: 1998, Page(s): 72-81
    Cited by: Papers (2)

    ISO/IEC 15504 is an emerging international standard for software process assessment. It has undergone a major change in the rating scale used to measure the capability of processes. The objective of this paper is to present a follow-up evaluation of the internal consistency of this process capability scale. Internal consistency is a form of reliability of a subjective measurement instrument. A previous study evaluated the internal consistency of the first version of the ISO/IEC 15504 document set (also known as SPICE version 1). In the current study we evaluate the internal consistency of the second version (also known as ISO/IEC PDTR 15504). Our results indicate that the internal consistency of the capability dimension did not deteriorate, and that it is still sufficiently high for practical purposes. Furthermore, we identify that the capability scale has two dimensions that we termed “Process Implementation” and “Quantitative Process Management”.

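    Internal consistency of a multi-item rating scale is commonly quantified with Cronbach's alpha; the abstract does not give the computation, so the following is an illustrative sketch with invented capability ratings (rows are assessed processes, columns are rated attributes).

        import numpy as np

        def cronbach_alpha(ratings):
            # ratings: (assessments x items) array of capability ratings.
            ratings = np.asarray(ratings, dtype=float)
            k = ratings.shape[1]
            item_vars = ratings.var(axis=0, ddof=1).sum()
            total_var = ratings.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1.0 - item_vars / total_var)

        print(round(cronbach_alpha([[3, 3, 2, 3],
                                    [4, 4, 3, 4],
                                    [2, 2, 2, 1],
                                    [3, 4, 3, 3],
                                    [1, 2, 1, 1]]), 2))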
  • Using a Personal Software Process(SM) to improve performance

    Publication Year: 1998, Page(s): 61-71
    Cited by: Papers (1)

    The use of software measurement and process definition by individual engineers is embodied in the Personal Software Process (PSP), a collection of techniques and guidelines for individual software engineers to use in building software. This paper presents a brief overview of the PSP and summarizes data collected by engineers in order to illustrate the efficacy of the PSP. Implications for the use of methods associated with statistical process control are discussed, and the power of rigorous data collection by individual software engineers is highlighted through discussion of empirical data.

  • Applying software metrics to formal specifications: a cognitive approach

    Publication Year: 1998, Page(s): 216-223
    Cited by: Papers (5)

    It is generally accepted that failure to reason correctly during the early stages of software development causes developers to make incorrect decisions which can lead to the introduction of faults or anomalies in systems. Most key development decisions are usually made at the early system specification stage of a software project, and developers do not receive feedback on their accuracy until near its completion. Software metrics, however, are generally aimed at the coding or testing stages of development, when the repercussions of erroneous work have already been incurred. This paper presents a tentative model for predicting those parts of formal specifications which are most likely to admit erroneous inferences, in order that potential sources of human error may be reduced. The empirical data populating the model was generated during a series of cognitive experiments aimed at identifying linguistic properties of the Z notation which are prone to admit non-logical reasoning errors and biases in trained users.

  • Experimenting with error abstraction in requirements documents

    Publication Year: 1998, Page(s): 114-121
    Cited by: Papers (3)

    In previous experiments we showed that the Perspective-Based Reading (PBR) family of defect detection techniques was effective at detecting faults in requirements documents in some contexts. Experiences from these studies indicate that requirements faults are very difficult to define, classify and quantify. In order to address these difficulties, we present an empirical study whose main purpose is to investigate whether defect detection in requirements documents can be improved by focusing on the errors (i.e., underlying human misconceptions) in a document rather than the individual faults that they cause. In the context of a controlled experiment, we assess both benefits and costs of the process of abstracting errors from faults in requirements documents.

  • A metrics framework for multimedia creation

    Publication Year: 1998, Page(s): 144-147
    Cited by: Papers (3)

    The quality of multimedia systems can be defined using a hierarchical structure. Compared to software, more emphasis is needed towards content and human issues. Some existing concepts such as usability and reliability apply to content as well as functionality. For multimedia systems, a distinction is required between primary use and the indirect rewards that promote extended use of the system. Attention is also needed to choices of terminology, within the domain of quality, to suit the very diverse community of multimedia specialists.
