IEEE Transactions on Software Engineering

Issue 6 • Nov.-Dec. 2011

Contents (15 items)
  • [Front cover]

    Page(s): c1
  • [Inside front cover]

    Page(s): c2
  • A Dynamic Slicing Technique for UML Architectural Models

    Page(s): 737 - 771

    This paper proposes a technique for dynamic slicing of UML architectural models. The presence of related information in diverse model parts (or fragments) makes dynamic slicing of Unified Modeling Language (UML) models a complex problem. We first extract all relevant information from a UML model specifying a software architecture into an intermediate representation, which we call a Model Dependency Graph (MDG). For a given slicing criterion, our slicing algorithm traverses the constructed MDG to identify the model parts that are directly or indirectly affected during the execution of a specified scenario. One novelty of our approach is that it computes the dynamic slice from the structural and behavioral (interaction) UML models together, determining the implicit interdependencies among model elements distributed across model views, rather than processing each UML model independently. We also briefly discuss a prototype tool named Archlice, which we have developed to implement our algorithm.

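    The core of such a slice computation is graph reachability. The sketch below is a minimal Python illustration of that idea, not Archlice's algorithm; the MDG encoding and element names are invented for the example.

        # Hypothetical sketch: a dynamic slice as reachability over a
        # dependency graph. Structure and names are illustrative only.
        from collections import deque

        def dynamic_slice(mdg, criterion):
            """Return every element the slicing criterion transitively depends on.

            mdg: dict mapping each model element to the elements it depends on.
            """
            slice_set, worklist = set(), deque([criterion])
            while worklist:
                node = worklist.popleft()
                if node not in slice_set:
                    slice_set.add(node)
                    worklist.extend(mdg.get(node, ()))
            return slice_set

        # Toy MDG for one executed scenario (invented elements).
        mdg = {
            "msg:withdraw": ["op:Account.debit", "attr:Account.balance"],
            "op:Account.debit": ["attr:Account.balance"],
            "attr:Account.balance": [],
        }
        print(dynamic_slice(mdg, "msg:withdraw"))
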
  • Evaluating Complexity, Code Churn, and Developer Activity Metrics as Indicators of Software Vulnerabilities

    Page(s): 772 - 787

    Security inspection and testing require experts in security who think like an attacker. Security experts need to know code locations on which to focus their testing and inspection efforts. Since vulnerabilities are rare occurrences, locating vulnerable code locations can be a challenging task. We investigated whether software metrics obtained from source code and development history are discriminative and predictive of vulnerable code locations. If so, security experts can use this prediction to prioritize security inspection and testing efforts. The metrics we investigated fall into three categories: complexity, code churn, and developer activity metrics. We performed two empirical case studies on large, widely used open-source projects: the Mozilla Firefox web browser and the Red Hat Enterprise Linux kernel. The results indicate that 24 of the 28 metrics collected are discriminative of vulnerabilities for both projects. The models using all three types of metrics together predicted over 80 percent of the known vulnerable files with less than 25 percent false positives for both projects. Compared to a random selection of files for inspection and testing, these models would have reduced the number of files and the number of lines of code to inspect or test by over 71 and 28 percent, respectively, for both projects.

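    As a rough sketch of how such metrics can drive prioritization (not the paper's exact models), one can train an off-the-shelf classifier on per-file metric vectors and rank files by predicted risk; every metric value and file name below is invented.

        # Illustrative metric-based vulnerability prediction. Rows hold
        # hypothetical per-file metrics: [complexity, code churn, developers].
        from sklearn.linear_model import LogisticRegression

        X_train = [
            [12, 340, 7],   # file later found vulnerable
            [45, 990, 15],  # file later found vulnerable
            [3, 20, 1],     # clean file
            [5, 55, 2],     # clean file
        ]
        y_train = [1, 1, 0, 0]  # 1 = vulnerability reported for the file

        model = LogisticRegression().fit(X_train, y_train)

        # Rank unseen files so inspection effort goes to the riskiest first.
        candidates = {"parser.c": [30, 410, 9], "util.c": [4, 12, 1]}
        ranked = sorted(candidates,
                        key=lambda f: model.predict_proba([candidates[f]])[0][1],
                        reverse=True)
        print(ranked)
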
  • Measuring the Discriminative Power of Object-Oriented Class Cohesion Metrics

    Page(s): 788 - 804

    Several object-oriented cohesion metrics have been proposed in the literature. These metrics aim to measure the relationship between class members, namely, methods and attributes. Different metrics use different models to represent the connectivity pattern of cohesive interactions (CPCI) between class members. Most of these metrics are normalized to allow for easy comparison of the cohesion of different classes. However, in some cases, these metrics obtain the same cohesion values for different classes that have the same number of methods and attributes but different CPCIs. This leads to incorrectly considering the classes to be the same in terms of cohesion, even though their CPCIs clearly indicate that the degrees of cohesion are different. We refer to this as a lack of discrimination anomaly (LDA) problem. In this paper, we list and discuss cases in which the LDA problem exists, as expressed through the use of 16 cohesion metrics. In addition, we empirically study the frequent occurrence of the LDA problem when the considered metrics are applied to classes in five open source Java systems. Finally, we propose a metric and a simulation-based methodology to measure the discriminative power of cohesion metrics. The discrimination metric measures the probability that a cohesion metric will produce distinct cohesion values for classes with the same number of attributes and methods but different CPCIs. A highly discriminating cohesion metric is more desirable because it exhibits a lower chance of incorrectly considering classes to be cohesively equal when they have different CPCIs.

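    The LDA problem is easy to reproduce with a toy normalized metric. In the sketch below (an illustrative metric, not one of the paper's 16), two classes with identical method and attribute counts but different CPCIs receive the same cohesion value; a simulation then estimates discriminative power in the spirit of the proposed methodology.

        import random

        def cohesion(access, num_attrs):
            """Toy normalized cohesion: fraction of possible (method, attribute)
            links that are present. access: one attribute set per method."""
            return sum(len(a) for a in access) / (len(access) * num_attrs)

        # Both classes have 3 methods and 2 attributes {0, 1}, different CPCIs.
        class_a = [{0}, {0}, {1}]
        class_b = [{0, 1}, {0}, set()]
        print(cohesion(class_a, 2), cohesion(class_b, 2))  # both 0.5: no discrimination

        def random_cpci(num_methods, num_attrs):
            return tuple(frozenset(a for a in range(num_attrs) if random.random() < 0.5)
                         for _ in range(num_methods))

        # Estimated probability that distinct random CPCIs get distinct values.
        samples = [random_cpci(3, 2) for _ in range(10_000)]
        pairs = [(x, y) for x, y in zip(samples[::2], samples[1::2]) if x != y]
        print(sum(cohesion(x, 2) != cohesion(y, 2) for x, y in pairs) / len(pairs))
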
  • Preventing Temporal Violations in Scientific Workflows: Where and How

    Page(s): 805 - 825

    Due to the dynamic nature of the underlying high-performance infrastructures for scientific workflows, such as grid and cloud computing, failures to complete important scientific activities on time, namely, temporal violations, often occur. Unlike conventional exception handling of functional failures, nonfunctional QoS failures such as temporal violations cannot be passively recovered; they must be proactively prevented by dynamically monitoring and adjusting the temporal consistency states of scientific workflows at runtime. However, current research on workflow temporal verification focuses mainly on runtime monitoring, while the adjustment strategy for temporal consistency states, namely, temporal adjustment, has so far not been thoroughly investigated. This paper systematically analyzes and addresses two fundamental problems of temporal adjustment: where and how. Specifically, a novel minimum-probability-time-redundancy-based necessary and sufficient adjustment point selection strategy is proposed to address the problem of where, and an innovative genetic-algorithm-based effective and efficient local rescheduling strategy is proposed to tackle the problem of how. The results of large-scale simulation experiments with generic workflows and specific real-world applications demonstrate that our temporal adjustment strategy can effectively prevent violations of both local and global temporal constraints in scientific workflows.

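    As a loose illustration of the monitoring side (not the paper's minimum probability time redundancy strategy), an adjustment point can be flagged whenever the estimated probability of meeting a deadline drops below a threshold; the normal approximation and all numbers below are assumptions.

        # Flag an adjustment point when temporal consistency becomes unlikely.
        from statistics import NormalDist

        def needs_adjustment(elapsed, remaining_means, remaining_vars,
                             deadline, threshold=0.9):
            mu = elapsed + sum(remaining_means)
            sigma = sum(remaining_vars) ** 0.5
            # P(completion time <= deadline), treating the remaining workload
            # as approximately normal (a sum of many activity durations).
            return NormalDist(mu, sigma).cdf(deadline) < threshold

        # After activity 3 of 5: 40s elapsed, two activities left (toy numbers).
        print(needs_adjustment(elapsed=40, remaining_means=[20, 25],
                               remaining_vars=[16, 25], deadline=100))
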
  • Putting Preemptive Time Petri Nets to Work in a V-Model SW Life Cycle

    Page(s): 826 - 844

    Preemptive Time Petri Nets (pTPNs) support modeling and analysis of concurrent timed SW components running under fixed-priority preemptive scheduling. The model is supported by a well-established theory based on symbolic state space analysis through Difference Bounds Matrix (DBM) zones, with specific contributions on compositional modularization, trace analysis, and efficient overapproximation and cleanup in the management of suspension deriving from preemptive behavior. In this paper, we devise and implement a framework that brings the theory to application. To this end, we cast the theory into an organic tailoring of design, coding, and testing activities within a V-Model SW life cycle, in accordance with the principles of regulatory standards applied to the construction of safety-critical SW components. To implement the toolchain subtended by the overall approach in a Model Driven Development (MDD) framework, we complement the theory of state space analysis with methods and techniques supporting semiformal specification, automated compilation into pTPN models and real-time code, measurement-based execution time estimation, test case selection and execution, and coverage evaluation.

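    DBM zones themselves are a standard construction: a matrix of bounds on clock differences, tightened into canonical form with a Floyd-Warshall pass. The sketch below shows the general technique in its simplest form (non-strict bounds only), independent of the pTPN-specific machinery.

        def canonical(dbm):
            """Tighten bounds so dbm[i][j] is the tightest implied bound on
            clock_i - clock_j (Floyd-Warshall over the constraint graph)."""
            n = len(dbm)
            for k in range(n):
                for i in range(n):
                    for j in range(n):
                        dbm[i][j] = min(dbm[i][j], dbm[i][k] + dbm[k][j])
            return dbm

        def is_empty(dbm):
            # A negative diagonal entry signals a negative cycle: no clock
            # valuation satisfies the constraints.
            return any(dbm[i][i] < 0 for i in range(len(dbm)))

        # Zone over one clock x with 2 <= x <= 5; index 0 is the zero clock.
        zone = [[0, -2],   # 0 - x <= -2, i.e., x >= 2
                [5,  0]]   # x - 0 <= 5,  i.e., x <= 5
        print(is_empty(canonical(zone)))  # False: the zone is satisfiable
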
  • Swarm Verification Techniques

    Page(s): 845 - 857

    The range of verification problems that can be solved with logic model checking tools has increased significantly in the last few decades. This increase in capability is based on algorithmic advances and new theoretical insights, but it has also benefited from the steady increase in processing speeds and main memory sizes on standard computers. The steady increase in processing speeds, though, ended when chip-makers started redirecting their efforts to the development of multicore systems. For the near-term future, we can anticipate the appearance of systems with large numbers of CPU cores, but without matching increases in clock speeds. We describe a model checking strategy that allows us to leverage this trend and to tackle significantly larger problem sizes than before.

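    The underlying idea is embarrassingly parallel: launch many small, independently diversified searches instead of one exhaustive search, and stop when any of them finds a violation. The sketch below illustrates the pattern with an invented randomized search; verify does not reflect any real model checker's interface.

        import random
        from concurrent.futures import ProcessPoolExecutor

        def verify(seed, max_steps=100_000):
            """Stand-in for one diversified search; returns a finding or None."""
            rng = random.Random(seed)
            state = 0
            for step in range(max_steps):
                state += rng.choice([1, 3, 7])  # seed-dependent successor choice
                if state == 250_000:            # stand-in for a property violation
                    return f"seed {seed}: error state after {step + 1} steps"
            return None

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                for result in pool.map(verify, range(32)):  # one search per seed
                    if result:
                        print(result)
                        break
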
  • Tuning Temporal Features within the Stochastic π-Calculus

    Page(s): 858 - 871

    The stochastic π-calculus is a formalism that has been used for modeling complex dynamical systems where the stochasticity and the delay of transitions are important features, as in the case of biochemical reactions. Commonly, durations of transitions within stochastic π-calculus models follow an exponential law. The underlying dynamics of such models are expressed in terms of continuous-time Markov chains, which can then be efficiently simulated and model-checked. However, the exponential law comes with high variance, making it difficult to model systems with accurate temporal constraints. In this paper, a technique for tuning temporal features within the stochastic π-calculus is presented. This method relies on the introduction of a stochasticity absorption factor: the exponential distribution is replaced with the Erlang distribution, a sum of exponential random variables. This paper presents a construction of the stochasticity absorption factor in the classical stochastic π-calculus with exponential rates. Tools for manipulating the stochasticity absorption factor and its link with timed intervals for firing transitions are also presented. Finally, model checking of the resulting models is supported by handling the stochasticity absorption factor in a translation from the stochastic π-calculus to the probabilistic model checker PRISM.

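    The variance reduction behind the stochasticity absorption factor is easy to check numerically: an Erlang delay built from k exponential phases, each with rate k·λ, keeps the mean at 1/λ while the variance shrinks to 1/(k·λ²). A small simulation with illustrative parameters:

        import random

        def erlang(k, lam, rng):
            # Sum of k exponential phases, each with rate k * lam.
            return sum(rng.expovariate(k * lam) for _ in range(k))

        rng = random.Random(1)
        lam = 2.0  # target mean delay: 1/lam = 0.5
        for k in (1, 5, 50):  # k = 1 is the plain exponential
            samples = [erlang(k, lam, rng) for _ in range(100_000)]
            mean = sum(samples) / len(samples)
            var = sum((s - mean) ** 2 for s in samples) / len(samples)
            print(f"k={k:3d}  mean={mean:.3f}  variance={var:.4f}")
        # Analytically: mean = 1/lam, variance = 1/(k * lam**2).
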
  • On the Distribution of Bugs in the Eclipse System

    Page(s): 872 - 877

    The distribution of bugs in software systems has been shown to satisfy the Pareto principle, and typically shows a power-law tail when analyzed as a rank-frequency plot. In a recent paper, Zhang showed that the Weibull cumulative distribution is a very good fit for the Alberg diagram of bugs built with experimental data. In this paper, we further discuss the subject from a statistical perspective, using as case studies five versions of Eclipse, to show how log-normal, Double-Pareto, and Yule-Simon distributions may fit the bug distribution at least as well as the Weibull distribution. In particular, we show how some of these alternative distributions provide both a superior fit to empirical data and a theoretical motivation to be used for modeling the bug generation process. While our results have been obtained on Eclipse, we believe that these models, in particular the Yule-Simon one, can generalize to other software systems.

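    Comparing such candidates typically comes down to maximum-likelihood fits scored by log-likelihood or an information criterion. The sketch below illustrates the pattern on synthetic per-module counts (not the Eclipse data) for two of the distributions:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        bug_counts = rng.lognormal(mean=1.0, sigma=1.2, size=500)  # stand-in data

        for name, dist in [("weibull", stats.weibull_min),
                           ("lognormal", stats.lognorm)]:
            params = dist.fit(bug_counts, floc=0)     # location pinned at 0
            loglik = np.sum(dist.logpdf(bug_counts, *params))
            print(f"{name:9s} log-likelihood = {loglik:.1f}")
        # The higher log-likelihood (or lower AIC) indicates the better fit.
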
  • IEEE Computer Society Magazines and Transactions available in ePUB format [advertisement]

    Page(s): 878
  • New Transactions Issue Alerts

    Page(s): 879
  • What's new in Transactions [advertisement]

    Page(s): 880
  • [Inside back cover]

    Page(s): c3
  • [Back cover]

    Page(s): c4

Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:

a) development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
b) assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurement and evaluation of various aspects of the process and product;
c) software project management, e.g., productivity factors, cost models, schedule and organizational issues, and standards;
d) tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues;
e) system issues, e.g., hardware-software trade-offs; and
f) state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.


Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org