IEEE Transactions on Software Engineering

Issue 3 • May-June 2012

Displaying Results 1 - 19 of 19
  • Table of Contents [Front cover]

    Publication Year: 2012, Page(s): c1
    PDF (123 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Publication Year: 2012, Page(s): c2
    PDF (196 KB)
    Freely Available from IEEE
  • A Theoretical and Empirical Analysis of the Role of Test Sequence Length in Software Testing for Structural Coverage

    Publication Year: 2012, Page(s): 497 - 519
    Cited by: Papers (4)
    PDF (2422 KB) | HTML

    When software has internal state, a sequence of function calls is often required to test it: to cover a particular branch of the code, previous function calls may be needed to put the internal state into the appropriate configuration. Internal state is present not only in object-oriented software but also in procedural software (e.g., static variables in C programs). Many techniques exist for testing this type of software; however, to the best of our knowledge, the properties related to the choice of the length of these call sequences have received little attention in the literature. In this paper, we analyze the role that length plays in software testing, in particular in branch coverage. We show that, on “difficult” software testing benchmarks, longer test sequences make testing trivial. Hence, we argue that the choice of the length of the test sequences is very important in software testing. Theoretical analyses and empirical studies on widely used benchmarks and on an industrial software system are carried out to support our claims.

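    A minimal illustration of the internal-state issue the abstract describes: in the hypothetical Counter class below (Python, not taken from the paper), the "rare" branch of check() can only be covered after at least three increment() calls have configured the internal state, so a test generator restricted to shorter call sequences cannot reach it.

        # Hypothetical class: the internal state (_hits) plays the role of, e.g.,
        # a static variable in a C program.
        class Counter:
            def __init__(self):
                self._hits = 0

            def increment(self):
                self._hits += 1

            def check(self):
                if self._hits >= 3:      # target branch: needs >= 3 prior increment() calls
                    return "rare branch"
                return "common branch"

        # A test sequence of four calls covers the rare branch; shorter sequences cannot.
        c = Counter()
        for _ in range(3):
            c.increment()
        assert c.check() == "rare branch"
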
  • An Autonomous Engine for Services Configuration and Deployment

    Publication Year: 2012, Page(s): 520 - 536
    Cited by: Papers (2)
    PDF (1987 KB) | HTML

    The runtime management of the infrastructure behind service-based systems is a complex task, to the point where manual operation struggles to be cost effective. Because the functionality is provided by a set of dynamically composed, distributed services, achieving a management objective requires applying multiple operations over the distributed elements of the managed infrastructure. Moreover, the manager must cope with the highly heterogeneous characteristics and management interfaces of the runtime resources. With this in mind, this paper proposes to support the configuration and deployment of services with an automated closed control loop. The automation is enabled by a generic information model that captures, with a single set of abstractions, all of the information relevant to managing the services: the runtime elements, service dependencies, and business objectives. On top of that, a satisfiability-based technique is described that automatically diagnoses the state of the managed environment and derives the changes required to correct it (e.g., installation, service binding, update, or configuration). Results from a set of case studies drawn from the banking domain are provided to validate the feasibility of the proposal.

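    As a rough illustration of the diagnosis step (the paper encodes it as a satisfiability problem over its information model; the state, actions, and constraints below are invented for the example), the sketch searches for a smallest set of corrective actions whose combined effects make every constraint on the managed environment hold.

        from itertools import combinations

        # Invented snapshot of the managed environment and a catalogue of
        # corrective actions with their effects on that state.
        state = {"db_installed": True, "app_installed": False, "app_bound_to_db": False}
        actions = {
            "install_app": {"app_installed": True},
            "bind_app_db": {"app_bound_to_db": True},
            "update_db":   {"db_installed": True},
        }
        # Constraints describing the desired configuration (the "business objective").
        constraints = [
            lambda s: s["db_installed"],
            lambda s: s["app_installed"],
            lambda s: s["app_bound_to_db"],
        ]

        def plan(state, actions, constraints):
            """Return a smallest set of actions whose combined effects satisfy all constraints."""
            for size in range(len(actions) + 1):
                for chosen in combinations(actions, size):
                    candidate = dict(state)
                    for name in chosen:
                        candidate.update(actions[name])
                    if all(check(candidate) for check in constraints):
                        return chosen
            return None

        print(plan(state, actions, constraints))   # ('install_app', 'bind_app_db')
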
  • Comparing Semi-Automated Clustering Methods for Persona Development

    Publication Year: 2012, Page(s): 537 - 546
    PDF (1953 KB) | HTML

    Current and future information systems require a better understanding of the interactions between users and systems in order to improve system use and, ultimately, success. The use of personas as design tools is becoming more widespread as researchers and practitioners discover their benefits. This paper presents an empirical study comparing the performance of existing qualitative and quantitative clustering techniques for the task of identifying personas and grouping system users into those personas. A method based on Factor (Principal Components) Analysis performs better than two other methods, based on Latent Semantic Analysis and Cluster Analysis, as measured by similarity to clusters defined manually by experts.

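    An illustrative sketch of the two technique families being compared, on synthetic data rather than the study's survey data (requires NumPy and scikit-learn): persona groupings are derived once from principal-component scores and once by cluster analysis, and each grouping is scored against reference labels standing in for the expert-defined clusters.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(0)
        # 60 synthetic users x 5 behavioral features, drawn from three latent personas.
        users = np.vstack([rng.normal(loc=m, scale=0.5, size=(20, 5)) for m in (0.0, 2.0, 4.0)])
        reference = np.repeat([0, 1, 2], 20)            # stand-in for expert-defined personas

        # Factor/PCA-style grouping: assign each user to its dominant component.
        scores = PCA(n_components=3).fit_transform(users)
        pca_groups = np.abs(scores).argmax(axis=1)

        # Classical cluster analysis for comparison.
        kmeans_groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(users)

        print("PCA grouping vs reference:    ", adjusted_rand_score(reference, pca_groups))
        print("k-means grouping vs reference:", adjusted_rand_score(reference, kmeans_groups))
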
  • Comparing the Defect Reduction Benefits of Code Inspection and Test-Driven Development

    Publication Year: 2012, Page(s): 547 - 560
    Multimedia
    PDF (1067 KB) | HTML

    This study is a quasi-experiment comparing the software defect rates and implementation costs of two methods of software defect reduction: code inspection and test-driven development. We divided participants, consisting of junior and senior computer science students at a large Southwestern university, into four groups using a two-by-two, between-subjects factorial design and asked them to complete the same programming assignment using test-driven development, code inspection, both, or neither. We then compared the resulting defect counts and implementation costs across groups. We found that code inspection is more effective than test-driven development at reducing defects, but that it is also more expensive. We also found that test-driven development was no more effective at reducing defects than traditional programming methods.

  • DEC: Service Demand Estimation with Confidence

    Publication Year: 2012, Page(s): 561 - 578
    Cited by: Papers (6)
    PDF (4384 KB) | HTML

    We present a new technique for predicting the resource demand requirements of services implemented by multitier systems. Accurate demand estimates are essential to ensure the efficient provisioning of services in an increasingly service-oriented world. The demand estimation technique proposed in this paper has several advantages over the regression-based demand estimation techniques that many practitioners employ today: it does not suffer from the problem of multicollinearity, it provides more reliable aggregate resource demand and confidence interval predictions, and it offers a measurement-based validation test. The technique can be used to support system sizing, capacity planning, and costing and pricing exercises, and to predict the impact of changes to a service on different service customers.

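    For background, a sketch of the regression-based estimation approach the paper improves upon, with invented numbers: per-class resource demands are fitted to the utilization law U = sum over classes of D_c times X_c by least squares, and the correlated per-class throughputs illustrate the multicollinearity problem mentioned in the abstract.

        import numpy as np

        rng = np.random.default_rng(1)
        true_demands = np.array([0.010, 0.025])           # CPU seconds per request, per class
        X = rng.uniform(5, 20, size=(30, 2))               # observed per-class throughputs (requests/s)
        X[:, 1] = 0.9 * X[:, 0] + rng.normal(0, 0.5, 30)   # make the two classes strongly correlated
        U = X @ true_demands + rng.normal(0, 0.01, 30)     # measured CPU utilization plus noise

        # Least-squares fit of the demands; with collinear throughputs, the individual
        # demand estimates can be far less reliable than the aggregate fit suggests.
        estimated, *_ = np.linalg.lstsq(X, U, rcond=None)
        print("true demands:     ", true_demands)
        print("estimated demands:", estimated)
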
  • Exploiting Dynamic Information in IDEs Improves Speed and Correctness of Software Maintenance Tasks

    Publication Year: 2012, Page(s): 579 - 591
    Cited by: Papers (2)
    PDF (1698 KB) | HTML

    Modern IDEs such as Eclipse offer static views of the source code, but such views ignore information about the runtime behavior of software systems. Since typical object-oriented systems make heavy use of polymorphism and dynamic binding, static views miss key information about the runtime architecture. In this paper, we present an approach to gathering and integrating dynamic information in the Eclipse IDE with the goal of better supporting typical software maintenance activities. By means of a controlled experiment with 30 professional developers, we show that for typical software maintenance tasks, integrating dynamic information into the Eclipse IDE yields a significant 17.5 percent decrease in the time spent while significantly increasing the correctness of the solutions by 33.5 percent. We also provide a comprehensive performance evaluation of our approach.

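    The kind of runtime information involved can be illustrated with a small Python sketch (the paper's tooling targets Java and Eclipse; the classes below are invented): a trace hook records which concrete method each polymorphic call actually dispatched to, information a purely static view of the code cannot provide.

        import sys
        from collections import Counter

        dispatch_counts = Counter()

        def tracer(frame, event, arg):
            # Record the receiver type of every Python-level method call.
            if event == "call":
                receiver = frame.f_locals.get("self")
                if receiver is not None:
                    dispatch_counts[(type(receiver).__name__, frame.f_code.co_name)] += 1
            return tracer

        class Shape:
            def area(self):
                raise NotImplementedError

        class Square(Shape):
            def __init__(self, side):
                self.side = side
            def area(self):
                return self.side * self.side

        class Circle(Shape):
            def __init__(self, radius):
                self.radius = radius
            def area(self):
                return 3.14159 * self.radius ** 2

        shapes = [Square(2), Circle(1), Square(3)]
        sys.settrace(tracer)
        total = sum(shape.area() for shape in shapes)   # dynamic binding resolved at runtime
        sys.settrace(None)
        print(total, dispatch_counts)                   # which area() implementations actually ran
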
  • Model Checking Semantically Annotated Services

    Publication Year: 2012, Page(s): 592 - 608
    Cited by: Papers (3)
    PDF (1846 KB) | HTML

    Model checking is a formal verification method widely accepted in the web service world because of its capability to reason about service behavior at the process level. It has been used as a basic tool in several scenarios, such as service selection, service validation, and service composition. The importance of semantics is also widely recognized; indeed, there are several solutions to the problem of providing semantics to web services, most of them relying on some form of Description Logic. This paper presents an efficient integration of model checking and semantic reasoning technologies, which can be considered a first step toward the use of semantic model checking for service selection, validation, and composition. The approach relies on a representation of services at the process level based on semantically annotated state transition systems (asts) and a representation of specifications based on a semantically annotated version of computation tree logic (anctl). The paper proves that the semantic model checking algorithm is sound and complete and runs in polynomial time, and the approach has been evaluated with several experiments.

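    A stripped-down sketch of process-level model checking (illustrative only; the transition system and labels are invented, and the paper's Description Logic reasoning is reduced here to plain label membership): checking whether a goal annotation is reachable from the initial state, i.e., a CTL property of the form EF goal.

        # Invented service behavior model: state -> set of successor states, plus the
        # annotations attached to each state.
        transitions = {
            "start":     {"logged_in"},
            "logged_in": {"searching", "start"},
            "searching": {"booked"},
            "booked":    set(),
        }
        labels = {"booked": {"TicketIssued"}}

        def ef(goal, state, seen=None):
            """Does some execution from `state` reach a state annotated with `goal` (EF goal)?"""
            seen = set() if seen is None else seen
            if goal in labels.get(state, set()):
                return True
            seen.add(state)
            return any(ef(goal, nxt, seen) for nxt in transitions.get(state, set()) - seen)

        print(ef("TicketIssued", "start"))   # True: a booking is reachable from the start state
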
  • On the Evolution of Services

    Publication Year: 2012, Page(s): 609 - 628
    Cited by: Papers (13)
    PDF (1866 KB) | HTML

    In an environment of constant change and variation driven by competition and innovation, a software service can rarely remain stable. Being able to manage and control the evolution of services is therefore an important goal for the Service-Oriented paradigm. This work extends existing and widely adopted theories from software engineering, programming languages, service-oriented computing, and other related fields to provide the fundamental ingredients required to guarantee that the spurious results and inconsistencies that can arise from uncontrolled service changes are avoided. The paper provides a unifying theoretical framework for controlling the evolution of services that deals with structural, behavioral, and QoS-level-induced service changes in a type-safe manner, ensuring correct versioning transitions so that previous clients can use a versioned service in a consistent manner.

  • Oracles for Distributed Testing

    Publication Year: 2012, Page(s): 629 - 641
    Cited by: Papers (2)
    PDF (836 KB) | HTML

    The problem of deciding whether an observed behavior is acceptable is the oracle problem. When testing from a finite state machine (FSM), the oracle problem is easy to solve, and so it has received relatively little attention for FSMs. However, if the system under test has physically distributed interfaces, called ports, then in distributed testing we observe a local trace at each port, and we compare the set of local traces with the set of allowed behaviors (global traces). This paper investigates the oracle problem for deterministic and nondeterministic FSMs and for two alternative definitions of conformance for distributed testing. We show that the oracle problem can be solved in polynomial time for the weaker notion of conformance (⊆w) but is NP-hard for the stronger notion of conformance (⊆s), even if the FSM is deterministic. However, when testing from a deterministic FSM with controllable input sequences, the oracle problem can be solved in polynomial time, and similar results hold for nondeterministic FSMs. Thus, in some cases, the oracle problem can be efficiently solved when using ⊆s and, where this is not the case, we can use the decision procedure for ⊆w as a sound approximation.

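    A brute-force sketch of the oracle question itself (not the paper's polynomial-time algorithms; the ports, events, and traces are invented): an observation consisting of one local trace per port is acceptable if some allowed global trace projects onto exactly those local traces.

        def project(global_trace, port):
            """Keep only the events observable at the given port."""
            return tuple(event for p, event in global_trace if p == port)

        def acceptable(allowed_global_traces, observed):
            """observed maps each port to the local trace recorded by its tester."""
            return any(
                all(project(g, port) == local for port, local in observed.items())
                for g in allowed_global_traces
            )

        # Invented example with two ports, U(ser) and S(erver): both allowed global
        # traces project onto the same local observations, so the verdict is "pass".
        allowed = [
            (("U", "login"), ("S", "query"), ("U", "logout")),
            (("S", "query"), ("U", "login"), ("U", "logout")),
        ]
        observed = {"U": ("login", "logout"), "S": ("query",)}
        print(acceptable(allowed, observed))   # True
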
  • Pointcut Rejuvenation: Recovering Pointcut Expressions in Evolving Aspect-Oriented Software

    Publication Year: 2012, Page(s): 642 - 657
    PDF (2647 KB) | HTML

    Pointcut fragility is a well-documented problem in Aspect-Oriented Programming: changes to the base code can lead to join points incorrectly falling in or out of the scope of pointcuts. In this paper, we present an automated approach that limits fragility problems by providing mechanical assistance in pointcut maintenance. The approach is based on harnessing arbitrarily deep structural commonalities between the program elements corresponding to the join points selected by a pointcut. The extracted patterns are then applied to later versions to suggest new join points that may require inclusion. To show that the motivation behind our proposal is well founded, we first empirically establish that the join points captured by a single pointcut typically exhibit a significant amount of unique structural commonality, by analyzing patterns extracted from 23 AspectJ programs. We then demonstrate the usefulness of our technique by rejuvenating pointcuts in multiple versions of three of these programs. The results show that our parameterized heuristic algorithm was able to accurately and automatically infer the majority of the new join points in subsequent software versions that were not captured by the original pointcuts.

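    A toy sketch of the underlying idea (not the paper's algorithm; the method names are invented): derive a structural pattern from the join points a pointcut currently selects, then match it against a later version to suggest join points the original pointcut misses.

        import re

        # Fully qualified methods currently selected by a (hypothetical) persistence pointcut.
        selected = [
            "app.persistence.UserDao.saveUser",
            "app.persistence.OrderDao.saveOrder",
        ]

        def common_pattern(join_points):
            """Build a regex from the shared package and the shared method-name prefix."""
            packages = {jp.rsplit(".", 2)[0] for jp in join_points}
            names = [jp.rsplit(".", 1)[1] for jp in join_points]
            prefix = ""
            for chars in zip(*names):
                if len(set(chars)) != 1:
                    break
                prefix += chars[0]
            if len(packages) == 1:
                package_re = re.escape(packages.pop())
            else:
                package_re = r"[\w.]+"
            return re.compile(rf"{package_re}\.\w+\.{re.escape(prefix)}\w*$")

        pattern = common_pattern(selected)

        # Methods in the next program version: the derived pattern suggests the new DAO
        # method as a candidate join point the original pointcut would have missed.
        next_version = [
            "app.persistence.UserDao.saveUser",
            "app.persistence.OrderDao.saveOrder",
            "app.persistence.InvoiceDao.saveInvoice",
            "app.ui.InvoiceView.render",
        ]
        print([m for m in next_version if pattern.match(m)])
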
  • QoS Assurance for Dynamic Reconfiguration of Component-Based Software Systems

    Publication Year: 2012, Page(s): 658 - 676
    Cited by: Papers (4)
    PDF (1959 KB) | HTML

    A major challenge of dynamic reconfiguration is Quality of Service (QoS) assurance, which aims to keep application disruption to a minimum while the system is being transformed. However, this problem has not been well studied. This paper investigates it for component-based software systems from three points of view. First, the whole spectrum of QoS characteristics is defined. Second, the logical and physical requirements for these QoS characteristics are analyzed and solutions to achieve them are proposed. Third, prior work is classified by QoS characteristics and then realized as abstract reconfiguration strategies. On this basis, a quantitative evaluation of the QoS assurance abilities of existing work and of our own approach is conducted in three steps. First, a proof-of-concept prototype called the reconfigurable component model is implemented to support the representation and testing of the reconfiguration strategies. Second, a reconfiguration benchmark is proposed to expose the whole spectrum of QoS problems. Third, each reconfiguration strategy is tested against the benchmark and the results are evaluated. The most important conclusion of our investigation is that the classified QoS characteristics can be fully achieved under some acceptable constraints.

  • Software Development Estimation Biases: The Role of Interdependence

    Publication Year: 2012, Page(s): 677 - 693
    PDF (2310 KB) | HTML

    Software development effort estimates are frequently too low, which may lead to poor project plans and project failures. One reason for this bias seems to be that the effort estimates produced by software developers are affected by information that has no relevance for the actual effort required. We attempted to acquire a better understanding of the underlying mechanisms and the robustness of this type of estimation bias. For this purpose, we hired 374 software developers working in outsourcing companies to participate in a set of three experiments. The experiments examined the connection between estimation bias and developer dimensions: self-construal (how one sees oneself), thinking style, nationality, experience, skill, education, sex, and organizational role. We found that estimation bias was present along most of the studied dimensions. The most interesting finding may be that estimation bias increased significantly with higher levels of interdependence, i.e., with a stronger emphasis on connectedness, social context, and relationships. We propose that this connection may be enabled by an activation of one's self-construal when engaging in effort estimation, together with a connection between a more interdependent self-construal and an increased search for indirect messages, a lower ability to ignore irrelevant context, and a stronger emphasis on socially desirable responses.

  • Specifying Dynamic Analyses by Extending Language Semantics

    Publication Year: 2012, Page(s): 694 - 706
    PDF (2034 KB) | HTML

    Dynamic analysis is attracting increasing attention for debugging, profiling, and program comprehension. Ten to twenty years ago, many dynamic analyses investigated only simple method execution traces; today, in contrast, many sophisticated dynamic analyses exist, for instance, for detecting memory leaks, analyzing ownership properties, measuring garbage collector performance, or supporting debugging tasks. These analyses depend on complex program instrumentation and analysis models, making it challenging to understand, compare, and reproduce the proposed approaches. While formal specifications and proofs are common in the field of static analysis, most dynamic analyses are specified using informal, textual descriptions. In this paper, we propose a formal framework based on operational semantics that allows researchers to precisely specify their dynamic analyses. Our goal is to provide an accessible and reusable basis on which researchers who may not be familiar with rigorous specifications of dynamic analyses can build. By extending the provided semantics, one can concisely specify how runtime events are captured and how this data is transformed to populate the analysis model. Furthermore, our approach provides the foundations for reasoning about the properties of a dynamic analysis.

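    The idea can be sketched with a toy interpreter (the paper works formally with an operational semantics rather than an interpreter, and the event names below are invented): the evaluation rules emit runtime events, and a dynamic analysis is specified simply as a listener over those events, here an allocation counter.

        class AllocationCounter:
            """The analysis: it only knows about events, not about the interpreter."""
            def __init__(self):
                self.count = 0
            def on_event(self, kind, payload):
                if kind == "alloc":
                    self.count += 1

        def evaluate(expr, analysis):
            """expr is ('lit', n), ('add', e1, e2), or ('new', e); 'new' models allocation."""
            tag = expr[0]
            if tag == "lit":
                return expr[1]
            if tag == "add":
                return evaluate(expr[1], analysis) + evaluate(expr[2], analysis)
            if tag == "new":
                obj = [evaluate(expr[1], analysis)]   # "allocate" a one-cell object
                analysis.on_event("alloc", obj)       # the extended semantics emits an event
                return obj
            raise ValueError(f"unknown expression {expr!r}")

        counter = AllocationCounter()
        evaluate(("new", ("add", ("lit", 1), ("lit", 2))), counter)
        print(counter.count)   # 1 allocation observed by the analysis
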
  • StakeRare: Using Social Networks and Collaborative Filtering for Large-Scale Requirements Elicitation

    Publication Year: 2012, Page(s): 707 - 735
    Cited by: Papers (4)
    PDF (4052 KB) | HTML

    Requirements elicitation is the software engineering activity in which stakeholder needs are understood. It involves identifying and prioritizing requirements, a process that is difficult to scale to large software projects with many stakeholders. This paper proposes StakeRare, a novel method that uses social networks and collaborative filtering to identify and prioritize requirements in large software projects. StakeRare identifies stakeholders and asks them to recommend other stakeholders and stakeholder roles, builds a social network with stakeholders as nodes and their recommendations as links, and prioritizes the stakeholders using a variety of social network measures to determine their project influence. It then asks the stakeholders to rate an initial list of requirements, recommends other relevant requirements to them using collaborative filtering, and prioritizes their requirements using their ratings weighted by their project influence. StakeRare was evaluated by applying it to a software project for a 30,000-user system, in a substantial empirical study of requirements elicitation. Using data collected from surveying and interviewing 87 stakeholders, the study demonstrated that StakeRare predicts stakeholder needs accurately and arrives at a more complete and more accurately prioritized list of requirements than the existing method used in the project, while taking only a fraction of the time.

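    A simplified sketch of the StakeRare pipeline with invented stakeholders and requirements: influence is read off the recommendation network (in-degree here, where the paper uses several social network measures), and requirement ratings weighted by influence yield the prioritized list; the collaborative filtering step that recommends unrated requirements is omitted.

        from collections import defaultdict

        # Who recommended whom (invented): edges point from recommender to recommended.
        recommendations = [("alice", "bob"), ("carol", "bob"), ("bob", "alice"), ("dave", "alice")]
        influence = defaultdict(float)
        for _, recommended in recommendations:
            influence[recommended] += 1.0            # in-degree as a simple influence measure

        # Each stakeholder rates the requirements they care about (0-5).
        ratings = {
            "alice": {"single sign-on": 5, "audit log": 2},
            "bob":   {"single sign-on": 4, "mobile app": 5},
            "carol": {"audit log": 4},
        }

        # Requirement priority = sum of ratings weighted by the rater's influence.
        scores = defaultdict(float)
        for person, reqs in ratings.items():
            for requirement, rating in reqs.items():
                scores[requirement] += influence.get(person, 0.0) * rating

        for requirement, score in sorted(scores.items(), key=lambda item: -item[1]):
            print(f"{score:5.1f}  {requirement}")
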
  • IEEE Computer Society OnlinePlus Coming Soon to TSE

    Publication Year: 2012, Page(s): 736
    PDF (219 KB)
    Freely Available from IEEE
  • [Inside back cover]

    Publication Year: 2012, Page(s): c3
    PDF (196 KB)
    Freely Available from IEEE
  • [Back cover]

    Publication Year: 2012, Page(s): c4
    PDF (123 KB)
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:

  a) development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
  b) assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurements and evaluation of various aspects of the process and product;
  c) software project management, e.g., productivity factors, cost models, schedule and organizational issues, standards;
  d) tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues;
  e) system issues, e.g., hardware-software trade-off; and
  f) state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.


Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org