Software Engineering, IEEE Transactions on

Issue 9 • Sept. 2013

  • Editorial

    Publication Year: 2013, Page(s): 1187 - 1189
    Freely Available from IEEE
  • Capsule-Based User Interface Modeling for Large-Scale Applications

    Publication Year: 2013, Page(s): 1190 - 1207
    Cited by: Papers (1)

    We present a novel approach to modeling and implementing the user interfaces (UI) of large business applications. The approach is based on the concept of a capsule: a profiled structured class from UML that models a simple UI component, or a coherent UI fragment of logically and functionally coupled components or other fragments with a clear interface. Consequently, the same modeling concept of a capsule with internal structure can be reapplied recursively at successively lower levels of detail within a model, starting from high architectural modeling levels down to the lowest levels of modeling simple UI components. The interface of a capsule is defined in terms of pins, while the functional coupling of capsules is specified declaratively by simply wiring their pins. Pins and wires transport messages between capsules, ensuring strict encapsulation. The approach includes a method for formally coupling a capsule's behavior with the underlying object space that provides proper impedance matching between the UI and the business logic while preserving a clear separation of concerns between them. We also briefly describe an implementation of a framework that supports the proposed method, including a rich library of ready-to-use capsules, and report on our experience in applying the approach in large-scale industrial systems.
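
The capsule/pin/wire architecture the abstract describes can be sketched in a few lines. All names below (Capsule, Pin, wire) are illustrative stand-ins, not the paper's UML-based notation:

```python
# Minimal sketch of the capsule/pin/wire idea: capsules expose pins,
# wires connect pins, and messages travel only along wires, which keeps
# encapsulation strict. All names here are illustrative, not the
# paper's actual API.

class Pin:
    def __init__(self, owner):
        self.owner = owner
        self.peer = None                  # set when the pin is wired

    def send(self, message):
        if self.peer is not None:
            self.peer.owner.receive(self.peer, message)

class Capsule:
    def __init__(self, name):
        self.name = name
        self.pins = {}
        self.inbox = []

    def add_pin(self, pin_name):
        self.pins[pin_name] = Pin(self)
        return self.pins[pin_name]

    def receive(self, pin, message):      # a real capsule would dispatch
        self.inbox.append(message)

def wire(pin_a, pin_b):
    """Declaratively couple two capsules by connecting their pins."""
    pin_a.peer, pin_b.peer = pin_b, pin_a

# A tiny UI fragment: a button capsule wired to a form capsule.
button = Capsule("button")
form = Capsule("form")
wire(button.add_pin("clicked"), form.add_pin("on_click"))
button.pins["clicked"].send({"event": "click"})
print(form.inbox)   # [{'event': 'click'}]
```

Because a Capsule could itself hold sub-capsules wired to its own pins, the same concept nests recursively, which is the property the abstract emphasizes.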

  • Data Quality: Some Comments on the NASA Software Defect Datasets

    Publication Year: 2013, Page(s): 1208 - 1215
    Cited by: Papers (2)

    Background: Empirical analyses self-evidently rely upon the quality of their data. Likewise, replications rely upon accurate reporting and upon using the same, rather than merely similar, versions of datasets. In recent years, there has been much interest in using machine learners to classify software modules into defect-prone and not defect-prone categories. The publicly available NASA datasets have been used extensively in this research. Objective: This short note investigates the extent to which published analyses based on the NASA defect datasets are meaningful and comparable. Method: We analyze the five studies published in the IEEE Transactions on Software Engineering since 2007 that have utilized these datasets, and we compare the two versions of the datasets currently in use. Results: We find important differences between the two versions of the datasets, implausible values in one dataset, and generally insufficient documented detail on dataset preprocessing. Conclusions: We recommend that researchers 1) indicate the provenance of the datasets they use, 2) report any preprocessing in sufficient detail to enable meaningful replication, and 3) invest effort in understanding the data prior to applying machine learners.
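
The note's third recommendation, understanding the data before applying machine learners, amounts to running sanity checks of the following kind. The column names, rules, and toy rows below are invented for illustration; they only mimic the sort of implausible values the note reports:

```python
# Sketch of a pre-learning data sanity check. Column names, rules, and
# rows are invented; real defect datasets have many more metrics.

def implausible(row):
    """Return a list of reasons a module's metrics cannot be right."""
    problems = []
    if row["loc"] <= 0:
        problems.append("non-positive LOC")
    if row["loc_comments"] > row["loc"]:
        problems.append("more comment lines than total lines")
    if row["halstead_length"] < 0:
        problems.append("negative Halstead length")
    return problems

dataset = [
    {"id": 1, "loc": 120, "loc_comments": 30, "halstead_length": 410},
    {"id": 2, "loc": 0,   "loc_comments": 5,  "halstead_length": 12},
    {"id": 3, "loc": 45,  "loc_comments": 10, "halstead_length": -3},
]

flagged = {row["id"]: problems
           for row in dataset if (problems := implausible(row))}
print(flagged)   # modules 2 and 3 carry implausible values
```

Reporting which rows were dropped or repaired by such checks is exactly the preprocessing detail the note finds missing from published studies.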

  • Generating Test Cases for Real-Time Systems Based on Symbolic Models

    Publication Year: 2013, Page(s): 1216 - 1229

    The state space explosion problem is one of the challenges faced by test case generation techniques, particularly when data values need to be enumerated. The problem gets even worse for real-time systems (RTS), which also have time constraints. The usual solution in this context, based on finite state machines or timed automata, consists of enumerating data values (restricted to finite domains) while treating time symbolically. In this paper, we present TIOSTS, a symbolic model for conformance testing of real-time software that addresses both data and time symbolically. Moreover, we define a test case generation process that selects more general test cases, with variables and parameters that can be instantiated at test execution time. Generation is based on a combination of symbolic execution and constraint solving for the data part and symbolic analysis for the timed aspects. Finally, we investigate the practical application of the process through a case study.
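
The "generate symbolically, instantiate at execution time" idea can be sketched as follows. The guard representation and the naive search below are illustrative stand-ins for the paper's symbolic execution and constraint solving:

```python
# A generated test case carries a symbolic guard over a data parameter
# p and a time window for a delay d; concrete values are chosen only
# when the test runs. Guards as lambdas and brute-force search are
# illustrative simplifications.

symbolic_test = {
    "step": "send req(p); expect ack within d seconds",
    "data_guard": lambda p: p > 10 and p % 2 == 0,   # constraint on data
    "time_guard": lambda d: 1.0 <= d <= 3.0,         # constraint on delay
}

def instantiate(test, data_domain, time_domain):
    """Pick concrete values satisfying both guards (naive search)."""
    for p in data_domain:
        if test["data_guard"](p):
            for d in time_domain:
                if test["time_guard"](d):
                    return p, d
    raise ValueError("guards unsatisfiable over the given domains")

p, d = instantiate(symbolic_test, range(100), [0.5, 1.5, 2.5, 3.5])
print(p, d)   # 12 1.5
```

Keeping the guards symbolic until execution is what lets one generated test case stand for a whole family of concrete ones.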

  • Model-Based Test Oracle Generation for Automated Unit Testing of Agent Systems

    Publication Year: 2013, Page(s): 1230 - 1244
    Cited by: Papers (3)

    Software testing remains the most widely used approach to verification in industry today, consuming between 30 and 50 percent of the entire development cost. Test input selection for intelligent agents presents a problem because agents are intended to operate robustly under conditions that developers did not consider and would therefore be unlikely to test. Automatically generating and executing tests is one way to cover many conditions without significantly increasing cost. However, automatic generation and execution of tests raises the oracle problem: How can we automatically decide whether observed program behavior is correct with respect to its specification? In this paper, we present a model-based oracle generation method for unit testing belief-desire-intention agents. We develop a fault model based on the features of the core units to capture the types of faults that may be encountered, and we define how to automatically generate a partial, passive oracle from the agent design models. We evaluate both the fault model and the oracle generation by testing 14 agent systems. Over 400 issues were raised, and these were analyzed to ascertain whether they represented genuine faults or were false positives. We found that over 70 percent of the issues raised were indicative of problems in either the design or the code. Of the 19 checks performed by our oracle, all but 5 found faults, and 8 of the 11 fault types identified in our fault model exhibited at least one fault. The evaluation indicates that the fault model is a productive conceptualization of the problems to be expected in agent unit testing and that the oracle is able to find a substantial number of such faults with a relatively small overhead in terms of false positives.
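
A partial, passive oracle of the kind described can be sketched as follows. The model format, plan names, events, and the two checks shown are invented for illustration; the paper derives 19 checks from much richer agent design models:

```python
# A passive oracle is derived from the design model and checks an
# execution trace without steering it. Here the "model" is just each
# plan's declared triggering event; everything below is illustrative.

design_model = {               # plan name -> its declared trigger event
    "BookFlight": "travel_request",
    "CancelBooking": "cancel_request",
}

def passive_oracle(trace):
    """Return issues found in a trace of (event, plan_fired) pairs."""
    issues = []
    for event, plan in trace:
        if plan is None:
            issues.append(f"event '{event}' handled by no plan")
        elif design_model.get(plan) != event:
            issues.append(f"plan '{plan}' fired without its declared trigger")
    return issues

trace = [
    ("travel_request", "BookFlight"),   # matches the model: fine
    ("cancel_request", "BookFlight"),   # wrong plan fired
    ("seat_upgrade", None),             # event no plan handled
]
issues = passive_oracle(trace)
print(issues)   # two issues raised
```

The oracle is partial (it only flags what its checks cover) and passive (it observes the trace rather than driving the agent), which is what makes it cheap to run over many generated tests.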

  • OBEY: Optimal Batched Refactoring Plan Execution for Class Responsibility Redistribution

    Publication Year: 2013, Page(s): 1245 - 1263

    The redistribution of class responsibilities is a common reengineering practice in object-oriented (OO) software evolution. During redistribution, developers frequently construct batched refactoring plans for moving multiple methods and fields among various classes. Because the objective is to carefully maintain the cohesion and coupling of the class design, it is essential to execute a batched refactoring plan without introducing side effects that violate this objective into the refactored code. However, most refactoring engines, when used for batched refactoring plan execution, introduce the coupling-increasing Middle Man bad smell into the final refactored code, making the execution suboptimal with respect to the redistribution objective. This work proposes Obey, a methodology for optimal batched refactoring plan execution. Obey analyzes a batched refactoring plan, identifies Middle Man symptoms that cause suboptimal execution, and renovates the plan for optimal execution. We have conducted an empirical study on three open-source software projects that confirms the effectiveness of Obey in a practical context.
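
The Middle Man symptom that makes naive batched execution suboptimal can be illustrated as follows. The class and plan representations are invented, and only the detection step is shown, not Obey's renovation of the plan:

```python
# A naive engine that moves a method leaves behind a stub that only
# delegates, and a class made up mostly of such stubs increases
# coupling. Representations here are illustrative.

def execute_naively(classes, plan):
    """Apply (method, src, dst) moves, leaving delegating stubs in src."""
    for method, src, dst in plan:
        classes[src][method] = ("delegates_to", dst)
        classes[dst][method] = "body"

def middle_men(classes, threshold=0.5):
    """Classes in which more than `threshold` of methods merely delegate."""
    smelly = []
    for name, methods in classes.items():
        delegating = sum(1 for body in methods.values()
                         if isinstance(body, tuple))
        if methods and delegating / len(methods) > threshold:
            smelly.append(name)
    return smelly

classes = {"Order": {"total": "body", "ship": "body"}, "Shipping": {}}
execute_naively(classes, [("ship", "Order", "Shipping")])
print(middle_men(classes))   # []: only 1 of Order's 2 methods delegates
execute_naively(classes, [("total", "Order", "Shipping")])
print(middle_men(classes))   # ['Order']: every remaining method delegates
```

A plan-renovating approach would instead, for example, inline the stubs or redirect callers, so that the batched execution reaches the redistribution objective without the smell.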

  • Patterns of Knowledge in API Reference Documentation

    Publication Year: 2013, Page(s): 1264 - 1282
    Cited by: Papers (1)

    Reading reference documentation is an important part of programming with application programming interfaces (APIs). Reference documentation complements the API by providing information not obvious from the API syntax. To improve the quality of reference documentation and the efficiency with which the relevant information it contains can be accessed, we must first understand its content. We report on a study of the nature and organization of knowledge contained in the reference documentation of the hundreds of APIs provided as a part of two major technology platforms: Java SDK 6 and .NET 4.0. Our study involved the development of a taxonomy of knowledge types based on grounded methods and independent empirical validation. Seventeen trained coders used the taxonomy to rate a total of 5,574 randomly sampled documentation units to assess the knowledge they contain. Our results provide a comprehensive perspective on the patterns of knowledge in API documentation: observations about the types of knowledge it contains and how this knowledge is distributed throughout the documentation. The taxonomy and patterns of knowledge we present in this paper can be used to help practitioners evaluate the content of their API documentation, better organize their documentation, and limit the amount of low-value content. They also provide a vocabulary that can help structure and facilitate discussions about the content of APIs.

  • TACO: Efficient SAT-Based Bounded Verification Using Symmetry Breaking and Tight Bounds

    Publication Year: 2013, Page(s): 1283 - 1307
    Cited by: Papers (1)

    SAT-based bounded verification of annotated code consists of translating the code together with its annotations to a propositional formula and analyzing the formula for specification violations using a SAT-solver. If a violation is found, an execution trace exposing the failure is exhibited. Code involving linked data structures with intricate invariants is particularly hard to analyze using these techniques. In this paper, we present Translation of Annotated COde (TACO), a prototype tool which implements a novel, general, and fully automated technique for the SAT-based analysis of JML-annotated Java sequential programs dealing with complex linked data structures. We instrument the code analysis with a symmetry-breaking predicate that, on one hand, reduces the size of the search space by ignoring certain classes of isomorphic models and, on the other hand, allows for the parallel, automated computation of tight bounds for Java fields. Experiments show that the translations to propositional formulas require significantly fewer propositional variables, yielding an orders-of-magnitude improvement in analysis efficiency over the noninstrumented SAT-based analysis. We show that in some cases our tool can uncover bugs that cannot be detected by state-of-the-art tools based on SAT-solving, model checking, or SMT-solving.
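
Why a symmetry-breaking predicate shrinks the search space can be illustrated by enumeration. Brute force over singly linked lists here stands in for what the SAT-solver would explore, and the atom names are invented:

```python
# Linked lists that use the same number of node atoms in a different
# order are isomorphic, so forcing atoms to appear in index order
# keeps exactly one canonical representative per list shape.

from itertools import permutations

ATOMS = ["n0", "n1", "n2", "n3"]

def all_lists():
    """Every list built from a subset of atoms, in any order."""
    for k in range(len(ATOMS) + 1):
        yield from permutations(ATOMS, k)

def canonical(lists):
    """Symmetry breaking: keep lists whose atoms appear in index order."""
    return [xs for xs in lists if list(xs) == sorted(xs)]

unrestricted = list(all_lists())
broken = canonical(unrestricted)
print(len(unrestricted), len(broken))   # 65 16: same shapes, far fewer models
```

With only four atoms the candidate models drop from 65 to 16; at the bounds a real analysis uses, discarding isomorphic models in the propositional encoding is what makes the orders-of-magnitude difference plausible.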

  • Verifying Protocol Conformance Using Software Model Checking for the Model-Driven Development of Embedded Systems

    Publication Year: 2013, Page(s): 1307 - 1325
    Cited by: Papers (1)

    To facilitate modular development, the use of state machines has been proposed to specify the protocol (i.e., the sequence of messages) that each port of a component can engage in. The protocol conformance checking problem consists of determining whether the actual behavior of a component conforms to the protocol specifications on its ports. In this paper, we consider this problem in the context of the model-driven development (MDD) of embedded systems based on UML 2, in which UML 2 state machines are used to specify component behavior. We provide a definition of conformance which slightly extends those found in the literature and reduce the conformance check to a state space exploration. We describe a tool implementing the approach using the Java PathFinder software model checker and the MDD tool IBM Rational RoseRT, discuss its application to three case studies, and show how the tool repeatedly allowed us to find unexpected conformance errors with encouraging performance. We conclude that the approach is promising for supporting the modular development of embedded components in the context of industrial applications of MDD.
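
Reducing the conformance check to a state space exploration can be sketched as a walk over the product of the two machines. The dict-based FSMs below are illustrative stand-ins for UML 2 state machines, and this disallowed-send check is a simplification of the paper's conformance relation:

```python
# Walk the product of the component's behavior machine and the port's
# protocol machine, and report any message the component can emit that
# the protocol does not allow. All states and messages are invented.

protocol = {                      # port protocol: req, then ack or nack
    "idle": {"req": "waiting"},
    "waiting": {"ack": "idle", "nack": "idle"},
}

component = {                     # component behavior on this port
    "s0": {"req": "s1"},
    "s1": {"ack": "s0", "req": "s2"},   # a second req before any ack
}

def conformance_errors(component, protocol, start=("s0", "idle")):
    """Explore the product state space; collect disallowed messages."""
    errors, seen, stack = [], {start}, [start]
    while stack:
        c_state, p_state = stack.pop()
        for message, c_next in component.get(c_state, {}).items():
            p_next = protocol.get(p_state, {}).get(message)
            if p_next is None:                      # protocol forbids it
                errors.append((c_state, p_state, message))
            elif (c_next, p_next) not in seen:
                seen.add((c_next, p_next))
                stack.append((c_next, p_next))
    return errors

errs = conformance_errors(component, protocol)
print(errs)   # [('s1', 'waiting', 'req')]
```

A software model checker plays the role of the explorer in the paper's tool chain; the sketch above only shows why the check terminates on finite machines and what a reported conformance error looks like.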


Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:
  • development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
  • assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurement and evaluation of various aspects of the process and product;
  • software project management, e.g., productivity factors, cost models, schedule and organizational issues, standards;
  • tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues;
  • system issues, e.g., hardware-software trade-offs; and
  • state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.


Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org