
Automated Software Engineering, 2006. ASE '06. 21st IEEE/ACM International Conference on

Date: 18-22 Sept. 2006


Displaying Results 1 - 25 of 74
  • 21st IEEE International Conference on Automated Software Engineering - Cover

    Page(s): c1
    Freely Available from IEEE
  • 21st IEEE International Conference on Automated Software Engineering - Title

    Page(s): i - iii
    Freely Available from IEEE
  • 21st IEEE International Conference on Automated Software Engineering - Copyright

    Page(s): iv
    Freely Available from IEEE
  • 21st IEEE International Conference on Automated Software Engineering - Table of contents

    Page(s): v - ix
    Freely Available from IEEE
  • Preface

    Page(s): x
    Freely Available from IEEE
  • Conference Committee

    Page(s): xi
    Freely Available from IEEE
  • Program Committee

    Page(s): xii
    Freely Available from IEEE
  • Program Committee

    Page(s): xiii
    Freely Available from IEEE
  • Expert reviewer panel

    Page(s): xiv
    Freely Available from IEEE
  • Introduction to tool demonstrations

    Page(s): xv
    Freely Available from IEEE
  • Verifying Specifications with Proof Scores in CafeOBJ

    Page(s): 3 - 10

Verifying specifications is still one of the most important underdeveloped research topics in software engineering. It is important because quite a few critical bugs arise at the level of domains, requirements, and/or designs. It is also important in cases where no program code is generated and specifications are analyzed and verified only to justify models of real-world problems. This paper surveys our research activities in verifying specifications with proof scores in CafeOBJ. After explaining the fundamental issues and the importance of verifying specifications, we give an overview of the CafeOBJ language and of the proof score approach in CafeOBJ, including its applications to several areas. This paper is based on our already published books and papers (Diaconescu and Futatsugi, 1998; Futatsugi et al., 2005), and refers to many of our related publications; interested readers are invited to look into them.

  • Winning the DARPA Grand Challenge: A Robot Race through the Mojave Desert


Summary form only given. The DARPA Grand Challenge was the most significant event in the field of robotics in more than a decade. A mobile ground robot had to traverse 132 miles of punishing desert terrain in less than ten hours. In 2004, the best robot made only 7.3 miles. A year later, Stanford won this historic challenge and cashed the $2 million prize. This talk, delivered by the leader of the Stanford Racing Team, will provide insights into the software architecture of Stanford's winning robot "Stanley." The robot relied heavily on advanced artificial intelligence, and it used a pipelined architecture to turn sensor data into vehicle controls. The talk will introduce the audience to the fascinating world of autonomous robotics, share many of the race insights, and discuss some of the implications for the future of our society.

  • Automatic Property Checking for Software: Past, Present and Future


Summary form only given. Over the past few years, we have seen several automatic static analysis tools developed and deployed in industrial-strength software development. I will survey several of these tools, ranging from heuristic and scalable analysis tools (such as PREfix, PREfast, and Metal) to sound analysis tools based on counterexample-driven refinement (such as SLAM). Then, I will present two exciting recent developments in counterexample-driven refinement: (1) generalizing counterexample-driven refinement to work with any abstract interpretation, and (2) combining directed testing with counterexample-driven refinement.

  • Automated Information Aggregation for Scaling Scale-Resistant Services

    Page(s): 15 - 24

Machine learning provides techniques to monitor system behavior and predict failures from sensor data. However, such algorithms are "scale resistant": they have high computational complexity and are not parallelizable. The problem then becomes identifying and delivering the relevant subset of the vast amount of sensor data to each monitoring node, despite the lack of explicit "relevance" labels. The simplest solution is to deliver only the "closest" data items under some distance metric. We demonstrate a better approach using a more sophisticated architecture: a scalable data aggregation and dissemination overlay network uses an influence metric, reflecting the relative influence of one node's data on another, to efficiently deliver a mix of raw and aggregated data to the monitoring components, enabling the application of machine learning tools to real-world problems. We term our architecture "level of detail," after an analogous computer graphics technique.

  • Generating Domain-Specific Visual Language Editors from High-level Tool Specifications

    Page(s): 25 - 36

Domain-specific visual language editors are useful in many areas of software engineering, but developing such editors is challenging and time-consuming. We describe an approach to generating a wide range of these graphical editors for use as plug-ins to the Eclipse environment. Tool specifications from an existing meta-tool, Pounamu, are interpreted to produce dynamic, multi-view, multi-user Eclipse graphical editors. We describe the architecture and implementation of our approach, examples of its use in realizing domain-specific modelling tools, and the strengths and limitations of the approach.

  • An Automated Formal Approach to Managing Dynamic Reconfiguration

    Page(s): 37 - 46

Dynamic reconfiguration is the process of making changes to software at run-time. The motivation for this is typically to facilitate adaptive systems that change their behavior in response to changes in their operating environment, or to allow systems with a requirement for continuous service to evolve uninterrupted. To enable the development of reconfigurable applications, we have developed OpenRec, a framework which comprises a reflective component model plus an open and extensible reconfiguration management infrastructure. Recently, we have extended OpenRec to verify whether an intended (re)configuration would satisfy an application's structural constraints. Consequently, OpenRec can automatically veto proposed changes that would violate configuration constraints. This functionality has been realized by integrating OpenRec with the ALLOY Analyzer tool via a service-oriented architecture. ALLOY is a formal modelling notation which can be used to specify systems and associated constraints. In this paper, we present an overview of the OpenRec framework. In addition, we describe the application of ALLOY to modelling reconfigurable component-based systems and highlight some interesting experiences with integrating OpenRec and the ALLOY Analyzer.

  • Differencing and Merging of Architectural Views

    Page(s): 47 - 58

Existing approaches to differencing and merging architectural views are based on restrictive assumptions, such as requiring view elements to have unique identifiers or exactly matching types. We propose an approach based on structural information, generalizing a published polynomial-time tree-to-tree correction algorithm (which detects inserts, renames, and deletes) into a novel algorithm that additionally detects restricted moves and supports forcing and preventing matches between view elements. We incorporate the algorithm into tools to compare and merge component-and-connector (C&C) architectural views. Finally, we provide an empirical evaluation of the algorithm on case studies to find and reconcile interesting divergences between architectural views.

  • An Empirical Comparison of Automated Generation and Classification Techniques for Object-Oriented Unit Testing

    Page(s): 59 - 68

Testing involves two major activities: generating test inputs and determining whether they reveal faults. Automated test generation techniques include random generation and symbolic execution. Automated test classification techniques include ones based on uncaught exceptions and on violations of operational models inferred from manually provided tests. Previous research on unit testing for object-oriented programs developed three pairs of these techniques: model-based random testing, exception-based random testing, and exception-based symbolic testing. We develop a novel pair, model-based symbolic testing. We also empirically compare all four pairs of these generation and classification techniques. The results show that the pairs are complementary (i.e., they reveal faults differently), each with its respective strengths and weaknesses.

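One such generation/classification pair can be sketched in a few lines. The sketch below is illustrative only and not the paper's implementation: `generate_random_inputs`, `classify_by_exception`, and the toy unit under test `integer_divide` are hypothetical names chosen here to show how random generation pairs with exception-based classification.

```python
import random

def generate_random_inputs(n, seed=0):
    """Random test generation: draw integer pairs as toy unit-test inputs."""
    rng = random.Random(seed)
    return [(rng.randint(-100, 100), rng.randint(-100, 100)) for _ in range(n)]

def classify_by_exception(func, inputs):
    """Exception-based classification: flag inputs whose execution raises
    an uncaught exception as likely fault-revealing."""
    failing = []
    for args in inputs:
        try:
            func(*args)
        except Exception:
            failing.append(args)
    return failing

def integer_divide(a, b):
    # Toy unit under test: any input with b == 0 triggers a fault.
    return a // b

suspects = classify_by_exception(integer_divide, generate_random_inputs(500))
```

A model-based classifier would replace the bare `except` test with checks against an inferred operational model, which is where the four pairs in the paper differ.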
  • Command-Form Coverage for Testing Database Applications

    Page(s): 69 - 80

The testing of database applications poses new challenges for software engineers. In particular, it is difficult to thoroughly test the interactions between an application and its underlying database, which typically occur through dynamically generated database commands. Because traditional code-based coverage criteria focus only on the application code, they are often inadequate in exercising these commands. To address this problem, we introduce a new test adequacy criterion that is based on coverage of the database commands generated by an application and specifically focuses on the application-database interactions. We describe the criterion, an analysis that computes the corresponding testing requirements, and an efficient technique for measuring coverage of these requirements. We also present a tool that implements our approach and a preliminary study that shows the approach's potential usefulness and feasibility.

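The core idea of a command-form criterion can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's analysis: here a "command form" is approximated by masking literals in the generated SQL text, and `command_form`/`coverage` are hypothetical helper names.

```python
import re

def command_form(sql):
    """Abstract a concrete SQL command into its 'form' by masking literals.
    (Simplification: the paper derives forms via program analysis, not
    textual masking.)"""
    form = re.sub(r"'[^']*'", "'?'", sql)   # mask string literals
    form = re.sub(r"\b\d+\b", "?", form)    # mask numeric literals
    return form

def coverage(executed_commands, required_forms):
    """Fraction of required command forms exercised by the executed commands."""
    covered = {command_form(c) for c in executed_commands}
    return len(covered & required_forms) / len(required_forms)

required = {
    "SELECT * FROM users WHERE id = ?",
    "UPDATE users SET name = '?' WHERE id = ?",
}
executed = [
    "SELECT * FROM users WHERE id = 7",
    "SELECT * FROM users WHERE id = 42",
]
# Both executed commands collapse to the same form, so only one of the
# two required forms is exercised.
```

Statement-level code coverage would count the code that builds these strings as covered after one run; the command-form view exposes that the `UPDATE` interaction was never exercised.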
  • Automatic Identification of Bug-Introducing Changes

    Page(s): 81 - 90

Bug-fixes are widely used for predicting bugs or finding risky parts of software. However, a bug-fix does not contain information about the change that initially introduced the bug. Such bug-introducing changes can help identify important properties of software bugs, such as correlated factors or causalities. For example, they reveal which developers or what kinds of source code changes introduce more bugs. In contrast to bug-fixes, which are relatively easy to obtain, the extraction of bug-introducing changes is challenging. In this paper, we present algorithms to automatically and accurately identify bug-introducing changes. We remove false positives and false negatives by using annotation graphs and by ignoring non-semantic source code changes and outlier fixes. Additionally, we validated by manual inspection that the fixes we used are true fixes. Altogether, our algorithms remove about 38%-51% of false positives and 14%-15% of false negatives compared to the previous algorithm. Finally, we show applications of bug-introducing changes that demonstrate their value for research.

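The basic mapping from a fix back to its suspect origin can be sketched as follows. This is a minimal sketch of the general SZZ-style idea, not the paper's algorithm: `blame` stands in for per-line annotation data, and the `cosmetic` predicate is a hypothetical stand-in for the paper's filtering of non-semantic changes.

```python
def bug_introducing_changes(fix_deleted_lines, blame, cosmetic):
    """Lines deleted or modified by a bug-fix are traced back, via per-line
    annotations, to the commits that last touched them; purely cosmetic
    lines are ignored to cut false positives."""
    suspects = set()
    for line in fix_deleted_lines:
        if cosmetic(line):
            continue                 # e.g. whitespace or comment-only change
        suspects.add(blame[line])    # commit that last modified this line
    return suspects

# Toy annotation data: line number -> last-modifying commit.
blame = {10: "c1", 11: "c2", 12: "c3"}
# Pretend line 12 was only a comment tweak, so c3 should not be blamed.
cosmetic = lambda line: line == 12
```

The paper's annotation graphs refine exactly the weak point of this sketch: naive line-based blame mis-attributes lines that merely moved or were reformatted between commits.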
  • Modularity Analysis of Logical Design Models

    Page(s): 91 - 102

Traditional design representations are inadequate for generalized reasoning about modularity in design and its technical and economic implications. We have developed an architectural modeling and analysis approach, and automated tool support, for improved reasoning in these terms. However, the complexity of constraint satisfaction limited the size of models that we could analyze. The contribution of this paper is a more scalable approach. We exploit the dominance relations in our models to guide a divide-and-conquer algorithm, which we have implemented in our Simon tool. We evaluate its performance in case studies. The approach reduced the time needed to analyze small but representative models from hours to seconds. This work appears to make our modeling and analysis approach practical for research on the evolvability and economic properties of software design architectures.

  • A Portable Compiler-Integrated Approach to Permanent Checking

    Page(s): 103 - 112

Program checking technology is now mature, but it is not yet used on a large scale. We identify one cause of this gap in the decoupling of checking tools from everyday development tools. To radically change the situation, we explore the integration of simple user-defined checks into the core of every development process: the compiler. The checks we implement express constrained reachability queries in the control flow graph, taking the form "from x to y avoiding z", where x, y, and z are native code patterns containing a blend of syntactic, semantic, and dataflow information. Compiler integration enables continuous checking throughout development, but also a pervasive propagation of checking technology. This integration poses some interesting challenges, but opens up new perspectives. Factorizing analyses between checking and compiling improves both the efficiency and the expressiveness of the checks. Minimalist user properties and language-independent code pattern matching ensure that our approach can be integrated almost for free in any compiler for any language. We illustrate this approach with a full-fledged checking compiler for C. We demonstrate the need for permanent checking by partially analyzing two different releases of the Linux kernel.

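The "from x to y avoiding z" query is, at heart, reachability in the control flow graph with one node family excluded. The sketch below illustrates only that kernel of the idea under simplifying assumptions (nodes are plain strings rather than the paper's code patterns); `reachable_avoiding` and the toy CFG are names invented here.

```python
from collections import deque

def reachable_avoiding(cfg, x, y, z):
    """Is there a path in the control-flow graph from x to y that never
    passes through z? A True answer witnesses a 'from x to y avoiding z'
    property (often a violation, e.g. reaching exit without unlocking)."""
    if x == z:
        return False
    seen, work = {x}, deque([x])
    while work:
        node = work.popleft()
        if node == y:
            return True
        for succ in cfg.get(node, []):
            if succ != z and succ not in seen:
                seen.add(succ)
                work.append(succ)
    return False

# Toy CFG with an early return that skips the unlock.
cfg = {
    "lock":   ["use"],
    "use":    ["unlock", "return"],
    "unlock": ["exit"],
    "return": ["exit"],
}
```

Here the query "from lock to exit avoiding unlock" succeeds via `lock -> use -> return -> exit`, flagging the path that releases no lock, whereas "from lock to exit avoiding use" fails because every path goes through `use`.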
  • Integrating and Scheduling an Open Set of Static Analyses

    Page(s): 113 - 122

To improve the productivity of the development process, more and more tools for static software analysis are tightly integrated into the incremental build process of an IDE. If multiple interdependent analyses are used simultaneously, coordination between the analyses becomes a major obstacle to keeping the set of analyses open. We propose an approach to integrating and scheduling an open set of static analyses which decouples the individual analyses and coordinates the analysis executions such that the overall time and space consumption is minimized. The approach has been implemented for the Eclipse IDE and has been used to integrate a wide range of analyses, such as finding bug patterns, detecting violations of design guidelines, and type system extensions for Java.

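The scheduling part of such a framework reduces, in its simplest form, to ordering analyses by their declared dependencies so that shared intermediate results are computed once. The sketch below is a minimal illustration of that idea, not the paper's scheduler; the analysis names and the dependency table are invented for the example.

```python
from graphlib import TopologicalSorter

# Each analysis declares the intermediate results it depends on
# (hypothetical names). The scheduler derives an execution order in
# which every result is available before its consumers run.
dependencies = {
    "bug-pattern-check":      {"call-graph"},
    "design-guideline-check": {"type-info", "call-graph"},
    "call-graph":             {"type-info"},
    "type-info":              set(),
}

order = list(TopologicalSorter(dependencies).static_order())
```

A real scheduler must do more than topological sorting (e.g. decide when a cached result may be evicted to bound space consumption), but the dependency ordering is the invariant everything else builds on.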
  • Reverse Engineering of Design Patterns from Java Source Code

    Page(s): 123 - 134

Recovering design patterns can enhance existing source code analysis tools by bringing program understanding to the design level. This paper presents a new, fully automated pattern detection approach. The new approach is based on our reclassification of the GoF patterns by their pattern intent. We argue that the GoF pattern catalog classifies design patterns in the forward-engineering sense; our reclassification is better suited for reverse engineering. Our approach uses lightweight static program analysis techniques to capture program intent. This paper also describes our tool, PINOT, which implements this new approach. PINOT detects all the GoF patterns that have concrete definitions driven by code structure or system behavior. Our tool is faster, more accurate, and targets more patterns than existing pattern detection tools. PINOT has been used successfully in detecting patterns in Java AWT, JHotDraw, Swing, Apache Ant, and many other programs and packages.

  • ArchTrace: Policy-Based Support for Managing Evolving Architecture-to-Implementation Traceability Links

    Page(s): 135 - 144

Traditional techniques of traceability detection and management are not equipped to handle evolution. This is a problem for the field of software architecture, where it is critical to keep an evolving conceptual architecture synchronized with its realization in an evolving code base. ArchTrace is a new tool that addresses this problem through a policy-based infrastructure for automatically updating traceability links every time an architecture or its code base evolves. ArchTrace is pluggable, allowing developers to choose a set of traceability management policies that best match their situational needs and working styles. We discuss ArchTrace, its conceptual basis, its implementation, and our evaluation of its strengths and weaknesses in a retrospective analysis of data collected from a 20-month period of development of Odyssey, a large-scale software development environment. Results are promising: with respect to the ideal set of traceability links, the policies applied resulted in 95% precision at 89% recall.
