
Ninth European Conference on Software Maintenance and Reengineering (CSMR 2005)

Date: 21-23 March 2005

  • Proceedings. Ninth European Conference on Software Maintenance and Reengineering

  • Ninth European Conference on Software Maintenance and Reengineering - Title Page

    Page(s): i - iii
  • Ninth European Conference on Software Maintenance and Reengineering - Copyright Page

    Page(s): iv
  • Ninth European Conference on Software Maintenance and Reengineering - Table of contents

    Page(s): v - viii
  • Message from Chair

    Page(s): ix
  • Message from Program Chair

    Page(s): x
  • Committees

    Page(s): xi - xiii
  • Additional reviewers

    Page(s): xiv
  • Characterizing the Evolution of Class Hierarchies

    Page(s): 2 - 11

    Analyzing historical information can show how a software system evolved into its current state, which parts of the system are stable, and which have changed more. However, historical analysis implies processing a vast amount of information, making the interpretation of the results difficult. To address this issue, we introduce the notion of the history of source code artifacts as a first-class entity and define measurements which summarize the evolution of such entities. We use these measurements to define rules by which to detect different characteristics of the evolution of class hierarchies. Furthermore, we discuss the results we obtained by visualizing them using a polymetric view. We apply our approach to two large open source case studies and classify their class hierarchies based on their history.

  • Performance Prediction Based on Knowledge of Prior Product Versions

    Page(s): 12 - 20

    Performance estimation is traditionally carried out when measurements from a product can be obtained. In many cases, however, there is a need to start making predictions earlier in a development project, for example when different architectures are compared. In this paper, two methods for subjective prediction of performance are investigated. With one method, experts estimate the relative resource usage of software tasks without using any knowledge of earlier versions of the product; with the other, experts use their experience and knowledge of earlier versions of the system. With both methods there are rather large differences between individual predictions, but the median of the prediction error indicates that the second method is worth further investigation.

  • Exploring the Relationship between Cumulative Change and Complexity in an Open Source System

    Page(s): 21 - 29

    This paper explores the relationship between cumulative change and complexity in an evolving Open Source system. The study involves measurements at the function and file level. To measure cumulative change, the approach uses a metric termed release-touches, which counts the number of releases for which a given file has been modified. Based on the value of this metric, we ranked the files and used the ranking to identify two groups: the more stable and the less stable parts of the source code. Complexity was measured using two derivatives of the McCabe index. Histograms and distributions were visually and statistically analyzed. The results empirically suggest that at the file level there are correlations between high cumulative change, large size, and high complexity. This paper provides an approach for identifying which functions need to be refactored first if one wishes to reduce the complexity of the system.

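    A minimal sketch of the "release-touches" idea summarized above: count, for each file, the number of releases in which it was modified, then rank the files so the most frequently touched (less stable) part of the code surfaces first. The release data and file names below are hypothetical, not taken from the paper's case study.

        from collections import Counter

        # Hypothetical release history: release id -> set of files modified in it.
        releases = {
            "1.0": {"core/parser.c", "core/lexer.c", "util/log.c"},
            "1.1": {"core/parser.c", "util/log.c"},
            "1.2": {"core/parser.c", "gui/window.c"},
        }

        # release-touches: number of releases for which a given file was modified.
        release_touches = Counter()
        for changed_files in releases.values():
            release_touches.update(changed_files)

        # Ranking by cumulative change; the top of the list is the less stable
        # part of the source code and a candidate for refactoring effort.
        for path, touches in release_touches.most_common():
            print(f"{touches:3d}  {path}")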
  • ADAMS Re-Trace: A Traceability Recovery Tool

    Page(s): 32 - 41

    We present the traceability recovery tool developed in the ADAMS artefact management system. The tool is based on an Information Retrieval technique, namely Latent Semantic Indexing, and aims at supporting the software engineer in identifying traceability links between artefacts of different types. We also present a case study involving seven student projects which represented an ideal workbench for the tool. The results emphasise the benefits provided by the tool in terms of new traceability links discovered, in addition to the links manually traced by the software engineer. Moreover, the tool was also helpful in identifying cases of lack of similarity between artefacts manually traced by the software engineer, thus revealing inconsistencies in the usage of domain terms in these artefacts. This information is valuable for assessing the quality of the produced artefacts.

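    A minimal, hypothetical sketch of traceability recovery with Latent Semantic Indexing in the spirit of the abstract above: artefacts become columns of a term-by-document matrix, the matrix is reduced with a truncated SVD, and candidate links are ranked by cosine similarity. The artefact texts are invented; ADAMS itself and its term-weighting scheme are not reproduced here.

        import numpy as np

        artefacts = {
            "UC1 withdraw cash": "user withdraws cash from the account at the terminal",
            "UC2 check balance": "user checks the account balance",
            "class Account":     "account balance withdraw deposit owner",
            "class Terminal":    "terminal session cash dispenser user",
        }

        # Term-by-document matrix of raw term frequencies (no tf-idf, for brevity).
        vocab = sorted({w for text in artefacts.values() for w in text.split()})
        names = list(artefacts)
        A = np.array([[text.split().count(w) for text in artefacts.values()]
                      for w in vocab], dtype=float)

        # Rank-k approximation and cosine similarity in the reduced space.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        k = 2
        docs = (np.diag(s[:k]) @ Vt[:k]).T            # one row per artefact
        norms = np.linalg.norm(docs, axis=1, keepdims=True)
        docs = docs / np.where(norms == 0, 1.0, norms)

        # Pairs with high cosine similarity are candidate traceability links.
        for i, a in enumerate(names):
            for j in range(i + 1, len(names)):
                print(f"{a} <-> {names[j]}: {docs[i] @ docs[j]:+.2f}")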
  • An XML-Based Framework for Language Neutral Program Representation and Generic Analysis

    Page(s): 42 - 51

    XML applications are becoming increasingly popular for defining structured or semi-structured constrained data for special application areas. Accordingly, there is a growing momentum of activity around XML representations of source code in the area of program comprehension and software re-engineering. The source code and the artifacts extracted from a program are inherently structured information that needs to be stored and exchanged among different tools, which makes XML a natural choice as an external representation format for programs. Most of the XML representations proposed so far abstract the source code at the AST level. These AST representations are tightly coupled with the language grammar of the source code and hence require the development of different tools for different programming languages to perform the same type of analysis. Moreover, an AST abstracts the program at a very fine level of granularity and is therefore not suitable for direct use in higher-level, sophisticated program analysis. We therefore propose XML applications for language-neutral representation of programs at different levels of abstraction and, by combining them, present a program representation framework that facilitates the development of generic program analysis tools.

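    A purely hypothetical sketch of what a language-neutral, higher-level XML representation of a program might look like, in the spirit of the abstract above. The element and attribute names are invented for illustration and are not the schema proposed by the paper.

        import xml.etree.ElementTree as ET

        # Coarse-grained, language-neutral model of one class (invented schema).
        model = ET.Element("program", {"language": "java"})
        cls = ET.SubElement(model, "class", {"name": "Account"})
        ET.SubElement(cls, "attribute", {"name": "balance", "type": "numeric"})
        m = ET.SubElement(cls, "method", {"name": "withdraw", "visibility": "public"})
        ET.SubElement(m, "calls", {"target": "Logger.log"})

        # The same coarse model could be produced from C++ or Java sources, so a
        # generic analysis tool only needs to understand this one format.
        print(ET.tostring(model, encoding="unicode"))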
  • Model Synthesis for Real-Time Systems

    Page(s): 52 - 60

    In this paper, we present a method for model synthesis. Based on observations of a running system, a model that can describe the observed behavior is automatically generated. This allows faster and more accurate modeling of existing systems. The models can be used for impact analysis, verification, documentation, etc. The method has been implemented; we describe that implementation and present an evaluation of its performance, the conclusion of which is in favor of the proposed method.

  • Discovering Unanticipated Dependency Schemas in Class Hierarchies

    Page(s): 62 - 71

    Object-oriented applications are difficult to extend and maintain due to the presence of implicit dependencies in the inheritance hierarchy. Although these dependencies often correspond to well-known schemas, such as hook and template methods, new unanticipated dependency schemas occur in practice and can consequently be hard to recognize and detect. To tackle this problem, we have applied Concept Analysis to automatically detect recurring dependency schemas in class hierarchies used in object-oriented applications. In this paper we describe our mapping of OO dependencies to the formal framework of Concept Analysis, apply our approach to a non-trivial case study, and report on the kinds of dependencies that are uncovered with this technique. As a result, we show how the discovered dependency schemas correspond not only to good design practices but also to "bad smells" in design.

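    A hypothetical, minimal sketch of the Concept Analysis step suggested above: the objects are methods of a class hierarchy, the attributes are simple dependency facts about them, and the formal concepts group recurring combinations of facts, which is the kind of structure the paper interprets as dependency schemas. The incidence table and fact names are invented.

        from itertools import combinations

        # Toy formal context: method -> dependency facts that hold for it.
        context = {
            "A.draw":   {"overrides", "calls_super"},
            "B.draw":   {"overrides", "calls_super"},
            "A.resize": {"overrides"},
            "C.init":   {"calls_hook"},
            "D.init":   {"calls_hook"},
        }

        def common_attrs(objs):
            sets = [context[o] for o in objs]
            return set.intersection(*sets) if sets else set()

        def objects_with(attrs):
            return {o for o, facts in context.items() if attrs <= facts}

        # Naive enumeration of formal concepts via the closure (A'', A').
        concepts = set()
        for r in range(1, len(context) + 1):
            for subset in combinations(context, r):
                attrs = common_attrs(set(subset))
                extent = frozenset(objects_with(attrs))
                concepts.add((extent, frozenset(common_attrs(extent))))

        # Each concept with a non-empty intent is a recurring dependency schema.
        for extent, intent in sorted(concepts, key=lambda c: -len(c[0])):
            if intent:
                print(sorted(extent), "share", sorted(intent))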
  • Extracting Entity Relationship Diagram from a Table-Based Legacy Database

    Page(s): 72 - 79

    Current database reverse engineering research presumes that the information regarding the semantics of attributes, primary keys, and foreign keys in database tables is complete. However, this may not be the case. In this paper, we present a process that extracts an extended entity relationship diagram from a table-based database that has little description of the fields in its tables and no description of keys. The primary inputs of our approach are system display forms and table schemas. An extended ER diagram is successfully extracted in a case study.

  • Tracing Cross-Cutting Requirements via Context-Based Constraints

    Page(s): 80 - 90

    In complex systems, it is difficult to identify which system element is involved in which requirement. In this article, we present a new approach for expressing and validating a requirement even if we don’t precisely know which system elements are involved: a context-based constraint (CoCon) can identify the involved elements according to their context. CoCons support checking the system for compliance with requirements during (re-)design, during (re-)configuration, or at runtime, because they specify requirements on an abstract level, independent of the monitored artefact type. They facilitate handling cross-cutting requirements for possibly large, overlapping, or dynamically changing sets of system elements, even across different artefact types or platforms. Besides defining CoCons, we discuss algorithms for detecting violated or contradicting CoCons.

  • Towards the Optimization of Automatic Detection of Design Flaws in Object-Oriented Software Systems

    Page(s): 92 - 101

    In order to increase the maintainability and flexibility of a software system, its design and implementation quality must be properly assessed. For this purpose, a large number of metrics and several higher-level, metrics-based mechanisms have been defined in the literature. But the accuracy of these quantification means depends heavily on the proper selection of threshold values, which is oftentimes totally empirical and unreliable. In this paper we present a novel method for establishing proper threshold values for metrics-based rules used to detect design flaws in object-oriented systems. The method, metaphorically called the "tuning machine", infers the threshold values from a set of reference examples manually classified into "flawed" and "healthy" design entities (e.g., classes, methods). More precisely, the "tuning machine" searches, using a genetic algorithm, for those thresholds which maximize the number of correctly classified entities. The paper also defines a repeatable process for collecting examples, and discusses the encouraging and intriguing results obtained by applying the approach to two concrete metrics-based rules that quantify two well-known design flaws, i.e., "God Class" and "Data Class".

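    A minimal, hypothetical sketch of the "tuning machine" idea described above: an invented God Class rule over two metrics (WMC and ATFD) has its thresholds tuned by a tiny genetic algorithm so that a small set of manually labelled reference classes is classified as well as possible. The metric values, labels and GA parameters are all made up for illustration.

        import random

        random.seed(1)

        # Reference examples: (WMC, ATFD, manually labelled as flawed?).
        examples = [
            (60, 12, True), (45, 9, True), (50, 15, True),
            (12, 1, False), (20, 3, False), (8, 0, False), (30, 4, False),
        ]

        def fitness(thresholds):
            """Number of examples the rule classifies correctly."""
            wmc_t, atfd_t = thresholds
            return sum((wmc >= wmc_t and atfd >= atfd_t) == flawed
                       for wmc, atfd, flawed in examples)

        population = [(random.uniform(0, 80), random.uniform(0, 20)) for _ in range(20)]
        for _ in range(50):                      # generations
            population.sort(key=fitness, reverse=True)
            parents = population[:10]
            children = []
            for _ in range(10):
                a, b = random.sample(parents, 2)
                children.append(((a[0] + b[0]) / 2 + random.gauss(0, 2),   # crossover
                                 (a[1] + b[1]) / 2 + random.gauss(0, 1)))  # + mutation
            population = parents + children

        best = max(population, key=fitness)
        print(f"best thresholds: WMC >= {best[0]:.1f} and ATFD >= {best[1]:.1f} "
              f"({fitness(best)}/{len(examples)} examples classified correctly)")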
  • Design Pattern Recovery by Visual Language Parsing

    Page(s): 102 - 111

    We propose an object-oriented (OO) design pattern recovery approach which makes use of a design pattern library, expressed in terms of visual grammars, and is based on a visual language parsing technique. We also present a visual environment which supports the pattern recognition process by automatically retrieving design patterns from imported UML class diagrams. The visual environment has been automatically generated through the VLDesk system, starting from a description of the design pattern grammar.

  • Recovering Behavioral Design Models from Execution Traces

    Page(s): 112 - 121

    Recovering behavioral design models from execution traces is not an easy task, due to the sheer size of typical traces. In this paper, we describe a novel technique for achieving this. Our approach is based on filtering traces by distinguishing the utility components from the ones that implement high-level concepts. In the paper, we first define the concept of utilities; then we present an algorithm based on fan-in analysis that can be used to detect utilities. To represent the high-level behavioral models, we explore the Use Case Map (UCM) notation, a language used to describe and understand the emergent behavior of complex and dynamic systems. Finally, we test the validity of our approach on an object-oriented system called TConfig.

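    A minimal sketch of the fan-in idea in the abstract above, with an invented trace: routines called from many distinct callers are flagged as utilities and filtered out before a higher-level behavioural view is built. The threshold and the trace data are hypothetical.

        from collections import defaultdict

        # Hypothetical execution trace as (caller, callee) pairs.
        trace = [
            ("Order.submit", "Logger.log"),
            ("Order.submit", "Cart.total"),
            ("Cart.total", "Logger.log"),
            ("Cart.total", "Formatter.money"),
            ("Invoice.print", "Logger.log"),
            ("Invoice.print", "Formatter.money"),
        ]

        # Fan-in: number of distinct callers per callee.
        callers = defaultdict(set)
        for caller, callee in trace:
            callers[callee].add(caller)

        FAN_IN_THRESHOLD = 3          # arbitrary cut-off for this toy example
        utilities = {c for c, cs in callers.items() if len(cs) >= FAN_IN_THRESHOLD}
        filtered = [(a, b) for a, b in trace if b not in utilities]

        print("utilities:", sorted(utilities))
        print("filtered trace:", filtered)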
  • Software Clustering Based on Dynamic Dependencies

    Page(s): 124 - 133

    The reverse engineering literature contains many software clustering approaches that attempt to cluster large software systems based on the static dependencies between software artifacts. However, the usefulness of clustering based on dynamic dependencies has not been investigated. It is possible that dynamic clusterings can provide a fresh outlook on the structure of a large software system. In this paper, we present an approach for the evaluation of dynamic clusterings. We apply this approach to a large open source software system, and present experimental results that suggest that dynamic clusterings have considerable merit.

  • Applying Webmining Techniques to Execution Traces to Support the Program Comprehension Process

    Page(s): 134 - 142

    Well-designed object-oriented programs typically consist of a few key classes that work tightly together to provide the bulk of the functionality. As such, these key classes are excellent starting points for the program comprehension process. We propose a technique that uses web-mining principles on execution traces to discover these important and tightly interacting classes. Based on two medium-scale case studies, Apache Ant and Jakarta JMeter, and on detailed architectural information from their developers, we show that our heuristic does in fact find a sizeable number of the classes deemed important by the developers.

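    A hypothetical sketch of the web-mining idea above: the classes observed in an execution trace are treated like web pages, the calls between them like links, and a HITS-style hub/authority iteration ranks the classes. The call pairs and the specific ranking heuristic are invented for illustration and may differ from the one used in the paper.

        from collections import defaultdict

        # (calling class, called class) pairs extracted from a trace -- invented.
        calls = [
            ("Main", "Project"), ("Project", "Task"), ("Project", "Target"),
            ("Target", "Task"), ("Task", "Logger"), ("Target", "Logger"),
        ]

        out_links, in_links, nodes = defaultdict(set), defaultdict(set), set()
        for a, b in calls:
            out_links[a].add(b)
            in_links[b].add(a)
            nodes |= {a, b}

        hub = {n: 1.0 for n in nodes}
        auth = {n: 1.0 for n in nodes}
        for _ in range(20):                      # a few power iterations suffice here
            auth = {n: sum(hub[m] for m in in_links[n]) for n in nodes}
            norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
            auth = {n: v / norm for n, v in auth.items()}
            hub = {n: sum(auth[m] for m in out_links[n]) for n in nodes}
            norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
            hub = {n: v / norm for n, v in hub.items()}

        # Classes scoring high as hubs and/or authorities are candidate key classes.
        for n in sorted(nodes, key=lambda n: -(hub[n] + auth[n])):
            print(f"{n:8} hub={hub[n]:.2f} auth={auth[n]:.2f}")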
  • A Comparison of Online and Dynamic Impact Analysis Algorithms

    Page(s): 143 - 152

    Impact analysis is the process of determining the effect, or impact, of a change to a software system. Dynamic impact analysis uses data obtained from executing a program, performing the analysis after program termination, to determine impacts more in line with how the program is actually used. Online impact analysis has the same goal, but is performed concurrently with program execution. While some of the tradeoffs between dynamic algorithms have been studied, no such study has been performed for online algorithms. In this paper, we present such a study by comparing two online algorithms and two previously published dynamic algorithms in terms of their space overhead, computation time, computed impact sets, and scalability. Our results indicate that performing impact analysis online can be more scalable than its dynamic counterparts.

  • Legacy applications - a case for restoration?


    Summary form only given. Although legacy applications often need attention, it is also the case that "legacy" anything has acquired a bad reputation that (in some cases at least) needs restoration. In this paper the author looks at various strategies by which large organisations with massive investments in legacy software, processes, and systems (in the large) can move forward without necessarily throwing out the baby with the bathwater.

  • Maintenance and Analysis of Visual Programs — An Industrial Case

    Page(s): 158 - 167

    A domain-specific visual language, Function Block Language (FBL), is used at Metso Automation for writing automation control programs. The same engineering environment is used for both forward and reverse engineering activities, providing convenient support for the maintenance and evolution of FBL programs. Various data and program analysis methods are applied to study the FBL programs stored in project library archives. Metadata stored about the programs allows various kinds of queries and enables focusing the analysis on certain kinds of programs. The provided analysis methods further aid maintenance and reuse activities. Software and data reverse engineering techniques are traditionally used to support program and data comprehension, respectively. In this paper we show how corresponding techniques can be used to analyze visual programs; the visual language under study is FBL. FBL and the proposed analysis techniques have been used in real-world projects at Metso Automation.
