Proceedings of the 10th International Workshop on Program Comprehension, 2002

Date: 27-29 June 2002

  • 10th International Workshop on Program Comprehension

    Page(s): iii - vii
    Freely Available from IEEE
  • Author index

    Page(s): 293
    Freely Available from IEEE
  • Comprehension of object-oriented software cohesion: the empirical quagmire

    Page(s): 33 - 42

    Chidamber and Kemerer (1991) proposed an object-oriented (OO) metric suite which included the Lack of Cohesion Of Methods (LCOM) metric. Despite considerable effort both theoretically and empirically since then, the software engineering community is still no nearer finding a generally accepted definition or measure of OO cohesion. Yet, achieving highly cohesive software is a cornerstone of software comprehension and hence, maintainability. In this paper, we suggest a number of suppositions as to why a definition has eluded (and we feel will continue to elude) us. We support these suppositions with empirical evidence from three large C++ systems and a cohesion metric based on the parameters of the class methods; we also draw from other related work. Two major conclusions emerge from the study. Firstly, any sensible cohesion metric does at least provide insight into the features of the systems being analysed. Secondly however, and less reassuringly, the deeper the investigative search for a definitive measure of cohesion, the more problematic its understanding becomes; this casts serious doubt on the use of cohesion as a meaningful feature of object-orientation and its viability as a tool for software comprehension.

  • Pattern-supported architecture recovery

    Page(s): 53 - 61

    Architectural patterns and styles represent important design decisions and thus are valuable abstractions for architecture recovery. Recognizing them is a challenge because styles and patterns typically span several architectural elements and can be implemented in various ways depending on the problem domain and the implementation variants. Our approach uses source code structures as patterns and introduces an iterative and interactive architecture recovery approach built upon such lower-level patterns extracted from source code. Associations between extracted pattern instances and architectural elements such as modules emerge, resulting in new, higher-level views of the software system. These pattern views provide information for a consecutive refinement of pattern definitions to aggregate and abstract higher-level patterns, which finally enable the description of a software system's architecture.

  • Lightweight impact analysis using island grammars

    Page(s): 219 - 228

    Impact analysis is needed for the planning and estimation of software maintenance projects. Traditional impact analysis techniques tend to be too expensive for this phase, so there is a need for more lightweight approaches. We present a technique for the generation of lightweight impact analyzers from island grammars. We demonstrate this technique using a real-world case study in which we describe how island grammars can be used to find account numbers in the software portfolio of a large bank. We show how we have implemented this analysis and achieved lightweightness using a reusable generative framework for impact analyzers.
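
    The island-grammar idea can be caricatured with an ordinary pattern scanner: the grammar gives detailed productions only for the constructs of interest (the islands, here account numbers) and treats everything else as uninteresting "water" to be skipped. A minimal sketch in Python, with a made-up account-number format; the paper's actual analyzers are generated from genuine island grammars, not regular expressions:

```python
import re

# A toy "island": candidate account numbers, either dotted or plain.
# Both formats are invented for illustration, not taken from the paper.
ISLAND = re.compile(r"\b\d{2}\.\d{2}\.\d{2}\.\d{3}\b|\b\d{9}\b")

def find_impacted_lines(source: str):
    """Return (line_number, match) pairs for each island found; the
    surrounding 'water' is never parsed, which keeps the analyzer cheap."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for m in ISLAND.finditer(line):
            hits.append((lineno, m.group()))
    return hits

demo = """\
MOVE 123456789 TO ACCT-NO.
DISPLAY 'hello'.
IF ACCT-NO = 12.34.56.789 THEN PERFORM PAY.
"""
print(find_impacted_lines(demo))
```

    Skipping the water is exactly what makes the approach lightweight: no full parser for the legacy language is ever needed.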

  • Using graph patterns to extract scenarios

    Page(s): 239 - 247

    Scenario diagrams are useful for helping software developers to understand the interactions among the components of a software system. We present a semi-automatic approach to extracting scenarios from the implementation of a software system. In our approach, the source code of a software system is represented as a graph and scenarios are specified as graph patterns. A relational calculator, Grok, is extended to support graph pattern matching. Grok, as extended, is used in our analysis of the Nautilus open source file manager. Multiple scenarios are extracted and analyzed. These scenarios have helped us to analyze Nautilus's architecture.
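
    The matching step described here, a fact graph extracted from source code queried with graph patterns, can be sketched as naive backtracking. The relation names and the `?`-variable convention below are illustrative, not Grok's actual syntax:

```python
def match_pattern(graph, pattern):
    """graph: set of (src, relation, dst) facts extracted from code.
    pattern: sequence of (a, relation, b) triples; names starting with
    '?' are variables. Yields every consistent binding (naive search)."""
    def solve(i, binding):
        if i == len(pattern):
            yield dict(binding)
            return
        a, rel, b = pattern[i]
        for s, r, d in graph:
            if r != rel:
                continue
            new = dict(binding)
            for var, node in ((a, s), (b, d)):
                if var.startswith("?"):
                    if new.setdefault(var, node) != node:
                        break          # conflicting binding
                elif var != node:
                    break              # constant does not match
            else:
                yield from solve(i + 1, new)
    return list(solve(0, {}))

# Hypothetical call facts for a scenario starting at the UI.
graph = {("ui", "calls", "open_handler"),
         ("open_handler", "calls", "read_file"),
         ("ui", "calls", "quit_handler")}
pattern = [("ui", "calls", "?h"), ("?h", "calls", "?leaf")]
print(match_pattern(graph, pattern))
```

    Each binding of the variables corresponds to one extracted scenario through the system.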

  • Experience with FADE for the visualization and abstraction of software views

    Page(s): 11 - 20

    This paper describes the FADE paradigm for visualization and a series of experiments for the fast layout, abstract representation, and measurement of software views. In program comprehension, graph models are typically used to represent relational information, where the visualization of such graphs is referred to as graph drawing. Here we present the results of an investigation into efficient techniques for drawing and abstractly representing large software views with thousands of nodes from four medium-sized software systems. The paradigm presented in this paper combines solutions to the problems of computation time, screen space, cognitive load, and rendering for large-scale drawings within a single graph model.

  • Dependence-cache slicing: a program slicing method using lightweight dynamic information

    Page(s): 169 - 177

    When we try to debug or to comprehend a large program, it is important to separate suspicious program portions from the overall source program. Program slicing is a promising technique used to extract a program portion; however, such slicing sometimes raises difficulties. Static slicing sometimes produces a large portion of a source program, especially for programs with array and pointer variables, and dynamic slicing requires unacceptably large run-time overhead. In this paper, we propose a slicing method named "dependence-cache slicing", which uses both static and dynamic information. An algorithm has been implemented in our experimental slicing system, and execution data for several sample programs have been collected. The results show that dependence-cache slicing reduces the slice size by 30-90% compared with the static slice size, with an increased and affordable run-time overhead, even for programs using array variables. In the future, dependence-cache slicing will become an important feature for effective debugging environments.
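
    The hybrid idea can be sketched as follows: during execution, a cache maps each variable (array elements keyed by their concrete index, which is where the dynamic precision over static analysis comes from) to the statement that last defined it, and the slice is the backward closure over the recorded dependences. This is a data-dependence-only sketch under invented trace encoding, not the authors' full algorithm:

```python
def dependence_cache_slice(trace, criterion):
    """trace: executed statements in order, each (stmt_id, defs, uses),
    where defs/uses are sets of variable names; an array element is keyed
    by its concrete index, e.g. 'a[0]'. Returns the set of stmt_ids on
    which the criterion statement (transitively) depends."""
    cache = {}   # variable -> stmt_id of its most recent definition
    deps = {}    # stmt_id -> stmt_ids it drew values from
    for stmt, defs, uses in trace:
        deps.setdefault(stmt, set()).update(
            cache[v] for v in uses if v in cache)
        for v in defs:
            cache[v] = stmt
    # Backward closure from the slicing criterion.
    slice_, work = set(), [criterion]
    while work:
        s = work.pop()
        if s not in slice_:
            slice_.add(s)
            work.extend(deps.get(s, ()))
    return slice_

# a[0]=1; i=0; x=a[i]; y=2; print(x)  -- statement 4 is irrelevant.
trace = [
    (1, {"a[0]"}, set()),
    (2, {"i"}, set()),
    (3, {"x"}, {"a[0]", "i"}),
    (4, {"y"}, set()),
    (5, set(), {"x"}),
]
print(sorted(dependence_cache_slice(trace, 5)))
```

    Because only the last definition per variable is cached, the run-time cost stays far below full dynamic slicing, which must keep the whole dependence history.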

  • Building program understanding tools using visitor combinators

    Page(s): 137 - 146

    Program understanding tools manipulate program representations, such as abstract syntax trees, control-flow graphs or data-flow graphs. This paper deals with the use of visitor combinators to conduct such manipulations. Visitor combinators are an extension of the well-known visitor design pattern. They are small, reusable classes that carry out specific visiting steps. They can be composed in different constellations to build more complex visitors. We evaluate the expressiveness, reusability, ease of development and applicability of visitor combinators to the construction of program understanding tools. To that end, we conduct a case study in the use of visitor combinators for control-flow analysis and visualization as used in a commercial Cobol program understanding tool.
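
    The combinator idea can be illustrated in a few lines of Python, in the style of the JJTraveler combinators this line of work builds on: each class performs one small visiting step, and compositions such as `TopDown` assemble full traversals. The node type and the `Collect` worker below are illustrative:

```python
class Node:
    def __init__(self, label, *children):
        self.label, self.children = label, list(children)

class Identity:
    """Do nothing; the unit of composition."""
    def visit(self, node): pass

class Sequence:
    """Apply several visitors to the same node, in order."""
    def __init__(self, *visitors): self.visitors = visitors
    def visit(self, node):
        for v in self.visitors:
            v.visit(node)

class All:
    """Apply a visitor to every immediate child."""
    def __init__(self, v): self.v = v
    def visit(self, node):
        for c in node.children:
            self.v.visit(c)

class TopDown:
    """Full pre-order traversal: node first, then recurse into children."""
    def __init__(self, v): self.v = v
    def visit(self, node):
        Sequence(self.v, All(self)).visit(node)

class Collect:
    """A tiny 'worker' visitor: records every label it sees."""
    def __init__(self): self.labels = []
    def visit(self, node): self.labels.append(node.label)

tree = Node("if", Node("cond"), Node("then", Node("call")))
c = Collect()
TopDown(c).visit(tree)
print(c.labels)  # pre-order: ['if', 'cond', 'then', 'call']
```

    Swapping `Sequence(self.v, All(self))` for `Sequence(All(self), self.v)` turns the same worker into a bottom-up traversal, which is the reuse the paper evaluates.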

  • Slicing aspect-oriented software

    Page(s): 251 - 260

    Program slicing has many applications in software engineering activities including program comprehension, debugging, testing, maintenance, and model checking. In this paper, we propose an approach to slicing aspect-oriented software. To solve this problem, we present a dependence-based representation called aspect-oriented system dependence graph (ASDG), which extends previous dependence graphs, to represent aspect-oriented software. The ASDG of an aspect-oriented program consists of three parts: a system dependence graph for non-aspect code, a group of dependence graphs for aspect code, and some additional dependence arcs used to connect the system dependence graph to the dependence graphs for aspect code. After that, we show how to compute a static slice of an aspect-oriented program based on the ASDG.

  • Theory-based analysis of cognitive support in software comprehension tools

    Page(s): 75 - 84

    Past research on software comprehension tools has produced a wealth of lessons in building good tools. However, our explanations of these tools tend to be weakly grounded in existing theories of cognition and human-computer interaction. As a result, the interesting rationales underlying their design are poorly articulated, leaving the lessons primarily implicit. This paper describes a way of using existing program comprehension theories to rationalize tool designs. To illustrate the technique, key design rationales underlying a prominent reverse engineering tool (the Reflexion Model Tool) are reconstructed. The reconstruction shows that theories of cognitive support can be applied to existing cognitive models of software developer behaviour. The method for constructing the rationales is described, and implications are drawn for codifying existing design knowledge, evaluating tools and improving design reasoning.

  • Traceability recovery in RAD software systems

    Page(s): 207 - 216

    Proposes an approach and a process to recover traceability links between source code and free text documents in a software system developed with extensive use of COTS, middleware, and automatically generated code. The approach relies on a process to filter information gathered from low level artifacts. Information filtering was performed according to a taxonomy of factors affecting traceability link recovery methods. Those factors directly stem from software rapid development techniques. The approach was applied to recover traceability links from an industrial software system developed with RAD techniques and tools, and making use of COTS (e.g., database access components), automatically generated code (e.g., via GUI builders and report generators), and middleware (i.e., CORBA). Results are presented, along with lessons learned.

  • Evolving Ispell: a case study of program understanding for reuse

    Page(s): 197 - 206

    Text processing has proven helpful in a number of software engineering tasks. We discuss how a morphological analyser for the Italian language, and its associated linguistic resources, have been developed by reusing and evolving an existing system, Ispell, which is an open-source spell-checker. The need to develop such an analyser derives from the need to improve the traceability link recovery process described by G. Antoniol et al. (2000, 2002). This paper shows how the program understanding exercise was useful to develop a system in a specialized application domain in which we had a very limited background knowledge.

  • An open visualization toolkit for reverse architecting

    Page(s): 3 - 10

    Maintenance and evolution of complex software systems (such as large telecom embedded devices) involve activities such as reverse engineering (RE) and software visualization. Although several RE tools exist, we found their architecture hard to adapt to the domain specific requirements posed by our current practice in Nokia. We present an open architecture which allows easy prototyping of RE data exploration and visualization scenarios for a large range of domain models. We pay special attention to the visual and interactive requirements of the reverse engineering process. The article describes the basic architecture of our toolkit, compares it to existing RE environments and presents several visualizations taken from real cases.

  • Compression techniques to simplify the analysis of large execution traces

    Page(s): 159 - 168

    Dynamic analysis consists of analyzing the behavior of a software system to extract its properties. There have been many studies that use dynamic information to extract high-level views of a software system or simply to help software engineers to perform their daily maintenance activities more effectively. One of the biggest challenges that such tools face is to deal with very large execution traces. By analyzing the execution traces of the software systems we are working on, we noticed that they contain many redundancies that can be removed. This led us to create a comprehension-driven compression framework that compresses the traces to make them more understandable. In this paper, we present and explain its components. The compression framework is reversible; that is, the original trace can be reconstructed from its compressed version. In addition, we conducted an experiment with the execution traces of two software systems to measure the gain attained by such compression.
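
    One source of the redundancy mentioned here, loops emitting the same call over and over, can be caricatured with a reversible run-length step. The paper's framework is considerably richer; this sketch only shows the "compress, then reconstruct exactly" contract that reversibility imposes:

```python
def compress(trace):
    """Collapse contiguous repeats of the same event into (event, count)
    pairs: a lossless, trivially reversible run-length encoding."""
    out = []
    for ev in trace:
        if out and out[-1][0] == ev:
            out[-1] = (ev, out[-1][1] + 1)
        else:
            out.append((ev, 1))
    return out

def decompress(compressed):
    """Exact inverse of compress: expand each (event, count) pair."""
    return [ev for ev, n in compressed for _ in range(n)]

# A loop body calling f three times, then g, then f twice more.
trace = ["main", "loop", "f", "f", "f", "g", "f", "f"]
c = compress(trace)
assert decompress(c) == trace      # the reversibility requirement
print(c)
```

    A comprehension-driven compressor would go further (e.g. collapsing repeated call subtrees rather than single events), but the round-trip property shown here is what the abstract guarantees.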

  • Program comprehension experiences with GXL; comprehension for comprehension

    Page(s): 147 - 156

    Tools are vital to support the various activities that form the many tasks that are part of the program comprehension process. In order for these tools to be used and useful, it is necessary that they support the activities of the user. This support must complement the work methods and activities of the user and not hinder them. Whilst features of good tools have been identified, tool builders do not always adhere to them. It is important to consider whether needs have changed, and if those desirable properties need augmenting or revising. From experience of maintaining and enhancing an existing program comprehension tool for the purposes of participating in a re-engineering activity, many lessons on tool support have been learned. Various program comprehension strategies are introduced in this paper. The use of GXL (Graph eXchange Language) and involvement in the SORTIE project are presented with reference to the tool being adapted and used. Details of the changes made are given to illustrate the support desired. These all feed into the final section of the paper that discusses the sort of support that tools should provide, current tool deficiencies and some of the ways in which these could be addressed.

  • Relocating XML elements from preprocessed to unprocessed code

    Page(s): 229 - 238

    Transformations performed on source code by a preprocessor complicate the accurate reporting of information extracted to support program comprehension. Differences between the file input to the preprocessor and the output seen by parser-based analyzers create a need for techniques to back-locate extracted information. To correctly map analysis results back to the preprocessor input files requires a record of the substitutions performed by the preprocessor. This record takes the form of a list, for each character, of the directives responsible for the character's inclusion in the preprocessor's output. We have developed algorithms to utilize the substitution history for the start and end tags of an XML element to correctly place the element in the unprocessed source. The use of substitution histories ensures that element relocation produces well-formed XML.

  • An integrated approach for studying architectural evolution

    Page(s): 127 - 136

    Studying how a software system has evolved over time is difficult, time consuming, and costly; existing techniques are often limited in their applicability, are hard to extend, and provide little support for coping with architectural change. The paper introduces an approach to studying software evolution that integrates the use of metrics, software visualization, and origin analysis, which is a set of techniques for reasoning about structural and architectural change. Our approach incorporates data from various statistical and metrics tools, and provides a query engine as well as a Web-based visualization and navigation interface. It aims to provide an extensible, integrated environment for aiding software maintainers in understanding the evolution of long-lived systems that have undergone significant architectural change. We use the evolution of GCC as an example to demonstrate the uses of various functionalities of BEAGLE, a prototype implementation of the proposed environment.

  • Aspects of internal program documentation - an elucidative perspective

    Page(s): 43 - 52

    It is difficult and challenging to comprehend the internal aspects of a program. The internal aspects are seen as contrasts to end user aspects and interface aspects. Internal program documentation is relevant for almost any kind of software. The internal program documentation represents the original as well as the accumulated understanding of the program, which is very difficult to extract from the source program and its modifications over time. Elucidative programming is a documentation technique that was originally inspired by literate programming. As an important difference between the two, elucidative programming does not call for any reorganization of the source programs, as required by literate programming tools. Elucidative programming provides for mutual navigation between program source files and sections of the documentation. The navigation takes place in an Internet browser using a two-frame layout. In this paper we investigate the applicability of elucidative programming in a number of areas related to internal program documentation. It is concluded that elucidative programming can solve a number of concrete problems in the areas of program tutorials, frameworks, and program reviews. In addition we see positive impacts of elucidative programming in the area of programming education.

  • The role of concepts in program comprehension

    Page(s): 271 - 278

    The paper presents an overview of the role of concepts in program comprehension. It discusses concept location, in which the implementation of a specific concept is located in the code. This process is very common and precedes a large proportion of code changes. The paper also discusses the process of learning about the domain from the code, which is a prerequisite of code reengineering. The paper notes the similarities and overlaps between program comprehension and human learning.

  • Fused data-centric visualizations for software evolution environments

    Page(s): 187 - 196

    During software evolution, several different facets of the system need to be related to one another at multiple levels of abstraction. Current software evolution tools have limited capabilities for effectively visualizing and evolving multiple system facets in an integrated manner. Many tools provide methods for tracking and relating different levels of abstraction within a single facet. However, it is less well understood how to represent and understand relationships between and among different abstraction hierarchies, i.e. for inter-hierarchy relations. Often, these are represented and explored independently, making them difficult to relate to one another. As a result, engineers are likely to have difficulty understanding how the various facets of a system relate and interact. We describe preliminary results of a collaborative research project between industry and academia to enhance the inter-hierarchy visualization capabilities of an existing software evolution environment called "KLOCwork Suite". Specifically, we describe our efforts to add a "fused" visualization based on storyboard diagrams. This visualization integrates - or "fuses" - facets of architecture, behavior and data. We describe how these diagrams bridge currently isolated visualizations of system information, and argue how they can help drive architecture excavation tasks.

  • Comprehending Web applications by a clustering based approach

    Page(s): 261 - 270

    The number and complexity of Web applications are increasing dramatically to satisfy market needs, and the need of effective approaches for comprehending them is growing accordingly. Recently, reverse engineering methods and tools have been proposed to support the comprehension of a Web application; the information recovered by these tools is usually rendered in graphical representations. However, the graphical representations become progressively less useful with large-scale applications, and do not support adequately the comprehension of the application. To overcome this limitation, we propose an approach based on a clustering method for decomposing a Web application (WA) into groups of functionally related components. The approach is based on the definition of a coupling measure between interconnected components of the WA that takes into account both the typology and topology of the connections. The coupling measure is exploited by a clustering algorithm that produces a hierarchy of clusterings. This hierarchy allows a structured approach for comprehension of the Web application to be carried out. The approach has been applied to medium-sized Web applications and produced interesting and encouraging results.
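
    The clustering step can be sketched as plain agglomerative clustering over a coupling matrix: repeatedly merge the most strongly coupled pair of clusters and record each merge, which yields the hierarchy. The components, weights, and merge policy below are illustrative, not the paper's actual measure:

```python
def agglomerate(components, coupling):
    """Greedy agglomerative clustering. `coupling` maps
    frozenset({a, b}) -> weight for connected component pairs.
    Returns (remaining clusters, merge history)."""
    clusters = [frozenset([c]) for c in components]
    hierarchy = []
    def weight(x, y):
        # Total coupling between two clusters: sum over cross pairs.
        return sum(coupling.get(frozenset({a, b}), 0) for a in x for b in y)
    while len(clusters) > 1:
        pairs = [(weight(x, y), x, y)
                 for i, x in enumerate(clusters) for y in clusters[i + 1:]]
        w, x, y = max(pairs, key=lambda t: t[0])
        if w == 0:
            break                # remaining clusters are unconnected
        clusters = [c for c in clusters if c not in (x, y)] + [x | y]
        hierarchy.append((sorted(x), sorted(y), w))
    return clusters, hierarchy

# Hypothetical WA components; weights would mix connection type/topology.
coupling = {frozenset({"login", "auth"}): 5,
            frozenset({"cart", "checkout"}): 4,
            frozenset({"auth", "checkout"}): 1}
clusters, hist = agglomerate(["login", "auth", "cart", "checkout"], coupling)
print(hist)
```

    Reading the merge history bottom-up gives exactly the hierarchy of clusterings the abstract describes: tight pairs first, looser aggregates later.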

  • Source code files as structured documents

    Page(s): 289 - 292

    A means to add explicit structure to program source code is presented. XML is used to augment source code with syntactic information from the parse tree. More importantly, comments and formatting are preserved and identified for future use by development environments and program comprehension tools. The focus is to construct a document representation in XML instead of a more traditional data representation of the source code. This type of representation supports a programmer centric view of the source rather than a compiler centric view. We relate our representation to other research on XML representations of parse trees and program code. The highlights of the representation are presented and the use of queries and transformations is discussed.
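
    The "document, not data" idea hinges on keeping every character of the source, comments and formatting included, inside the markup, so stripping the tags regenerates the file exactly. A minimal sketch, using illustrative tag names rather than any real schema:

```python
import re
import xml.etree.ElementTree as ET

def to_document(line):
    """Wrap a simple declaration in srcML-like markup. Whitespace and the
    trailing comment are stored in element tails, so no source text is
    lost (tag names here are invented for illustration)."""
    m = re.match(r"(\w+)(\s+)(\w+)(\s*=\s*)([^;]+)(;)(.*)", line)
    type_, ws, name, eq, init, rest_sep, rest = m.groups()
    decl = ET.Element("decl")
    for tag, text, tail in (("type", type_, ws),
                            ("name", name, eq),
                            ("init", init, rest_sep + rest)):
        child = ET.SubElement(decl, tag)
        child.text, child.tail = text, tail
    return decl

def flatten(e):
    """Strip the markup: concatenating texts and tails yields the source."""
    return (e.text or "") + "".join(flatten(c) + (c.tail or "") for c in e)

src = "int count = 0;  // loop counter"
elem = to_document(src)
print(ET.tostring(elem, encoding="unicode"))
```

    The round trip (`flatten(elem) == src`) is what distinguishes a document representation from a compiler-centric parse tree, which normally discards comments and layout.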

  • On using a benchmark to evaluate C++ extractors

    Page(s): 114 - 123

    In this paper, we take the concept of benchmarking, as used extensively in computing, and apply it to the evaluation of C++ fact extractors. We demonstrate the efficacy of this approach by developing a prototype benchmark, CppETS 1.0 (C++ Extractor Test Suite, pronounced 'see-pets') and collecting feedback in a workshop setting. The CppETS benchmark characterises C++ extractors along two dimensions: accuracy and robustness. It consists of a series of test buckets that contain small C++ programs and related questions that pose different challenges to the extractors. As with other research areas, benchmarks are best developed through technical work and consultation with a community, so we invited researchers to apply CppETS to their extractors and report on their results in a workshop. Four teams participated in this effort, evaluating the four extractors Ccia, cppx, the Rigi C++ parser and TkSee/SN. They found that CppETS gave results that were consistent with their experience with these tools and therefore had good external validity. Workshop participants agreed that CppETS was an important contribution to fact extractor development and testing. Further efforts to make CppETS a widely-accepted benchmark will involve technical improvements and collaboration with the broader community.

  • Evaluating using animation to improve understanding of sequence diagrams

    Page(s): 107 - 113

    This paper describes an experiment that assesses the benefit of using animation to improve the comprehensibility of UML sequence diagrams. The paper hypothesizes that through animation the control flow of a sequence diagram will become more evident. The development of a system that seeks to enable stakeholders to better interpret UML modeling behaviour is described. This system aims to provide dynamic visualization through the use of animation techniques. A study to evaluate the extent to which the animation of a diagram can aid its interpretation is then described. The results of the study show that the animation system did improve the comprehensibility of sequence diagram control flow compared with a traditional static representation. Finally, this paper discusses the reasons for these results.
