
Proceedings of the International Conference on Software Maintenance (ICSM 2003)

Date: 22-26 Sept. 2003


Displaying Results 1 - 25 of 61
  • A framework for understanding conceptual changes in evolving source code

    Publication Year: 2003 , Page(s): 431 - 439
    Cited by:  Papers (3)

    As systems evolve, they become harder to understand because the implementation of concepts (e.g. business rules) becomes less coherent. To preserve source code comprehensibility, we need to be able to predict how this property will change. This would allow the construction of a tool to suggest what information should be added or clarified (e.g. in comments) to maintain the code's comprehensibility. We propose a framework to characterize types of concept change during evolution. It is derived from an empirical investigation of concept changes in evolving commercial COBOL II files. The framework describes transformations in the geometry and interpretation of regions of source code. We conclude by relating our observations to the types of maintenance performed and suggest how this work could be developed to provide methods for preserving code quality based on comprehensibility.

  • Towards evergreen architectures: on the usage of scenarios in system architecting

    Publication Year: 2003 , Page(s): 298 - 303
    Cited by:  Papers (3)

    The system architecture of successful systems typically enjoys a long life (ten years or more). From an architecting point of view, there are two major challenges. The first challenge is to establish the "right" initial architecture that satisfies the strategic intentions, the needs and the requirements of its stakeholders and intended users. The second challenge is to nurture the architecture and keep it up to date (fit and slim) with respect to the changing requirements and technologies that will occur during its lifetime. Keeping the architecture "evergreen" (i.e. continuously satisfying its stakeholders' needs and requirements) is a very challenging architecting task. The second problem is addressed in a separate paper (America et al., 2003), where we describe a method to assess and improve the ease with which a system architecture, once established, can accommodate new requirements. The first problem, how to establish an initial architecture that satisfies the strategic plans and intentions of its stakeholders, is addressed in this presentation.

  • A heuristic approach to solving the software clustering problem

    Publication Year: 2003 , Page(s): 285 - 288
    Cited by:  Papers (7)

    This paper provides an overview of the author's Ph.D. thesis (2002). The primary contribution of this research involved developing techniques to extract architectural information about a system directly from its source code. To accomplish this objective a series of software clustering algorithms were developed. These algorithms use metaheuristic search techniques to partition a directed graph generated from the entities and relations in the source code into subsystems. Determining the optimal solution to this problem was shown to be NP-hard, thus significant emphasis was placed on quickly finding solutions that were regarded as "good enough". Several evaluation techniques were developed to gauge solution quality, and all of the software clustering tools created to support this work were made available for download over the Internet.
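    The core idea, metaheuristic search over graph partitions, can be illustrated with a toy hill climber. The quality function below is a simplified stand-in for the MQ measures used in software clustering research, and the six-module dependency graph is invented:

    ```python
    import random

    def quality(partition, edges):
        """MQ-style score: for each cluster, intra-edges / (intra + cut edges),
        summed over clusters. Rewards cohesive, loosely coupled clusters."""
        score = 0.0
        for c in set(partition.values()):
            intra = sum(1 for a, b in edges if partition[a] == c and partition[b] == c)
            cut = sum(1 for a, b in edges if (partition[a] == c) != (partition[b] == c))
            if intra + cut:
                score += intra / (intra + cut)
        return score

    def hill_climb(nodes, edges, k=2, iters=2000, seed=1):
        """Randomized hill climbing over partitions: move one node at a time,
        keeping any move that does not lower the score ("good enough", fast)."""
        rng = random.Random(seed)
        partition = {n: rng.randrange(k) for n in nodes}
        best = quality(partition, edges)
        for _ in range(iters):
            n = rng.choice(nodes)
            old = partition[n]
            partition[n] = rng.randrange(k)
            s = quality(partition, edges)
            if s >= best:
                best = s
            else:
                partition[n] = old  # revert a worsening move
        return partition, best

    # Toy module-dependency graph with two natural subsystems: {a,b,c} and {d,e,f}.
    nodes = list("abcdef")
    edges = [("a", "b"), ("b", "c"), ("a", "c"),
             ("d", "e"), ("e", "f"), ("d", "f"), ("c", "d")]
    part, score = hill_climb(nodes, edges)
    ```

    Real clustering tools use more elaborate objective functions and search strategies (e.g. genetic algorithms, simulated annealing), but the structure, a scored partition improved by local moves, is the same.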

  • Continual resource estimation for evolving software

    Publication Year: 2003 , Page(s): 289 - 292
    Cited by:  Papers (2)

    A resource estimation approach, specifically oriented towards long-lived software being actively evolved, is proposed and investigated. The approach seeks to be coherent with empirically grounded knowledge including the observation that the software progresses through a series of stages during its lifetime and that a distinctive model characterizes the economics of each individual stage. Instead of a single estimation model for the whole lifetime of the software, the approach calibrates models to each stage. Its feasibility was assessed in case studies using industrial data, yielding accuracy (in magnitude of relative error - MRE) between 20 and 33 percent.
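    The quoted MRE accuracy measure is straightforward to compute; the effort figures below are hypothetical, not the study's data:

    ```python
    def mre(actual, predicted):
        """Magnitude of relative error: |actual - predicted| / actual."""
        return abs(actual - predicted) / actual

    def mmre(pairs):
        """Mean MRE over (actual, predicted) pairs - a standard summary of
        estimation accuracy in the effort-estimation literature."""
        return sum(mre(a, p) for a, p in pairs) / len(pairs)

    # Hypothetical per-release effort figures (actual, predicted), person-months.
    observations = [(100, 120), (80, 64), (50, 55)]
    print(round(mmre(observations), 3))  # MREs 0.2, 0.2, 0.1 -> mean 0.167
    ```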

  • Application of neural networks for software quality prediction using object-oriented metrics

    Publication Year: 2003 , Page(s): 116 - 125
    Cited by:  Papers (7)

    The paper presents the application of neural networks in software quality estimation using object-oriented metrics. Quality estimation includes estimating the reliability as well as the maintainability of software. Reliability is typically measured as the number of defects. Maintenance effort can be measured as the number of lines changed per class. In this paper, two kinds of investigation are performed: predicting the number of defects in a class, and predicting the number of lines changed per class. Two neural network models are used: a Ward neural network and a General Regression Neural Network (GRNN). Object-oriented design metrics concerning inheritance-related measures, complexity measures, cohesion measures, coupling measures and memory allocation measures are used as the independent variables. The GRNN model is found to predict more accurately than the Ward model.
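    A GRNN is, at its core, a Gaussian-kernel weighted average of training targets (Nadaraya-Watson regression). The sketch below illustrates that idea with invented metric values; it is not the paper's data set or network configuration:

    ```python
    import math

    def grnn_predict(train_x, train_y, x, sigma=1.0):
        """GRNN in its simplest form: weight each training target by a
        Gaussian kernel on the distance to the query point, then average."""
        weights = [
            math.exp(-sum((a - b) ** 2 for a, b in zip(xi, x)) / (2 * sigma ** 2))
            for xi in train_x
        ]
        return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

    # Hypothetical (coupling, complexity) metrics -> defect count per class.
    X = [(1.0, 2.0), (3.0, 8.0), (2.0, 4.0)]
    y = [0, 7, 2]
    pred = grnn_predict(X, y, (1.2, 2.5))  # close to the low-defect classes
    ```

    The smoothing parameter `sigma` plays the same role as the GRNN spread factor: small values make predictions follow the nearest training point, large values average broadly.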

  • Difference tools for analysis and design documents

    Publication Year: 2003 , Page(s): 13 - 22
    Cited by:  Papers (6)  |  Patents (4)

    This paper presents a concept and tools for the detection and visualization of differences between versions of graphical software documents such as ER, class or object diagrams, state charts, etc. We first analyze the problems which occur when comparing graphical documents and displaying their similarities and differences. Our basic approach is to use a unified document which contains the common and specific parts of both base documents, with the specific parts being highlighted. The central problem is how to reduce the number of highlighted elements and give the developer a certain amount of control over which changes are selectively highlighted. With regard to tool construction, we assume that software documents are modeled in a fine-grained way, that they are stored as syntax trees in XML (eXtensible Markup Language) files or a repository system, and that a version management system is used. By using the features of the data model and the version model we are able to detect and visualize differences between diagram versions, including structural changes (e.g. shifting of a method from one class to another). We further exploit information about the version history delivered by the underlying version management system by highlighting only differences based on structural or logical changes.
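    The unified-document idea can be sketched as a three-way split of diagram elements into common and version-specific parts; the class-diagram elements below are hypothetical:

    ```python
    def unify(version_a, version_b):
        """Build a 'unified document': elements common to both versions plus
        the version-specific elements, which a tool would highlight."""
        a, b = set(version_a), set(version_b)
        return {
            "common": sorted(a & b),
            "only_a": sorted(a - b),  # would be highlighted as removed
            "only_b": sorted(b - a),  # would be highlighted as added
        }

    v1 = {"Order.total()", "Order.items", "Customer.name"}
    v2 = {"Order.total()", "Order.items", "Customer.name", "Customer.email"}
    print(unify(v1, v2)["only_b"])  # ['Customer.email']
    ```

    The paper's real difficulty lies beyond this set arithmetic: matching elements that moved or were renamed, and filtering the highlights down to structurally meaningful changes.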

  • Reverse engineering of the interaction diagrams from C++ code

    Publication Year: 2003 , Page(s): 159 - 168
    Cited by:  Papers (17)  |  Patents (5)

    In object oriented programming, the functionalities of a system result from the interactions (message exchanges) among the objects allocated by the system. While designing object interactions is far more complex than designing the object structure in forward engineering, the problem of understanding object interactions during code evolution is even harder, because the related information is spread across the code. In this paper, a technique for the automatic extraction of UML interaction diagrams from C++ code is proposed. The algorithm is based on a static, conservative flow analysis that approximates the behavior of the system in any execution and for any possible input. Applicability of the approach to large software is achieved by means of two mechanisms: partial analysis and focusing. Usage of our method on a real world, large C++ system confirmed its viability.

  • A large-scale empirical study of forward and backward static slice size and context sensitivity

    Publication Year: 2003 , Page(s): 44 - 53
    Cited by:  Papers (21)

    A large-scale study of 43 C programs totaling just over 1 million lines of code is presented. The study includes the forward and backward static slice on every executable statement. In total 2353598 slices were constructed, with the average slice size just under 30% of the original program. The results also show that ignoring calling-context led to a 50% increase in average slice size and, in contrast to previous results, a 66-77% increase in computation time (due to the increased size). Though not the principal focus of the study, the results also show an average pace for the slicing engine, on a standard PC, of 3 million lines of code per second, thereby providing additional evidence for static slicing's practicability.

  • Improving software maintenance by using agent-based remote maintenance shell

    Publication Year: 2003 , Page(s): 440 - 449
    Cited by:  Papers (2)  |  Patents (2)

    The paper deals with a method developed for software maintenance called remote maintenance shell. It allows software installation, modification and verification on the remote target system without suspending its regular operation. The method is based on remote operations performed by mobile agents. The role of remote maintenance shell in software maintenance is elaborated, as well as its architecture. A case study on version replacement of an object-oriented application is included.

  • The case for maintaining assurance cases

    Publication Year: 2003

    When we build and maintain safety-, mission-, or security-critical systems, we are usually constrained by regulations and acquisition guidelines that require us to provide a documented body of evidence that the system satisfies specified critical properties. In other words, we must construct an "assurance case" to convince the purchaser or user of the system's suitability or quality. However, in building such high-quality software and balancing many objectives, it has become painfully clear that the resulting software is brittle: small changes in the software itself, in the hardware and software environment, or in its operational use can have unexpected and significant (unwanted) effects. Unfortunately, assurance cases for software are often even more brittle than the software itself. This presentation will address the challenges we confront in preserving the quality of the assurance cases as we maintain the quality of the associated software. It is critical that we make progress in addressing these challenges as software continues to become a fundamental enabling technology for 21st-century society.

  • A case study in optimization

    Publication Year: 2003 , Page(s): 214 - 223
    Cited by:  Patents (2)

    This paper describes a case study in which the software architecture of a business application was modified to improve runtime performance. Such modifications should be performed whenever application users encounter known areas of sluggish response, long periods of maintenance, or a change in processing volume requirements. For this particular study, a framework for source code instrumentation was designed to provide convenience, data granularity, and improved control for profiling of elapsed time, operating system events, and CPU counters. The study confirms that proper selection of algorithms and data structures is essential for peak performance. Furthermore, known optimization methods, when summarized, can be used as a roadmap for tuning once hotspots are identified. Upon completion, this optimization project resulted in a speed-up factor of 18 for a typical data set.
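    An elapsed-time instrumentation layer can be sketched as a wrapper that logs per-call timings; this toy decorator only hints at the paper's framework, which also covers operating system events and CPU counters:

    ```python
    import functools
    import time

    CALL_LOG = []  # (function name, elapsed seconds), one entry per call

    def profiled(fn):
        """Minimal source-code instrumentation: record wall-clock time per
        call so that hotspots can be identified before tuning."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                CALL_LOG.append((fn.__name__, time.perf_counter() - start))
        return wrapper

    @profiled
    def hotspot(n):
        # Stand-in for a routine suspected of sluggish response.
        return sum(i * i for i in range(n))

    hotspot(10_000)
    ```

    Aggregating `CALL_LOG` by function name yields the tuning roadmap the paper describes: optimize only where the measurements say the time goes.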

  • Populating a Release History Database from version control and bug tracking systems

    Publication Year: 2003 , Page(s): 23 - 32
    Cited by:  Papers (86)  |  Patents (1)

    Version control and bug tracking systems contain large amounts of historical information that can give deep insight into the evolution of a software project. Unfortunately, these systems provide only insufficient support for a detailed analysis of software evolution aspects. We address this problem and introduce an approach for populating a release history database that combines version data with bug tracking data and adds missing data not covered by version control systems such as merge points. Then simple queries can be applied to the structured data to obtain meaningful views showing the evolution of a software project. Such views enable more accurate reasoning about evolutionary aspects and facilitate the anticipation of software evolution. We demonstrate our approach on the large open source project Mozilla that offers great opportunities to compare results and validate our approach.
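    One common way to combine version data with bug tracking data is to scan commit messages for bug-tracker IDs. The sketch below illustrates that heuristic with an invented log; the paper's actual linking procedure may differ:

    ```python
    import re

    def link_bugs(commits):
        """Recover commit -> bug links by scanning log messages for
        bug-tracker IDs such as 'bug 3301' or 'fix #3127'."""
        bug_ref = re.compile(r"\b(?:bug|fix(?:es)?)\s*#?(\d+)", re.IGNORECASE)
        links = {}
        for rev, message in commits:
            ids = [int(m) for m in bug_ref.findall(message)]
            if ids:
                links[rev] = ids
        return links

    # Hypothetical CVS-style log entries.
    log = [
        ("1.42", "fix #3127: null deref in layout engine"),
        ("1.43", "whitespace cleanup"),
        ("1.44", "Bug 3301 - crash on empty bookmark list"),
    ]
    print(link_bugs(log))  # {'1.42': [3127], '1.44': [3301]}
    ```

    Once such links are stored alongside version and merge data, the "simple queries" the abstract mentions (e.g. most bug-prone files per release) become straightforward joins.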

  • Context-driven testing of object-oriented systems

    Publication Year: 2003 , Page(s): 281 - 284

    Many different testing techniques have been proposed by researchers, but essentially only two main testing philosophies exist, black box and white box. There exist a number of different testing methods for structural testing of procedural languages. However, the features of object-oriented languages are not addressed by such techniques. The article explores a new structural testing technique for object-oriented systems by developing a testing methodology based on object manipulations and driven by the context of the program under test.

  • Using intentional source-code views to aid software maintenance

    Publication Year: 2003 , Page(s): 169 - 178
    Cited by:  Papers (2)

    The conceptual structure of existing software systems is often implicit or non-existing in the source code. We propose the lightweight abstraction of intentional source-code views as a means of making these conceptual structures more explicit. Based on the experience gained with two case studies, we illustrate how intentional source-code views can simplify and improve software understanding, maintenance and evolution in various ways. We present the results as a catalog of usage scenarios in a pattern-like format.

  • Making maximum use of legacy code: Transavia Internet booking engine

    Publication Year: 2003

    Summary form only given. Transaction processing in the airline industry has a rich history. The IBM transaction processing facility (TPF) platform played, and still plays, a major role in the airline industry. We introduce a wrapper on the TPF platform itself to access the legacy applications. A very pragmatic and successful approach that resulted in a very thin Web server made it possible to offer real-time e-commerce functionality at a very low price per booking.

  • Measuring software sustainability

    Publication Year: 2003 , Page(s): 450 - 459
    Cited by:  Papers (4)

    Planning and management of software sustainment is impaired by a lack of consistently applied, practical measures. Without these measures, it is impossible to determine the effect of efforts to improve sustainment practices. In this paper we provide a context for evaluating sustainability and discuss a set of measures developed at the Software Engineering Institute at Carnegie Mellon University.

  • Slicing of state-based models

    Publication Year: 2003 , Page(s): 34 - 43
    Cited by:  Papers (14)

    System modeling is a widely used technique to model state-based systems. Several state-based languages are used to model such systems, e.g., EFSM (extended finite state machine), SDL (Specification and Description Language) and state charts. Although state-based modeling is very useful, system models are frequently large and complex and are hard to understand and modify. Slicing is a well-known reduction technique. Most of the research on slicing is code-based. There has been limited research on specification-based slicing and model-based slicing. In this paper, we present an approach to slicing state-based models, in particular EFSM models. Our approach automatically identifies the parts of the model that affect an element of interest using EFSM dependence analysis. Slice reduction techniques are then used to reduce the size of the EFSM slice. Our experience with the presented slicing approach showed that significant reduction of state-based models could be achieved.
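    At its core, slicing by dependence analysis reduces to reachability over a dependence graph. The sketch below shows a backward slice over hypothetical EFSM transition dependences; the paper's dependence analysis and slice reduction techniques are considerably richer:

    ```python
    def backward_slice(dependences, criterion):
        """Backward slice: the criterion plus everything it (transitively)
        depends on, via data or control dependence edges."""
        # `dependences` maps each element to the set of elements it depends on.
        result, stack = set(), [criterion]
        while stack:
            n = stack.pop()
            if n in result:
                continue
            result.add(n)
            stack.extend(dependences.get(n, ()))
        return result

    # Toy EFSM: transition t3's guard uses a variable set by t1; t2 is unrelated.
    deps = {"t3": {"t1"}, "t1": set(), "t2": set()}
    print(sorted(backward_slice(deps, "t3")))  # ['t1', 't3']
    ```

    Everything outside the returned set (here, `t2`) can be dropped from the model without affecting the element of interest, which is where the size reduction comes from.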

  • Infrastructures of virtual IT enterprises

    Publication Year: 2003 , Page(s): 199 - 208
    Cited by:  Papers (2)

    Service quality has become a critical survivability factor. The value of IT-business does not only lie in the products but also in the needs it serves. More and more customers require the IT companies with which they do business to continuously improve the speed and quality of their service. To provide seamless high quality service, the collaborating IT-companies/departments must organize themselves in a way so that they can act as one virtual enterprise providing a single point of contact. In this paper, we study how thirty eight companies belonging to thirty seven independent virtual enterprises have organized themselves in order to provide optimal maintenance service to their customers. Our goal is to provide a basis for future support process models and for future business models. Our results show strongly diversified infrastructures of confluent service organizations. These infrastructures were matched against CM3: Roadmap: Organizational Perspective.

  • Software systems integration and architectural analysis - a case study

    Publication Year: 2003 , Page(s): 338 - 347
    Cited by:  Papers (5)

    Software systems no longer evolve as separate entities but are also integrated with each other. The purpose of integrating software systems can be to increase user-value or to decrease maintenance costs. Different approaches, one of which is software architectural analysis, can be used in the process of integration planning and design. This paper presents a case study in which three software systems were to be integrated. We show how architectural reasoning was used to design and compare integration alternatives. In particular, four different levels of the integration were discussed (interoperation, a so-called enterprise application integration, an integration based on a common data model, and a full integration). We also show how cost, time to delivery and maintainability of the integrated solution were estimated. On the basis of the case study, we analyze the advantages and limits of the architectural approach as such and conclude by outlining directions for future research: how to incorporate analysis of cost, time to delivery and risk in architectural analysis, and how to make architectural analysis more suitable for comparing many aspects of many alternatives during development. Finally we outline the limitations of architectural analysis.

  • Deriving tolerant grammars from a base-line grammar

    Publication Year: 2003 , Page(s): 179 - 188
    Cited by:  Papers (2)

    A grammar-based approach to tool development in reengineering and reverse engineering promises precise structure awareness, but it is problematic in two respects. Firstly, it is a considerable up-front investment to obtain a grammar for a relevant language or cocktail of languages. Existing work on grammar recovery addresses this concern to some extent. Secondly, it is often not feasible to insist on a precise grammar, e.g., when different dialects need to be covered. This calls for tolerant grammars. In this paper, we provide a well-engineered approach to the derivation of tolerant grammars, which is based on previous work on error recovery, fuzzy parsing, and island grammars. The technology of this paper has been used in a complex Cobol restructuring project on several millions of lines of code in different Cobol dialects. Our approach is founded on an approximation relation between a tolerant grammar and a base-line grammar which serves as a point of reference. Thereby, we avoid false positives and false negatives when parsing constructs of interest in a tolerant mode. Our approach accomplishes the effective derivation of a tolerant grammar from the syntactical structure that is relevant for a certain re- or reverse engineering tool. To this end, the productions for the constructs of interest are reused from the base-line grammar together with further productions that are needed for completion.
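    The island-grammar idea behind tolerant parsing can be hinted at with a minimal island parser that recognizes only constructs of interest and treats everything else as uninterpreted "water". The paragraph pattern and Cobol fragment below are illustrative only, not the paper's grammar machinery:

    ```python
    import re

    def island_parse(source, island_re):
        """Island parsing sketch: keep lines matching the construct of
        interest; skip all other content (dialect-specific or otherwise)."""
        islands = []
        for lineno, line in enumerate(source.splitlines(), 1):
            m = island_re.match(line)
            if m:
                islands.append((lineno, m.group(1)))
        return islands

    # Hypothetical fragment mixing dialect-specific noise with Cobol paragraphs.
    cobol = """\
    MAIN-LOGIC.
        PERFORM INIT-FILES
        EXEC NONSTANDARD-DIALECT-STUFF END-EXEC.
    INIT-FILES.
        OPEN INPUT MASTER-FILE.
    """
    paragraph = re.compile(r"^([A-Z][A-Z0-9-]*)\.\s*$")
    print(island_parse(cobol, paragraph))
    ```

    The dialect-specific `EXEC ... END-EXEC` line never breaks the parse; it is simply water. The paper's contribution is deriving such tolerant behavior systematically from a base-line grammar rather than from ad hoc patterns like this one.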

  • QuaTrace: a tool environment for (semi-) automatic impact analysis based on traces

    Publication Year: 2003 , Page(s): 246 - 255
    Cited by:  Papers (16)

    Cost estimation of changes to software systems is often inaccurate and implementation of changes is time consuming, cost intensive, and error prone. One reason for these problems is that relationships between documentation entities (e.g., between different requirements) are not documented at all or only incompletely. In this paper, we describe a constructive approach to support later changes to software systems. Our approach consists of a traceability technique and a supporting tool environment. The tracing approach describes which traces should be established in which way. The proposed tool environment supports the application of the guidelines in a concrete development context. The tool environment integrates two existing tools: a requirements management tool (i.e., RequisitePro) and a CASE tool (i.e., Rhapsody). Our approach allows traces to be established, analyzed, and maintained effectively and efficiently.

  • Re-using software architecture in legacy transformation projects

    Publication Year: 2003

    Summary form only given, as follows. Software engineers sometimes have to take part in legacy transformation projects, which are characterized by the complete absence of automated migration tools. In such cases, specialists usually aim at reproducing the original system using the new technologies, without adding any new features. It is common knowledge that it makes sense to keep the functionality as close to the original as possible, because in this case one could use the legacy system as an executable set of requirements. We argue that another, less obvious advantage of "replicating" the old system is the re-use of the architectural decisions that are built into the original legacy system and usually represent an invaluable treasure, because they reflect an implemented understanding of the application domain.

  • On modeling software architecture recovery as graph matching

    Publication Year: 2003 , Page(s): 224 - 234
    Cited by:  Papers (6)

    This paper presents a graph matching model for the software architecture recovery problem. Because of their expressiveness, graphs have been widely used for representing both the software system and its high-level view, known as the conceptual architecture. Modeling the recovery process as graph matching is an attempt to identify a sub-optimal transformation from a pattern graph, representing the high-level view of the system, onto a subgraph of the software system graph. A successful match yields a restructured system that conforms to the given pattern graph. A failed match indicates the points where the system violates specific constraints. The pattern graph generation and the incrementality of the recovery process are the important issues to be addressed. The approach is evaluated through case studies using a prototype toolkit that implements the proposed interactive recovery environment.
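    Matching a pattern graph onto a subgraph of the system graph can be sketched as a search for an edge-preserving injective mapping. The brute-force version below ignores the paper's concerns with sub-optimal transformations and incrementality, and the pattern and module names are invented:

    ```python
    from itertools import permutations

    def match_pattern(pattern_edges, system_edges, pattern_nodes, system_nodes):
        """Brute-force subgraph matching: find an injective mapping of
        pattern nodes onto system nodes that preserves every pattern edge.
        Returns the mapping, or None when the system violates the pattern."""
        sys_edges = set(system_edges)
        for candidate in permutations(system_nodes, len(pattern_nodes)):
            mapping = dict(zip(pattern_nodes, candidate))
            if all((mapping[a], mapping[b]) in sys_edges for a, b in pattern_edges):
                return mapping
        return None

    # Conceptual architecture: UI -> Logic -> Data; system modules m1..m4.
    pattern = [("UI", "Logic"), ("Logic", "Data")]
    system = [("m1", "m2"), ("m2", "m3"), ("m3", "m4"), ("m1", "m4")]
    m = match_pattern(pattern, system, ["UI", "Logic", "Data"], ["m1", "m2", "m3", "m4"])
    ```

    Exhaustive search is exponential in the pattern size, which is why practical recovery tools settle for approximate, incrementally refined matches rather than this kind of enumeration.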

  • An industrialized restructuring service for legacy systems

    Publication Year: 2003

    Summary form only given. The article presents RainCodeOnline (http://www.raincode.com/online), a Belgium-based initiative to provide COBOL restructuring services through the Web. The article details a number of issues: the underlying technology; the process; the pros and cons of automated restructuring (dialects, process, perception, pricing, security, cost, performance, quality, etc.); and the perception of quality and how it can be further improved. A number of results are presented as well.

  • Managing a multi-billion dollar IT budget

    Publication Year: 2003

    Summary form only given. The ING Group is a global financial services institution offering banking, insurance and asset management to 60 million private, corporate and institutional clients world-wide. ING has a market capitalization of 23 billion euros and total assets of over 700 billion euros. Similar to other global financial institutions, ING depends on information technology for delivering its services. Obviously, allocating an IT budget of 2.6 billion euros (2003) requires sound IT governance. The corporate governance principles of ING assure value creation and protection of stakeholder interests through managing business opportunities and risks. Similarly, IT governance assures the delivery of the expected benefits of IT to help enhance the long term sustainable success of the company. ING's global IT governance structure meshes with the overall corporate governance structure. This helps to align IT strategy with the business goal. The IT Roadmap, as part of the annual planning exercise, is an important instrument to ensure that IT matches the needs of the business. The IT Roadmap outlines the current high level IT issues and the current priorities for IT investments. ING considers IT to be an investment center that drives value creation rather than a typical cost center with a narrow focus on budget controls. The IT and shareholder return project undertaken by IBM in 2001 and taken further by ING and IBM jointly in 2002 sheds light on how IT helps to increase shareholder value. One of the key findings of the project is that the best-performing insurers better control their IT maintenance costs and hence create more room for new software development and enhancement. This conclusion matches the belief that striving for better and complete functionality has a favorable effect on software maintenance cost.
