Proceedings of the 25th International Conference on Software Engineering (ICSE 2003)

Date: 3-10 May 2003

Displaying Results 1 - 25 of 126
  • Toward an understanding of the motivation of open source software developers

    Page(s): 419 - 429

    An Open Source Software (OSS) project is unlikely to be successful unless there is an accompanying community that provides the platform for developers and users to collaborate. Members of such communities are volunteers whose motivation to participate and contribute is of essential importance to the success of OSS projects. In this paper, we aim to create an understanding of what motivates people to participate in OSS communities. We theorize that learning is one of the motivational forces. Our theory is grounded in the learning theory of Legitimate Peripheral Participation, and is supported by analyzing the social structure of OSS communities and the co-evolution between OSS systems and communities. We also discuss practical implications of our theory for creating and maintaining sustainable OSS communities as well as for software engineering research and education.

  • Fault-tolerance in a distributed management system: a case study

    Page(s): 478 - 483

    Our case study provides the most important conceptual lessons learned from the implementation of a Distributed Telecommunication Management System (DTMS), which controls a networked voice communication system. Major requirements for the DTMS are fault-tolerance against site or network failures, transactional safety, and reliable persistence. In order to provide distribution and persistence both transparently and in a fault-tolerant manner, we introduce a two-layer architecture facilitating an asynchronous replication algorithm. Among the lessons learned are: component-based software engineering poses a significant initial overhead but is worth it in the long term; a fault-tolerant naming service is a key requirement for fail-safe distribution; the reasonable granularity for persistence and concurrency control is one whole object; asynchronous replication on the database layer is superior to synchronous replication on the instance level in terms of robustness and consistency; semi-structured persistence with XML has drawbacks regarding consistency, performance, and convenience; in contrast to an arbitrarily meshed object model, an accentuated hierarchical structure is more robust and feasible; a query engine has to provide a means for navigation through the object model; finally, the propagation of deletion operations becomes more complex in an object-oriented model. By incorporating these lessons learned, we are well underway to providing a highly available, distributed platform for persistent object systems.

  • A compositional formalization of connector wrappers

    Page(s): 374 - 384

    Increasingly, systems are composed of parts: software components, and the interaction mechanisms (connectors) that enable them to communicate. When assembling systems from independently developed and potentially mismatched parts, wrappers may be used to overcome mismatch as well as to remedy extra-functional deficiencies. Unfortunately, the current practice of wrapper creation and use is ad hoc, resulting in artifacts that are often hard to reuse or compose, and whose impact is difficult to analyze. What is needed is a more principled basis for creating, understanding, and applying wrappers. Focusing on the class of connector wrappers (wrappers that address issues related to communication and compatibility), we present a means of characterizing connector wrappers as protocol transformations, modularizing them, and reasoning about their properties. Examples are drawn from commonly practiced dependability-enhancing techniques.

  • Tools for understanding the behavior of telecommunication systems

    Page(s): 430 - 441

    Many methods and tools for the reengineering of software systems have been developed so far. However, the domain-specific requirements of telecommunication systems have not been addressed sufficiently. These systems are designed in a process- rather than in a data-centered way. Furthermore, analyzing and visualizing dynamic behavior is a key to system understanding. In this paper, we report on tools for the reengineering of telecommunication systems which we have developed in close cooperation with an industrial partner. These tools are based on a variety of techniques for understanding behavior, such as visualization of link chains, recovery of state diagrams from the source code, and visualization of traces by different kinds of diagrams. Tool support has been developed step by step in response to the requirements and questions stated by telecommunication experts at Ericsson Eurolab Germany.

  • Hipikat: recommending pertinent software development artifacts

    Page(s): 408 - 418

    A newcomer to a software project must typically come up-to-speed on a large, varied amount of information about the project before becoming productive. Assimilating this information in the open-source context is difficult because a newcomer cannot rely on the mentoring approach that is commonly used in traditional software developments. To help a newcomer to an open-source project become productive faster, we propose Hipikat, a tool that forms an implicit group memory from the information stored in a project's archives, and that recommends artifacts from the archives that are relevant to a task that a newcomer is trying to perform. To investigate this approach, we have instantiated the Hipikat tool for the Eclipse open-source project. In this paper we describe the Hipikat tool, we report on a qualitative study conducted with a Hipikat mock-up on a medium-sized in-house project, and we report on a case study in which Hipikat recommendations were evaluated for a task on Eclipse.

  • Palantir: raising awareness among configuration management workspaces

    Page(s): 444 - 454

    Current configuration management systems promote workspaces that isolate developers from each other. This isolation is both good and bad. It is good, because developers make their changes without any interference from changes made concurrently by other developers. It is bad, because not knowing which artifacts are changing in parallel regularly leads to problems when changes are promoted from workspaces into a central configuration management repository. Overcoming the bad isolation, while retaining the good isolation, is a matter of raising awareness among developers, an issue traditionally ignored by the discipline of configuration management. To fill this void, we have developed Palantir, a novel workspace awareness tool that complements existing configuration management systems by providing developers with insight into other workspaces. In particular, the tool informs a developer of which other developers change which other artifacts, calculates a simple measure of severity of those changes, and graphically displays the information in a configurable and generally non-obtrusive manner. To illustrate the use of Palantir, we demonstrate how it integrates with two representative configuration management systems.

  • Computer-assisted assume/guarantee reasoning with VeriSoft

    Page(s): 138 - 148

    We show how the state space exploration tool VeriSoft can be used to analyze parallel C/C++ programs compositionally. VeriSoft is used to check assume/guarantee specifications of parallel processes automatically. The analysis is meant to complement standard assume/guarantee reasoning which is usually carried out solely with "pencil and paper". While a successful analysis does not always imply the general correctness of the specification, it increases the confidence in the verification effort. An unsuccessful analysis always produces a counterexample which can be used to correct the specification or the program. VeriSoft's optimization and visualization techniques make the analysis relatively efficient and effective.

  • Quality of service engineering with UML, .NET, and CORBA

    Page(s): 759 - 760

    The concern for non-functional properties of software components and distributed applications has increased significantly in recent years. Non-functional properties are often subsumed under the term Quality of Service (QoS). It refers to quality aspects of a software component or service such as real-time response guarantees, availability and fault-tolerance, the degree of data consistency, the precision of some computation, or the level of security. Consequently, the specification and implementation of QoS mechanisms has become an important concern in the engineering of distributed applications. In this tutorial the attendees will learn how non-functional requirements can be engineered in a systematic way into applications on top of distribution platforms such as CORBA and .NET. The tutorial focuses on two major subject areas: (1) specification of QoS properties and (2) implementation of QoS mechanisms in middleware. We present a comprehensive, model-driven approach. It starts with a platform-independent model (PIM) in UML that captures the application QoS requirements. This model is mapped by a tool to a platform-specific model (PSM) tailored for a specific middleware, which is extended with the corresponding QoS mechanisms. Finally, the PSM is translated to code. Participants in this tutorial will get a thorough understanding of general QoS requirements, QoS modeling alternatives, and QoS mechanism integration with respect to popular distributed object middleware. Furthermore, we will discuss the pros and cons of CORBA and .NET for QoS engineering. A tool will be demonstrated that substantially eases the modeling stages and the code generation.

  • Evaluating individual contribution toward group software engineering projects

    Page(s): 622 - 627

    It is widely acknowledged that group or team projects are a staple of undergraduate and graduate software engineering courses. Such projects provide students with experiences that better prepare them for their careers, so teamwork is often required or strongly encouraged by accreditation agencies. While there are a multitude of educational benefits of group projects, they also pose a considerable challenge in fairly and accurately discerning individual contribution for evaluation purposes. Issues, approaches, and best practices for evaluating individual contribution are presented from the perspectives of the University of Kentucky, University of Ottawa, University of Southern California, and others. The techniques utilized within a particular course generally are a mix of: (1) the group mark is everybody's mark, (2) everybody reports what they personally did, (3) group members report the relative contributions of the other members, (4) pop quizzes on project details, and (5) cross-validating with the results of individual work.

  • ICSE workshop on remote analysis and measurement of software systems (RAMSS)

    Page(s): 791 - 792

    The goal of this workshop is to bring together researchers and practitioners interested in exploring how the characteristics of today's era of computing (e.g., high connectivity, substantial computing power for the average user, higher demand for and expectation of frequent software updates) can be leveraged to improve software quality and performance.

  • Consistency management with repair actions

    Page(s): 455 - 464

    Comprehensive consistency management requires a strong mechanism for repair once inconsistencies have been detected. In this paper we present a repair framework for inconsistent distributed documents. The core piece of the framework is a new method for generating interactive repairs from full first-order logic formulae that constrain these documents. We present a full implementation of the components in our repair framework, as well as their application to the UML and related heterogeneous documents such as EJB deployment descriptors. We describe how our approach can be used as an infrastructure for building higher-level, domain-specific frameworks and provide an overview of related work in the database and software development environment communities.

  • Automated support for classifying software failure reports

    Page(s): 465 - 475

    This paper proposes automated support for classifying reported software failures in order to facilitate prioritizing them and diagnosing their causes. A classification strategy is presented that involves the use of supervised and unsupervised pattern classification and multivariate visualization. These techniques are applied to profiles of failed executions in order to group together failures with the same or similar causes. The resulting classification is then used to assess the frequency and severity of failures caused by particular defects and to help diagnose those defects. The results of applying the proposed classification strategy to failures of three large subject programs are reported. These results indicate that the strategy can be effective.

  • New directions on agile methods: a comparative analysis

    Page(s): 244 - 254

    Agile software development methods have caught the attention of software engineers and researchers worldwide, yet scientific research on them is still scarce. This paper reports results from a study which aims to organize, analyze, and make sense out of the dispersed field of agile software development methods. The comparative analysis is performed using the methods' life-cycle coverage, project management support, type of practical guidance, fitness-for-use, and empirical evidence as the analytical lenses. The results show that agile software development methods, without rationalization, cover certain/different phases of the software development life-cycle, and most of them do not offer adequate support for project management. Yet, many methods still strive for universal solutions (as opposed to situation-appropriate ones), and the empirical evidence is still very limited. Based on the results, new directions are suggested. In principle, it is suggested to place emphasis on methodological quality - not method quantity.

  • DADO: enhancing middleware to support crosscutting features in distributed, heterogeneous systems

    Page(s): 174 - 186

    Some "non-" or "extra-functional" features, such as reliability, security, and tracing, defy modularization mechanisms in programming languages. This makes such features hard to design, implement, and maintain. Implementing such features within a single platform, using a single language, is hard enough. With distributed, heterogeneous (DH) systems, these features induce complex implementations which cross-cut different languages, OSs, and hardware platforms, while still needing to share data and events. Worse still, the precise requirements for such features are often locality-dependent and discovered late (e.g., security policies). The DADO approach helps program cross-cutting features by improving DH middleware. A DADO service comprises pairs of adaplets which are explicitly modeled in IDL. Adaplets may be implemented in any language compatible with the target application, and attached to stubs and skeletons of application objects in a variety of ways. DADO supports flexible and type-checked interactions (using generated stubs and skeletons) between adaplets, and between objects and adaplets. Adaplets can be attached at run-time to an application object. We describe the approach and illustrate its use for several cross-cutting features, including performance monitoring, caching, and security. We also discuss software engineering process as well as run-time performance implications.

  • Tricks and traps of initiating a product line concept in existing products

    Page(s): 520 - 525

    Many companies are hampered in introducing the product line concept into already existing products. Though appealing, the concept is very difficult to introduce, specifically into a legacy environment. All too often the impacts and risks are not considered adequately. This article describes the introduction of a product line approach in Alcatel's S12 Voice Switching System Business Unit. Practical impacts during the introduction are described, as well as tricks and traps. The article not only summarizes the key software engineering principles, but also provides empirical evidence and practical techniques on which to build.

  • The grand challenge of trusted components

    Page(s): 660 - 667

    Reusable components equipped with strict guarantees of quality can help reestablish software development on a stronger footing, by taking advantage of the scaling effect of reuse to justify the extra effort of ensuring impeccable quality. This discussion examines work intended to help the concept of Trusted Component bring its full potential to the software industry, along two complementary directions: a "low road" leading to qualification of existing components, and a "high road" aimed at the production of components with fully proved correctness properties.

  • Whole program path-based dynamic impact analysis

    Page(s): 308 - 318

    Impact analysis, determining when a change in one part of a program affects other parts of the program, is time-consuming and problematic. Impact analysis is rarely used to predict the effects of a change, leaving maintainers to deal with consequences rather than working to a plan. Previous approaches to impact analysis, involving analysis of call graphs and static and dynamic slicing, exhibit several tradeoffs involving computational expense, precision, and safety; require access to source code; and require a relatively large amount of effort to re-apply as software evolves. This paper presents a new technique for impact analysis based on whole path profiling that provides a different set of cost-benefit tradeoffs - a set which can potentially be beneficial for an important class of predictive impact analysis tasks. The paper presents the results of experiments showing that the technique can predict impact sets that are more accurate than those computed by call graph analysis, and more precise (relative to the behavior expressed in a program's profile) than those computed by static slicing.

  • An empirical study of an informal knowledge repository in a medium-sized software consulting company

    Page(s): 84 - 92

    Numerous studies have been conducted on the design and architecture of knowledge repositories. This paper addresses the need for looking at practices where knowledge repositories are actually used in concrete work situations; this insight should be used when developing knowledge repositories in the future. Through methods inspired by ethnography, this paper investigates how an unstructured knowledge repository is used for different purposes by software developers and managers in a medium-sized software consulting company. The repository is a part of the company's knowledge management tool suite on the Intranet. We found five distinct ways of using the tool, from solving specific technical problems to getting an overview of competence in the company. We highlight the importance of informal organization and the social integration of the tool in the daily work practices of the company.

  • Automotive software engineering

    Page(s): 719 - 720

    Information technology has become the driving force of innovation in many areas of technology, and also in cars. Embedded software controls the functions of cars, supports and assists the driver, and realizes systems for information and entertainment. Software in automobiles is today one of the great challenges for software engineering. In modern cars we find all the issues of software systems in a nutshell. It is a challenge for software and systems engineering.

  • Component technology - what, where, and how?

    Page(s): 684 - 693

    Software components, if used properly, offer many software engineering benefits. Yet they also pose many original challenges, ranging from quality assurance to architectural embedding and composability. In addition, the recent movement towards services, as well as the established world of objects, causes many to wonder what purpose components might have. This extended abstract summarizes the main points of my Frontiers of Software Practice (FOSP) talk at ICSE 2003. The topics covered aim to offer an end-to-end overview of what role components should play, where they should be used, and how this can be achieved. Some key open problems are also pointed out.

  • Comparison of two component frameworks: the FIPA-compliant multi-agent system and the web-centric J2EE platform

    Page(s): 341 - 351

    This work compares and contrasts two component frameworks: (1) the web-centric Java 2 Enterprise Edition (J2EE) framework and (2) the FIPA-compliant multi-agent system (MAS). FIPA, the Foundation for Intelligent Physical Agents, provides specifications for agents and agent platforms. Both frameworks are component frameworks: the components are servlets and Enterprise Java Beans (EJBs) in the case of J2EE, and software agents in the case of MAS. Both frameworks are specification based. Both frameworks mandate platform responsibilities towards their respective components. We develop a framework with which to structure the comparison of the component frameworks. We apply this comparison structure in the context of a 'Data Access' scenario to application development in the respective component frameworks. Furthermore, we have prototyped this scenario in each of the two component frameworks. We conclude with a discussion of the benefits, drawbacks, and issues of developing new applications in each of the component frameworks.

  • Patterns, frameworks, and middleware: their synergistic relationships

    Page(s): 694 - 704

    The knowledge required to develop complex software has historically existed in programming folklore, the heads of experienced developers, or buried deep in the code. These locations are not ideal, since the effort required to capture and evolve this knowledge is expensive, time-consuming, and error-prone. Many popular software modeling methods and tools address certain aspects of these problems by documenting how a system is designed. However, they only support limited portions of software development and do not articulate why a system is designed in a particular way, which complicates subsequent software reuse and evolution. Patterns, frameworks, and middleware are increasingly popular techniques for addressing key aspects of the challenges outlined above. Patterns codify reusable design expertise that provides time-proven solutions to commonly occurring software problems that arise in particular contexts and domains. Frameworks provide both a reusable product-line architecture [1] - guided by patterns - for a family of related applications and an integrated set of collaborating components that implement concrete realizations of the architecture. Middleware is reusable software that leverages patterns and frameworks to bridge the gap between the functional requirements of applications and the underlying operating systems, network protocol stacks, and databases. This paper presents an overview of patterns, frameworks, and middleware, describes how these technologies complement each other to enhance reuse and productivity, and then illustrates how they have been applied successfully in practice to improve the reusability and quality of complex software systems.

  • Component rank: relative significance rank for software component search

    Page(s): 14 - 24

    Collections of already developed programs are important resources for efficient development of reliable software systems. In this paper, we propose a novel method of ranking software components, called Component Rank, based on analyzing actual use relations among the components and propagating significance through the use relations. We have developed a component-rank computation system and applied it to various Java programs. The results are promising in that non-specific, generic components are ranked high. Using the Component Rank system as a core part, we are currently developing a software product archiving, analyzing, and retrieving system named SPARS.
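
    The ranking idea in this abstract - propagating significance through a graph of "uses" relations - can be sketched as a PageRank-style computation over a component use graph. The sketch below is a hypothetical illustration under that assumption (the edge model, damping factor, and function name are invented here; the paper's actual algorithm and weighting scheme may differ):

    ```python
    from collections import defaultdict

    def significance_rank(use_edges, damping=0.85, iterations=50):
        """Rank components by propagating significance along 'uses' edges.

        use_edges: list of (user, used) pairs meaning `user` uses `used`.
        Returns a dict mapping each component to a significance score.
        """
        nodes = set()
        out_links = defaultdict(list)
        for user, used in use_edges:
            nodes.update((user, used))
            out_links[user].append(used)
        n = len(nodes)
        rank = {v: 1.0 / n for v in nodes}
        for _ in range(iterations):
            new_rank = {v: (1.0 - damping) / n for v in nodes}
            for user in nodes:
                targets = out_links[user]
                if not targets:
                    # Dangling component: spread its rank evenly over all nodes
                    for v in nodes:
                        new_rank[v] += damping * rank[user] / n
                else:
                    share = damping * rank[user] / len(targets)
                    for used in targets:
                        new_rank[used] += share
            rank = new_rank
        return rank

    # Components used by many others accumulate higher significance.
    edges = [("App1", "Util"), ("App2", "Util"), ("App3", "Util"), ("App1", "Logger")]
    scores = significance_rank(edges)
    assert scores["Util"] > scores["Logger"]
    ```

    The intuition matches the abstract: a widely used, generic component (here the hypothetical "Util") receives significance from all of its users and so ranks above a less-used one.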

  • Improving web application testing with user session data

    Page(s): 49 - 59

    Web applications have become critical components of the global information infrastructure, and it is important that they be validated to ensure their reliability. Therefore, many techniques and tools for validating web applications have been created. Only a few of these techniques, however, have addressed problems of testing the functionality of web applications, and those that do have not fully considered the unique attributes of web applications. In this paper we explore the notion that user session data gathered as users operate web applications can be successfully employed in the testing of those applications, particularly as those applications evolve and experience different usage profiles. We report results of an experiment comparing new and existing test generation techniques for web applications, assessing both the adequacy of the generated tests and their ability to detect faults on a point-of-sale web application. Our results show that user session data can produce test suites as effective overall as those produced by existing white-box techniques, but at less expense. Moreover, the classes of faults detected differ somewhat across approaches, suggesting that the techniques may be complementary.

  • Industrial-strength software product-line engineering

    Page(s): 751 - 752

    Software product-line engineering is one of the few approaches to software engineering that shows promise of improving software productivity by factors of 5 to 10. There are still few examples of its successful application on a large scale, partly because of the complexity of initiating a product-line engineering project and the many factors that must be addressed for such a project to be successful. This tutorial draws on experiences in introducing and sustaining product-line engineering in Lucent Technologies and in Avaya. The objective is to convey to participants the obstacles involved in transitioning to product line engineering and how to overcome such obstacles, particularly in large software development organizations. Participants will learn both technical and organizational aspects of the problem. Participants will leave the tutorial with many ideas on how to introduce product line engineering into an organization in a systematic way.
