
Proceedings of the 18th International Conference on Software Engineering (ICSE 1996)

Date: 25-30 March 1996


Showing results 1-25 of 55
  • Proceedings of IEEE 18th International Conference on Software Engineering

  • Author index

  • Beyond structured programming

    Page(s): 268 - 277

    Structured programming principles are not strong enough to control complexity and guarantee high reliability of software at the module level. Stronger organizing principles and stronger properties of components are needed to make significant gains in the quality of software. Practical proposals, based on the definition of normal forms which have a mathematical/logical foundation, are suggested as a vehicle for constructing software that is both simpler and of higher quality with regard to clearly defined and justifiable criteria.

  • Scene: using scenario diagrams and active text for illustrating object-oriented programs

    Page(s): 366 - 375

    Scenario diagrams are a well-known notation for visualizing the message flow in object-oriented systems. Traditionally, they are used in the analysis and design phases of software development to prototype the expected behavior of a system. We show how they can be used in reverse for understanding and browsing existing software. We have implemented a tool called Scene (SCENario Environment) that automatically produces scenario diagrams for existing object-oriented systems. The tool makes extensive use of an active text framework providing the basis for various hypertext-like facilities. It allows the user to browse not only scenarios but also various kinds of associated documents, such as source code (method definitions and calls), class interfaces, class diagrams and call matrices.

  • Experience assessing an architectural approach to large-scale systematic reuse

    Page(s): 220 - 229

    Systematic reuse of large-scale software components promises rapid, low cost development of high-quality software through the straightforward integration of existing software assets. To date this promise remains largely unrealized, owing to technical, managerial, cultural, and legal barriers. One important technical barrier is architectural mismatch. Recently, several component integration architectures have been developed that purport to promote large-scale reuse. Microsoft's OLE technology and associated applications are representative of this trend. To understand the potential of these architectures to enable large-scale reuse, we evaluated OLE by using it to develop a novel fault-tree analysis tool. Although difficulties remain, the approach appears to overcome architectural impediments that have hindered some previous large-scale reuse attempts, to be practical for use in many domains, and to represent significant progress towards realizing the promise of large-scale systematic reuse.

  • A specification-based adaptive test case generation strategy for open operating system standards

    Page(s): 81 - 89

    The paper presents a specification-based adaptive test case generation (SBATCG) method for integration testing in an open operating system standards environment. In the SBATCG method, templates describing abstract state transitions are derived from a model-based specification, and the templates are refined to the internal structure of each implementation. We adopt the Z notation, one of the most widely used formal specification languages. We conducted mutation analysis to study the fault exposure abilities of the SBATCG method and that of a strategy based only on a specification. In our experiment, we used a Z version of the ITRON2 real-time multi-task operating system specification and two commercially available ITRON2 implementations. The results of this experiment show that the SBATCG method can achieve a higher fault detecting ability than can the strategy using only a specification.

  • A generic, peer-to-peer repository for distributed configuration management

    Page(s): 308 - 317

    Distributed configuration management is intended to support the activities of projects that span multiple sites. NUCM (Network-Unified Configuration Management) is a testbed that we are developing to help us explore the issues of distributed configuration management. NUCM separates configuration management repositories (i.e. the stores for versions of artifacts) from configuration management policies (i.e. the procedures by which the versions are manipulated) by providing a generic model of a distributed repository and an associated programmatic interface. This paper describes the model and the interface, presents an initial repository distribution mechanism, and sketches how NUCM can be used to implement two rather different configuration management policies, namely check-in/check-out and change sets.
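    The repository/policy split described in this abstract can be sketched in a few lines. The sketch below is an illustrative toy, not the NUCM interface; every class and method name is invented.

```python
class Repository:
    """Generic policy-neutral store: artifact name -> list of versions."""
    def __init__(self):
        self._versions = {}

    def store(self, name, content):
        self._versions.setdefault(name, []).append(content)
        return len(self._versions[name]) - 1   # new version number

    def retrieve(self, name, version=-1):
        return self._versions[name][version]


class CheckInCheckOutPolicy:
    """Pessimistic policy: an artifact must be checked out before it changes."""
    def __init__(self, repo):
        self.repo = repo
        self._locked = set()

    def check_out(self, name):
        if name in self._locked:
            raise RuntimeError(f"{name} is already checked out")
        self._locked.add(name)
        return self.repo.retrieve(name)

    def check_in(self, name, content):
        self._locked.discard(name)
        return self.repo.store(name, content)


class ChangeSetPolicy:
    """Optimistic policy: edits to several artifacts form one logical change."""
    def __init__(self, repo):
        self.repo = repo

    def apply(self, edits):  # edits: {name: new_content}
        return {n: self.repo.store(n, c) for n, c in edits.items()}
```

    Both policies drive the same `Repository` through the same two operations, which is the sense in which the store stays generic while the procedures for manipulating versions vary.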

  • User interface prototyping-concepts, tools, and experience

    Page(s): 532 - 541

    In recent years the development of highly interactive software systems with graphical user interfaces has become increasingly common. The acceptance of such a system depends to a large degree on the quality of its user interface. Prototyping is an excellent means for generating ideas about how a user interface can be designed, and it helps to evaluate the quality of a solution at an early stage. We present the basic concepts behind user interface prototyping, a classification of tools supporting it and a case study of nine major industrial projects. Based on our analysis of these projects we present the following conclusions: prototyping is now used more consciously than in earlier years. No project applied a traditional life-cycle approach, which is one of the reasons why most of them were successful. Prototypes are increasingly used as a vehicle for developing and demonstrating visions of innovative systems.

  • OPSIS: a view mechanism for software processes which supports their evolution and reuse

    Page(s): 38 - 47

    The paper describes Opsis, a view mechanism applied to graph-based process modelling languages of the Petri net type. A view is a sub-model which can be mechanistically constructed from another model by application of a perspective which: identifies all parts of the original model that are contained in the sub-model; identifies and transforms all parts that constitute the interface to other sub-models; adds new link relations to describe the behaviour of the sub-model in interaction with the other sub-models. Sub-models are easier to grasp and can be limited in scope to some well defined aspects of a global model, such as the viewpoint of a single role player. Composition of sub-models is achieved through a merge operation on interface elements of sub-models. The intended use of Opsis is: 1) process evolution-changes can be localised to certain views, which largely reduces the complexity of applying change; and 2) process reuse-libraries can contain reusable fragments of type view that can be combined using the composition operators.

  • A case study in applying a systematic method for COTS selection

    Page(s): 201 - 209

    This paper describes a case study that used and evaluated key aspects of a method developed for systematic reusable off-the-shelf software selection. The paper presents a summary of the common problems in reusable off-the-shelf software selection, describes the method used and provides details about the case study carried out. The case study indicated that the evaluated aspects of the method are feasible and improve the quality and efficiency of reusable software selection, and that decision makers have more confidence in the evaluation results compared to traditional approaches. Furthermore, the case study also showed that the choice of evaluation data analysis method can influence the evaluation results.

  • System dynamics modeling of an inspection-based process

    Page(s): 376 - 386

    A dynamic simulation model of an inspection-based software lifecycle process has been developed to support quantitative process evaluation. The model serves to examine the effects of inspection practices on cost, scheduling and quality throughout the lifecycle. It uses system dynamics to model the interrelated flows of tasks, errors and personnel throughout different development phases and is calibrated to industrial data. It extends previous software project dynamics research by examining an inspection-based process with an original model, integrating it with a knowledge-based method for risk assessment and cost estimation, and using an alternative modeling platform. While specific enough to investigate inspection practices, it is sufficiently general to incorporate changes for other phenomena. It demonstrates the effects of performing inspections or not, the effectiveness of varied inspection policies, and the effects of other managerial policies such as manpower allocation. The results of testing indicate a valid model that can be used for process evaluation and project planning, and can serve as a framework for incorporating other dynamic process factors.
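    A system-dynamics model of this kind can be miniaturized to a few coupled flows. The discrete-step sketch below is illustrative only; its rates are invented and are not calibrated to the paper's industrial data.

```python
def simulate(steps, inject_rate, inspect_eff, test_eff):
    """Run a toy stock-and-flow model of error injection and removal."""
    latent = 0.0    # stock: errors currently present in work products
    escaped = 0.0   # stock: errors that reach the field
    for _ in range(steps):
        latent += inject_rate            # flow: errors injected while developing
        found = latent * inspect_eff     # flow: errors removed by inspection
        latent -= found
        leak = latent * (1 - test_eff)   # flow: errors that survive testing
        escaped += leak
        latent -= leak
    return escaped

with_inspections = simulate(10, 5.0, 0.6, 0.7)
without_inspections = simulate(10, 5.0, 0.0, 0.7)
```

    Even this toy reproduces the qualitative claim: raising inspection effectiveness lowers the stock of escaped errors, so the cost/quality effect of an inspection policy can be read off the flows.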

  • A software engineering experiment in software component generation

    Page(s): 542 - 552

    The paper presents results of a software engineering experiment in which a new technology for constructing program generators from domain-specific specification languages has been compared with a reuse technology that employs sets of reusable Ada program templates. Both technologies were applied to a common problem domain, constructing message translation and validation modules for military command, control, communications and information systems (C3I). The experiment employed four subjects to conduct trials of use of the two technologies on a common set of test examples. The experiment was conducted with personnel supplied and supervised by an independent contractor. Test cases consisted of message specifications taken from Air Force C3I systems. The main results are that greater productivity was achieved and fewer errors were introduced when subjects used the program generator than when they used Ada templates to implement software modules from sets of specifications. The differences in the average performance of the subjects are statistically significant at confidence levels exceeding 99 percent.

  • The role of experimentation in software engineering: past, current, and future

    Page(s): 442 - 449

    Software engineering needs to follow the model of other physical sciences and develop an experimental paradigm for the field. This paper proposes an approach towards developing the experimental component of such a paradigm. The approach is based upon a quality improvement paradigm that addresses the role of experimentation and process improvement in the context of industrial development. The paper outlines a classification scheme for characterizing such experiments.

  • A new approach to consistency control in software engineering

    Page(s): 289 - 297

    Quality assurance methods as suggested by standards like ISO 9000 focus on the principle of review and feedback loops, which may be implemented by computer-based software process management including life cycle models, version control, and change tracking. Provided that the software process is modelled independently of concrete design methods, development tools, and software representations, a general representation of quality assurance methods can be obtained. In our paper we introduce such a high-level formalism, exploiting a remarkable analogy between the software development process and distributed computations. Our approach is based on labelling each software element and product version during development. By using these labels one can coordinate versions and variant designs, and reconstruct elements of old versions automatically. Though our model is independent of particular design methods or programming formalisms, it can be parameterized with tools and compilers in order to be tailored to specific projects. Some applications are demonstrated for important problems of software project management that cannot be solved or even detected with today's standard methods, but that can easily be dealt with by using our new model.

  • An object-oriented implementation of B-ISDN signalling. 2. Extendability stands the test

    Page(s): 125 - 132

    The article discusses the extension of an existing object-oriented implementation for B-ISDN signalling. After a brief overview of the existing implementation it is shown where changes in the existing software were made to meet the new requirements (in particular, features for intelligent networks), what mechanisms were available or introduced to make this adaptation as easy as possible, and what experience in software reuse was gained in the process. The conclusions confirm the predictions concerning extendibility stated in an earlier article, giving reasons for emphasizing again the values and merits of the Call Model and an object-oriented approach.

  • Designing and implementing COO: design process, architectural style, lessons learned

    Page(s): 342 - 352

    This paper reports on the design and implementation of a software development framework named COO (which stands for COOperation and COOrdination in the software process). Its design process is first detailed and justified. Then, the paper emphasizes its layered and subject-oriented architecture. Particularly, it is shown how this architectural style leads to a very flexible and powerful way of defining, integrating and combining services in a software development environment.

  • GRIDS-GRaph-based, integrated development of software: integrating different perspectives of software engineering

    Page(s): 48 - 59

    The paper presents a multi-dimensional software engineering model, based on a formal graph specification. In contrast to other software engineering approaches, we concentrate on the integration of the "partial" models of software processes, system architectures and views onto the system into one consistent project framework, in order to enhance large-scale software development. We first introduce the static part of the so-called three-dimensional model of software engineering (3DM), which meta-models partial models and integrated project frameworks. We further describe the dynamic part of the 3DM, which defines the necessary actions to generate, manipulate and maintain the entities of the static part. Using the programmed graph rewriting system PROGRES gives us a powerful means to formally specify our conceptual model. We show how we apply PROGRES to formalize the 3DM, and present the prototype of a project supporting tool, generated from the formal specification of the static and dynamic parts of the 3DM.

  • System acquisition based on software product assessment

    Page(s): 210 - 219

    The procurement of complex software products involves many risks. To properly assess and manage those risks, Bell Canada has developed methods and tools that combine process capability assessment with a static analysis based software product assessment. This paper describes the software product assessment process that is part of our risk management approach. The process and the tools used to conduct a product assessment are described. The assessment is in part based on static source code metrics and inspections. A summary of the lessons learned since the initial implementation in 1993 is provided. Over 20 products totalling more than 100 million lines of code have gone through this process.

  • Monitoring compliance of a software system with its high-level design models

    Page(s): 387 - 396

    As a complex software system evolves, its implementation tends to diverge from the intended or documented design models. Such undesirable deviation makes the system hard to understand, modify and maintain. This paper presents a hybrid computer-assisted approach for confirming that the implementation of a system maintains its expected design models and rules. Our approach closely integrates logic-based static analysis and dynamic visualization, providing multiple code views and perspectives. We show that the hybrid technique helps determine design-implementation congruence at various levels of abstraction: concrete rules like coding guidelines, architectural models like design patterns or connectors, and subjective design principles like low coupling and high cohesion. The utility of our approach has been demonstrated in the development of μChoices, a new multimedia operating system which inherits many design decisions and guidelines learned from experience in the construction and maintenance of its predecessor, Choices.

  • A systematic survey of CMM experience and results

    Page(s): 323 - 330

    The capability maturity model (CMM) for software has become very influential as a basis for software process improvement (SPI). Most of the evidence to date showing the results of these efforts has consisted of case studies. We present a systematic survey of organizations that have undertaken CMM-based SPI to get more representative results. We found evidence that process maturity is in fact associated with better organizational performance, and that software process appraisals are viewed, in retrospect, as extremely valuable and accurate guides for the improvement effort. The path was not always smooth, however, and efforts generally took longer and cost more than expected. A number of factors that distinguished highly successful from unsuccessful efforts are identified. Most of these factors are under management control, suggesting that a number of specific management decisions are likely to have a major impact on the success of the effort.

  • The program understanding problem: analysis and a heuristic approach

    Page(s): 6 - 15

    Program understanding is the process of making sense of complex source code. This process has been considered computationally difficult and conceptually complex. So far no formal complexity results have been presented, and conceptual models differ from one researcher to the next. We formally prove that program understanding is NP-hard. Furthermore, we show that even a much simpler subproblem remains NP-hard. However, we do not despair at this result, but rather offer an attractive problem solving model for the program understanding problem. Our model is built on a framework for solving constraint satisfaction problems, or CSPs, which are known to have interesting heuristic solutions. Specifically, we can represent and heuristically address previous and new heuristic approaches to the program understanding problem with both existing and specially designed constraint propagation and search algorithms.
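    The CSP framing of program understanding can be illustrated with a deliberately tiny example: each code fragment is a variable, its candidate plan concepts form the domain, and constraints rule out incoherent combinations. The fragments, concepts, and the single constraint below are all invented; a real program-understanding CSP would have far larger domains and many interacting constraints.

```python
from itertools import product

# Variables: code fragments. Domains: candidate plan concepts (hypothetical).
fragments = ["loop1", "swap_block"]
concepts = {
    "loop1": ["bubble-sort-pass", "linear-search"],
    "swap_block": ["element-swap", "variable-init"],
}

def consistent(assignment):
    # Toy constraint: a sort pass must contain a swap; a search must not.
    if assignment["loop1"] == "bubble-sort-pass":
        return assignment["swap_block"] == "element-swap"
    return assignment["swap_block"] != "element-swap"

# Exhaustive search over the cross product; real solvers would prune with
# constraint propagation instead of enumerating every combination.
solutions = [
    dict(zip(fragments, values))
    for values in product(*(concepts[f] for f in fragments))
    if consistent(dict(zip(fragments, values)))
]
```

    The NP-hardness result says this cross product explodes in general, which is exactly why the paper turns to constraint propagation and search heuristics rather than exhaustive enumeration.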

  • Multilanguage interoperability in distributed systems. Experience report

    Page(s): 451 - 463

    The Q system provides interoperability support for multilingual, heterogeneous component-based software systems. Initial development of Q began in 1988, and was driven by the very pragmatic need for a communication mechanism between a client program written in Ada and a server written in C. The initial design was driven by language features present in C, but not in Ada, or vice-versa. In time our needs and aspirations grew and Q evolved to support other languages, such as C++, Lisp, Prolog, Java, and Tcl. As a result of pervasive usage by the Arcadia SDE research project, usage levels and modes of the Q system grew and so more emphasis was placed upon portability, reliability, and performance. In that context we identified specific ways in which programming language support systems can directly impede effective interoperability. This necessitated extensive changes to both our conceptual model and our implementation of the Q system. We also discovered the need to support modes of interoperability far more complex than the usual client-server. The continued evolution of Q has allowed the architecture of Arcadia software to become highly distributed and component-based, exploiting components written in a variety of languages. In addition to becoming an Arcadia project mainstay, Q has also been made available to over 100 other sites, and it is currently in use in a variety of other projects. This paper summarizes key points that have been learned from this considerable base of experience.

  • Configuration management with logical structures

    Page(s): 298 - 307

    When designing software, programmers usually think in terms of modules that are represented as functions and classes, but using existing configuration management systems, programmers have to deal with versions and configurations that are organized by files and directories. This is inconvenient and error-prone, since there is a gap between handling source code and managing configurations. We present a framework for programming environments that handles versions and configurations directly in terms of the functions and classes in source code. We show that with this framework, configuration management issues in software reuse and cooperative programming become easier. We also present a prototype environment that has been developed to verify our ideas.
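    The idea of versioning logical units rather than files can be sketched in a few lines. The store and its methods below are hypothetical illustrations, not the framework described in the paper: each function or class gets its own version history, and a configuration is just a choice of one version per unit.

```python
class LogicalStore:
    """Toy store that versions functions/classes, not files (invented API)."""
    def __init__(self):
        self._history = {}  # unit name -> list of source texts

    def commit(self, unit, source):
        """Record a new version of one logical unit; return its number."""
        self._history.setdefault(unit, []).append(source)
        return len(self._history[unit]) - 1

    def configuration(self, picks):
        """Assemble source text from chosen versions of each unit."""
        return "\n".join(self._history[unit][v] for unit, v in picks.items())


store = LogicalStore()
v0 = store.commit("parse", "def parse(s): ...")
v1 = store.commit("parse", "def parse(s, strict=False): ...")
store.commit("emit", "def emit(ast): ...")

# Two configurations differing only in which version of `parse` they pick.
old = store.configuration({"parse": v0, "emit": 0})
new = store.configuration({"parse": v1, "emit": 0})
```

    Because versions attach to `parse` and `emit` directly, swapping one function's version never touches the other, which is the gap-closing property the abstract argues file-based systems lack.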

  • An empirical study of static call graph extractors

    Page(s): 90 - 99

    Informally, a call graph represents calls between entities in a given program. The call graphs that compilers compute to determine the applicability of an optimization must typically be conservative: a call may be omitted only if it can never occur in any execution of the program. Numerous software engineering tools also extract call graphs, with the expectation that they will help software engineers increase their understanding of a program. The requirements placed on software engineering tools when computing call graphs are typically more relaxed than for compilers. For example, some false negatives-calls that can in fact take place in some execution of the program, but which are omitted from the call graph-may be acceptable, depending on the understanding task at hand. In this paper we empirically show a consequence of this spectrum of requirements by comparing the C call graphs extracted from three software systems (mapmaker, mosaic, and gcc) by five extraction tools (cflow, CIA, Field, mk-functmap, and rigiparse). A quantitative analysis of the call graphs extracted for each system shows considerable variation, a result that is counterintuitive to many experienced software engineers. A qualitative analysis of these results reveals a number of reasons for this variation: differing treatments of macros, function pointers, input formats, etc. We describe and discuss the study, sketch the design space, and discuss the impact of our study on practitioners, tool developers, and researchers.
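    The function-pointer ambiguity that makes extractors disagree is easy to reproduce in miniature. The sketch below is not one of the five tools studied; it uses Python's ast module on Python source as a stand-in for C, and it records only syntactically direct calls, so a call through a variable (the function-pointer analogue) never produces an edge to the real callee.

```python
import ast

SRC = """
def f(): g()
def g(): pass
def h():
    fp = g      # call through a variable -- analogous to a C function pointer
    fp()
"""

def extract(src):
    """Collect (caller, callee-name) edges for direct name calls only."""
    tree = ast.parse(src)
    edges = set()
    for fn in ast.walk(tree):
        if isinstance(fn, ast.FunctionDef):
            for node in ast.walk(fn):
                if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                    edges.add((fn.name, node.func.id))
    return edges

edges = extract(SRC)
```

    This extractor reports f calling g but misses that h reaches g through fp, a false negative of exactly the kind the study found; a different tool might resolve the alias, or conservatively add an edge from h to every function, and the two call graphs would then disagree.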

  • Requirements for a layered software architecture supporting cooperative multi-user interaction

    Page(s): 408 - 417

    Layered interactive systems lend themselves to be adapted for cooperation if inter-layer communication is entrusted to separate connectors. Point-to-point connectors can be replaced with cooperative connectors multiplexing and demultiplexing I/O between a particular layer and multiple instances of the next lower one. For this technique to be most effective, some general guidelines should be followed that support the design of good quality software, where discrimination between heterogeneous functionality at the architectural level allows multiple interacting users to exploit different system features based on their role in the cooperation. This provides a sound basis for augmenting collaboration-transparent layered systems with powerful collaboration support (e.g. complex coordination policies) while preserving separation of concerns between application and cooperative functionality. The paper discusses these issues both in general and with reference to their application within the CSDL framework for cooperative systems design.
