
Proceedings of the 12th IEEE International Conference on Automated Software Engineering (ASE 1997)

1-5 November 1997


  • Proceedings of the 12th IEEE International Conference on Automated Software Engineering

  • Index of authors

    Page(s): 321
  • Enhancing the component reusability in data-intensive business programs through interface separation

    Page(s): 313 - 314

    Visual development environments provide good support for the reuse of graphical user interfaces, report and query generation, and simpler database retrieval and updating. However, many commonly used components for computation and database processing still have to be designed and developed repeatedly. The main problem is that current methods do not support separating a component's interface from the component itself. The component interface is not an intrinsic property of the component, and incorporating it into the component adversely affects the component's reusability. This paper proposes a program representation that addresses the problem: it enhances the reusability of a component by separating the component interface from the component.

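    The idea of separating a component's interface from its computation can be illustrated with a short sketch (hypothetical names, not the paper's program representation): the same computation is reused unchanged behind two different interfaces.

    ```python
    # Sketch: the reusable component is a pure computation with no
    # knowledge of how its inputs arrive or where its outputs go.
    def compute_net_pay(gross, tax_rate):
        """Core business computation -- the reusable component."""
        return round(gross * (1.0 - tax_rate), 2)

    # Interface 1: values arriving from an interactive form (dict of strings).
    def net_pay_from_form(form):
        return compute_net_pay(float(form["gross"]), float(form["tax_rate"]))

    # Interface 2: values arriving as a database row (tuple of floats).
    def net_pay_from_row(row):
        gross, tax_rate = row
        return compute_net_pay(gross, tax_rate)

    print(net_pay_from_form({"gross": "1000", "tax_rate": "0.2"}))  # 800.0
    print(net_pay_from_row((2500.0, 0.3)))  # 1750.0
    ```

    Because no interface detail is baked into `compute_net_pay`, adding a third interface (say, a report generator) requires no change to the component.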
  • An automated object-oriented testing for C++ inheritance hierarchy

    Page(s): 315 - 316

    This paper proposes a concept named unit repeated inheritance (URI), expressed in Z notation, for object-oriented testing of an inheritance hierarchy. Based on this unit, an inheritance level technique (ILT) is described as a guide for testing object-oriented software errors in the inheritance hierarchy, and two testing criteria, intralevel first and interlevel first, are derived from the proposed mechanism. To automate the test process, LEX and YACC are used to generate a lexical analyzer and a parser that process declarations in C++ source code, and a windowing tool, used in conjunction with a conventional C++ programming environment, assists programmers in analyzing and testing their C++ programs.

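    The two test orderings can be pictured with a toy sketch (illustrative only, not the paper's URI/ILT formalism): classes are assigned inheritance levels, then visited either level by level (intralevel first) or along inheritance chains (interlevel first).

    ```python
    # Class hierarchy as child -> parents; a class's level is its depth
    # from the root of the inheritance hierarchy.
    hierarchy = {"A": [], "B": ["A"], "C": ["A"], "D": ["B"]}

    def level(cls):
        parents = hierarchy[cls]
        return 0 if not parents else 1 + max(level(p) for p in parents)

    # Intralevel first: test every class on one level before the next level.
    intralevel = sorted(hierarchy, key=lambda c: (level(c), c))

    # Interlevel first: follow each inheritance chain root-to-leaf (DFS).
    def chains(cls="A"):
        order = [cls]
        for child in sorted(c for c, ps in hierarchy.items() if cls in ps):
            order += chains(child)
        return order

    print(intralevel)   # ['A', 'B', 'C', 'D']
    print(chains())     # ['A', 'B', 'D', 'C']
    ```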
  • Notes on refinement, interpolation and uniformity

    Page(s): 108 - 116

    The connection between some modularity properties and interpolation is revisited and restated in a general “logic-independent” framework. The presence of uniform interpolants is shown to assist in certain proof obligations, which suffice to establish the composition of refinements. The absence of the desirable interpolation properties from many logics that have been used in refinement motivates a thorough investigation of methods to expand a specification formalism orthogonally, so that the critical uniform interpolants become available. A potential breakthrough is outlined in this paper.

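    For orientation, the classical Craig interpolation property at issue can be stated as follows (standard definitions, not specific to this paper); a uniform interpolant is a single θ that depends only on φ and a target vocabulary, so it serves every such ψ at once:

    ```latex
    % Craig interpolation: if phi entails psi, some theta in the
    % shared vocabulary sits between them.
    \models \varphi \to \psi
    \;\Longrightarrow\;
    \exists\,\theta \;\text{with}\;
    \mathrm{voc}(\theta) \subseteq \mathrm{voc}(\varphi) \cap \mathrm{voc}(\psi),
    \quad \models \varphi \to \theta
    \;\text{ and }\;
    \models \theta \to \psi
    ```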
  • Specification and verification of the Co4 distributed knowledge system using LOTOS

    Page(s): 63 - 70

    This paper relates the formal specification and verification of a consensual decision protocol based on Co4, a computer environment dedicated to the building of a distributed knowledge base. This protocol has been specified in the ISO formal description technique LOTOS. The CADP tools from the EUCALYPTUS LOTOS toolset have been used to verify different safety and liveness properties. The verification work has confirmed an announced violation of knowledge consistency and has put forth a case of inconsistent hierarchy, four cases of unexpected message reception, and some further local corrections in the definition of the protocol.

  • Formally specifying engineering design rationale

    Page(s): 317 - 318

    This paper briefly describes our initial experiences in applied research of formal approaches to the generation and maintenance of software systems supporting structural engineering tasks. We describe the business context giving rise to this activity, and give an example of the type of engineering problem we have focused on. We briefly describe our approach to software generation and maintenance, and point out the challenges that we appear to face in transferring this technology into actual practice.

  • Interactive component-based software development with Espresso

    Page(s): 293 - 294

    Most component models in use today are language-independent, but also platform-dependent and not designed specifically to support a tool-based visual development paradigm. Espresso is a new component model designed with the goal of supporting software development through tool-based visual component composition. Because Espresso components are implemented in Java, they can run on any Java-enabled platform.

  • Modular flow analysis for concurrent software

    Page(s): 264 - 273

    Modern software systems are designed and implemented in a modular fashion by composing individual components. The advantages of early validation are widely accepted in this context, i.e., that defects in individual module designs and implementations may be detected and corrected prior to system-level validation. This is particularly true for errors related to interactions between system components. In this paper, we describe how a whole-program automated static analysis technique can be adapted to the validation of individual components, or groups of components, of sequential or concurrent software systems. This work builds on an existing approach, FLAVERS, that uses program flow analysis to verify explicitly stated correctness properties of software systems. We illustrate our modular analysis approach and some of its benefits by describing part of a case study with a realistic concurrent multi-component system.

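    Flow-analysis-based property checking of the FLAVERS kind can be caricatured in a few lines (a deliberately tiny sketch with invented events, not the tool's algorithm): a property automaton is run over every path of a component's control-flow graph, which is what lets a component be checked in isolation.

    ```python
    # Property automaton: every 'acquire' must be followed by 'release'
    # before exit (states: 0 = idle, 1 = holding, -1 = violation).
    PROPERTY = {(0, "acquire"): 1, (1, "release"): 0, (1, "acquire"): -1}

    # Component CFG: node -> list of (event, successor); no successors = exit.
    cfg = {"entry": [("acquire", "work")],
           "work": [("release", "exit"), ("acquire", "exit")],
           "exit": []}

    def violations(node="entry", state=0):
        """Count CFG paths on which the property automaton is violated."""
        if state == -1:
            return 1
        if not cfg[node]:
            return 0 if state == 0 else 1   # must not exit while holding
        return sum(violations(succ, PROPERTY.get((state, ev), state))
                   for ev, succ in cfg[node])

    print(violations())  # 1 -- the path acquire;acquire violates the property
    ```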
  • Distributed cooperative formal methods tools

    Page(s): 55 - 62

    This paper describes some tools to support formal methods, and conversely some formal methods for developing such tools. We focus on distributed cooperative proving over the web. Our tools include a proof editor/assistant, servers for remote proof execution, a distributed truth protocol, an editor generator, and a new method for interface design called algebraic semiotics, which combines semiotics with algebraic specification. Some examples are given.

  • Modeling software processes by using process and object ontologies

    Page(s): 319 - 320

    To model software processes using ontology engineering techniques, this paper presents a methodology for manually constructing two ontologies: an object ontology based on the constituent elements of objects, and a process ontology based on relationships between inputs and outputs, such as subsumption relationships. Using the constructed ontologies, software process plans are generated for user queries in a generate-and-test paradigm combining user interaction with constraint satisfaction. Experimental results show that the methodology works well in generating software process plans for queries about process plans ranging from a basic design to a detailed design.

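    Generate-and-test plan construction over an input/output process ontology might look like this toy sketch (process and artifact names are hypothetical):

    ```python
    # Process ontology: each step maps an input artifact to an output artifact.
    PROCESSES = {"basic_design": ("requirements", "basic design"),
                 "detailed_design": ("basic design", "detailed design"),
                 "coding": ("detailed design", "code")}

    def plan(start, goal):
        """Generate candidate step sequences; test input/output chaining."""
        frontier = [(start, [])]
        while frontier:
            artifact, steps = frontier.pop()
            if artifact == goal:
                return steps
            for name, (src, dst) in PROCESSES.items():
                if src == artifact and name not in steps:
                    frontier.append((dst, steps + [name]))
        return None

    print(plan("requirements", "detailed design"))
    # ['basic_design', 'detailed_design']
    ```

    In the paper's setting, the "test" phase would also consult the user and a constraint solver rather than only checking artifact chaining.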
  • A static analysis for program understanding and debugging

    Page(s): 297 - 298

    The paper presents a static pointer analysis technique for a subset of C. The tool supports user-defined assertions inserted in the body of the program. Assertions are of two kinds: static assertions automatically verified by the analyser, and hypothetical assertions treated as assumptions by the analyser. The technique deals with recursive data structures and it is accurate enough to handle circular structures.

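    A flow-insensitive points-to sketch shows the flavor of the two assertion kinds (static, checked by the analyser; hypothetical, assumed). This is illustrative only and far simpler than the paper's technique, which is flow-sensitive and handles recursive structures.

    ```python
    # Tiny flow-insensitive points-to analysis over C-like assignments.
    stmts = [("p", "&x"), ("q", "p"), ("q", "&y")]

    points_to = {}
    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for lhs, rhs in stmts:
            targets = ({rhs[1:]} if rhs.startswith("&")
                       else points_to.get(rhs, set()))
            before = points_to.get(lhs, set())
            if not targets <= before:
                points_to[lhs] = before | targets
                changed = True

    # Static assertion: verified against the computed points-to sets.
    assert "x" in points_to["p"]          # holds: p -> {x}
    # A hypothetical assertion (e.g. "q never aliases z") would instead be
    # added to the analyser's assumptions rather than checked.
    print(sorted(points_to["q"]))  # ['x', 'y']
    ```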
  • Applying concept formation methods to object identification in procedural code

    Page(s): 210 - 218

    Legacy software systems present a high level of entropy combined with imprecise documentation. This makes their maintenance more difficult, more time-consuming, and costlier. In order to address these issues, many organizations have been migrating their legacy systems to new technologies. In this paper, we describe a computer-supported approach aimed at supporting the migration of procedural software systems to object-oriented (OO) technology, which supposedly fosters reusability, expandability, flexibility, encapsulation, information hiding, modularity, and maintainability. Our approach relies heavily on the automatic formation of concepts, based on information extracted directly from code, to identify objects; it thus tends to minimize the need for domain application experts. We also propose rules for the identification of OO methods from routines. A well-known and self-contained example is used to illustrate the approach. We have applied the approach to medium and large procedural software systems, and the results show that it is able to find objects and to identify their methods from procedures and functions.

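    The core idea, grouping routines around the data they manipulate, can be sketched as follows (the legacy-code facts are hypothetical, and real concept-formation algorithms such as Galois-lattice construction are far more involved):

    ```python
    # Which global variables each legacy routine accesses.
    uses = {"open_acct": {"acct_table"}, "close_acct": {"acct_table"},
            "post_txn": {"acct_table", "txn_log"}, "print_log": {"txn_log"}}

    # Form one object candidate per variable: the variable becomes a field,
    # and the routines that touch it become candidate methods.
    objects = {}
    for routine, accessed in uses.items():
        for var in accessed:
            objects.setdefault(var, set()).add(routine)

    for var in sorted(objects):
        print(var, "->", sorted(objects[var]))
    # acct_table -> ['close_acct', 'open_acct', 'post_txn']
    # txn_log -> ['post_txn', 'print_log']
    ```

    Routines such as `post_txn` that land in more than one candidate are exactly the cases where the paper's concept-formation machinery, rather than a human expert, must decide the final grouping.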
  • From formal specifications to natural language: a case study

    Page(s): 309 - 310

    Because software specifications often serve as a formal contract between the developer and the customer, systems have been proposed that help the software client better understand specifications by automatically paraphrasing them in natural language. The REVIEW system applies natural language generation within Metaview, a metasystem that facilitates the construction of CASE environments to support software specification tasks. This paper summarizes a technical report that presents REVIEW through a case study involving the Object Model of Rumbaugh's OMT specification methodology (1991).

  • Data flow analysis within the ITOC information system design recovery tool

    Page(s): 227 - 236

    Most contemporary fourth-generation languages (4GLs) are tightly coupled with the database server, and other subsystems, provided by the vendor. As a result, organizations that wish to change database vendors are typically forced to rewrite their applications using the new vendor's 4GL. The anticipated cost of this redevelopment can deter an organization from changing vendors, denying it the benefits that would otherwise result, e.g., the exploitation of more sophisticated database server technology. If tools existed that could reduce the rewriting effort, the large up-front cost of migrating the organization's applications would also be reduced, which could make the transition economically feasible. The ITOC project is part of a large collaborative research initiative between the Centre for Software Maintenance at the University of Queensland and Oracle Corporation. The objective of this project is to design and implement a tool that automatically recovers both the application structure and the static schema definition of 4GL information system applications. These recovered system components are transformed into constructs that populate Oracle's Designer/2000 CASE repository. An essential step in the ITOC process is determining the relationships between different columns in the database, and between references to those columns and fields that appear within the user interface. This in turn requires analysis of data flow between variables in the 4GL programs. While data flow analysis has been applied in many applications, for example code optimization and program slicing, this paper presents the results of using data flow analysis in the construction of a novel design recovery tool for 4GL-based information systems.

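    The data-flow fact ITOC needs, which user-interface field ultimately feeds which database column, reduces to tracing def-use chains. A toy version over straight-line assignments (the statement encoding and `FIELD.`/`COL.` naming are invented for illustration):

    ```python
    # Straight-line 4GL-like code as (target, sources) per statement.
    code = [("v", ["FIELD.cust_name"]),      # v := screen field
            ("w", ["v"]),                    # w := v
            ("COL.name", ["w"])]             # database column := w

    def field_sources(var, code):
        """Trace a variable backwards to the screen fields flowing into it."""
        srcs = {var}
        for target, sources in reversed(code):
            if target in srcs:
                srcs.discard(target)
                srcs.update(sources)
        return {s for s in srcs if s.startswith("FIELD.")}

    print(field_sources("COL.name", code))  # {'FIELD.cust_name'}
    ```

    A real 4GL analysis must additionally handle branches, procedure calls, and the implicit data flow through the screen and database, which is where most of the paper's work lies.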
  • Exploiting domain-specific knowledge to refine simulation specifications

    Page(s): 117 - 124

    Discusses our approach to the problem of refining high-level simulation specifications. Our domain is simulated combat training for tank platoon members. Our input is a high-level specification for a training scenario, and our output is an executable specification for the behavior of a network-based combat simulator. Our approach combines a detailed model of the tank training domain with nonlinear planning and constraint satisfaction techniques. Our initial implementation is successful in large part because of our use of domain knowledge to limit the branching factor of the planner and the constraint satisfaction engine.

  • On the verification of VDM specification and refinement with PVS

    Page(s): 280 - 289

    Although the formal method VDM has been in existence since the 1970s, there are still no satisfactory tools to support verification in VDM. The paper deals with one possible means of approaching this problem by using the PVS theorem prover. It describes a translation of a VDM-SL specification into the PVS specification language using, essentially, the very transparent translation methods described by Agerholm (1996). PVS was used to typecheck the specification and to prove some non-trivial validation conditions. Next, a more abstract specification of the same system was also expressed in PVS, and the original specification was shown to be a refinement of this one. The drawbacks of the translation are that it must be done manually (though automation may be possible), and that the “shallow embedding” technique which is used does not accurately capture the proof rules of VDM-SL. The benefits come from the facts that the portion of VDM-SL which can be represented is substantial and that it is a great advantage to be able to use the powerful PVS proof checker.

  • A metric-based approach to detect abstract data types and state encapsulations

    Page(s): 82 - 89

    This article presents an approach to identifying abstract data types (ADTs) and abstract state encapsulations (ASEs, also called abstract objects) in source code. The approach groups functions, types, and variables into ADT and ASE candidates according to the proportion of features they share. The features considered include the context of these elements, the relationships to their environment, and informal information. A prototype tool has been implemented to support the approach and applied to three C systems (each between 30 and 38 KLOC). The ADTs and ASEs identified by the approach are compared to those identified by software engineers who did not know the proposed approach. In a case study, the approach identified, in most cases, more ADTs and ASEs than five published techniques applied to the same systems, which is important when trying to identify as many ADTs and ASEs as possible.

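    The grouping metric, the proportion of shared features, can be sketched as follows (the feature sets and the 0.5 threshold are invented, not the authors' exact feature list or cut-off):

    ```python
    # Features (e.g. referenced types, accessed variables) per function.
    features = {"stack_push": {"Stack", "top"}, "stack_pop": {"Stack", "top"},
                "list_len": {"List"}}

    def similarity(a, b):
        """Jaccard-style proportion of shared features."""
        fa, fb = features[a], features[b]
        return len(fa & fb) / len(fa | fb)

    # Group functions whose pairwise similarity exceeds a threshold into
    # one abstract-data-type candidate.
    THRESHOLD = 0.5
    groups = []
    for f in sorted(features):
        for g in groups:
            if all(similarity(f, m) > THRESHOLD for m in g):
                g.append(f)
                break
        else:
            groups.append([f])

    print(groups)  # [['list_len'], ['stack_pop', 'stack_push']]
    ```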
  • NORA/HAMMR: making deduction-based software component retrieval practical

    Page(s): 246 - 254

    Deduction-based software component retrieval uses pre- and postconditions as indexes and search keys, and an automated theorem prover (ATP) to check whether a component matches. The idea is very simple, but the vast number of proof tasks that arise makes a practical implementation very hard. We therefore pass the components through a chain of filters of increasing deductive power. In this chain, rejection filters based on signature matching and model checking techniques rule out non-matches as early as possible and prevent the subsequent ATP from “drowning”; hence, intermediate results of reasonable precision are available at (almost) any time during retrieval. The final ATP step then works as a confirmation filter to lift the precision of the answer set. We implemented a chain which runs fully automatically and uses SETHEO for model checking and the automated prover SETHEO as a confirmation filter. We evaluated the system over a medium-sized collection of components; the results encourage our approach.

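    The filter-chain architecture can be sketched generically; the three predicates below are cheap stand-ins for the real signature-matching, model-checking, and theorem-proving stages, and the candidate names are invented:

    ```python
    # Candidate components, filtered by checks of increasing cost. Each
    # stage only sees what the cheaper stages have not already rejected.
    CANDIDATES = ["sqrt_int", "sqrt_float", "concat", "reverse"]

    def signature_filter(name):          # cheap rejection filter
        return name.startswith("sqrt")

    def model_check_filter(name):        # medium-cost rejection filter
        return "int" in name

    def prover_filter(name):             # expensive confirmation filter
        return name == "sqrt_int"

    pool = CANDIDATES
    for check in (signature_filter, model_check_filter, prover_filter):
        pool = [c for c in pool if check(c)]
        # At this point an intermediate answer set of reasonable
        # precision is already available, before the prover has run.

    print(pool)  # ['sqrt_int']
    ```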
  • Feedback handling in dynamic task nets

    Page(s): 301 - 302

    While a software process is being executed, many errors and problems occur which require previously executed process steps to be reconsidered. To handle feedback in a process management system, several requirements need to be addressed: adaptability, human intervention, impact analysis, change propagation, restoration of the work context, and traceability. Feedback management in DYNAMITE meets these requirements. DYNAMITE is based on dynamic task nets and specifically supports feedback through feedback relations, task versions, and customized semantics of data flows. A methodology for feedback handling is also presented.

  • A structured approach for synthesizing planners from specifications

    Page(s): 18 - 26

    Plan synthesis approaches in AI fall into two categories: domain-independent and domain-dependent. The domain-independent approaches are applicable across a variety of domains, but may not be very efficient in any one given domain. The domain-dependent approaches can be very efficient for the domain for which they are designed, but need to be written separately for each domain of interest. The tediousness and error-proneness of manual coding have hitherto inhibited work on domain-dependent planners. In this paper we describe a novel way of automating the development of domain-dependent planners using knowledge-based software synthesis tools. Specifically, we describe an architecture called CLAY, in which the Kestrel Interactive Development System (KIDS) is used in conjunction with a declarative theory of domain-independent planning and declarative control knowledge specific to a given domain to semi-automatically derive customized planning code. We discuss what it means to write a declarative theory of planning and control knowledge for KIDS, and illustrate this by generating a range of domain-specific planners using state-space and plan-space refinements. We demonstrate that the synthesized planners can have superior performance compared to classical refinement planners using the same control knowledge.

  • Tools supporting the creation and evolution of software development knowledge

    Page(s): 46 - 53

    Software development is a knowledge-intensive activity involving the integration of diverse knowledge sources that undergo constant change. The volatility of knowledge in software development requires that knowledge bases are able to support a continuous knowledge acquisition process where tools are available that can make use of partial knowledge. To address these issues, case-based technology is used in combination with an organizational learning process to create an approach that turns Standard Development Methodologies (SDM) into living documents that capture project experiences and emerging requirements as they are encountered in an organization. A rule-based system is used to tailor the SDM to meet the characteristics of individual projects and provide relevant development knowledge throughout the development lifecycle.

  • Towards semantic-based object-oriented CASE tools

    Page(s): 295 - 296

    Despite their strengths, object-oriented methods (OOMs) and their supporting CASE tools often do not produce models that are amenable to rigorous semantic analysis. This is a direct result of their loosely defined semantics. The authors outline their ongoing work on providing a semantic base for OOMs.

  • Declarative specification of software architectures

    Page(s): 201 - 208

    Scaling formal methods to large, complex systems requires methods of modeling systems at high levels of abstraction. In this paper, we describe such a method for specifying system requirements at the software architecture level. An architecture represents a way of breaking down a system into a set of interconnected components. We use architecture theories to specify the behavior of a system in terms of the behavior of its components via a collection of axioms. The axioms describe the effects and limits of component variation and the assumptions a component can make about the environment provided by the architecture. As a result of the method, the verification of the basic architecture can be separated from the verification of the individual component instantiations. We present an example of using architecture theories to model the task coordination architecture of a multi-threaded plan execution system.

  • Moving proofs-as-programs into practice

    Page(s): 10 - 17

    Proofs in the Nuprl system, an implementation of a constructive type theory, yield “correct-by-construction” programs. In this paper a new methodology is presented for extracting efficient and readable programs from inductive proofs. The resulting extracted programs are in a form suitable for use in hierarchical verifications, in that they are amenable to clean partial evaluation via extensions to the Nuprl rewrite system. The method is based on two elements: specifications written with careful use of the Nuprl set type to restrict the extracts to strictly computational content, and proofs that use induction tactics that generate extracts using familiar fixed-point combinators of the untyped lambda calculus. The methodology is described and its application illustrated by example.
