Proceedings of the Ninth Annual Conference on Computer Assurance (COMPASS '94): Safety, Reliability, Fault Tolerance, Concurrency and Real Time, Security

Date: June 27 - July 1, 1994

Displaying Results 1 - 25 of 27
  • State minimization for concurrent system analysis based on state space exploration

    Publication Year: 1994, Page(s):123 - 134
    Cited by:  Papers (5)

    A fundamental issue in the automated analysis of concurrent systems is the efficient generation of the reachable state space. Since it is not possible to explore all the reachable states of a system if the number of states is very large or infinite, we need to develop techniques for minimizing the state space. This paper presents our approach to cluster subsets of states into equivalence classes. W...

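    A minimal sketch of the general minimization idea (not the authors' algorithm): explore breadth-first but store only one canonical representative per equivalence class. The canonicalization function, the toy successor relation and the state encoding below are hypothetical.

        from collections import deque

        def explore(initial, successors, canon):
            """Breadth-first exploration keeping one representative per equivalence class."""
            seen = {canon(initial)}
            queue = deque([initial])
            while queue:
                state = queue.popleft()
                for nxt in successors(state):
                    rep = canon(nxt)
                    if rep not in seen:          # expand each equivalence class only once
                        seen.add(rep)
                        queue.append(nxt)
            return seen

        # Hypothetical example: two interchangeable counters, so sorting the pair
        # canonicalizes each class; 9 raw states collapse to 6 representatives.
        def succ(state):
            a, b = state
            return [(x, y) for (x, y) in ((a + 1, b), (a, b + 1)) if x <= 2 and y <= 2]

        print(len(explore((0, 0), succ, lambda s: tuple(sorted(s)))))   # -> 6

    Such a collapse is only faithful when the chosen equivalence is compatible with the successor relation, as it is for the symmetric toy example above.
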
  • Proceedings of COMPASS'94 - 1994 IEEE 9th Annual Conference on Computer Assurance

    Publication Year: 1994
  • Causality as a means for the expression of requirements for safety critical systems

    Publication Year: 1994, Page(s):223 - 231

    The development of requirements for software systems has long been identified as an important and difficult part of software development. This is much more so for safety-critical systems. In this paper we identify one approach which, we believe, forces the developer to concentrate upon requirements rather than initial design concepts (as often happens). This approach uses causality as its main abst...

  • Case study: Applying formal methods to the Traffic Alert and Collision Avoidance System (TCAS) II

    Publication Year: 1994, Page(s):39 - 51
    Cited by:  Papers (1)

    Requirements State Machine Language (RSML) evolved from statecharts during the development of the Traffic Alert and Collision Avoidance System (TCAS) II system requirements specification. This paper describes RSML and the TCAS II system requirements specification, which was reverse-engineered from pseudocode. This case study illustrates how formal methods have been applied to a safety-critical sys...

  • Covert channels-here to stay?

    Publication Year: 1994, Page(s):235 - 243
    Cited by:  Papers (43)  |  Patents (1)

    We discuss the difficulties of satisfying high-assurance system requirements without sacrificing system capabilities. To alleviate this problem, we show how trade-offs can be made to reduce the threat of covert channels. We also clarify certain concepts in the theory of covert channels. Traditionally, a covert channel's vulnerability was measured by its capacity. We show why a capacity analysis al...

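    For background on the capacity measure mentioned above (standard information theory, not a result of this paper): a memoryless binary covert channel whose symbols are flipped with probability $p$ by noise has the binary symmetric channel capacity

        $C = 1 - H(p)$, where $H(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)$.

    For example, $p = 0.1$ gives $C \approx 0.53$ bits per channel use; multiplying by the number of channel uses per second gives a bandwidth in bits per second.
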
  • A development of hazard analysis to aid software design

    Publication Year: 1994, Page(s):17 - 25
    Cited by:  Papers (12)

    This paper describes a technique for software safety analysis which has been developed with the specific aim of feeding into and guiding design development. The method draws on techniques from the chemical industries' Hazard and Operability (HAZOP) analysis, combining this with work on software failure classification to provide a structured approach to identifying the hazardous failure modes of ne...

  • Experience applying the CoRE method to the Lockheed C-130J software requirements

    Publication Year: 1994, Page(s):3 - 8
    Cited by:  Papers (12)

    For safety-critical systems, regulatory and human concerns make assurance of requirements correctness a necessity. Most popular requirements methods rely heavily on expensive after-the-fact verification, validation and correction activities to attain a desired level of correctness. In cooperation with its industrial partners, the Software Productivity Consortium (the Consortium) has developed a ri...

  • Formal methods and dependability assessment

    Publication Year: 1994, Page(s):53 - 66
    Cited by:  Papers (1)

    Formal methods are increasingly used for system development and their potential advantages for dependability assurance have been recognized. However, there has so far been no hard evidence to either support or refute the efficacy of formal methods in this respect. This paper discusses how the dependability of systems can be affected by the use of formal methods in two respects. First, how and why...

  • Testability: an introduction for COMPASS94

    Publication Year: 1994, Page(s):173 - 174
    Cited by:  Papers (1)

    Testability is the probability that software will fail during random testing if it contains a fault. Reliability and correctness are distinct from testability, though all three ideas are closely related. It is theoretically possible to have reliable and even correct software that is not very testable, but you would be hard-pressed to give a convincing demonstration that such software has attained ...

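    A standard illustration of this definition (not taken from the paper): if testability is $t$, i.e. each random test fails with probability $t$ whenever a fault is present, then $n$ independent random tests all pass despite the fault with probability $(1 - t)^n$; for $t = 0.01$ and $n = 100$ this is $0.99^{100} \approx 0.37$, which is why low-testability software is hard to certify by testing alone.
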
  • Application of an informal program verification method to Ada

    Publication Year: 1994, Page(s):81 - 89

    The QDA informal program verification method is a comments analysis technique in which an analyst's assumptions about a program are expressed in the form of structured comments in the program and are checked by an analyzer. Previous work has shown QDA to be effective for detecting errors in assembly language programs. An experiment was performed to determine how well QDA would scale to high-level l...

  • Experiences formally verifying a network component

    Publication Year: 1994, Page(s):183 - 193
    Cited by:  Papers (1)

    Errors in network components can have disastrous effects, so it is important that all aspects of the design are correct. We describe our experiences formally verifying an implementation of an Asynchronous Transfer Mode (ATM) network switching fabric using the HOL90 theorem proving system. The design has been fabricated and is in use in the Cambridge Fairisle Network. It was designed and implemented...

  • Estimation of coverage probabilities for dependability validation of fault-tolerant computing systems

    Publication Year: 1994, Page(s):101 - 106
    Cited by:  Papers (6)

    Dependability validation is a major step toward development of high-assurance computing systems. This paper addresses the problem of estimating the coverage probabilities by statistically processing the information collected through physical or simulated fault injection. 3-stage random sampling is employed to derive the means, variances and confidence intervals of the coverage probabilities. The s...

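    The paper's 3-stage sampling scheme is more elaborate than this, but a single-stage sketch shows the basic shape of the estimation: treat each fault injection as a Bernoulli trial and interval-estimate the coverage probability. The outcome data and the normal-approximation interval below are illustrative assumptions only.

        import math

        def coverage_estimate(outcomes, z=1.96):
            """Point estimate and ~95% normal-approximation interval for coverage."""
            n = len(outcomes)                  # outcomes: 1 = injected fault tolerated, 0 = not
            c_hat = sum(outcomes) / n          # sample coverage probability
            half = z * math.sqrt(c_hat * (1 - c_hat) / n)
            return c_hat, (max(0.0, c_hat - half), min(1.0, c_hat + half))

        # Hypothetical campaign: 970 of 1000 injected faults were tolerated.
        print(coverage_estimate([1] * 970 + [0] * 30))
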
  • An experience modeling critical requirements

    Publication Year: 1994, Page(s):245 - 255
    Cited by:  Papers (2)

    Previous work at NRL demonstrated the benefits of a security modeling approach for building high assurance systems for particular application domains. This paper introduces an application domain called selective bypass that is prominent in certain network security solutions. We present a parameterized modeling framework for the domain and then instantiate a confidentiality model for a particular a...

  • Compositional model checking of Ada tasking programs

    Publication Year: 1994, Page(s):135 - 147
    Cited by:  Papers (2)  |  Patents (1)

    Model checking has proved to be an effective analysis tool for domains such as hardware circuits and communication protocols. However, it has not yet been widely applied to more general concurrent systems, such as those realized by Ada multitasking programs. A major impediment to the use of model checking in such systems is the exponential growth of the state-space, which results from the parallel...

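    The growth referred to is easy to quantify in the worst case (generic figures, not the paper's): composing $k$ tasks with $s_1, \dots, s_k$ local states can yield up to $\prod_i s_i$ global states, so ten tasks of ten states each already admit $10^{10}$ combinations; compositional checking aims to reason about the components separately rather than build that product directly.
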
  • An approach for the risk analysis of safety specifications

    Publication Year: 1994, Page(s):209 - 221
    Cited by:  Papers (9)

    Experience in safety-critical systems has shown that faults introduced during requirements analysis can and do cause accidents. Within a methodology for the systematic production of requirements specifications for safety, based on a framework to structure the analysis and the application of formal techniques, we focus, in this paper, on the risk analysis of the specifications. This has the aim to l...

  • Formal methods in the design of Ada 9X

    Publication Year: 1994, Page(s):29 - 37

    Several advisory groups have been established to provide suggestions and criticism to the Ada 9X Mapping Revision Team, the small design team that is revising the definition of the Ada programming language. One such group, the Language Precision Team, based its criticisms on attempts to construct formal mathematical models of the design. This paper reports on the first phase of that work.

  • AeSOP: an interactive failure mode analysis tool

    Publication Year: 1994, Page(s):9 - 16
    Cited by:  Papers (2)

    AeSOP (Aerospace Safety Oriented Petri Net) is an interactive failure mode analysis tool developed at The Aerospace Corporation. It automates a Petri net-based safety analysis technique developed by Leveson and Stolzy (1987) in which a reachability graph is analyzed backwards to identify potential failure modes of the system. AeSOP provides a flexible analytical environment where a user can arbitr...

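    The backward analysis mentioned above can be sketched generically (this is not AeSOP's implementation): walk the reachability graph against the direction of its edges, starting from the hazard states, to find every state from which a hazard remains reachable. The graph and state names below are hypothetical.

        from collections import defaultdict, deque

        def backward_reachable(edges, hazard_states):
            """Return all states from which some hazard state can be reached."""
            preds = defaultdict(set)
            for src, dst in edges:             # invert the reachability graph
                preds[dst].add(src)
            seen, queue = set(hazard_states), deque(hazard_states)
            while queue:
                state = queue.popleft()
                for p in preds[state]:
                    if p not in seen:
                        seen.add(p)
                        queue.append(p)
            return seen

        edges = [("idle", "arming"), ("arming", "armed"), ("armed", "fired"),
                 ("arming", "fault"), ("fault", "fired")]
        print(backward_reachable(edges, {"fired"}))   # every state with a path into the hazard
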
  • Testability, failure rates, detectability, trustability and reliability

    Publication Year: 1994, Page(s):171 - 172

    Discusses the relationship between several statistical measures of program dependability, including failure rates and testability. This is done by describing these concepts within the framework of a confidence-based measure called trustability. Suppose that M is a testing method, F is a class of faults and P is a class of programs. Suppose that the probability of a fault from F causing a failure i...

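    One common way to make this kind of statement precise (not necessarily the authors' exact formulation): if every test drawn under method M exposes any present fault from F with probability at least $d$, then after $n$ failure-free tests the confidence that the program contains no fault from F is at least $1 - (1 - d)^n$; for $d = 0.05$ and $n = 90$ this is roughly $0.99$.
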
  • Using formal methods to derive test frames in category-partition testing

    Publication Year: 1994, Page(s):69 - 79
    Cited by:  Papers (39)

    Testing is a standard method of assuring that software performs as intended. We extend the category-partition method, which is a specification-based testing method. An important aspect of category-partition testing is the construction of test specifications as an intermediate between functional specifications and actual tests. We define a minimal coverage criterion for category-partition test spec...

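    The combinational core of a category-partition test specification can be sketched as the constrained cross product of category choices; the categories, choices and constraint below are hypothetical, and real specifications typically carry further selector annotations that this sketch omits.

        from itertools import product

        categories = {                         # hypothetical categories and their choices
            "file_state":  ["absent", "empty", "non-empty"],
            "permission":  ["readable", "unreadable"],
            "buffer_size": ["zero", "small", "huge"],
        }

        def admissible(frame):
            # Example constraint: permissions are meaningless for an absent file.
            return not (frame["file_state"] == "absent" and frame["permission"] == "unreadable")

        names = list(categories)
        frames = [dict(zip(names, combo)) for combo in product(*categories.values())]
        frames = [f for f in frames if admissible(f)]
        print(len(frames), "test frames")      # 15 of the 18 raw combinations survive
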
  • A formal model of several fundamental VHDL concepts

    Publication Year: 1994, Page(s):177 - 181

    This paper presents a formal model of several fundamental concepts in VHDL, including the semantics of individual concurrent statements and groups of those statements, resolution functions, delta delays, and hierarchical component structuring. Based on this model, several extensions to VHDL are proposed, including nondeterministic assignments and unbounded asynchrony. Nondeterminism allows the spec...

  • Centurion software fault tolerance design and analysis tool

    Publication Year: 1994, Page(s):93 - 100
    Cited by:  Papers (1)

    Describes the Centurion computer-aided software fault tolerance design and analysis tool. The tool is a product of a research and development project focused on automated tools for use in design, assessment, and insertion of software fault tolerance techniques into Air Force systems. The Centurion tool allows users to analyze developmental and fielded software, and the associated computer and comm...

  • Evaluating software for safety systems in nuclear power plants

    Publication Year: 1994, Page(s):197 - 207
    Cited by:  Papers (4)

    This paper presents the results of work performed by Lawrence Livermore National Laboratory to assist the U.S. Nuclear Regulatory Commission in understanding the state of the art in software reliability for computer-based reactor protection systems. The activities reported upon summarize advice from technical experts in software reliability and safety, and identify the best current software develo...

  • What is software reliability?

    Publication Year: 1994, Page(s):169 - 170

    Reliability refers to statistical measures an engineer uses to quantify imperfection in practice. Often we speak imprecisely of an object having “high reliability”, but technically, unless the object cannot fail at all, its reliability is arbitrarily close to zero for a long enough period of operation. This is merely an expression of the truism that an imperfect object must eventually ...

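    The standard exponential model makes the point concrete (a textbook illustration, not the paper's): with a constant failure rate $\lambda > 0$, reliability over a period of length $t$ is $R(t) = e^{-\lambda t}$, which tends to $0$ as $t \to \infty$ no matter how small $\lambda$ is; for $\lambda = 10^{-4}$ per hour, $R(10{,}000\,\mathrm{h}) \approx e^{-1} \approx 0.37$.
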
  • Formal verification of an interactive consistency algorithm for the Draper FTP architecture under a hybrid fault model

    Publication Year: 1994, Page(s):107 - 120
    Cited by:  Papers (9)

    Fault-tolerant systems for critical applications should tolerate as many kinds of faults and as large a number of faults as possible, while using as little hardware as is feasible, and they should be provided with strong assurances for their correctness. Byzantine fault-tolerant architectures are attractive because they tolerate any kind of fault, but they are rather expensive: at least 3m+1 processo...

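    The expense alluded to is the classical interactive-consistency bound: tolerating $m$ arbitrary (Byzantine) faults with unauthenticated messages requires $n \ge 3m + 1$ processors, so $m = 1$ already demands 4 processors and $m = 2$ demands 7. Hybrid fault models distinguish less severe fault classes so that this full redundancy is needed only for the truly arbitrary faults.
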
  • On measurement of operational security [software reliability]

    Publication Year: 1994, Page(s):257 - 266
    Cited by:  Papers (7)

    Ideally, a measure of the security of a system should capture quantitatively the intuitive notion of 'the ability of the system to resist attack'. That is, it should be operational, reflecting the degree to which the system can be expected to remain free of security breaches under particular conditions of operation (including attack). Instead, current security levels at best merely reflect the ext...

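    One illustrative way to make such an operational measure concrete (an analogy with reliability theory, not necessarily the authors' formulation): if, under a fixed attacker profile, breaches occur randomly at a constant rate $\lambda$ per unit of attacking effort, then the probability of surviving effort $e$ with no breach is $e^{-\lambda e}$ and the mean effort to next breach is $1/\lambda$, playing the role that mean time to failure plays in reliability.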