
Proceedings of the Ninth Annual Conference on Computer Assurance (COMPASS '94): Safety, Reliability, Fault Tolerance, Concurrency and Real Time, Security

Date: June 27 - July 1, 1994


Displaying results 1-25 of 27
  • State minimization for concurrent system analysis based on state space exploration

    Publication Year: 1994 , Page(s): 123 - 134
    Cited by:  Papers (5)

    A fundamental issue in the automated analysis of concurrent systems is the efficient generation of the reachable state space. Since it is not possible to explore all the reachable states of a system if the number of states is very large or infinite, we need to develop techniques for minimizing the state space. This paper presents our approach to clustering subsets of states into equivalence classes. We assume that concurrent systems are specified as communicating state machines with arbitrary data space. We describe a procedure for constructing a minimal reachability state graph from communicating state machines. As an illustration of our approach, we analyze a producer-consumer program written in Ada. (An illustrative state-exploration sketch follows this entry.)

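    The following is a minimal, hedged sketch of the kind of state-space exploration and equivalence-class clustering described above; it is not the authors' algorithm, and the bounded producer/consumer, the data values, and the abstraction function are illustrative assumptions.

        # Enumerate the reachable states of a small bounded producer/consumer and
        # cluster them into equivalence classes via an abstraction that ignores
        # the concrete data values held in the buffer.
        from collections import deque

        CAPACITY = 3          # illustrative bound on the shared buffer
        VALUES = (0, 1)       # data values the producer may generate (assumption)

        def successors(buffer):
            succs = []
            if len(buffer) < CAPACITY:            # producer may append any value
                succs.extend(buffer + (v,) for v in VALUES)
            if buffer:                            # consumer may remove the head
                succs.append(buffer[1:])
            return succs

        def abstract(buffer):
            # Equivalence class: only the number of queued items matters here,
            # not the concrete values.
            return len(buffer)

        def explore(initial=()):
            concrete, classes = set(), {}
            frontier = deque([initial])
            while frontier:
                s = frontier.popleft()
                if s in concrete:
                    continue
                concrete.add(s)
                classes.setdefault(abstract(s), set()).add(s)
                frontier.extend(successors(s))
            return concrete, classes

        states, classes = explore()
        print(len(states), "concrete states collapse into", len(classes), "classes")

    With a buffer bound of 3 and two data values, 15 concrete states collapse into 4 equivalence classes, which is the kind of reduction the minimization aims for.
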
  • Proceedings of COMPASS'94 - 1994 IEEE 9th Annual Conference on Computer Assurance

    Publication Year: 1994
  • Formal methods in the design of Ada 9X

    Publication Year: 1994 , Page(s): 29 - 37

    Several advisory groups have been established to provide suggestions and criticism to the Ada 9X Mapping Revision Team, the small design team that is revising the definition of the Ada programming language. One such group, the Language Precision Team, based its criticisms on attempts to construct formal mathematical models of the design. This paper reports on the first phase of that work.

  • A development of hazard analysis to aid software design

    Publication Year: 1994 , Page(s): 17 - 25
    Cited by:  Papers (5)

    This paper describes a technique for software safety analysis which has been developed with the specific aim of feeding into and guiding design development. The method draws on techniques from the chemical industries' Hazard and Operability (HAZOP) analysis, combining this with work on software failure classification to provide a structured approach to identifying the hazardous failure modes of new software. (An illustrative guideword sketch follows this entry.)

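    As a hedged, generic illustration of a HAZOP-style guideword analysis applied to software data flows (the guidewords, flows, and table columns below are illustrative assumptions, not the paper's failure classification):

        # Cross each data flow of a hypothetical design with a set of deviation
        # guidewords; each row is a prompt for the safety review team.
        from itertools import product

        guidewords = ["omission", "commission", "early", "late", "value too high", "value too low"]
        flows = ["sensor reading -> controller", "controller command -> actuator"]

        def hazop_rows(flows, guidewords):
            for flow, word in product(flows, guidewords):
                # The review fills in the hazard and safeguard columns.
                yield {"flow": flow, "deviation": word, "hazard": "TBD", "safeguard": "TBD"}

        for row in hazop_rows(flows, guidewords):
            print(f"{row['flow']:35s} | {row['deviation']:15s} | hazard: {row['hazard']}")
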
  • Evaluating software for safety systems in nuclear power plants

    Publication Year: 1994 , Page(s): 197 - 207
    Cited by:  Papers (4)

    This paper presents the results of work performed by Lawrence Livermore National Laboratory to assist the U.S. Nuclear Regulatory Commission in understanding the state of the art in software reliability for computer-based reactor protection systems. The activities reported upon summarize advice from technical experts in software reliability and safety, and identify the best current software development practices used in industry for safety-critical software. The research reported here has identified a number of positive and negative design factors that can serve as the basis for a safety assessment. The results of the interviews and discussions were combined into a set of principles which were termed “design factors”. Although the areas of emphasis among the three sources of information (standards, experts and organizations) tend to be quite different, no substantial areas of disagreement were found. Many of the factors contributing to the success or failure of software may be attributed to the knowledge, understanding, intelligence, and care of the individuals and companies involved in the development of safety-critical software. By combining the best from theory and practice it is possible to isolate a number of factors that distinguish the good from the bad.

  • Compositional model checking of Ada tasking programs

    Publication Year: 1994 , Page(s): 135 - 147
    Cited by:  Papers (2)  |  Patents (1)

    Model checking has proved to be an effective analysis tool for domains such as hardware circuits and communication protocols. However, it has not yet been widely applied to more general concurrent systems, such as those realized by Ada multitasking programs. A major impediment to the use of model checking in such systems is the exponential growth of the state-space, which results from the parallel composition of component tasks. Various compositional approaches have been proposed to address this problem, in which the parts of a system are analyzed separately, and then the results are combined into inferences about the whole. One of the more promising of these techniques is called compositional minimization, which eliminates each component's “uninteresting” states as the model checking proceeds; this in turn can lead to a significant reduction in the composite state-space. In this paper we evaluate the application of this approach to Ada multitasking programs, particularly highlighting the design choices made to accommodate Ada's semantics. We also discuss the types of systems (and properties) for which this method produces significant time/space savings, as well as those for which the savings are less pronounced. (An illustrative composition sketch follows this entry.)

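    The sketch below is a hedged illustration of the general idea behind compositional minimization, not the paper's method or Ada tasking semantics: each component labelled transition system is reduced by merging states with identical observable behaviour (a crude bisimulation refinement) before the parallel composition is built, so the composite is constructed from already-smaller parts. The client/server components are illustrative.

        def minimize(states, transitions):
            """transitions: dict state -> set of (action, next_state).
            Returns a mapping from state to equivalence-class id."""
            states = list(states)
            block = {s: 0 for s in states}                      # start with one class
            while True:
                new_ids, new_block = {}, {}
                for s in states:
                    key = frozenset((a, block[t]) for a, t in transitions.get(s, set()))
                    new_block[s] = new_ids.setdefault(key, len(new_ids))
                if new_block == block:
                    return block
                block = new_block

        def quotient(transitions, block):
            q = {}
            for s, moves in transitions.items():
                q.setdefault(block[s], set()).update((a, block[t]) for a, t in moves)
            return q

        def compose(t1, t2, shared, init):
            """Parallel composition, synchronising on the shared actions."""
            seen, frontier, moves = set(), [init], {}
            while frontier:
                s1, s2 = frontier.pop()
                if (s1, s2) in seen:
                    continue
                seen.add((s1, s2))
                succs = set()
                for a, n1 in t1.get(s1, set()):
                    if a in shared:
                        succs |= {(a, (n1, n2)) for b, n2 in t2.get(s2, set()) if b == a}
                    else:
                        succs.add((a, (n1, s2)))
                for a, n2 in t2.get(s2, set()):
                    if a not in shared:
                        succs.add((a, (s1, n2)))
                moves[(s1, s2)] = succs
                frontier.extend(n for _, n in succs)
            return moves

        # Tiny illustrative components handshaking on "req"/"ack".
        client = {0: {("req", 1)}, 1: {("ack", 0)}}
        server = {0: {("req", 1)}, 1: {("work", 2), ("work", 3)}, 2: {("ack", 0)}, 3: {("ack", 0)}}

        blocks = minimize(server, server)                 # server states 2 and 3 merge
        reduced = quotient(server, blocks)
        composite = compose(client, reduced, {"req", "ack"}, (0, blocks[0]))
        print("reduced server:", len(reduced), "states; composite:", len(composite), "states")
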
  • Causality as a means for the expression of requirements for safety critical systems

    Publication Year: 1994 , Page(s): 223 - 231

    The development of requirements for software systems has long been identified as an important and difficult part of software development. This is much more so for safety-critical systems. In this paper we identify one approach which, we believe, forces the developer to concentrate upon requirements rather than initial design concepts (as often happens). This approach uses causality as its main abstraction, primarily because causality is intrinsic to many systems and is intuitive to developers.

  • Testability, testing, and critical software assessment

    Publication Year: 1994 , Page(s): 165 - 167

    Although the phrases “critical system” and “critical software” encompass different degrees of “criticality” based on the user and application, I consider critical software to be that which performs a task whose success is necessary to avoid a loss of property or life. Software testability is a software characteristic that refers to the ease with which some formal or informal testing criteria can be satisfied. There are varying metrics that can be applied to this measurement. Software validation generally refers to the process of showing that software is computing an expected function. Software testing is able to judge the quality of the code produced. Software testability, on the other hand, is not able to do so, because it has no information concerning whether the code is producing correct or incorrect results. It is only able to predict the likelihood of incorrect results occurring if a fault or faults exist in the code. Software testability is a validation technique, but in a more “open” definition of the term “validation” that the IEEE Standard Glossary of Software Engineering Terminology allows for. Software testability assesses behavioral characteristics that are not related to whether the code is producing correct output.

  • Experiences formally verifying a network component

    Publication Year: 1994 , Page(s): 183 - 193
    Cited by:  Papers (1)

    Errors in network components can have disastrous effects, so it is important that all aspects of the design are correct. We describe our experiences formally verifying an implementation of an Asynchronous Transfer Mode (ATM) network switching fabric using the HOL90 theorem proving system. The design has been fabricated and is in use in the Cambridge Fairisle Network. It was designed and implemented with no consideration for formal specification or verification. This case study gives an indication of the difficulties in formally verifying real designs. We discuss the time spent on the verification, which was comparable to the time spent designing and testing the fabric. We also describe the problems encountered and the errors discovered.

  • AeSOP: an interactive failure mode analysis tool

    Publication Year: 1994 , Page(s): 9 - 16
    Cited by:  Papers (2)

    AeSOP (Aerospace Safety Oriented Petri Net) is an interactive failure mode analysis tool developed at The Aerospace Corporation. It automates a Petri net-based safety analysis technique developed by Leveson and Stolzy (1987) in which a reachability graph is analyzed backwards to identify potential failure modes of the system. AeSOP provides a flexible analytical environment where a user can arbitrarily assume the occurrence of “unpredictable” events and analyze their impact on system behavior. It also implements several features designed to assist safety analysis of complex systems. This paper describes the use of AeSOP in performing failure mode analysis using a simplified shuttle orbiter model where the impacts of a potential engine failure and the astronauts' selection of a recovery mechanism are analyzed. Finally, it describes enhancement plans for AeSOP. (A backward-reachability sketch follows this entry.)

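    The sketch below is a hedged illustration of the backward reachability idea underlying the Leveson and Stolzy style of Petri net safety analysis that AeSOP automates; the net, its places, and the "hazardous" marking are illustrative assumptions, not AeSOP's model.

        # Backward analysis of an ordinary Petri net: a transition t (with Pre/Post
        # vectors over the places) could have produced marking M only if M >= Post(t);
        # the predecessor marking is then M - Post(t) + Pre(t).
        from collections import deque

        PLACES = ("idle", "active", "hazard")
        TRANSITIONS = [                      # (name, Pre, Post) over PLACES
            ("start", (1, 0, 0), (0, 1, 0)),
            ("fail",  (0, 1, 0), (0, 0, 1)),
            ("reset", (0, 1, 0), (1, 0, 0)),
        ]

        def predecessors(marking):
            for _name, pre, post in TRANSITIONS:
                if all(m >= p for m, p in zip(marking, post)):   # t could have fired last
                    yield tuple(m - p + q for m, p, q in zip(marking, post, pre))

        def backward_reach(hazardous, depth=3):
            """Markings within `depth` backward steps of the hazardous marking."""
            seen, frontier = {hazardous}, deque([(hazardous, 0)])
            while frontier:
                m, d = frontier.popleft()
                if d == depth:
                    continue
                for pred in predecessors(m):
                    if pred not in seen:
                        seen.add(pred)
                        frontier.append((pred, d + 1))
            return seen

        for m in sorted(backward_reach(hazardous=(0, 0, 1))):
            print(dict(zip(PLACES, m)))

    Reading the output backwards from the marking with a token in "hazard" identifies the markings, and hence the event sequences, that can lead to it, which is the failure-mode question this kind of tool is built to answer.
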
  • On measurement of operational security [software reliability]

    Publication Year: 1994 , Page(s): 257 - 266
    Cited by:  Papers (6)

    Ideally, a measure of the security of a system should capture quantitatively the intuitive notion of `the ability of the system to resist attack'. That is, it should be operational, reflecting the degree to which the system can be expected to remain free of security breaches under particular conditions of operation (including attack). Instead, current security levels at best merely reflect the extensiveness of safeguards introduced during the design and development of a system. Whilst we might expect a system developed to a higher level than another to exhibit `more secure behaviour' in operation, this cannot be guaranteed; more particularly, we cannot infer what the actual security behaviour will be from knowledge of such a level. In the paper we discuss similarities between reliability and security with the intention of working towards measures of `operational security' similar to those that we have for reliability of systems. Very informally, these measures could involve expressions such as the rate of occurrence of security breaches (cf. rate of occurrence of failures in reliability), or the probability that a specified `mission' can be accomplished without a security breach (cf. reliability function). This new approach is based on the analogy between system failure and security breach, but it raises several issues which invite empirical investigation. We briefly describe a pilot experiment that we have conducted to judge the feasibility of collecting data to examine these issues. (A reliability-style formula follows this entry.)

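    As a hedged restatement, in standard reliability notation, of the operational measures the abstract proposes by analogy (textbook formulas, not results from the paper): if security breaches in operation were modelled as a stochastic point process with rate \lambda(t), the analogue of the rate of occurrence of failures, then the probability of completing a mission of duration T without a breach, the analogue of the reliability function, would be

        R(T) \;=\; \exp\!\left(-\int_{0}^{T} \lambda(t)\,dt\right),
        \qquad\text{which reduces to } R(T) = e^{-\lambda T} \text{ for a constant breach rate } \lambda.
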
  • Testability: an introduction for COMPASS94

    Publication Year: 1994 , Page(s): 173 - 174
    Cited by:  Papers (1)

    Testability is the probability that software will fail during random testing if it contains a fault. Reliability and correctness are distinct from testability, though all three ideas are closely related. It is theoretically possible to have reliable and even correct software that is not very testable, but you would be hard-pressed to give a convincing demonstration that such software has attained that reliability or correctness. Three things have to happen before a fault in software becomes known during testing: the fault must be executed, that execution has to change the data state adversely, and that “infected” data state must cause an incorrect output. The three parts of this process are called execution, infection, and propagation. This three-part fault/failure process forms the basis of testability analysis. Testability analysis predicts for a given piece of software how likely it is that a fault in that software (if it exists) will cause a failure during random testing. We estimate this likelihood using sensitivity analysis. (An illustrative estimate follows this entry.)

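    As a hedged illustration of the execution/infection/propagation decomposition (a simplified Monte Carlo estimate in the spirit of sensitivity analysis; the toy program, the injected fault, and the input profile are illustrative assumptions, not the authors' technique as implemented):

        # Estimate, for one hypothetical fault location, the probability that a random
        # test executes the location, that the fault corrupts the data state there
        # (infection), and that the corruption reaches the output (propagation).
        import random

        def program(x, faulty=False):
            executed = infected = False
            if x % 4 == 0:                        # the fault location is reached here
                executed = True
                correct = x // 2
                value = correct + 1 if faulty else correct
                infected = value != correct       # data state corrupted?
            else:
                value = x
            return min(value, 10), executed, infected   # min() can mask the corruption

        def estimate(trials=100_000, seed=0):
            rng = random.Random(seed)
            executed = infected = propagated = 0
            for _ in range(trials):
                x = rng.randrange(100)                    # random test profile
                good, _, _ = program(x, faulty=False)
                bad, ex, inf = program(x, faulty=True)
                executed += ex
                infected += inf
                propagated += (good != bad)               # visible failure?
            return executed / trials, infected / trials, propagated / trials

        p_exec, p_inf, p_fail = estimate()
        print(f"execution {p_exec:.3f}  infection {p_inf:.3f}  failure {p_fail:.3f}")

    The gap between the infection rate and the failure rate (roughly 0.25 versus 0.05 here) is the masking effect that makes testability analysis informative about what random testing can be expected to reveal.
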
  • An ounce of prevention is worth a pound of cure. Towards physically-correct specifications of embedded real-time systems

    Publication Year: 1994 , Page(s): 149 - 162
    Cited by:  Papers (1)

    Predictability, the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements, is a crucial, highly desirable property of responsive embedded systems. This paper overviews a development methodology for responsive systems, which enhances predictability by eliminating potential hazards resulting from physically-unsound specifications. The backbone of our methodology is a formalism that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Unrealistic systems, possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing, cannot even be specified. We argue that this “ounce of prevention” at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems, not to mention the elimination of potential hazards that would have gone otherwise unnoticed.

  • What is software reliability?

    Publication Year: 1994 , Page(s): 169 - 170

    Reliability refers to statistical measures an engineer uses to quantify imperfection in practice. Often we speak imprecisely of an object having “high reliability”, but technically, unless the object cannot fail at all, its reliability is arbitrarily close to zero for a long enough period of operation. This is merely an expression of the truism that an imperfect object must eventually fail. At first sight, it seems that software should have a sensible reliability, as other engineered objects do. But the application of the usual mathematics is not justified. Reliability theory applies to random (as opposed to systematic) variations in a population of similar objects, whereas software defects are all design flaws, not at all random, in a unique object. The traditional cause of failure is a random process of wear and tear, while software is forever as good (or as bad!) as new. However, software defects can be thought of as lurking in wait for the user requests that excite them, like a minefield through which the user must walk. (A worked formula follows this entry.)

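    The truism cited above, that an imperfect object's reliability becomes arbitrarily small over a long enough period of operation, corresponds to the limiting behaviour of the standard reliability function for random failures (a textbook formula; the abstract's point is that applying this machinery to software is questionable):

        R(t) \;=\; P(\text{no failure in } [0,t]) \;=\; e^{-\lambda t} \;\longrightarrow\; 0
        \quad\text{as } t \to \infty, \text{ for any failure rate } \lambda > 0.
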
  • An experience modeling critical requirements

    Publication Year: 1994 , Page(s): 245 - 255
    Cited by:  Papers (1)

    Previous work at NRL demonstrated the benefits of a security modeling approach for building high assurance systems for particular application domains. This paper introduces an application domain called selective bypass that is prominent in certain network security solutions. We present a parameterized modeling framework for the domain and then instantiate a confidentiality model for a particular application, called the External COMSEC Adaptor (ECA), within the framework. We conclude with lessons we learned from modeling, implementing and verifying the ECA. Our experience supports the use of the application-based security modeling approach for high assurance systems.

  • Using formal methods to derive test frames in category-partition testing

    Publication Year: 1994 , Page(s): 69 - 79
    Cited by:  Papers (21)

    Testing is a standard method of assuring that software performs as intended. We extend the category-partition method, which is a specification-based testing method. An important aspect of category-partition testing is the construction of test specifications as an intermediate between functional specifications and actual tests. We define a minimal coverage criterion for category-partition test specifications, identify a mechanical process to produce a test specification that satisfies the criterion, and discuss the problem of resolving infeasible combinations of choices for categories. Our method uses formal schema-based functional specifications and is shown to be feasible with an example study of a simple file system. (An illustrative test-frame sketch follows this entry.)

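    As a hedged sketch of the category-partition mechanics the abstract builds on (the categories, choices, and feasibility constraint for a hypothetical "read from file" operation are illustrative assumptions; the paper derives its frames from formal schema-based specifications):

        # Enumerate test frames as one choice per category, then filter out
        # infeasible combinations with an explicit constraint.
        from itertools import product

        categories = {
            "file state":   ["exists", "missing"],
            "permissions":  ["readable", "unreadable"],
            "request size": ["zero", "within file", "past end of file"],
        }

        def feasible(frame):
            # For a missing file, keep a single representative combination.
            if frame["file state"] == "missing":
                return frame["permissions"] == "readable" and frame["request size"] == "zero"
            return True

        names = list(categories)
        frames = [dict(zip(names, combo)) for combo in product(*categories.values())]
        frames = [f for f in frames if feasible(f)]

        for i, frame in enumerate(frames, 1):
            print(f"frame {i}: {frame}")
        print(f"{len(frames)} feasible frames out of {2 * 2 * 3} combinations")
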
  • Estimation of coverage probabilities for dependability validation of fault-tolerant computing systems

    Publication Year: 1994 , Page(s): 101 - 106
    Cited by:  Papers (6)

    Dependability validation is a major step toward development of high-assurance computing systems. This paper addresses the problem of estimating the coverage probabilities by statistically processing the information collected through physical or simulated fault injection. 3-stage random sampling is employed to derive the means, variances and confidence intervals of the coverage probabilities. The statistical experiments are carried out in a 3D fault space that accounts for system inputs, fault injection times and fault locations. In the case of real-time systems, the inputs and the injection times also provide useful information about the workload to be executed. The proposed solution technique is tested against the data generated by a program that mimics a fault environment. Two application examples are considered. Several working rules for designing 3-stage random sampling experiments are also provided. (An illustrative estimate follows this entry.)

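    As a hedged, simplified illustration of coverage estimation from fault-injection outcomes (a single-stage random sample with a normal-approximation confidence interval; the paper's contribution is a more elaborate 3-stage sampling scheme over a 3D fault space, which is not reproduced here):

        # Estimate the coverage probability -- the chance an injected fault is
        # correctly handled -- together with an approximate 95% confidence interval.
        import math
        import random

        def inject_fault(rng):
            """Simulated experiment; returns True if the fault was handled."""
            return rng.random() < 0.93            # assumed (unknown) true coverage

        def estimate_coverage(n=2000, seed=1):
            rng = random.Random(seed)
            handled = sum(inject_fault(rng) for _ in range(n))
            c_hat = handled / n                                   # point estimate
            half = 1.96 * math.sqrt(c_hat * (1 - c_hat) / n)      # normal approximation
            return c_hat, (c_hat - half, c_hat + half)

        c_hat, (low, high) = estimate_coverage()
        print(f"estimated coverage {c_hat:.3f}, 95% CI [{low:.3f}, {high:.3f}]")
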
  • Covert channels-here to stay?

    Publication Year: 1994 , Page(s): 235 - 243
    Cited by:  Papers (34)  |  Patents (1)

    We discuss the difficulties of satisfying high-assurance system requirements without sacrificing system capabilities. To alleviate this problem, we show how trade-offs can be made to reduce the threat of covert channels. We also clarify certain concepts in the theory of covert channels. Traditionally, a covert channel's vulnerability was measured by its capacity. We show why a capacity analysis alone is not sufficient to evaluate the vulnerability and introduce a new metric referred to as the “small message criterion”. (The standard capacity definition is recalled after this entry.)

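    For context on the capacity measure that the abstract argues is insufficient on its own, the standard information-theoretic definition is (a textbook definition; the paper's "small message criterion" itself is not reproduced here):

        C \;=\; \max_{p(x)} I(X;Y) \;=\; \max_{p(x)} \sum_{x,y} p(x)\,p(y \mid x)\,\log_2 \frac{p(y \mid x)}{p(y)}
        \quad\text{bits per channel use.}

    A noiseless binary covert channel thus has capacity 1 bit per use; the abstract's argument is that such a rate-oriented measure does not by itself capture the risk posed by very short messages, hence the additional criterion.
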
  • An approach for the risk analysis of safety specifications

    Publication Year: 1994 , Page(s): 209 - 221
    Cited by:  Papers (8)

    Experience in safety-critical systems has shown that faults introduced during requirements analysis can and do cause accidents. Within a methodology for the systematic production of requirements specifications for safety, based on a framework to structure the analysis and the application of formal techniques, we focus, in this paper, on the risk analysis of the specifications. The aim is to locate and remove faults during the requirements phase rather than later in development or during the operational lifetime of the system. The applicability of the proposed approach is demonstrated by conducting the risk analysis of an example based on a train set crossing. The example illustrates how the approach to risk analysis supports verification within a formal model and how the validation of the formal model is performed.

  • A formal model of several fundamental VHDL concepts

    Publication Year: 1994 , Page(s): 177 - 181

    This paper presents a formal model of several fundamental concepts in VHDL, including the semantics of individual concurrent statements and groups of those statements, resolution functions, delta delays, and hierarchical component structuring. Based on this model, several extensions to VHDL are proposed, including nondeterministic assignments and unbounded asynchrony. Nondeterminism allows the specification of environments and of classes of devices. This model naturally captures the meaning of composition of VHDL programs.

  • Testability, failure rates, detectability, trustability and reliability

    Publication Year: 1994 , Page(s): 171 - 172

    Discusses the relationship between several statistical measures of program dependability, including failure rates and testability. This is done by describing these concepts within the framework of a confidence-based measure called trustability. Suppose that M is a testing method, F is a class of faults and P is a class of programs. Suppose that the probability of a fault from F causing a failure is at least D when a program p∈P is tested according to M, if in fact p contains a fault of type F. Then D is called the detectability of M with respect to F and P. If we test a program using a method with detectability D, and see no faults, then we can conclude with risk at most 1-D that the program has no faults, i.e. we can have confidence at least C=D that the program is fault-free for the associated fault class F. If we have confidence at least C that a program has no faults, then we say that the program has trustability C with respect to F. More refined measures of trustability can be defined which also take fault class frequencies into account. Testability is defined to be the probability of finding a fault in a program p, if p contains a fault. The probability that a program will fail when it is tested over its operational distribution is called its failure rate. Trustability is confidence in the absence of faults and reliability is the probability of a program operating without failure. Trustability and reliability coincide if the class of faults for which we have a certain level of trustability is the class of common case faults. (A worked numerical instance follows this entry.)

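    A worked instance of the detectability/trustability relationship stated above (the numbers are illustrative):

        D = 0.99,\ \text{no failures observed} \;\Longrightarrow\; \text{risk} \le 1 - D = 0.01,
        \qquad\text{i.e. trustability } C = D = 0.99 \text{ with respect to } F.
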
  • Application of an informal program verification method to Ada

    Publication Year: 1994 , Page(s): 81 - 89

    The QDA informal program verification method is a comments analysis technique in which an analyst's assumptions about a program are expressed in the form of structured comments in the program and are checked by an analyzer. Previous work has shown QDA to be effective for detecting errors in assembly language programs. An experiment was performed to determine how well QDA would scale to high-level languages. The implementation and use of a prototype analyzer for Ada indicated both the usefulness of QDA for high-level languages and the desirability of further development of the prototype.

  • Centurion software fault tolerance design and analysis tool

    Publication Year: 1994 , Page(s): 93 - 100
    Cited by:  Papers (1)

    Describes the Centurion computer-aided software fault tolerance design and analysis tool. The tool is a product of a research and development project focused on automated tools for use in design, assessment, and insertion of software fault tolerance techniques into Air Force systems. The Centurion tool allows users to analyze developmental and fielded software, and the associated computer and communications hardware, to identify fault tolerance requirements and evaluate alternative fault tolerant designs. The Centurion capabilities include interactive graphic construction of software, hardware, and fault tolerance models; storage and retrieval of template and model libraries; simulation of the constructed models, with data logging and run-time user inputs permitted; and post-processing with tabular and graphic output formats available. Actual software modules can be associated with nodes within Centurion graphs and linked into the model simulation. The current Centurion tool is available on Sun SPARCstations and is currently being ported to DEC Alpha workstations.

  • Formal verification of an interactive consistency algorithm for the Draper FTP architecture under a hybrid fault model

    Publication Year: 1994 , Page(s): 107 - 120
    Cited by:  Papers (8)

    Fault-tolerant systems for critical applications should tolerate as many kinds of faults and as large a number of faults as possible, while using as little hardware as is feasible, and they should be provided with strong assurances for their correctness. Byzantine fault-tolerant architectures are attractive because they tolerate any kind of fault, but they are rather expensive: at least 3m+1 processors are required to withstand m arbitrary faults. Two recent developments mitigate some of the costs: algorithms that operate under a hybrid fault model tolerate more faults for a given number of processors than classical Byzantine fault-tolerant algorithms, and asymmetric architectures tolerate a given number of faults with less hardware than conventional architectures. In this paper, we combine these two developments and present an algorithm for achieving interactive consistency (the problem of distributing sensor samples consistently in the presence of faults) under a hybrid fault model on an asymmetric architecture. The extended fault model and asymmetric architecture complicate the arguments for the correctness and the number of faults tolerated by the algorithm. To increase assurance, we have formally verified these properties and checked the proofs mechanically using the PVS verification system. We argue that mechanically supported formal methods allow for effective reuse of intellectual resources, such as specifications and proofs, and that exercises such as this can now be performed very economically. (A worked instance of the 3m+1 bound follows this entry.)

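    The cost quoted above can be made concrete with the classical Byzantine-resilience bound (the hybrid-fault and asymmetric refinements are the paper's contribution and are not reproduced here):

        n \;\ge\; 3m + 1 \quad\Longrightarrow\quad m = 1 \text{ requires } n \ge 4 \text{ processors},
        \qquad m = 2 \text{ requires } n \ge 7.

    Hybrid fault models reduce this cost by distinguishing arbitrary (Byzantine) faults from more restricted fault modes that can be masked with less redundancy.
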
  • Case study: Applying formal methods to the Traffic Alert and Collision Avoidance System (TCAS) II

    Publication Year: 1994 , Page(s): 39 - 51
    Cited by:  Papers (1)

    Requirements State Machine Language (RSML) evolved from statecharts during the development of the Traffic Alert and Collision Avoidance System (TCAS) II system requirements specification. This paper describes RSML and the TCAS II system requirements specification, which was reverse-engineered from pseudocode. This case study illustrates how formal methods have been applied to a safety-critical system, improving the assurance of safety in three areas: product review, process and personnel certification, and functional testing.
