
Fifth IEEE International Symposium on High Assurance Systems Engineering (HASE 2000)

Date: 17 Nov. 2000


Results 1-25 of 47
  • Proceedings. Fifth IEEE International Symposium on High Assurance Systems Engineering (HASE 2000)

    PDF (227 KB)
    Freely Available from IEEE
  • Author index

    Page(s): 331
    PDF (66 KB)
    Freely Available from IEEE
  • Quantitative analysis of dependability critical systems based on UML statechart models

    Page(s): 83 - 92
    PDF (836 KB)

    The paper introduces a method which allows quantitative performance and dependability analysis of systems modeled using UML statechart diagrams. The analysis is performed by transforming the UML model into Stochastic Reward Nets (SRNs). A large subset of statechart model elements is supported, including event processing, state hierarchy and transition priorities. The transformation is presented as a set of SRN design patterns. Performance measures can be derived directly using SRN tools, while dependability analysis requires explicit modeling of erroneous states and faulty behavior.

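    The core of the transformation can be pictured with a minimal sketch: each statechart state becomes an SRN place and each statechart transition becomes a timed SRN transition. The model below, including the state names and rates, is an illustrative assumption, not the paper's actual design patterns, which also cover event processing, hierarchy and priorities.

```python
# Minimal sketch: map a flat UML statechart to a Stochastic Reward Net (SRN).
# Each state becomes a place; each statechart transition becomes a timed SRN
# transition with an assumed firing rate. Hierarchy, events and priorities
# (which the paper's patterns handle) are deliberately omitted here.

statechart = {
    "states": ["Idle", "Busy", "Failed"],
    "initial": "Idle",
    "transitions": [
        ("Idle", "Busy", 2.0),     # (source, target, assumed rate per hour)
        ("Busy", "Idle", 5.0),
        ("Busy", "Failed", 0.01),  # explicit erroneous state for dependability
    ],
}

def statechart_to_srn(sc):
    """Build an SRN as (places, initial marking, timed transitions)."""
    places = list(sc["states"])
    marking = {p: (1 if p == sc["initial"] else 0) for p in places}
    transitions = [
        {"input": src, "output": dst, "rate": rate}
        for (src, dst, rate) in sc["transitions"]
    ]
    return places, marking, transitions

places, marking, transitions = statechart_to_srn(statechart)
print(places)
print(marking)
for t in transitions:
    print(t)
```
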
  • Safety analysis of an evolving software architecture

    Page(s): 159 - 168
    PDF (720 KB)

    The safety analysis of an evolving software system has to consider the impact that changes might have on the software components and provide confidence that the risk is acceptable. If the impact of a change is not thoroughly analysed, accidents can occur, for example as a result of faulty interactions between components. However, the process of safety analysis can be enhanced if appropriate abstractions are provided for modelling and analysing software components and their interactions. Instead of treating components as the locus of change, the proposed approach assumes that components remain unchanged while their interactions (i.e. connectors) adapt to changing requirements. The safety analysis is then performed using model checking to verify whether safe behaviour is maintained when interactions between components change. The feasibility of the approach is demonstrated through a case study dealing with the safety procedures associated with the launch of a sounding rocket.

  • Structured language for specifications of quantitative requirements

    Page(s): 221 - 227
    PDF (504 KB)

    Requirements for dependable systems need to be understandable and, at the same time, have to satisfy consistency and non-ambiguity properties. We provide a means to specify nonfunctional requirements in terms of structured English sentences and define their syntax in a clear and consistent notation. For verification, these sentences have to be transformed into a notation that can be interpreted by analysis tools; it is shown how this can be achieved via several translation steps.

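    As a rough illustration of the translation idea, the sketch below parses one hypothetical structured English pattern into a CSL-style probabilistic formula. Both the sentence pattern and the target notation are assumptions for illustration; the paper's actual syntax and tool notation are not reproduced here.

```python
import re

# Hypothetical structured-English pattern; the paper's actual syntax and
# target notation are not reproduced here.
PATTERN = re.compile(
    r"The system shall complete (?P<action>\w+) within "
    r"(?P<bound>\d+) ms with probability at least (?P<prob>0\.\d+)\."
)

def translate(sentence: str) -> str:
    """Translate one structured sentence into a CSL-style probabilistic formula."""
    m = PATTERN.fullmatch(sentence)
    if m is None:
        raise ValueError("sentence does not match the structured pattern")
    return f"P>={m['prob']} [ true U<={m['bound']} done_{m['action']} ]"

print(translate(
    "The system shall complete failover within 200 ms with probability at least 0.99."
))
# -> P>=0.99 [ true U<=200 done_failover ]
```
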
  • Assurance system architecture for information service by utilizing autonomous mobile agents

    Page(s): 273 - 280
    PDF (524 KB)

    An information service system must be designed so that service providers (SPs) and users can supply and use the information service in a timely and reliable way, while each SP has its own mission and each user has their own service preferences. Moreover, the mission and the user preferences change dynamically, and as the network grows it is subject to the failure, maintenance and addition of nodes. An assurance system, called the FIF (Faded Information Field) system, is proposed to achieve real-time properties and reliability of service provision and utilization under these evolving conditions. In the FIF, push-type and pull-type mobile agents are generated for the SP and the user, respectively, to autonomously allocate information to nodes and to utilize the information at those nodes. The push-type mobile agent, originating at the SP, carries the SP's information to be autonomously allocated to the nodes, but the information gradually fades in the process of node-to-node movement; the part of the information that fades is selected at each node on the basis of the user preferences, so the more highly preferred information is replicated at more nodes on the network. The pull-type mobile agent, sent out by the user, can autonomously search for the necessary information among the nodes, so the user can reliably obtain the preferred information in a timely manner at local nodes in the FIF. These autonomous fading and navigation technologies make it possible to adapt to large and rapidly changing service situations without stopping the system, and to assure information provision and utilization.

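    A minimal sketch of the fading idea: a push-type agent stores its payload at each node it visits and then drops the least preferred remaining part, so the most preferred parts are replicated at the most nodes. The node names, information parts and preference scores below are invented for illustration.

```python
# Minimal sketch of "fading": a push-type agent hops along a chain of nodes,
# storing its current payload at each node and then dropping the least
# preferred part. Highly preferred parts therefore survive to (and are
# replicated at) more nodes. Names and scores are illustrative assumptions.

payload = [  # (information part, assumed user-preference score)
    ("headline", 0.9),
    ("summary", 0.6),
    ("full_text", 0.3),
]
nodes = ["n1", "n2", "n3"]

stored = {}
parts = sorted(payload, key=lambda p: p[1], reverse=True)
for node in nodes:
    stored[node] = [name for name, _ in parts]  # replicate current payload
    if len(parts) > 1:
        parts = parts[:-1]  # fade: drop the least preferred remaining part

for node, contents in stored.items():
    print(node, contents)
# n1 ['headline', 'summary', 'full_text']
# n2 ['headline', 'summary']
# n3 ['headline']
```
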
  • An exception handling software architecture for developing fault-tolerant software

    Page(s): 311 - 320
    PDF (900 KB)

    Fault-tolerant object-oriented software systems are inherently complex and have to cope with an increasing number of exceptional conditions in order to meet the system's dependability requirements. This work proposes a software architecture that uniformly integrates both concurrent and sequential exception handling. The architecture is independent of programming language or exception handling mechanism, and its use can minimize the complexity caused by the handling of abnormal behavior. It provides, during the architectural design stage, the context in which more detailed design decisions related to exception handling are made in later development stages. This work also presents a set of design patterns that describe the static and dynamic aspects of the components of the architecture; by applying the computational reflection technique, the patterns allow a clear separation of concerns between the system's functionality and the exception handling facilities.

  • Disappearing formal methods

    Page(s): 95 - 96
    PDF (160 KB)


  • Requirements formalization and validation for a telecommunication equipment protection switcher

    Page(s): 169 - 176
    PDF (628 KB)

    Using formal methods, namely model checking, we can automatically verify a formal model of the requirements against given properties. This allows errors to be detected early in the design process, thus decreasing development cost and time to market. However, modifying a well-established design process to introduce formal methods is not easy. We present a case study exploring the possibility of replacing informal functional specifications with formal ones in the design process of telecommunication Equipment Protection Switchers (EPSs). Our finding is that, for EPSs, the effort required to write formal specs from informal requirements is comparable with that required to write informal functional specs from informal requirements. This suggests that, for EPSs, informal functional specs can be replaced with formal specs in the design process without suffering delays due to the formalization activity.

  • A high-assurance measurement repository system

    Page(s): 265 - 272
    PDF (764 KB)

    High-quality measurement data are very useful for assessing the efficacy of high-assurance system engineering techniques and tools. Given the rapidly evolving suite of modern tools and techniques, it is helpful to have a large repository of up-to-date measurement data that can be used to quantitatively assess the impact of state-of-the-art techniques on the quality of the resulting systems. For many types of defects, including Y2K failures, infinite loops, memory overflow, access violations, arithmetic overflow, divide-by-zero, off-by-one errors, timing errors and deadlocks, it may be possible to combine data from a large number of projects and use these to make statistical inferences. This paper presents a highly secure and reliable repository system for measurement data acquisition, storage and analysis. The system is being used by the QuEST Forum, a new industry forum consisting of over 100 leading telecommunications companies. The paper describes the decisions made in the design of the measurement repository system, as well as the implementation strategies used to achieve a high level of confidence in the security and reliability of the system.

  • Analysis of group communication protocols to assess quality of service properties

    Page(s): 247 - 256
    PDF (832 KB)

    Focuses on a QoS analysis carried out through analytical modelling and experimental evaluation. QoS is defined as the set of qualitative and quantitative characteristics of a (sub)system that are necessary for obtaining the required functionality of an application. Its analysis is a necessary step for the early verification and validation of an appropriate design, and for taking design decisions about the most rewarding choice in relation to the user requirements. In this paper, we concentrate on a family of group communication protocols in a wireless environment, which is used as the reference system to which the QoS analysis is applied. Specific indicators that capture the main characteristics of the protocols and of the environment have been defined and evaluated, focusing on performance and dependability attributes. The model-based analysis is devoted to estimating the coverage of the protocol assumptions and the protocol's performance. We integrate the analytical modelling with fine-grained experimental measurements to determine realistic parameters for message delays and message losses, and we compare the measured protocol performance with the analytical results. The main purpose of our analysis is to provide a fast, cost-effective and formally sound way to further analyse and understand the behaviour of the protocol and its environment.

  • Measuring and assessing software test processes using test data

    Page(s): 259 - 264
    PDF (392 KB)

    Testing of large-scale software systems is a complex and expensive process that involves both technical and managerial issues. To improve its cost-effectiveness, the process should be continuously monitored, consistently measured and carefully assessed. This paper proposes an assessment methodology in this direction, called process-oriented metrics-based assessment (POMBA). Novel concepts include treating test problems as one of the dependent variables of the test process and using test intensity as one of the independent variables. As a proof of concept, the methodology is applied to test data collected from high-level Y2K (Year 2000) tests of seven large-scale transaction software systems. It is found that, in testing large-scale software, test problems that prevent the completion of tests due to insufficient test planning and setup activities tend to occur frequently; this not only wastes resources but also affects the effectiveness of the overall process.

  • Formal specification techniques as a catalyst in validation

    Page(s): 203 - 206
    PDF (332 KB)

    The American Heritage Dictionary defines a catalyst as a substance, usually present in small amounts relative to the reactants, that modifies and especially increases the rate of a chemical reaction without being consumed in the process. This article reports on experience gained in an industrial project showing that formal specification techniques form such a catalyst in the validation of complex systems. These formal development methods improve the validation process significantly by generating precise questions about the system's intended functionality very early and by uncovering ambiguities and faults in textual requirement documents. The project was a cooperation between the IST and the company Frequentis. The Vienna Development Method (VDM) was used to validate the functional requirements and the existing acceptance tests of a network node for voice communication in air traffic control. In addition to several detected requirement faults, the formal specification highlighted how additional test cases could be derived systematically.

  • Safety validation of embedded control software using Z animation

    Page(s): 228 - 237
    PDF (720 KB)

    Describes a rigorous approach to safety validation of embedded control software by specification animation. The software control logic is specified in Z and systematically animated together with a model of the equipment under control. All equipment states reachable under software control are systematically identified and compared with known hazardous states, in normal operation and under dominant failure conditions. The process is completely automated, removing the need for human intervention and the associated errors, and can be applied much earlier than traditional test-based techniques. As a result, the validation method has the potential to provide cost-effective, high-integrity safety assurance for embedded software. The approach is illustrated with a hypothetical industrial press control system.

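    At its core, the comparison of reachable states against hazardous states is a reachability search. The sketch below shows this generic idea with an invented toy press model; the paper animates an actual Z specification rather than a hand-written successor function like this one.

```python
from collections import deque

# Generic reachability check in the spirit of the paper: enumerate all states
# reachable under the control logic and intersect them with known hazardous
# states. The toy "press" model below is invented for illustration.

def successors(state):
    """Toy press model: states are (ram, guard) pairs."""
    ram, guard = state
    nxt = []
    if ram == "up" and guard == "closed":
        nxt.append(("down", guard))   # controller lowers the ram
    if ram == "down":
        nxt.append(("up", guard))     # ram returns to the top
    if ram == "up" and guard == "closed":
        nxt.append((ram, "open"))     # operator opens the guard
    if guard == "open":
        nxt.append((ram, "closed"))   # operator closes the guard
    return nxt

HAZARDOUS = {("down", "open")}        # ram moving while the guard is open

def reachable(initial):
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

print(reachable(("up", "closed")) & HAZARDOUS)  # empty set => hazard unreachable
```
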
  • Prediction of software faults using fuzzy nonlinear regression modeling

    Page(s): 281 - 290
    PDF (652 KB)

    Software quality models can predict the risk of faults in modules early enough for cost-effective prevention of problems. This paper introduces the fuzzy nonlinear regression (FNR) modeling technique as a method for predicting fault ranges in software modules. FNR modeling differs from classical linear regression in that the output of an FNR model is a fuzzy number. Predicting the exact number of faults in each program module is often not necessary; the FNR model can instead predict, with a certain probability, the interval into which the number of faults in each module falls. A case study of a full-scale industrial software system is used to illustrate the usefulness of FNR modeling. The case study included four historical software releases: the first release's data were used to build the FNR model, while the remaining three releases' data were used to evaluate it. We found that FNR modeling gives useful results.

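    To illustrate how a fuzzy-valued prediction yields a fault interval, the sketch below uses a standard triangular fuzzy number and its alpha-cuts. This shows only the interval-extraction step; the paper's FNR fitting procedure is not reproduced here, and the numbers are invented.

```python
from dataclasses import dataclass

# Standard triangular fuzzy number with alpha-cuts. This only illustrates how
# a fuzzy-valued prediction yields an interval; the values are invented.

@dataclass
class TriangularFuzzyNumber:
    low: float   # smallest plausible value (membership 0)
    mode: float  # most plausible value (membership 1)
    high: float  # largest plausible value (membership 0)

    def alpha_cut(self, alpha: float) -> tuple[float, float]:
        """Interval of values with membership >= alpha (0 < alpha <= 1)."""
        lo = self.low + alpha * (self.mode - self.low)
        hi = self.high - alpha * (self.high - self.mode)
        return (lo, hi)

# Hypothetical fuzzy prediction of the fault count for one module.
predicted_faults = TriangularFuzzyNumber(low=1.0, mode=4.0, high=9.0)
print(predicted_faults.alpha_cut(0.5))  # -> (2.5, 6.5)
```
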
  • The experience of auditing software for safety critical railway signalling equipment

    Page(s): 193 - 196
    PDF (344 KB)

    The experience of auditing two versions of the software for a railway signalling equipment called the “Universal Fail Safe Block Interface” (UFSBI), being developed by two vendors, is described. For the users and the developers, this was their first exposure to the formal development of safety-critical software. The auditors were academics with IV&V experience only in aerospace software and no prior experience with railway signalling systems. Prototypes of the UFSBI had been operating before the auditors were formally brought in, but a full life cycle audit was undertaken owing to the safety criticality of the system, in line with the European CENELEC standard that the users wished to adopt. In the absence of local precedent, new paradigms of interaction had to evolve, and the role of the auditors expanded to include mentoring and facilitation. Initial scepticism and conflicting expectations of the software audit gradually gave way to a participatory learning activity for all the parties involved.

  • On the sensitivity of NMR unreliability to non-exponential repair distributions

    Page(s): 293 - 300
    PDF (496 KB)

    The failure and repair of modules in an N-modular redundant (NMR) system are governed by a failure time distribution and a repair time distribution, respectively. It is generally reasonable to assume that a module's failure time distribution is a simple exponential distribution. However, it is not reasonable to assume that the repair time distribution is also exponential. Reliability models with non-exponential repair have a higher computational complexity than a model of the same system with an exponential repair time distribution. This paper presents the results of a systematic study to determine whether non-exponential repair distributions produce significant differences in calculated NMR system unreliability, relative to an exponential repair distribution with the same mean time to repair (MTTR). Our approach is to embed Erlang repair distributions in generalized stochastic Petri net (GSPN) models of NMR systems and evaluate the unreliability. Our results show that, for a wide range of system parameters, the choice of a repair time distribution has minimal impact on the calculated unreliability. Rather, it is the MTTR that is the dominant parameter affecting unreliability.

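    The paper's question can also be illustrated by simulation: compare the unreliability of a TMR (2-out-of-3) system under exponential repair and under Erlang repair with the same MTTR. The Monte Carlo sketch below, including the failure rate, MTTR and mission time, is an invented illustration; the paper's results come from analytical GSPN models, not simulation.

```python
import random

# Monte Carlo sketch: does the shape of the repair distribution (exponential
# vs Erlang with the same MTTR) change the unreliability of a TMR system?
# All parameters below are illustrative assumptions.

LAMBDA, MTTR, MISSION = 1e-3, 10.0, 10_000.0  # failure rate /h, hours, hours

def repair_exp():
    """Exponential repair time with mean MTTR."""
    return random.expovariate(1.0 / MTTR)

def repair_erlang(k=4):
    """Erlang-k repair time with the same mean MTTR (sum of k exponentials)."""
    return sum(random.expovariate(k / MTTR) for _ in range(k))

def tmr_fails(repair):
    """Simulate one mission; True if two modules are ever down simultaneously."""
    up = [True, True, True]
    nxt = [random.expovariate(LAMBDA) for _ in range(3)]  # next event per module
    while True:
        i = min(range(3), key=nxt.__getitem__)
        t = nxt[i]
        if t > MISSION:
            return False                  # mission ends with the system alive
        up[i] = not up[i]                 # module fails or finishes repair
        if up.count(False) >= 2:
            return True                   # 2-out-of-3 system has failed
        nxt[i] = t + (random.expovariate(LAMBDA) if up[i] else repair())

def unreliability(repair, runs=10_000):
    return sum(tmr_fails(repair) for _ in range(runs)) / runs

random.seed(0)
print("exponential repair:", unreliability(repair_exp))
print("Erlang-4 repair:   ", unreliability(repair_erlang))
```

    With matched MTTRs the two estimates come out close, which is consistent with the paper's conclusion that the MTTR, not the distribution shape, dominates.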
  • Bayesian framework for reliability assurance of a deployed safety critical system

    Page(s): 321 - 329
    PDF (624 KB)

    The existence of software faults in safety-critical systems is not tolerable. The goals of software reliability assessment are to estimate the failure probability of the program, θ, and to gain statistical confidence that θ is realistic. While in most cases reliability assessment is performed prior to the deployment of the system, there are circumstances when it is needed in the process of (re)evaluating a fielded (deployed) system. Post-deployment reliability assessment provides reassurance that the expected dependability characteristics of the system have been achieved; it may be used as the basis of a recommendation for maintenance and further improvement, or of a recommendation to discontinue use of the system. The paper presents practical problems and challenges encountered in an effort to assess and quantify the software reliability of NASA's Day-of-Launch I-Load Update (DOLILU II) system, which has been in operational use for several years. A Bayesian framework is chosen for the reliability assessment because it allows the incorporation of program executions (in this specific case, failure-free ones) observed in the operational environment. Furthermore, we outline the development of a probabilistic framework that allows the rigorous verification and validation activities performed prior to a system's deployment to be accounted for in the reliability assessment.

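    The core Bayesian update in this kind of assessment is standard: with a Beta(a, b) prior on the failure probability θ and n independent failure-free executions, the posterior is Beta(a, b + n), from which a confidence statement about θ follows. The prior and execution count below are invented, and the paper's folding of pre-deployment V&V evidence into the assessment is not modeled.

```python
# Standard conjugate update: Beta(a, b) prior on theta plus n failure-free
# executions gives a Beta(a, b + n) posterior. With a uniform Beta(1, 1)
# prior the posterior is Beta(1, n + 1), whose CDF has the closed form
# P(theta <= x) = 1 - (1 - x)**(n + 1). The numbers below are assumptions.

n = 2_000            # observed failure-free executions (assumed)
theta_bound = 1e-3   # target bound on the failure probability (assumed)

confidence = 1.0 - (1.0 - theta_bound) ** (n + 1)
print(f"P(theta <= {theta_bound} | {n} failure-free runs) = {confidence:.3f}")
# -> roughly 0.865: ~86.5% posterior confidence that theta is below 1e-3
```
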
  • The synthesis of real-time systems from processing graphs

    Page(s): 177 - 186
    PDF (840 KB)

    Directed graphs, called processing graphs, are a standard design aid for complex real-time systems. The primary problem in developing real-time systems with processing graphs is transforming the processing graph into a predictable real-time system in which latency can be managed. Software engineering techniques are combined with real-time scheduling theory to solve this problem. In the parlance of software engineering methodologies, a synthesis method is presented. New results on managing latency in the synthesis of real-time systems from cyclic processing graphs are also presented. The synthesis method is demonstrated with an embedded signal processing application for an anti-submarine warfare (ASW) system.

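    One concrete facet of latency management is computing the worst-case end-to-end latency of a processing graph. The sketch below does this for an acyclic graph as a longest weighted path over assumed per-node execution times; the paper's synthesis method additionally handles cyclic graphs and real-time scheduling, which are not shown.

```python
from functools import lru_cache

# Minimal sketch: worst-case end-to-end latency of an acyclic processing
# graph, computed as the longest weighted path from a source to the sinks.
# The graph and node execution times are invented for illustration.

exec_time = {"sensor": 2, "filter": 5, "detect": 8, "classify": 6, "display": 1}
succ = {
    "sensor": ["filter"],
    "filter": ["detect", "classify"],
    "detect": ["display"],
    "classify": ["display"],
    "display": [],
}

@lru_cache(maxsize=None)
def latency(node: str) -> int:
    """Worst-case latency (ms) from `node` through the rest of the graph."""
    downstream = max((latency(s) for s in succ[node]), default=0)
    return exec_time[node] + downstream

print(latency("sensor"))  # 2 + 5 + 8 + 1 = 16 ms along the worst path
```
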
  • A flexible real-time architecture

    Page(s): 99 - 106
    PDF (836 KB)

    Assuring hard real-time characteristics of the I/O associated with embedded software is often a difficult task. Input/output-related statements are often intermixed with the computational code, resulting in I/O timing that depends on the execution path and the computational load. One way to mitigate this problem is through the use of interrupts; however, the non-determinism introduced by interrupt-driven I/O may be so difficult to analyze that it is prohibited in some high consequence systems. This paper describes a balanced hardware/software solution that obtains consistent, interrupt-free I/O timing and results in software that is much more amenable to analysis.

  • An embedded system for safe, secure and reliable execution of high consequence software

    Page(s): 107 - 114
    PDF (916 KB)

    As more complex and functionally diverse requirements are placed on high consequence embedded applications, ensuring safe and secure operation requires an ultra-reliable execution environment. The selection of an embedded processor and its development environment has more far-reaching effects on the production of the system than any other element in the design; this choice ripples through the remainder of the hardware design and profoundly affects the entire software development process. Experience indicates that an object-oriented (OO) methodology provides a superior development environment. However, embedded programming languages do not directly support OO techniques, and the processors themselves neither support nor enforce an OO environment. This paper describes a system-level architecture for an object-aware processor targeted at high consequence embedded applications.

  • Providing guaranteed assurance to connection-oriented group communications using disjoint routing

    Page(s): 197 - 198
    PDF (168 KB)

    We compare different approaches which provide guaranteed assurance to connection-oriented group communications based on the use of working and backup disjoint route sets. Specifically, we present experimental results showing the effect of disjoint backup route sets on mesh/tree/ring feasibility and cost.

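    The working/backup idea can be sketched for a single connection: find a shortest working path, remove its links, and search again for a link-disjoint backup. This greedy two-step heuristic and the topology below are illustrative assumptions; the paper evaluates disjoint route sets for whole groups, not single point-to-point connections.

```python
from collections import deque

# Greedy sketch of working/backup disjoint routing for one connection:
# find a shortest working path by BFS, remove its links, then search for a
# link-disjoint backup. Topology is invented for illustration.

edges = {("a", "b"), ("b", "d"), ("a", "c"), ("c", "d"), ("b", "c")}

def neighbors(node, links):
    return [v for u, v in links if u == node] + [u for u, v in links if v == node]

def shortest_path(src, dst, links):
    parent, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in neighbors(u, links):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None  # no disjoint backup exists in the remaining topology

working = shortest_path("a", "d", edges)
used = {frozenset(p) for p in zip(working, working[1:])}
remaining = {e for e in edges if frozenset(e) not in used}
backup = shortest_path("a", "d", remaining)
print("working:", working, "backup:", backup)
# working: ['a', 'b', 'd'] backup: ['a', 'c', 'd']
```
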
  • Automatic abstractions of real-time specifications

    Page(s): 147 - 158
    PDF (844 KB)

    This paper explores the automatic generation of abstractions of real-time specifications. Abstractions of formal specifications hide certain details while preserving other essential aspects of system behavior. Abstractions are useful in the context of model checking because the state-space explosion problem often prohibits model checking of the full specification. Abstractions are commonly used to develop reduced models, but they are often generated in ad hoc, informal and non-automated ways. As a consequence, the reduced models may be incorrect, that is, they may not accurately capture the behavior of the original specification. The approach described here uses dependency information to automatically generate mathematically sound abstractions of real-time specifications. In addition, timing information is incorporated to further reduce the model. The technique is illustrated by an example which yields a 26% reduction in the time required to generate the state space representing behavior equivalent to the original specification.

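    The use of dependency information can be pictured as a backward closure over a variable dependency graph: keep every variable the checked property transitively depends on and abstract the rest away. The dependency graph and variable names below are invented, and the paper's incorporation of timing information is not shown.

```python
# Sketch of the dependency-based idea: given which variables each variable
# depends on, keep the backward closure of the variables mentioned by the
# property being checked and abstract away everything else. The graph and
# names are illustrative assumptions.

depends_on = {
    "alarm":    {"pressure", "mode"},
    "pressure": {"valve"},
    "mode":     set(),
    "valve":    {"mode"},
    "display":  {"alarm", "clock"},  # irrelevant to the property below
    "clock":    set(),
}

def relevant(property_vars, deps):
    """Backward closure: all variables the property transitively depends on."""
    keep, stack = set(property_vars), list(property_vars)
    while stack:
        v = stack.pop()
        for d in deps[v]:
            if d not in keep:
                keep.add(d)
                stack.append(d)
    return keep

keep = relevant({"alarm"}, depends_on)
print("keep:", sorted(keep))                             # alarm, mode, pressure, valve
print("abstract away:", sorted(set(depends_on) - keep))  # clock, display
```
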
  • First principles applied to software safety - the novel use of silicon machinery

    Page(s): 216 - 218
    PDF (224 KB)

    Presents a methodology that may provide a radical new way of assuring the safety of software-based systems through a novel application of first principles enabled by micro-electromechanical systems (MEMS) technology, i.e. silicon machinery. 'First principles' is defined as theory that is defensible through fundamental laws of nature in the chemical, physical or mechanical structure of materials or assemblages thereof. The proposed methodology is limited to 'passive safety' applications, i.e. those where a potential hazard is mitigated (assured safe) by means that do not require action or energy to maintain. The methodology is based upon long-standing safety principles employed in nuclear weapons, and it is proposed that two of these long-established principles be applied to high-consequence software systems. The nuclear weapon stronglink and the unique signal (UQS) concept are fundamental to nuclear weapon safety and represent the conceptual genesis of the approach taken in this paper; incompatibility and isolation are the two fundamental nuclear weapon safety principles made possible by the stronglink and UQS concepts.
