High-Assurance Systems Engineering, 2005. HASE 2005. Ninth IEEE International Symposium on

Date: 12-14 Oct. 2005

  • Ninth IEEE International Symposium on High-Assurance Systems Engineering

  • Ninth IEEE International Symposium on High-Assurance Systems Engineering - Title Page

    Page(s): i - iii
  • Ninth IEEE International Symposium on High-Assurance Systems Engineering - Copyright Page

    Page(s): iv
  • Ninth IEEE International Symposium on High-Assurance Systems Engineering - Table of contents

    Page(s): v - vi
  • Message from the General Chair

    Page(s): vii
  • Message from the Program Chair

    Page(s): viii
  • Organization

    Page(s): ix - x
  • List of reviewers

    Page(s): xi - xii
  • A panacea or academic poppycock: formal methods revisited

    Page(s): 3 - 7

    Many formal methods have been proposed to improve software quality. These include new specification and modeling languages as well as formal verification techniques, such as model checking and theorem proving. This paper describes several ways in which tools supporting formal methods can help improve the quality of software code as well as of software specifications and models. However, while promising, formal methods and their support tools are rarely used in practical software development. To overcome this problem, this paper describes a number of needed improvements - in techniques for requirements capture, in languages, in specifications and models, in code quality, and in code verification techniques - which could lead to more widespread use of formal methods and their support tools in practical software development.

  • The future EU R&D on security and dependability: moving towards resilience and plasticity


    The IST programme has started consulting the European research constituency on the challenges and priorities for strategic R&D on security and dependability in the future ICT theme of the 7th Framework Programme. The notions of resilience and plasticity are presented together with the rationale and the initial findings of this consultation process. The paper also highlights the hurdles to, and the opportunities for, truly world-class EU research in this area.

  • Tomorrow's needs - yesterday's technology: DoD's architectural dilemma & plan for resolution

    Page(s): 9 - 12

    As the Department of Defense (DoD) moves rapidly towards service-oriented computing (SOC), new challenges arise. SOC represents a new and emerging paradigm of computing that affects every phase of system development and operation. This paper presents the impact of SOC on software architecture, specification languages, and engineering techniques.

  • Design and analysis of fault tolerant architectures by model weaving

    Page(s): 15 - 24

    Aspect-oriented modeling is proposed to design the architecture of fault tolerant systems. Notations are introduced that support the separate, modularized design of functional and dependability aspects in UML class diagrams: the notation designates sensitive parts of the architecture and selected architecture patterns that implement common redundancy techniques. A model weaver is presented that constructs both the integrated model of the system and its dependability model on the basis of the analysis sub-models attached to the architecture patterns. In this way, fault tolerance mechanisms can be systematically analyzed when they are integrated into the system.

  • Safe allocation of avionics shared resources

    Page(s): 25 - 33

    We propose an approach to analyse the safety of avionic systems that takes into account the impact of computation and communication resource sharing. The approach consists of three main steps: a formal notation is used to describe how failures propagate in the system under study; model-checking tools verify the safety requirements and derive allocation constraints; and a constraint solver generates safe allocations. The approach is illustrated by a study of the Terrain Following/Terrain Avoidance (TF/TA) system of a fighter aircraft.

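The final step of the approach above, generating allocations that satisfy segregation constraints, can be sketched as a brute-force search (a minimal illustration only; the function names, the two-processor platform, and the single conflict pair are hypothetical, and the paper uses a real constraint solver rather than enumeration):

```python
from itertools import product

def safe_allocations(functions, processors, conflicts):
    """Yield every allocation in which no conflicting pair of functions
    shares a processor (a stand-in for solver-derived segregation rules)."""
    for assignment in product(processors, repeat=len(functions)):
        alloc = dict(zip(functions, assignment))
        if all(alloc[a] != alloc[b] for a, b in conflicts):
            yield alloc

# Hypothetical example: two redundant TF/TA channels must be segregated.
allocs = list(safe_allocations(
    functions=["tfta_primary", "tfta_backup", "display"],
    processors=["P1", "P2"],
    conflicts=[("tfta_primary", "tfta_backup")],
))
```

Enumeration is exponential in the number of functions, which is exactly why a constraint solver is the practical choice at avionics scale.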
  • The reliable platform service: a property-based fault tolerant service architecture

    Page(s): 34 - 43

    The reliable platform is a fault tolerant architecture designed to provide a structured but flexible framework for the delivery of dependable services for highly critical applications such as X-by-wire systems. The approach is based on defining a structured hierarchy of critical fault tolerant services with corresponding properties that can be explicitly specified and verified. The architecture also incorporates a comprehensive error model that is inclusive of symmetric and asymmetric (i.e. Byzantine) errors of both a permanent and transient nature. Advanced features include the use of hybrid error recovery algorithms, and node/process level synchronization strategies. The system is capable of managing diverse processes at different levels of severity and with varied failure semantics. The system is dynamically reconfigurable based on error containment regions and online diagnosis protocols.

  • Supporting component and architectural re-usage by detection and tolerance of integration faults

    Page(s): 47 - 55

    We present an extended interface description language supporting the avoidance, automatic detection, and tolerance of the classes of inconsistency likely to occur when integrating pre-developed components. In particular, the approach allows the automatic generation of component wrapping mechanisms aimed at handling local and global inconsistencies at runtime. On the whole, the suggested procedure supports reuse of components and of architectural patterns by allowing their easy adaptation to the specific needs of the application considered.

  • A framework for simplifying the development of kernel schedulers: design and performance evaluation

    Page(s): 56 - 65

    Writing a new scheduler and integrating it into an existing OS is a daunting task, requiring the understanding of multiple low-level kernel mechanisms. Indeed, implementing a new scheduler is outside the expertise of application programmers, even though they are the ones who understand best the scheduling needs of their applications. To address these problems, we present the design of Bossa, a language targeted toward the development of scheduling policies. Bossa provides high-level abstractions that are specific to the domain of scheduling. These constructs simplify the task of specifying a new scheduling policy and facilitate the static verification of critical safety properties. We illustrate our approach by presenting an implementation of the EDF scheduling policy. The overhead of Bossa is acceptable. Overall, we have found that Bossa simplifies scheduler development to the point that kernel expertise is not required to add a new scheduler to an existing kernel.

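The core of the EDF policy used as the running example can be sketched outside Bossa as an ordinary priority queue on absolute deadlines (illustrative only; Bossa policies are written in its own DSL and compiled into the kernel, and the class and task names here are hypothetical):

```python
import heapq

class EDFScheduler:
    """Minimal earliest-deadline-first ready queue."""

    def __init__(self):
        self._ready = []                 # heap of (absolute deadline, task id)

    def make_ready(self, task_id, deadline):
        heapq.heappush(self._ready, (deadline, task_id))

    def pick_next(self):
        # EDF invariant: always elect the ready task with the earliest deadline.
        return heapq.heappop(self._ready)[1] if self._ready else None

sched = EDFScheduler()
sched.make_ready("logger", deadline=50)
sched.make_ready("control_loop", deadline=10)
```

The safety properties Bossa verifies statically (e.g. that every ready task is eventually electable) are exactly the kind of invariant this sketch leaves implicit.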
  • A novel framework for non-deterministic testing of message-passing programs

    Page(s): 66 - 75

    Message-passing programs are difficult to test because of their non-deterministic behavior. One approach, called non-deterministic testing, involves executing a message-passing program with the same input many times in the hope that faults will be exposed by one of these executions. Non-deterministic testing has been widely used in practice, but unfortunately in an ad-hoc manner. In this paper, we present a novel framework for non-deterministic testing of message-passing programs. The framework uses a coverage criterion to guide the testing process. During each test run, the sequence of send and receive events that are executed is recorded in an execution trace. After each test run, the trace is analyzed to identify race conditions, which are used to derive coverage elements that have not yet been covered. Then, random delays are inserted at a chosen set of program locations to increase the chance of covering the uncovered elements in the next test run. The framework provides a heuristic condition that can be used to decide when to stop testing; the condition is easy to compute, and its satisfaction signals that the coverage criterion has likely been satisfied. The framework can be automated at the source code level and yields a measure of test coverage at the end of the testing process. We describe a prototype tool and report empirical results that demonstrate the effectiveness of our framework.

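The run-time side of such a framework, recording communication events and perturbing orderings with random delays at chosen locations, might be instrumented roughly like this (a hand-written sketch, not the paper's tool; the event and location names are hypothetical, and the trace analysis itself is not shown):

```python
import random
import threading
import time

TRACE = []                       # recorded (event, location) pairs, in order
TRACE_LOCK = threading.Lock()

def record(event, location):
    """Append a send/receive event to the execution trace; after the run
    the trace would be analysed for races and uncovered orderings."""
    with TRACE_LOCK:
        TRACE.append((event, location))

def maybe_delay(location, chosen_locations, max_ms=5):
    """Sleep for a random interval when this instrumented location was
    selected for the current run, perturbing message orderings."""
    if location in chosen_locations:
        time.sleep(random.uniform(0.0, max_ms) / 1000.0)

# One instrumented communication point (hypothetical location names):
maybe_delay("worker.send_result", chosen_locations={"worker.send_result"})
record("send", "worker.send_result")
record("recv", "master.collect")
```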
  • Safe composition of real time software

    Page(s): 79 - 88

    There is an increasing move towards modular approaches to software design and implementation in the development of critical systems. These approaches have a number of benefits, including support for concurrent development and simpler software maintenance. However, there is little guidance on how to perform a modular safety process for the certification of critical systems, as most standards assume a monolithic design. Of particular concern is performing safety analyses, with the limited context afforded by a modular approach, in order to derive valid safety requirements with appropriate context and assumptions. Expressing requirements as contracts is one way to support change. An example use of contracts between a real-time operating system (RTOS) and an application is given. This example was chosen because the use of an RTOS, instead of embedding operating system services within the applications, is an increasingly prevalent form of modularisation; indeed, an RTOS is considered a key enabling technology as it provides a clear interface between the application and the platform.

  • Analyzing software quality with limited fault-proneness defect data

    Page(s): 89 - 98

    Assuring that the desired software quality and reliability is met for a project is as important as delivering it within the scheduled budget and time. This is especially vital for high-assurance software systems, where software failures can have severe consequences. To achieve the desired software quality, practitioners utilize software quality models to identify high-risk program modules: e.g., software quality classification models are built using training data consisting of software measurements and fault-proneness data from previous development experiences similar to the project currently under development. However, various practical issues can limit the availability of fault-proneness data for all modules in the training data, leaving many modules with no fault-proneness data, i.e., unlabeled data. To address this problem, we propose a novel semi-supervised clustering scheme for software quality analysis with limited fault-proneness data. It is a constraint-based semi-supervised clustering scheme based on the k-means algorithm. The proposed approach is investigated with software measurement data from two NASA software projects, JM1 and KC2. Empirical results validate the promise of our semi-supervised clustering technique for software quality modeling and analysis in the presence of limited defect data. Additionally, the approach provides some valuable insight into the characteristics of certain program modules that remain unlabeled subsequent to our semi-supervised clustering analysis.

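A constraint-based semi-supervised k-means step of this kind can be sketched on scalar module metrics (a toy illustration under the assumption of one-dimensional data and this particular seeding scheme; the paper works on multi-dimensional NASA measurement data):

```python
def semi_supervised_kmeans(labeled, unlabeled, iters=10):
    """Constraint-based k-means sketch: labeled modules never leave the
    cluster of their known class; only unlabeled modules are re-assigned.
    `labeled` maps a class name to a list of (scalar) module metrics."""
    centroids = {c: sum(v) / len(v) for c, v in labeled.items()}
    for _ in range(iters):
        clusters = {c: list(v) for c, v in labeled.items()}   # constraints
        for x in unlabeled:
            nearest = min(centroids, key=lambda c: abs(x - centroids[c]))
            clusters[nearest].append(x)
        centroids = {c: sum(v) / len(v) for c, v in clusters.items()}
    return clusters

# Hypothetical complexity metrics: high values are known fault-prone.
clusters = semi_supervised_kmeans(
    labeled={"fault_prone": [9.0, 11.0], "not_fault_prone": [1.0, 2.0]},
    unlabeled=[1.5, 10.5, 0.5],
)
```

Keeping the labeled points pinned to their class is what makes the clustering "constraint-based": the labels anchor the centroids while the unlabeled modules are free to move.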
  • Structured assurance cases: three common standards

    Page(s): 99 - 108

    For safety-, mission-, or security-critical systems, there are typically regulations or acquisition guidelines requiring a documented body of evidence to provide a compelling justification that the system satisfies specified critical properties. Current frameworks suggest the detailed outline of the final product but leave the truly meaningful and challenging aspects of arguing assurance to the developers and reviewers. Starting from two major hypotheses, we selected a notation suitable for building structured safety cases and applied it to three disparate assurance standards. Each of the three mapping efforts is discussed, along with the problems we encountered. In addition to the standards, we used the notation to structure an assurance case for a practical security-critical system, and we describe the lessons learned from that experience. We conclude with practical options for using our mappings of the standards and an assessment of how well our initial hypotheses were borne out by the project.

  • Automatic generation of executable assertions for runtime checking temporal requirements

    Page(s): 111 - 120

    Checking various temporal requirements is a key dependability concern in safety-critical systems. As model-checking approaches do not scale well to systems of high complexity, the runtime verification of temporal requirements has recently received growing attention. This paper presents a code-generation based method for the runtime evaluation of linear temporal logic formulae over program execution traces. The processing-power requirements of our solution are much lower than those of previous approaches, enabling its application even in resource-restricted embedded environments.

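For intuition, a finite-trace check of one common temporal pattern, G(trigger -> F response), can be interpreted directly (the paper instead generates monitoring code from the formula; this interpreter is only a sketch of the pattern's finite-trace semantics, with hypothetical proposition names):

```python
def holds_globally_eventually(trace, trigger, response):
    """Finite-trace semantics of G(trigger -> F response): every state
    satisfying `trigger` must be followed, at that state or later, by a
    state satisfying `response`. States are sets of atomic propositions."""
    pending = False
    for state in trace:
        if response in state:      # F response discharged (includes "now")
            pending = False
        elif trigger in state:     # obligation raised, awaiting response
            pending = True
    return not pending             # no obligation may remain at trace end

# A request that is eventually granted satisfies the property.
ok = holds_globally_eventually([{"req"}, {"idle"}, {"grant"}], "req", "grant")
```

Generating dedicated monitoring code for a fixed formula, as the paper does, avoids this interpretive overhead at every observed state.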
  • View graphs for analysis and testing of programs at different abstraction levels

    Page(s): 121 - 130

    This paper introduces view graphs, which allow representation of source code for program analysis and testing at different levels of abstraction. At a low level of abstraction, view graphs can be used for white-box analysis and testing, and at a high level of abstraction, they can be used for black-box analysis and testing. View graphs are thus an approach to integrate black-box and white-box techniques.

  • The simulation of anomalies in the functional testing of the ERTMS/ETCS trackside system

    Page(s): 131 - 139

    ERTMS/ETCS is going to become the reference standard for modern railway signalling. To develop a safe and reliable automatic train protection system (ATPS) based on ERTMS/ETCS, a detailed functional testing phase is needed, meeting the requirements of international railway safety standards. In this paper we deal with the functional validation of the trackside part of an ERTMS/ETCS compliant system. An extensive set of functional tests has been specified in order to thoroughly verify the system, using an innovative approach based on influence variables and state diagrams. However, such a detailed test specification requires a great amount of time and resources to execute entirely in the real environment. Moreover, several tests need to generate abnormal safety-critical conditions that are unfeasible in the field. We describe how we overcame these problems using a specific simulation environment capable of quickly and automatically executing anomaly tests in both normal and degraded operating conditions.

  • Bayesian perspective of optimal checkpoint placement

    Page(s): 143 - 152

    Checkpointing and rollback recovery is a commonly used technique for saving the information in main memory to a safe secondary medium via the file system. In this paper we develop fully Bayesian learning algorithms to place checkpoints adaptively. Based on two kinds of prior distributions for the Weibull system failure time distribution, we give semi-parametric estimation methods for the optimal checkpoint interval minimizing the expected operating cost rate. Simulation experiments show how to determine the hyper-parameters, as well as the asymptotic properties of the resulting estimators.

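For comparison, the classical first-order rule for the checkpoint interval (Young's approximation, T ≈ sqrt(2·C·MTBF)) is easy to state; the paper's contribution is to estimate such an interval adaptively from a Bayesian Weibull failure model rather than from a fixed MTBF, so this baseline is shown only for context:

```python
import math

def young_checkpoint_interval(checkpoint_cost, mtbf):
    """Young's classical first-order approximation of the checkpoint
    interval minimizing expected overhead, given a fixed checkpoint cost
    and mean time between failures (both in the same time unit)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

# E.g. a 10 s checkpoint and a 500 s MTBF suggest checkpointing every 100 s.
interval = young_checkpoint_interval(10.0, 500.0)
```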
  • Linear randomized voting algorithm for fault tolerant sensor fusion and the corresponding reliability model

    Page(s): 153 - 162

    Sensor failures in process control programs can be tolerated through application of well known modular redundancy schemes. The reliability of a specific modular redundancy scheme depends on the predefined number of sensors that may fail, f, out of the total number of sensors available, n. Some recent sensor fusion algorithms offer the benefit of tolerating a more significant number of sensor failures than modular redundancy techniques, at the expense of degrading the precision of sensor readings. In this paper, we present a novel sensor fusion algorithm based on randomized voting with linear, O(n), expected execution time. The precision (the length) of the resulting interval depends on the number of faulty sensors, the parameter f. A novel reliability model applicable to general sensor fusion schemes is proposed. Our modeling technique assumes the coexistence of two major types of sensor failure, permanent and transient. The model offers system designers the ability to analyze and strike application-specific balances between the expected system reliability and the desired precision of the interval estimate. Under the assumptions of failure independence and exponentially distributed failure occurrences, we use Markov models to compute system reliability. The model is then validated empirically, and examples of reliability prediction are provided for networks with a fairly large number of sensors (n > 100).

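A classical O(n log n) baseline for this kind of fault tolerant interval fusion (a sweep-line sketch in the spirit of Marzullo's interval intersection algorithm, not the paper's randomized O(n) method) returns the smallest interval covered by at least n - f of the n sensor readings:

```python
def fuse_intervals(readings, f):
    """Sweep-line fusion: return the smallest interval containing every
    point that lies inside at least n - f of the n sensor intervals,
    i.e. every value consistent with at most f faulty sensors."""
    n = len(readings)
    events = sorted(
        [(lo, +1) for lo, _ in readings] + [(hi, -1) for _, hi in readings],
        key=lambda e: (e[0], -e[1]),      # interval starts before ends on ties
    )
    depth, fused_lo, fused_hi = 0, None, None
    for x, kind in events:
        depth += kind
        if kind == +1 and depth >= n - f and fused_lo is None:
            fused_lo = x                  # first point reaching the quorum
        if kind == -1 and depth == n - f - 1:
            fused_hi = x                  # last point leaving the quorum
    return (fused_lo, fused_hi)

# Three sensors, at most one faulty: fuse their interval readings.
fused = fuse_intervals([(0.0, 4.0), (1.0, 5.0), (2.0, 6.0)], f=1)
```

Raising f widens the fused interval, which is exactly the reliability-versus-precision trade-off the abstract's reliability model quantifies.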