
Proceedings of the 28th Annual NASA Goddard Software Engineering Workshop, 2003

Date: 3-4 Dec. 2003


  • Design tool assessment for safety-critical software development

    Page(s): 105 - 113

    The paper presents a taxonomy of criteria and procedures for evaluating software development tools used in safety-critical real-time systems. The ultimate purpose of the research is to provide a basis for the creation of guidelines for the tool certification process. The specific application area is airborne software, and appropriate references are made to the accepted RTCA DO-178B guidelines. The software design and coding processes are the focal point of tool assessment. The paper presents the industry viewpoint, tool qualification and evaluation criteria, and an active experiment serving as a test-bed for collecting software development effort data and engineering observations supporting the tool assessment methodology.

  • Applying run-time monitoring to the Deep-Impact fault protection engine

    Page(s): 127 - 133

    Run-time monitoring is a lightweight verification method whereby the correctness of a program's execution is verified at run-time using executable specifications. This paper describes the verification of the fault protection engine of the Deep-Impact spacecraft flight software using a temporal-logic-based run-time monitoring tool.

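    The paper's tool and its actual flight properties are not reproduced here. As a rough, hedged illustration of the general idea of run-time monitoring against a temporal specification, the sketch below checks a simple "response" property, namely that every fault event is eventually followed by a safing event, over a recorded execution trace. The event names are hypothetical.

        # Minimal sketch of trace-based run-time monitoring (hypothetical events,
        # not the Deep-Impact fault protection engine or its actual specification).

        def check_response(trace, trigger, response):
            """Check G(trigger -> F response) over a finite trace: every occurrence
            of `trigger` must be followed, later in the trace, by `response`."""
            pending = 0                      # triggers still awaiting a response
            for event in trace:
                if event == trigger:
                    pending += 1
                elif event == response and pending > 0:
                    pending = 0              # a later response discharges all pending triggers
            return pending == 0

        if __name__ == "__main__":
            trace = ["nominal", "FAULT_DETECTED", "isolate", "ENTER_SAFE_MODE", "nominal"]
            ok = check_response(trace, "FAULT_DETECTED", "ENTER_SAFE_MODE")
            print("property satisfied on this trace:", ok)
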
  • Model-based software testing via incremental treatment learning

    Page(s): 82 - 90

    Model-based software has become quite popular in recent years, making its way into a broad range of areas, including the aerospace industry. The models provide an easy graphical interface for developing systems and can generate the sometimes tedious code that follows. While there are many tools available to assess standard procedural code, there are limits to the testing of model-based systems. A major problem with the models is that their internals often contain gray areas of unknown system behavior. These possible behaviors form what is known as a data cloud, an overwhelming range of possibilities that can overload analysts (Menzies et al., 2003). With large data clouds, it is hard to demonstrate which particular decision leads to a particular outcome. Even if definite decisions can't be made, it is possible to reduce the variance of the clouds and condense them (Menzies et al., 2003). This paper presents two case studies: one with a simple illustrative model and another with a more complex application. The TAR3 treatment learning tool summarizes the particular attribute ranges that select for particular behaviors of interest, reducing the data clouds.

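    TAR3 itself is not reproduced here. Purely to convey the flavor of treatment learning under simple assumptions, the sketch below scores each attribute-value constraint by how strongly selecting it shifts the outcome distribution toward a preferred class, then reports the best-scoring "treatment". The data and the lift-based scoring are invented for illustration.

        # Illustrative treatment-learning sketch (not TAR3): find the single
        # attribute=value constraint that most increases the share of "good" outcomes.

        from collections import defaultdict

        examples = [  # hypothetical (attributes, outcome) pairs
            ({"mode": "auto",   "load": "high"}, "bad"),
            ({"mode": "auto",   "load": "low"},  "good"),
            ({"mode": "manual", "load": "low"},  "good"),
            ({"mode": "manual", "load": "high"}, "good"),
            ({"mode": "auto",   "load": "high"}, "bad"),
        ]

        baseline = sum(1 for _, y in examples if y == "good") / len(examples)

        candidates = defaultdict(list)
        for attrs, outcome in examples:
            for key, value in attrs.items():
                candidates[(key, value)].append(outcome)

        # lift = P(good | constraint) / P(good), i.e. how much the constraint
        # concentrates the preferred behavior
        scores = {c: (ys.count("good") / len(ys)) / baseline for c, ys in candidates.items()}

        best = max(scores, key=scores.get)
        print("baseline P(good) =", round(baseline, 2))
        print("best treatment:", best, "lift =", round(scores[best], 2))
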
  • On the advantages of approximate vs. complete verification: bigger models, faster, less memory, usually accurate

    Page(s): 75 - 81

    We have been exploring LURCH, an approximate (not necessarily complete) alternative to traditional model checking based on a randomized search algorithm. Randomized algorithms like LURCH have been known to outperform their deterministic counterparts on search problems representing a wide range of applications. The cost of an approximate strategy is the potential for inaccuracy. If complete algorithms terminate, they find all the features they are searching for. Randomized search, on the other hand, can by its very nature miss important features. Our experiments suggest that this inaccuracy problem is not too serious. In the case studies presented here and elsewhere, LURCH's random search usually found the correct results. Also, these case studies strongly suggest that LURCH can scale to much larger models than standard model checkers like NuSMV and SPIN. The two case studies presented in this paper are selected, respectively, for their simplicity and their complexity. The simple problem of the dining philosophers has been widely studied. By making the dinner more crowded, we can compare the memory and runtimes of a standard method (SPIN) and LURCH. When hundreds of philosophers sit down to eat, both LURCH and SPIN can find the deadlock case. However, SPIN's memory and runtime requirements can grow exponentially while LURCH's requirements stay quite low. Success with highly symmetric, automatically generated problems says little about the generality of a technique. Hence, our second example is far more complex: a real-world flight guidance system from Rockwell Collins. Compared to NuSMV, LURCH performed very well on this model. Our random search finds the vast majority of faults (close to 90%); runs much faster (seconds and minutes as opposed to hours); and uses very little memory (single digits to tens of megabytes as opposed to tens to hundreds of megabytes). The rest of this paper is structured as follows. We begin with a theoretical rationale for why random search methods like LURCH can be incomplete, yet still successful. Next, we note that for a class of problems, the complete search of standard model checkers can be overkill. LURCH is then briefly introduced and our two case studies are presented.

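    To make the contrast the abstract draws concrete, the toy sketch below does a random walk with restarts over the state space of the dining philosophers and reports whether it stumbles onto the deadlock (everyone holding their left fork). It is not LURCH and uses none of its data structures; the state encoding and search budget are assumptions, and the walk may or may not hit the deadlock within that budget.

        # Random-walk search for deadlock in the dining philosophers (toy sketch, not LURCH).
        # Each philosopher picks up the left fork, then the right fork, then eats and releases.

        import random

        def successors(state, n):
            """state[i] in {0, 1, 2}: holds no forks / left fork only / both forks."""
            forks_held = set()
            for i, s in enumerate(state):
                if s >= 1:
                    forks_held.add(i)            # fork i = left fork of philosopher i
                if s == 2:
                    forks_held.add((i + 1) % n)  # fork i+1 = right fork of philosopher i
            moves = []
            for i, s in enumerate(state):
                if s == 0 and i not in forks_held:
                    moves.append(state[:i] + (1,) + state[i+1:])   # take left fork
                elif s == 1 and (i + 1) % n not in forks_held:
                    moves.append(state[:i] + (2,) + state[i+1:])   # take right fork
                elif s == 2:
                    moves.append(state[:i] + (0,) + state[i+1:])   # eat, release both forks
            return moves

        def random_search(n, restarts=40, depth=5000, seed=1):
            rng = random.Random(seed)
            for _ in range(restarts):
                state = (0,) * n
                for _ in range(depth):
                    succ = successors(state, n)
                    if not succ:
                        return state              # no enabled move: deadlock found
                    state = rng.choice(succ)
            return None                           # budget exhausted without finding one

        print("deadlock state found:", random_search(n=16))
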
  • RGML: a markup language for characterizing requirements generation processes

    Page(s): 29 - 38

    In this paper we present the Requirements Generation Markup Language (RGML). The RGML provides a formal specification mechanism for characterizing the structure, process flow and activities inherent to the requirements generation process. Within activities, the RGML supports the characterization of application instantiation, the use of templates and the production of artifacts. The RGML can also describe temporal control within a process as well as conditional expressions that control whether and when various activity scenarios will be executed. The language is expressively powerful, yet flexible in its characterization capabilities, and thereby provides the capability to describe a wide spectrum of different requirements generation processes.

  • Instrumentation of intermediate code for runtime verification

    Page(s): 66 - 71

    Runtime monitoring is aimed at ensuring correct runtime behavior with respect to specified constraints. It provides assurance that properties are maintained during a given program execution. The dynamic monitoring with integrity constraints (DynaMICs) approach is a runtime monitoring system under development at the University of Texas at El Paso. The focus of the paper is on the identification of instructions at the object-code level that require instrumentation for monitoring. Automated instrumentation is desirable because it reduces errors introduced by humans and provides finer control over both monitoring and instrumentation. The paper also discusses two other technologies associated with DynaMICs: the elicitation and formal specification of properties and constraints, and the tracing of property or constraint violations back to the software engineering artifacts from which the constraints and properties were derived.

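    DynaMICs works on object code with its own constraint machinery, none of which is shown here. Purely as an analogous illustration at the level of Python bytecode, the sketch below scans a compiled function for store instructions that write a monitored variable, i.e., the kind of instruction point an instrumenter would hook so that a constraint check runs after each update. The function and variable names are invented.

        # Illustrative only: locate the bytecode offsets at which a monitored
        # variable is written, the points a runtime monitor would instrument.

        import dis

        def controller(setpoint, reading):
            error = setpoint - reading       # hypothetical monitored variable: error
            command = 0.5 * error
            error = error * 0.9              # second write to the monitored variable
            return command, error

        MONITORED = {"error"}

        def instrumentation_points(func, monitored):
            points = []
            for ins in dis.get_instructions(func):
                if ins.opname in ("STORE_FAST", "STORE_NAME", "STORE_GLOBAL") \
                        and ins.argval in monitored:
                    points.append((ins.offset, ins.opname, ins.argval))
            return points

        for offset, opname, name in instrumentation_points(controller, MONITORED):
            print(f"instrument after offset {offset}: {opname} {name}")
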
  • Software dynamics: a new measure of performance for real-time software

    Page(s): 120 - 126

    This paper presents an approach that uses concepts from continuous dynamical systems to describe the behavior of real-time software. The idea is applicable to nearly all real-time software architectures. It relies on changing deadlines and taking quantitative measurements of how many deadlines are missed or of the total time of missed deadlines. The resulting graph can be approximated by a straight line or an exponential curve, from which dynamic parameters, such as sensitivity and time constant, can be inferred.

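    The abstract mentions approximating the measured curve by a straight line or an exponential and reading off parameters such as a time constant. The sketch below fits an exponential y = A * exp(-d / tau) to made-up (deadline, total-missed-time) data with scipy; both the data and the exact parameterization are assumptions, not taken from the paper.

        # Hypothetical sketch: fit an exponential to (deadline, total-missed-time)
        # measurements and read off a time constant.

        import numpy as np
        from scipy.optimize import curve_fit

        # deadline settings (ms) and the total time of missed deadlines observed (ms)
        deadline = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
        missed   = np.array([42.0, 25.0, 16.0, 9.0, 6.0, 3.5])

        def model(d, amplitude, tau):
            return amplitude * np.exp(-d / tau)

        (amplitude, tau), _ = curve_fit(model, deadline, missed, p0=(50.0, 10.0))
        print(f"fitted amplitude ~ {amplitude:.1f} ms, time constant ~ {tau:.1f} ms")
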
  • On the run-time verification of autonomy software

    Page(s): 58 - 65

    The mission-critical and dependability aspects of autonomous systems demand a formal level of assurance in ascertaining their mission-survivability capabilities. The complete understanding of system autonomy and its verification and validation (V&V) continue to pose technical challenges. In recent years, formal methods have shown considerable promise in the area of V&V of autonomous systems. In this paper, we further explore the applicability of model checking techniques to the run-time verification of autonomy software such as automated planning and scheduling algorithms. We illustrate our proposed approach to runtime verification through a case study of the FireSat satellite. We also discuss our experiences and ongoing research activities in this direction.

  • Addressing software security and mitigations in the life cycle

    Page(s): 201 - 206

    Traditionally, security is viewed as an organizational and information technology (IT) systems function comprising firewalls, intrusion detection systems (IDS), system security settings, and patches to the operating system (OS) and the applications running on it. Until recently, little thought had been given to the importance of security as a formal approach in the software life cycle. The Jet Propulsion Laboratory has approached the problem through the development of an integrated formal software security assessment instrument (SSAI) with six foci for the software life cycle.

  • Applying fault correction profiles

    Page(s): 185 - 192

    In general, software reliability models have focused on modeling and predicting the failure detection process and have not given equal priority to modeling the fault correction process. However, it is important to address the fault correction process in order to identify the need for process improvements. Process improvements, in turn, will contribute to achieving software reliability goals. We introduce the concept of a fault correction profile - a set of functions that predict fault correction events as a function of failure detection events. The fault correction profile identifies the need for process improvements and provides information for developing fault correction strategies. Related to the fault correction profile is the goal fault correction profile, which represents the fault correction goal against which the achieved fault correction profile can be compared. This comparison motivates the concept of fault correction process instability and the attributes of instability. Applying these concepts to the NASA Goddard Space Flight Center fault correction process and its data, we demonstrate that the need for process improvement can be identified and that improvements in process would contribute to meeting product reliability goals.

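    The authors' actual profile functions are not reproduced here. As a rough numerical stand-in for the idea of predicting fault correction events from failure detection events, the sketch below assumes a fixed correction delay and a limited correction capacity per interval and derives a cumulative corrected-fault curve from a cumulative detected-fault curve. All numbers and both assumptions are invented.

        # Hypothetical fault-correction profile: cumulative corrections predicted from
        # cumulative detections, assuming a fixed correction delay (in intervals) and a
        # maximum number of faults correctable per interval. Not the paper's model.

        def correction_profile(detected, delay=2, capacity=3):
            """detected[t] = cumulative faults detected by interval t."""
            corrected, done = [], 0
            for t in range(len(detected)):
                # faults eligible for correction: detected at least `delay` intervals ago
                eligible = detected[t - delay] if t >= delay else 0
                done = min(eligible, done + capacity)
                corrected.append(done)
            return corrected

        detected = [2, 5, 9, 12, 14, 15, 15, 15]      # cumulative failure detections
        corrected = correction_profile(detected)
        for t, (d, c) in enumerate(zip(detected, corrected)):
            print(f"interval {t}: detected {d}, predicted corrected {c}, backlog {d - c}")
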
  • Establishing a generic and multidimensional measurement repository in CMMI context

    Page(s): 12 - 20

    We propose a measurement repository for collecting, storing, analyzing and reporting measurement data based on the requirements of the Capability Maturity Model Integration (CMMI). Our repository is generic, flexible and integrated, supporting a dynamic measurement system. It was originally designed to support Ericsson Research Canada's business information needs. Our multidimensional repository can relate measurement information needs to CMMI process and product requirements. The data model is based on a hierarchical and multidimensional definition of measurement data and has been developed around the concept of a data warehouse environment. Reporting features are based on the definition of queries to online analytical processing (OLAP) cubes. OLAP cubes are created as materialized views of the measurement data, and the user functionalities are implemented as analytical drill-down/roll-up capabilities and as indicator and trend analysis capabilities.

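    The repository described above is built on a data warehouse and OLAP cubes. As a small stand-in for the drill-down/roll-up behavior it mentions, the sketch below aggregates a few hypothetical measurement records along two dimensions with pandas; the dimensions and measures are invented, not Ericsson's.

        # Hypothetical measurement records aggregated along dimensions, OLAP-style.
        import pandas as pd

        records = pd.DataFrame([
            {"project": "A", "phase": "design", "month": "2003-01", "defects": 4, "effort_h": 120},
            {"project": "A", "phase": "test",   "month": "2003-02", "defects": 9, "effort_h": 200},
            {"project": "B", "phase": "design", "month": "2003-01", "defects": 2, "effort_h":  80},
            {"project": "B", "phase": "test",   "month": "2003-02", "defects": 5, "effort_h": 150},
        ])

        # Roll-up: total defects and effort per project.
        print(records.groupby("project")[["defects", "effort_h"]].sum())

        # Drill-down: the same measures broken out by project and phase.
        cube = records.pivot_table(index="project", columns="phase",
                                   values=["defects", "effort_h"], aggfunc="sum")
        print(cube)
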
  • Sensitivity of software usage to changes in the operational profile

    Page(s): 157 - 164

    In this paper we present a methodology for uncertainty analysis of the software operational profile suitable for large, complex component-based applications and applicable throughout the software life cycle. Within this methodology, we develop a method for studying the sensitivity of software usage to changes in the operational profile based on perturbation theory. This method is then illustrated on three case studies: software developed for the European Space Agency, an e-commerce application, and real-time control software. Results show that components with small execution rates are the most sensitive to changes in the operational profile. This observation is important because rarely executed components usually handle critical functionalities such as exception handling or recovery.

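    The paper's perturbation-theoretic formulas are not shown here. The sketch below only illustrates the general idea numerically: represent component usage as the stationary distribution of a Markov transition matrix, perturb one transition probability, and observe which components' usage changes most in relative terms. The three-component matrix is made up.

        # Illustrative sensitivity of component usage to operational-profile changes
        # (made-up 3-component transition matrix, not the paper's method).

        import numpy as np

        def stationary(P, iters=2000):
            pi = np.full(P.shape[0], 1.0 / P.shape[0])
            for _ in range(iters):
                pi = pi @ P            # power iteration on the row-stochastic matrix
            return pi / pi.sum()

        P = np.array([[0.80, 0.15, 0.05],    # rows: from-component, columns: to-component
                      [0.30, 0.65, 0.05],
                      [0.50, 0.25, 0.25]])

        base = stationary(P)

        # Perturb one rarely taken transition (component 0 -> component 2) by +0.02.
        Q = P.copy()
        Q[0, 2] += 0.02
        Q[0, 0] -= 0.02                       # keep the row stochastic

        delta = stationary(Q) - base
        for i, (u, d) in enumerate(zip(base, delta)):
            print(f"component {i}: usage {u:.3f}, change {d:+.4f}, relative {d/u:+.1%}")
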
  • A metrics based approach for identifying requirements risks

    Page(s): 23 - 28

    The NASA Independent Verification & Validation (IV&V) Facility's metrics data program (MDP) has been tasked with collecting data in the form of metrics on software products from various NASA projects. The goals of the program include improving the effectiveness of software assurance, evaluating the effectiveness of current metrics, identifying and including new metrics, improving the effectiveness of software research, and improving the ability of projects to predict software errors early in the lifecycle. This article presents a model for accomplishing these goals from a requirements position. Identification of metrics from a requirements perspective presents a particularly difficult challenge. First, there are few automated tools to assist in the collection of requirements-based metrics. Second, few metrics have been identified for requirements. In this article, an approach is presented for capturing requirements measurements generated using the Goddard Space Flight Center (GSFC) Software Assurance Technology Center's (SATC) automated requirements measurement (ARM) tool. These measurements are used in combination with newly identified measurements to identify and assign a risk-level metric to each requirement. The assigned requirement risk level serves as an indicator in early-lifecycle analysis for predicting problem requirements and areas within the software that could be more prone to errors. The early identification of high-risk areas allows risk mitigation to be applied during planning of future development, test, and maintenance activities.

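    The paper's actual metrics and risk assignment are not reproduced here. As a simple stand-in, the sketch below combines a few per-requirement measurements of the kind an automated requirements measurement tool might report (weak phrases, incompleteness markers, option keywords, length) into a weighted score and maps it to a risk level. The metric names, weights and thresholds are all invented.

        # Hypothetical requirement risk scoring: weighted combination of per-requirement
        # measurements mapped to a risk level. Metrics, weights and thresholds are invented.

        WEIGHTS = {"weak_phrases": 3.0, "tbd_count": 5.0, "options": 2.0, "words": 0.05}

        def risk_level(metrics):
            score = sum(WEIGHTS[name] * value for name, value in metrics.items())
            if score >= 12:
                return "high", score
            if score >= 6:
                return "medium", score
            return "low", score

        requirements = {
            "REQ-001": {"weak_phrases": 0, "tbd_count": 0, "options": 0, "words": 18},
            "REQ-014": {"weak_phrases": 2, "tbd_count": 1, "options": 1, "words": 55},
            "REQ-030": {"weak_phrases": 1, "tbd_count": 0, "options": 0, "words": 120},
        }

        for req_id, metrics in requirements.items():
            level, score = risk_level(metrics)
            print(f"{req_id}: score {score:.1f} -> {level} risk")
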
  • Optimal software release time incorporating fault correction

    Page(s): 175 - 184

    The "stopping rule" problem, which involves determining an optimal release time for a software application at which costs justify the stop-test decision, has been addressed by several researchers. However, most of these research efforts assume instantaneous fault correction, an assumption that underlies many software reliability growth models, and hence provide optimistic predictions of both the cost at release and the release time. In this paper, we present an economic cost model that takes explicit fault correction into consideration in order to provide realistic predictions of release time and release cost. We also present a methodology to compute the failure rate of the software in the presence of fault correction, which is necessary in order to apply the cost model. We illustrate with a case study the utility of the cost model in providing realistic predictions of release time and cost.

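    The paper's economic model is not reproduced here. The sketch below only conveys the shape of a stopping-rule computation under common textbook assumptions: a Goel-Okumoto style expected-failure curve, a cheaper cost to fix faults before release than after, and a cost per unit of testing time. Sweeping candidate release times and taking the minimum total cost stands in for the optimization; every parameter is invented.

        # Hypothetical stopping-rule sketch: sweep candidate release times and pick the
        # one minimizing total cost = testing cost + pre-release fix cost + post-release
        # fix cost. The mean-value function and all parameters are assumptions.

        import math

        A, B = 120.0, 0.05          # expected total faults, detection rate (per week)
        C_TEST = 4.0                # cost of one week of testing
        C_PRE, C_POST = 1.0, 12.0   # cost to fix a fault before / after release

        def expected_detected(t):
            return A * (1.0 - math.exp(-B * t))   # Goel-Okumoto mean value function

        def total_cost(t):
            found = expected_detected(t)
            return C_TEST * t + C_PRE * found + C_POST * (A - found)

        best_t = min(range(0, 201), key=total_cost)
        print(f"optimal release time ~ week {best_t}, expected cost {total_cost(best_t):.1f}")
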
  • Software verification and validation within the (rational) unified process

    Page(s): 216 - 220

    We discuss the integration of software verification and validation activities (as defined by the IEEE Std. 1012) within the unified process. We compare and contrast these two process frameworks, and identify the aspects of verification and validation that are directly supported, partially supported or not supported by the unified process.

  • Maintaining verification test consistency between executable specifications and embedded software in a virtual system integration laboratory environment

    Page(s): 221 - 228

    The root causes of the majority of software defects discovered during the integration test phase of an embedded system development project have been attributed to errors in understanding and implementing requirements. The independence that typically exists between the system and software development processes provides ample opportunity for the introduction of these types of faults. This paper shows a viable method of verifying object software using the same tests created to verify an executable specification-based system design from which the software is developed. If the object software passes the same tests used to verify the system design, it can be said that the software has correctly implemented all of the known system requirements. This method enables the discovery of functional faults prior to the system integration test phase of a project. Previous research has shown that finding software faults early in the development cycle not only improves software assurance, but also reduces software development expense and time.

  • Generating MC/DC adequate test sequences through model checking

    Page(s): 91 - 96

    We present a method for automatically generating test sequences to satisfy MC/DC-like structural coverage criteria of software behavioral models specified in state-based formalisms. The use of temporal logic for characterizing test criteria and the application of model-checking techniques for generating test sequences to those criteria have been of interest in software verification research for some time. Nevertheless, criteria for which constraints span more than one test sequence, such as the modified condition/decision coverage (MC/DC) mandated for critical avionics software, cannot be characterized in terms of a single temporal property. This paper discusses a method for recasting two-sequence constraints on the original model as a single-sequence constraint expressed in temporal logic on a slightly modified model. The test sequence generated by a model checker for the modified model can easily be separated into two different test sequences for the original model, satisfying the given test criteria. The approach has been successful in generating MC/DC test sequences from a model of the mode logic in a flight guidance system.

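    The model-checking machinery of the paper (trap properties over a modified model) is not shown here. The sketch below only illustrates what MC/DC asks for: for each condition in a decision, a pair of test vectors that differ in that condition alone and flip the decision's outcome. The example decision is invented.

        # Enumerate MC/DC independence pairs for a small boolean decision
        # (illustrates the coverage criterion only, not the paper's approach).

        from itertools import product

        def decision(a, b, c):
            return (a or b) and c            # hypothetical decision with three conditions

        conditions = ["a", "b", "c"]
        vectors = list(product([False, True], repeat=len(conditions)))

        for i, name in enumerate(conditions):
            pairs = []
            for v in vectors:
                flipped = list(v)
                flipped[i] = not flipped[i]
                flipped = tuple(flipped)
                if decision(*v) != decision(*flipped):
                    pairs.append((v, flipped))
            # each pair shows the condition independently affecting the decision outcome
            print(f"condition {name}: {len(pairs) // 2} independence pair(s), e.g. {pairs[0]}")
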
  • Assessing IV & V benefits using simulation

    Page(s): 97 - 101

    There is a critical need for cost-effective independent verification and validation (IV & V). The goal of this research is to create a flexible tool that NASA IV & V can use to quantitatively assess the economic benefit of performing IV & V on NASA software development projects and to optimize that benefit across alternative IV & V plans. The tool is based on extensive research into software process simulation models (SPSMs) conducted at the Software Engineering Institute (SEI) by Watts Humphrey and Marc Kellner (1989), and by Bill Curtis and others (1992). SPSMs can be used to quantify the costs and benefits associated with NASA IV & V practices, enabling management to effectively allocate scarce resources for IV & V activities. In addition, SPSMs facilitate the IV & V of NASA software development processes by enabling checks and performance assessments.

  • Tailorable architecture methods

    Page(s): 152 - 156

    In this paper we discuss a set of architecture-based methods for architecture design and analysis that have been developed over the past 10 years at the Software Engineering Institute. We then discuss the need for integrating these architecture-based methods, both with each other and into an organization's system development life cycle, based on experience with NASA's EOSDIS project. We discuss the framework for doing this integration, and present a life cycle view of architecture-based design and analysis methods.

  • Modelling and analysing fault propagation in safety-related systems

    Page(s): 167 - 174

    A formal specification for analysing and implementing multiple-fault diagnosis software is proposed in this paper. The specification computes all potential fault sources that correspond to a set of triggered alarms for a safety-related system, or part of a system. The detection of faults occurring in a safety-related system is a fundamental function that needs to be addressed efficiently. Safety monitors for fault diagnosis have been extensively studied in areas such as aircraft systems and the chemical industries. With the introduction of intelligent sensors, diagnosis results are made available to monitoring systems and operators. For complex systems composed of thousands of components and sensors, the diagnosis of multiple faults and the computational burden of processing test results are substantial. This paper addresses the multiple-fault diagnosis problem for zero-time propagation using a fault propagation graph. Components, represented as nodes in a fault propagation graph, are allocated alarms. When faults occur and are propagated, some of these alarms are triggered. The allocation of alarms to nodes is based on a severity analysis performed using a form of failure mode and effect analysis on the components in the system.

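    As a toy re-creation of diagnosis over a fault propagation graph (not the paper's formal specification), the sketch below assumes zero-time propagation along directed edges and alarms attached to some nodes, and reports which single-component faults are consistent with the set of triggered alarms, i.e., whose reachable alarm set matches what was observed. For brevity it considers only single-fault candidates; the graph and alarms are invented.

        # Toy single-fault diagnosis over a fault propagation graph.
        # A fault at node X triggers every alarm reachable from X (zero-time propagation).

        propagation = {                      # directed edges: fault propagates a -> b
            "pump": ["valve", "sensor1"],
            "valve": ["sensor2"],
            "sensor1": [],
            "sensor2": [],
            "controller": ["valve"],
        }
        alarms = {"sensor1": "A1", "sensor2": "A2"}   # alarms attached to some nodes

        def reachable_alarms(source):
            seen, stack, triggered = set(), [source], set()
            while stack:
                node = stack.pop()
                if node in seen:
                    continue
                seen.add(node)
                if node in alarms:
                    triggered.add(alarms[node])
                stack.extend(propagation.get(node, []))
            return triggered

        observed = {"A1", "A2"}
        candidates = [n for n in propagation if reachable_alarms(n) == observed]
        print("single-fault candidates consistent with", observed, ":", candidates)
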
  • A stress-point resolution system based on module signatures

    Page(s): 193 - 198

    This paper introduces a framework to provide design and testing guidance through a stress-point resolution system based on a module signature for module categorization. A stress-point resolution system includes stress-point identification and the selection of appropriate mitigation activities for the identified stress-points. Progress has been made in identifying stress-points to target the most fault-prone modules in a system using the module signature classification technique. When the stress-point prediction method was applied to a large Motorola production system with approximately 1500 modules and the classified modules were compared to change reports, misclassification errors occurred at a rate of less than 2%. After identifying the stress-point candidates, localized remedial actions should be undertaken. This algorithmic classification may suggest further insights into defect analysis and correction activities to enhance the software development strategies of software designers and testers.

  • Diagnosing architectural degeneration

    Page(s): 137 - 142

    Software systems evolve over time and undergo changes that can lead to a degeneration of the systems' architecture. Degeneration may eventually reach a level where a complete redesign of the software system is necessary, a task that requires significant effort. In this paper, we start by presenting examples of such degeneration and continue with an analysis of technologies that can be used to diagnose degeneration. These technologies can be employed in identifying degeneration so that it can be treated as early as possible, before it is too late and the system has to undergo a costly redesign.

  • A component-based model for building reliable multi-agent systems

    Page(s): 41 - 50

    In this article, we describe a specification model that seeks to couple formal specification methods and agent-oriented software engineering techniques. The objective is to allow faster formal development of flexible and reusable multiagent systems (MAS) with strict requirements of quality and reliability. The specification model is specifically tailored to support the highly dynamic and evolving characteristics of MAS. The agents are formally specified and instantiated by a framework, and reuse is achieved by transforming the framework structural model into multiple agents. Agent flexibility and adaptation capacity are ensured through the use of design patterns and properties such as encapsulation, high cohesion and low coupling, and through the definition of a formal XML model. The specification model represented in XML can be transformed into a code block that needs few adjustments, granting the system high flexibility and trustworthiness. The purpose is to reduce the time, effort and costs associated with designing and developing MAS with high quality and reliability requirements.

  • Validation of object oriented software design with fault tree analysis

    Page(s): 209 - 215

    Software plays an increasing role in safety-critical systems. Increasing the quality and reliability of software has become a major objective of the software development industry. Researchers and industry practitioners look for innovative techniques and methodologies that could be used to increase their confidence in software reliability. Fault tree analysis (FTA) is one method under study at the Software Assurance Technology Center (SATC) of NASA's Goddard Space Flight Center to determine its relevance to increasing the quality and reliability of software. This paper briefly reviews some of the previous research in the area of software fault tree analysis (SFTA). Next we discuss a roadmap for the application of SFTA to software, with special emphasis on object-oriented design. This is followed by a brief discussion of the paradigm for transforming a software design artifact (i.e., a sequence diagram) into its corresponding software fault tree. Finally, we discuss the challenges, advantages and disadvantages of SFTA.

  • Software impact analysis in a virtual environment

    Page(s): 143 - 151

    With the relentless growth in software, automated support for visualizing and navigating software artifacts is no longer a luxury. As packaged software components and middleware occupy more and more of the software landscape, interoperability relationships point to increasingly relevant software change impacts. Packaged software now represents over thirty-two percent of the software portfolio in most organizations, according to a benchmark (Rubin, 2001). While traceability and dependency analysis have effectively supported impact analysis in the past, they fall short today, as their webs of dependency information extend beyond most software engineers' ability to comprehend them. This paper describes research into extending current software change impact analysis to incorporate software architecture dependency relationships. We discuss how we address the extensive dependency information involved and extend impact analysis using software visualization, and we outline our approach to employing the software impact analysis virtual environment.

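    The visualization and virtual-environment aspects of the paper are not shown here. At its core, impact analysis follows dependency relationships, and the sketch below computes the transitive impact set of a changed component over a made-up dependency graph: edges point from a component to the components it depends on, so impact flows against the edges.

        # Toy change-impact analysis: given "depends on" edges, find every component
        # transitively affected by a change to one component. Graph is invented.

        from collections import defaultdict, deque

        depends_on = {                 # component -> components it depends on
            "gui": ["services"],
            "services": ["middleware", "db_adapter"],
            "reports": ["db_adapter"],
            "middleware": [],
            "db_adapter": ["vendor_pkg"],
            "vendor_pkg": [],
        }

        # Invert the edges: who is affected when a given component changes?
        affected_by = defaultdict(set)
        for component, deps in depends_on.items():
            for dep in deps:
                affected_by[dep].add(component)

        def impact_set(changed):
            seen, queue = set(), deque([changed])
            while queue:
                node = queue.popleft()
                for user in affected_by[node]:
                    if user not in seen:
                        seen.add(user)
                        queue.append(user)
            return seen

        print("changing vendor_pkg impacts:", sorted(impact_set("vendor_pkg")))
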