
Proceedings of the 1998 IEEE Workshop on Application-Specific Software Engineering Technology (ASSET-98)

Date: 28 March 1998

  • Proceedings. 1998 IEEE Workshop on Application-Specific Software Engineering and Technology. ASSET-98 (Cat. No.98EX183)

  • Panel discussion: Lucrative wireless telecom applications: five years from now - have not even been invented

    Page(s): 174 - 176
  • Author index

    Page(s): 176
  • Building business processes using a state transition model on World Wide Web

    Page(s): 2 - 7

    Describes a state transition model for building business processes using the World Wide Web (WWW) as the user interface. A process is modeled as a collection of states, with each task represented by one state and several types of transition between states. The workflow model proposed in this paper allows the user to execute tasks concurrently, thus reducing the overall execution time. The design of the system to support integrated workflow, and implementation issues using ODBC and the WWW, are discussed in the paper. This system allows users to collaborate on a job in a distributed environment.

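    The workflow model described above (tasks as states, transitions that enable later tasks, concurrent execution of independent tasks) can be pictured with a small sketch. This is not the paper's implementation; the Python class and the example task names below are hypothetical.

        # Minimal workflow-as-state-transition sketch (illustrative only).
        from dataclasses import dataclass, field

        @dataclass
        class Workflow:
            done: set = field(default_factory=set)
            enables: dict = field(default_factory=dict)   # task -> tasks it enables on completion
            prereqs: dict = field(default_factory=dict)   # task -> tasks that must finish first

            def add_task(self, name, enables=()):
                self.enables[name] = list(enables)
                self.prereqs.setdefault(name, [])
                for t in enables:
                    self.prereqs.setdefault(t, []).append(name)

            def ready(self):
                """Tasks whose prerequisites are complete; these may run concurrently."""
                return [t for t, ps in self.prereqs.items()
                        if t not in self.done and all(p in self.done for p in ps)]

            def complete(self, name):
                self.done.add(name)

        # Hypothetical purchase process: review and budget check run concurrently.
        wf = Workflow()
        wf.add_task("submit", enables=("review", "budget_check"))
        wf.add_task("review", enables=("approve",))
        wf.add_task("budget_check", enables=("approve",))
        wf.add_task("approve")
        wf.complete("submit")
        print(wf.ready())   # ['review', 'budget_check']
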
  • Adaptable software for communications in video conferencing

    Page(s): 8 - 13

    Video conferencing systems (VCS) have become practical in commercial and research institutions because of advances in networking and multimedia technologies. A video conferencing session involves multiple parties, possibly geographically dispersed, that exchange real-time video data. Anomalies such as site failure and network partitioning affect the effectiveness and utilization of the communication capabilities. Video conferencing systems lack the ability to adapt dynamically to variations in system resources such as network bandwidth, CPU utilization, memory and disk storage. In a VCS, changes in parameters such as frame sizes, codec schemes, color depths, and frame resolutions can be agreed upon by users interactively based on their Quality of Service requirements, but they cannot be made automatically based on distributed measurements of the currently available resources. We need to limit the users' burden in keeping the system running in the mode best suited to the current environment, and make it possible to provide the best possible service based on the status of the system. Incorporating adaptability into a video conferencing system minimizes the effects of variations in the system environment on the quality of video conference sessions. In this paper we present the following. First, we briefly discuss the concept of adaptability and the basic idea for achieving adaptability in a video conferencing system. Next, we identify and describe some of the common anomalies encountered in a distributed system. We further characterize the Quality of Service parameters in terms of timeliness, accuracy, and precision; these parameters are also identified in different layers of the software. We give an overview of the NV video conferencing system, which serves as the testbed for our experiments. Then we describe the extensions and modifications to NV and discuss some reconfiguration issues. Finally, a summary of experimental data analyses, observations, and discussions is presented. Specifically, we show how VCS parameters affect communication and computation, and present some guidelines for maintaining timeliness and accuracy when bandwidth decreases. We are conducting a series of experiments that will lead to the development of policies for adaptability at the application, system, and network layers to meet the quality of service requirements. We also study the impact of network constraints in determining the quality of service that can be guaranteed to the user. Based on these experiments, we plan to identify guidelines and expertise that will allow the applications and the network to meet the quality of service requirements at all layers.

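    The adaptability argument above comes down to choosing VCS parameters (frame size, frame rate, color depth) automatically from measured resources instead of asking the users. A minimal illustrative policy follows; the profile table and thresholds are invented for illustration and are not the NV extensions described in the paper.

        # Illustrative bandwidth-driven adaptation policy (hypothetical thresholds).
        PROFILES = [
            # (min_kbps, frame_size,  frames_per_sec, color_depth_bits)
            (1500,      (640, 480),  30,             24),
            (600,       (320, 240),  15,             16),
            (150,       (160, 120),  10,             8),
            (0,         (160, 120),  5,              8),
        ]

        def choose_profile(measured_kbps):
            """Select the richest profile the currently measured bandwidth can sustain."""
            for min_kbps, size, fps, depth in PROFILES:
                if measured_kbps >= min_kbps:
                    return {"frame_size": size, "fps": fps, "color_depth": depth}

        print(choose_profile(800))   # {'frame_size': (320, 240), 'fps': 15, 'color_depth': 16}
        print(choose_profile(100))   # falls back to the lowest profile when bandwidth drops
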
  • Applying a modified EQL optimization method to MRL rule-based programs

    Page(s): 75 - 76

    We modify our previously developed (Blaz Zupan et al., IEEE Trans. on Knowledge and Data Eng., April 1997) optimization method for EQL (EQuational rule-based Language) systems to optimize MRL (Macro Rule-based Language) systems. In particular, we show how the EQL optimization method can be applied to an MRL system after its corresponding state-space graphs have been constructed. Since the time and space complexity of a bidirectional search, O(b^(d/2)), is better than the breadth-first search's O(b^d), we use bidirectional search and bidirectional breadth-first search strategies instead of the original bottom-up and breadth-first search strategies employed by Blaz Zupan et al. As in that paper, the resulting optimized MRL system (1) has a better response time in general because it requires fewer rule firings to reach the fixed point, (2) is stable because it has no cycles, and (3) has no redundant rules.

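    The complexity claim is the standard one: two breadth-first frontiers that meet in the middle each explore roughly O(b^(d/2)) states, versus O(b^d) for a single frontier of depth d. A generic bidirectional breadth-first search sketch over an explicit state graph (not the MRL/EQL tooling; the example graph is made up) looks like this:

        # Generic bidirectional breadth-first search sketch (illustrative only).
        # Each frontier expands roughly b^(d/2) states instead of b^d.
        from collections import deque

        def bidirectional_bfs(start, goal, neighbors):
            if start == goal:
                return 0
            fwd, bwd = {start: 0}, {goal: 0}
            qf, qb = deque([start]), deque([goal])
            while qf and qb:
                # Expand the smaller frontier one level.
                frontier, seen, other = (qf, fwd, bwd) if len(qf) <= len(qb) else (qb, bwd, fwd)
                for _ in range(len(frontier)):
                    node = frontier.popleft()
                    for nxt in neighbors(node):
                        if nxt in other:                   # the two searches meet
                            return seen[node] + 1 + other[nxt]
                        if nxt not in seen:
                            seen[nxt] = seen[node] + 1
                            frontier.append(nxt)
            return None   # no path between start and goal

        # Tiny undirected state graph used only for demonstration.
        graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
        print(bidirectional_bfs(0, 4, lambda n: graph[n]))   # 3
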
  • Dependency characterization in path-based approaches to architecture-based software reliability prediction

    Page(s): 86 - 89

    Prevalent black-box approaches to software reliability modeling are inappropriate for modeling the failure behavior of modern, component-based heterogeneous systems. Reliability prediction of applications that takes their architecture into account is absolutely essential. The path-based approaches to architecture-based software reliability prediction rely on a fundamental assumption that successive executions of the components are independent, which leads to very pessimistic estimates of software reliability. In this paper, we describe dependency characterization, which is a major bottleneck in the application of path-based approaches to real-life systems, and propose a way to resolve this issue based on time-dependent failure intensity.

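    Under the independence assumption the abstract refers to, a path's reliability is the product of the reliabilities of every component execution along it, and the system estimate is the path-probability-weighted sum; multiplying on every visit is exactly what makes the estimates pessimistic when successive executions are in fact correlated. A toy sketch with made-up numbers:

        # Toy path-based reliability estimate under the independence assumption.
        # Component reliabilities, paths and probabilities are made-up numbers.
        component_reliability = {"A": 0.999, "B": 0.995, "C": 0.990}

        # Execution paths as component sequences, with observed occurrence probabilities.
        paths = [
            (["A", "B", "A"], 0.7),
            (["A", "C"],      0.3),
        ]

        def path_reliability(path):
            r = 1.0
            for comp in path:           # independence => multiply once per execution
                r *= component_reliability[comp]
            return r

        system_reliability = sum(p * path_reliability(path) for path, p in paths)
        print(round(system_reliability, 5))
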
  • Telecommunication software validation using a synchronous approach

    Page(s): 98 - 101

    Telephone services and features provide a challenging application domain for the development and validation of real-time software. This paper reviews our experiment on incremental validation of services and features, carried out in collaboration with CNET-France Telecom. Because of the well-known “feature interaction problem”, telephone software can be considered safety-critical software, and must exhibit qualities such as correctness and safety with very high assurance. For this class of software, the requirements engineering phase usually ends in a formal specification expressed in some logic; therefore, the validation can be performed in a very rigorous and formal way using proof tools and/or specification-based testing techniques. Much critical software is reactive: it continuously reacts to its environment at the environment's speed. Therefore, it must satisfy strong temporal causalities between external events, in order to bring about or maintain the desired relationships in the environment. We have developed a new approach for specification-based testing of synchronous reactive software and its associated environment. The specification language is LUSTRE, which is both a temporal logic and a synchronous data-flow programming language. We have successfully modelled a telecommunication system as a reactive software system; this allowed us to extensively apply our testing approach to this type of software. A synchronous model of a telecommunication system is described, a specification of the model is then given, and the validation work is presented.

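    To give a flavour of the synchronous, specification-based testing style described above (the paper uses LUSTRE, not Python): a property is written as an observer that reads the input and output streams tick by tick and must emit True at every step. The telephone property, deadline and traces below are invented for illustration.

        # Synchronous-observer sketch: the property must hold at every tick.
        def observer(offhook_stream, dialtone_stream, deadline=2):
            """Property: whenever the phone goes off-hook, dial tone follows within `deadline` ticks."""
            pending = None
            for tick, (offhook, dialtone) in enumerate(zip(offhook_stream, dialtone_stream)):
                if offhook and pending is None:
                    pending = tick
                if dialtone:
                    pending = None
                if pending is not None and tick - pending > deadline:
                    yield False        # property violated at this tick
                else:
                    yield True

        trace_ok  = list(observer([0, 1, 1, 1, 1], [0, 0, 1, 0, 0]))
        trace_bad = list(observer([0, 1, 1, 1, 1], [0, 0, 0, 0, 0]))
        print(all(trace_ok), all(trace_bad))   # True False
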
  • COTS software failures: can anything be done?

    Page(s): 140 - 144

    Software development is quickly becoming more a process of acquiring software components and composing them than of building systems from scratch. From a time-to-market perspective this is ideal, but from a quality perspective it is worrisome. This paper addresses steps that component integrators should follow before relying on someone else's software libraries and components.

  • The application of the SequenceL language to complicated database applications

    Page(s): 166 - 171

    There is an ongoing discussion at the highest levels of the government concerning data morgues. The concern has to do with current hardware capabilities that permit the acquisition and storage of vast amounts of data, and the inability of scientists armed with current software technology to process and analyze the data. The current problem actually demonstrates that advances in computer software have not kept pace with advances in computer hardware. If corresponding software advances can be made, the data may be found to be comatose rather than dead. Among other root problems, the comatose data is symptomatic of the fact that currently available software technology is not based upon abstractions that appropriately simplify approaches to complex problems and data sets. In order to analyze the large data sets containing, e.g., telemetry data, exploratory or data mining programs must be written. When written in traditional computer languages, these programs require scientists to work with computer specialists and, therefore, require much time to deploy. Higher-level languages could well support this activity, particularly languages providing abstractions that scientists could employ without the assistance of specialists. If more exploratory programs can be written in a reduced amount of time, more of the comatose data can be analyzed. Furthermore, new abstractions may provide new points of view, and the differing points of view may lead to new insights into how the data can best be analyzed. At the root of any technical solution to the comatose data problem will be a computer language; the higher the level of the root language, the faster the technical solutions will be found. This research effort is focused on computer language improvement.

  • A protocol architecture for multimedia document retrieval over high speed LANs

    Page(s): 116 - 121

    The emergence of gigabit local area networks (G-LANs) has spurred tremendous interest in supporting networked multimedia applications over a LAN. Such LANs do not support the notion of QoS required by multimedia documents, due to their asynchronous media access protocol. In this paper, we propose a dynamic bandwidth management scheme that uses the concept of time division multiple access (TDMA). A significant performance improvement is observed in the experimental results, especially in transmission rates and jitter. We also propose a framework for graceful degradation of the playout quality of multimedia objects in cases where the LAN's total capacity is not sufficient to meet the overall demand.

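    The bandwidth-management idea can be pictured as carving each transmission frame into slots and handing streams slots in proportion to their reserved rates; whatever is left over is what asynchronous traffic, or graceful degradation, has to work with. The slot count and stream demands below are made up and are not the paper's scheme.

        # Illustrative TDMA-style slot allocation (hypothetical numbers).
        SLOTS_PER_FRAME = 100          # one frame = 100 transmission slots

        def allocate_slots(demands):
            """Share slots in proportion to each stream's demand (kbps); leftovers stay free."""
            total = sum(demands.values())
            return {stream: (SLOTS_PER_FRAME * d) // total for stream, d in demands.items()}

        streams = {"video-1": 1200, "video-2": 800, "audio-1": 64}
        print(allocate_slots(streams))
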
  • On-board maintenance for long-life systems

    Page(s): 69 - 74

    Because of the low-power, low-cost, high-reliability and high-performance goals of new-generation spaceborne computing systems, traditional approaches to fault-tolerant, ultra-reliable systems that rely on custom-built hardware and extensive component/subsystem replication will not be feasible for these systems. In this paper, we present a new concept called “on-board maintenance”. We classify on-board maintenance into three categories: preventive maintenance, perfective maintenance and corrective maintenance. For each type, we present its definition and propose some approaches to its realization.

  • STAR: a CASE tool for requirement engineering

    Page(s): 28 - 33

    Requirement analysis is one of the most critical and time-consuming steps in the software development process. Requirements are usually vague and imprecise in nature. They often conflict with each other, and many conflicts are implicit and difficult to identify. Moreover, assessing the customer's trade-off preferences among the conflicting requirements is challenging. A CASE tool that assists the software developer in identifying conflicting requirements and in analyzing trade-off relationships can be useful. In this paper we introduce a tool for the Specification, Trade-off and Analysis of Requirements (STAR). We briefly describe the formal foundation for STAR, which uses fuzzy logic to specify imprecise requirements. STAR has a set of heuristics for inferring cooperative and conflicting relationships between requirements. Once the conflicting requirements are identified, STAR supports a systematic approach for assessing the relative priority between conflicting requirements.

  • Observation inaccuracy in conformance testing with multiple testers

    Page(s): 80 - 85

    In the conformance testing of a software system, multiple independent testers (or observers) are often used, each providing input messages to and receiving output messages from the software system. For an execution of the software system, if all testers have observed correct results, the intuitive conclusion is that this execution is correct. However, this conclusion may be wrong; this problem is referred to as multi-tester observation inaccuracy. In this paper, for a specification written as a deterministic or nondeterministic finite state machine (FSM), we present a necessary and sufficient condition for an incorrect test observation by multiple testers. For a specification written as a deterministic FSM, we define three types of implementation faults that can cause incorrect test observations: input exchange, forward output shifting and backward output shifting faults. We also propose a strategy for solving the problem of multi-tester observation inaccuracy.

  • Software reliability analysis of three successive generations of a telecommunications system

    Page(s): 122 - 127

    This paper analyzes the data (failure and correction reports) collected on the software of three successive generations of the Brazilian switching system TROPICO-R, during validation and operation. A comparative analysis of the three products is done and the main results are outlined. Emphasis is placed on the evolution of the software and the corresponding failures and corrected faults. The analysis addresses the modifications introduced in system components, the distribution of failures and corrected faults among the components, and the functions fulfilled by the system.

  • Return on investment of software quality predictions

    Page(s): 145 - 150

    Software quality classification models can be used to target reliability enhancement efforts toward high-risk modules. We summarize a generalized classification rule which we have proposed, and discuss cost aspects of a software quality classification model. The contribution of this paper is a demonstration of how to assess the return on investment of model accuracy in the context of a software quality classification model. An industrial case study of a very large telecommunications system illustrates the method. The dependent variable of the model was the probability that a module would have faults discovered by customers; the independent variables were software product and process metrics. The model is compared to random selection of modules for reliability enhancement. Calculation of return on investment can guide selection of the generalized classification rule's parameter so that the model is well suited to the project.

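    The return-on-investment argument compares the faults averted by enhancing the modules the model flags against enhancing the same number of randomly chosen modules. A back-of-the-envelope sketch with hypothetical counts and costs (not the case-study data) follows.

        # ROI comparison sketch: model-guided vs. random module selection.
        modules          = 1000
        fault_prone      = 100      # modules that would actually have customer-discovered faults
        flagged          = 120      # modules the classifier flags for enhancement
        true_positives   = 80       # fault-prone modules among the flagged ones
        cost_enhance     = 1.0      # cost units to enhance one module
        cost_field_fault = 20.0     # cost units of one fault reaching customers

        def net_benefit(faults_averted):
            return faults_averted * cost_field_fault - flagged * cost_enhance

        model_caught  = true_positives
        random_caught = flagged * fault_prone / modules    # expected hits when choosing at random
        for name, caught in [("model", model_caught), ("random", random_caught)]:
            b = net_benefit(caught)
            print(f"{name}: faults averted={caught:.0f}, ROI={b / (flagged * cost_enhance):.1f}x")
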
  • Measuring the effectiveness of a test case

    Page(s): 157 - 159

    This work is concerned with the effectiveness of test cases used in program testing. It is well known that no matter which test-case selection method is used, some programming errors can still escape detection because the program may produce fortuitously correct results. One reason why a program may produce a fortuitously correct result is that it contains expressions of the form exp1 op exp2, and the test case used causes exp1 to assume a special value such that exp1 op exp2 = exp1 regardless of the value of exp2. In that event, if there is an error in exp2, it will never be reflected in the test result.

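    A concrete instance of the masking effect described above: if exp1 op exp2 collapses to exp1 for the chosen input, the test cannot observe an error in exp2. The function and values below are hypothetical.

        # Fortuitous correctness: with exp1 = x and op = '*', choosing x == 0 makes
        # exp1 * exp2 == 0 no matter what exp2 is, so an error in exp2 stays hidden.
        def scaled_offset(x, y):
            return x * (y + 1)      # BUG: the intended expression was x * (y + 2)

        # A test case with x == 0 passes fortuitously:
        assert scaled_offset(0, 5) == 0                           # expected 0, got 0 -- bug hidden
        # A test case with x != 0 exposes the fault:
        print(scaled_offset(3, 5), "!= expected", 3 * (5 + 2))    # 18 != 21
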
  • Model validation using simulated data

    Page(s): 22 - 27

    Effective and accurate reliability modeling requires the collection of comprehensive, homogeneous, and consistent data sets. Failure data required for software reliability modeling is difficult to collect, and even the available data tends to be noisy, distorted and unpredictable. Also, the complexity of real-world data might obscure the properties of reliability models, which are based on simpler assumptions; these properties may be revealed by evaluating the models using simpler data sets. Towards this end, we have created 20 sequences of interfailure times from each of five software reliability models using a rate-based simulation technique, and validated the models using the simulated data sets. In this paper we describe the experimental setup, the model validation results, and the lessons learned during the experiment. Having established the credibility of simulation for generating failure data, we also show how the failure process underlying a failure data set can be described more accurately by simulating it using a combination of reliability models, as opposed to a single model as in conventional analytical techniques.

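    Rate-based simulation in this setting means drawing successive interfailure times from the model's current failure intensity. A minimal sketch for a Jelinski-Moranda-style model follows; the parameter values are invented, and the paper simulates data from five different models rather than this one alone.

        # Minimal rate-based simulation of interfailure times.  In a
        # Jelinski-Moranda-style model, after i faults are removed the failure rate
        # is phi * (N - i), and the next interfailure time is exponential at that rate.
        import random

        def simulate_interfailure_times(n_faults=20, phi=0.01, seed=1):
            random.seed(seed)
            times = []
            for removed in range(n_faults):
                rate = phi * (n_faults - removed)
                times.append(random.expovariate(rate))
            return times

        data = simulate_interfailure_times()
        print([round(t, 1) for t in data[:5]], "... expected gaps lengthen as faults are removed")
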
  • An enhanced authentication protocol for personal communication systems

    Page(s): 128 - 132

    The mutual authentication protocols of GSM and IS-41 are computationally efficient and thwart masquerading and eavesdropping. However, these protocols do not support non-repudiation of service. We propose a simple authentication protocol for personal communication systems emphasizing non-repudiation and playback attack prevention. We extend the core authentication functions of GSM to include a one-way function that establishes trust between the mobile unit and the visitor location register. The protocol is presented using a general notation and semantics, including the major message flows.

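    For orientation only, here is a generic challenge-response exchange built on a keyed one-way function (HMAC). This is not the protocol proposed in the paper (real non-repudiation would also need something the subscriber cannot later deny, such as a signature), but it shows the basic one-way-function handshake and why a fresh random challenge defeats playback.

        # Generic challenge-response sketch with a keyed one-way function.
        import hmac, hashlib, os

        shared_key = os.urandom(16)      # secret known to the mobile unit and the network

        def respond(key, challenge):
            """One-way response: infeasible to recover the key from (challenge, response)."""
            return hmac.new(key, challenge, hashlib.sha256).digest()

        # The visited network issues a fresh random challenge (prevents playback/replay):
        challenge       = os.urandom(16)
        mobile_response = respond(shared_key, challenge)
        expected        = respond(shared_key, challenge)
        print("authenticated:", hmac.compare_digest(mobile_response, expected))
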
  • Early prediction of project schedule slippage

    Page(s): 40 - 45

    Schedule slippage can be avoided if timely preventive measures are adopted. The most common solution involves deploying additional manpower to boost productivity, but increasing the manpower deployed can hasten product development only up to a certain point. Staffing beyond this elusive optimum can actually delay the ultimate completion of a project. To complicate the issue, the optimal staffing level for minimal development time is not easily ascertainable. However, the predicted development time of a product and the skill level of the development team can be used as indicators of schedule slippage. This paper uses the Gamma model to provide skill-level and completion-time insights into optimally staffing a project.

  • Reliability prediction of a trajectory verification system

    Page(s): 63 - 68

    The existence of software faults in safety-critical systems is not tolerable. The goals of software reliability assessment are estimating the failure probability of the program, θ, and gaining statistical confidence that the estimate of θ is realistic. The paper presents practical problems and challenges encountered in an ongoing effort to assess and quantify the software reliability of NASA's Day-of-Launch I-Load Update (DOLILU II) system, which has been in operational use for several years. A Bayesian framework is chosen for the reliability assessment, because it allows the incorporation of failure-free executions, observed in the operational environment, into the reliability prediction.

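    The Bayesian step can be illustrated with the simplest possible prior: a uniform Beta(1, 1) prior on the per-execution failure probability θ, updated by n consecutive failure-free executions, gives a Beta(1, n + 1) posterior, so P(θ ≤ t) = 1 - (1 - t)^(n+1). The prior choice and the numbers below are illustrative and are not the DOLILU II assessment itself.

        # Failure-free executions under a uniform Beta(1, 1) prior on theta.
        import math

        def posterior_confidence(n_failure_free, theta_bound):
            """P(theta <= theta_bound) after n failure-free runs, uniform prior."""
            return 1.0 - (1.0 - theta_bound) ** (n_failure_free + 1)

        def runs_needed(theta_bound, confidence):
            """Failure-free runs needed to claim theta <= theta_bound at the given confidence."""
            return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - theta_bound)) - 1

        print(round(posterior_confidence(1000, 0.001), 2))   # ~0.63
        print(runs_needed(0.001, 0.99))                      # roughly 4600 runs
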
  • Effect of architecture configuration on software reliability and performance estimation

    Page(s): 90 - 95

    This paper presents a case study that enables the early prediction of software reliability and performance at the architecture design stage. Software architecture design is a crucial stage in the software development process, especially in developing large-scale software. Early prediction of the reliability and performance of the software can be used as a basis for making design decisions. We have studied several common architectural styles, with emphasis on the pipe-filter and batch-sequential styles, and observed the impact of different configurations on reliability and performance measurements. Moreover, several external factors that might influence these measurements are studied. The results show that altering the architecture configuration to attain higher reliability and/or better performance is feasible, depending on variations in the execution environment.

  • Ensuring system and software reliability in safety-critical systems

    Page(s): 48 - 53

    Reliability growth models, formal specifications, testing, and safety analysis have been proposed to address system and software reliability. This paper presents a technique called ripple effect analysis, which is well known in software maintenance, for system and software reliability. This technique is useful for ensuring that all the changes that need to be made are indeed made after a software modification. It differs from regression testing, whose purpose is to show that the parts that should not be changed remain unchanged after a software modification. We have used this technique at Guidant-CPI and found that ripple effect analysis is an effective technique for ensuring system and software reliability in developing safety-critical systems.

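    Mechanically, ripple effect analysis amounts to following "used-by" edges in a dependency graph from every modified module and flagging everything reachable for re-examination. The module names and edges below are made up and do not describe the Guidant-CPI system.

        # Illustrative ripple-effect computation over a made-up dependency graph.
        from collections import deque

        depends_on = {            # module -> modules it uses
            "ui": ["core"],
            "core": ["io", "config"],
            "logger": ["config"],
            "io": [],
            "config": [],
        }

        # Invert the edges: which modules use each module?
        used_by = {m: [] for m in depends_on}
        for mod, deps in depends_on.items():
            for d in deps:
                used_by[d].append(mod)

        def ripple(changed):
            """Every module reachable from the changed set along used-by edges."""
            affected, queue = set(changed), deque(changed)
            while queue:
                for client in used_by[queue.popleft()]:
                    if client not in affected:
                        affected.add(client)
                        queue.append(client)
            return affected - set(changed)

        print(sorted(ripple({"config"})))   # ['core', 'logger', 'ui'] must be re-examined
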
  • Use of integrity techniques and risk assessment in system design

    Page(s): 60 - 62

    This paper focuses on developing a formal understanding of “failure” with respect to system implementations. Furthermore, we would like the system design process to be able to leverage this understanding. Our approach is restricted to the class of systems that can be modelled by HFSMs, as described in Winter (1998). The purpose of this paper is to lay out a classification process that can aid in the identification and characterization of techniques for dealing with the different types of system threats. This classification framework leads naturally to a taxonomy of strategies and technologies for dealing with various types of threats.

  • Statecharts supervision models for soft real-time systems

    Page(s): 54 - 59

    Developing reliable software is a major requirement today, as software is increasingly used for critical applications. Applications such as automatic flight control, banking, and telephone switching demand safety and real-time features. In such an environment, the occurrence of a failure may result in damage to the company's reputation and even catastrophic economic consequences. Another issue that must be addressed is low-cost development. This paper presents the software supervision paradigm as a means to improve software reliability during the operational stage of a real-time system, specifically a PBX (Private Branch eXchange). The use of Statecharts for specifying the real-time supervisor is advocated, and the supervision model for the PBX is given. The benefits of this approach are discussed throughout the paper.
