IBM Systems Journal

Issue 1 • 2002

  • Message from the Corporate Director, IBM Software Test

    Page(s): 1
  • Preface

    Page(s): 2 - 3

    Customers and independent software vendors have a right to expect high-quality, defect-free products from IBM. The process used for software development has a great deal to do with the quality of the results, and testing is a crucial part of that process. Because the cost of testing and verification can exceed the cost of design and programming, the methodologies, techniques, and tools used for testing are key to efficient development of high-quality software.

  • Software debugging, testing, and verification

    Page(s): 4 - 12

    In commercial software development organizations, increased complexity of products, shortened development cycles, and higher customer expectations of quality have placed a major responsibility on the areas of software debugging, testing, and verification. As this issue of the IBM Systems Journal illustrates, there are exciting improvements in the underlying technology on all three fronts. However, we observe that due to the informal nature of software development as a whole, the prevalent practices in the industry are still immature, even in areas where improved technology exists. In addition, tools that incorporate the more advanced aspects of this technology are not ready for large-scale commercial use. Hence there is reason to hope for significant improvements in this area over the next several years.

  • Metrics to evaluate vendor-developed software based on test case execution results

    Page(s): 13 - 30

    Various business considerations have led a growing number of organizations to rely on external vendors to develop software for their needs. Much of the day-to-day data from vendors are not available to the vendee, and typically the vendee organization ends up with its own system or acceptance test to validate the software. The 2000 Summer Olympics in Sydney was one such project in which IBM evaluated vendor-delivered code to ensure that all elements of a highly complex system could be integrated successfully. The readiness of the vendor-delivered code was evaluated based primarily on the actual test execution results. New metrics were derived to measure the degree of risk associated with a variety of test case failures such as functionality not enabled, bad fixes, and defects not fixed during successive iterations. The relationship of these metrics to the actual cause was validated through explicit communications with the vendor and the subsequent actions to improve the quality and completeness of the delivered code. This paper describes how these metrics can be derived from the execution data and used in a software project execution environment. Even though we have applied these metrics in a vendor-related project, the underlying concepts are useful to many software projects.

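The abstract's idea of deriving risk signals from raw test execution results can be sketched in a few lines. The failure categories and weights below are illustrative assumptions, not the metrics actually defined in the paper:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class VendorRisk {
    // Weighted risk score from a tally of test-case failures by cause,
    // as might be extracted from an acceptance-test execution log.
    // Category names and weights are invented for illustration.
    static double riskScore(Map<String, Integer> failuresByCause, int totalExecuted) {
        // A defect that survives a fix iteration signals more process
        // risk than a first-time failure, so weight it more heavily.
        Map<String, Double> weight = Map.of(
            "functionality_not_enabled", 1.0,
            "new_defect", 1.0,
            "bad_fix", 2.0,
            "not_fixed_in_iteration", 3.0);
        double weighted = 0.0;
        for (var e : failuresByCause.entrySet())
            weighted += weight.getOrDefault(e.getKey(), 1.0) * e.getValue();
        return weighted / totalExecuted;  // higher = riskier delivery
    }

    public static void main(String[] args) {
        Map<String, Integer> f = new LinkedHashMap<>();
        f.put("functionality_not_enabled", 4);
        f.put("bad_fix", 3);
        f.put("not_fixed_in_iteration", 2);
        // (4*1.0 + 3*2.0 + 2*3.0) / 100 = 0.16
        System.out.printf("risk = %.2f%n", riskScore(f, 100));
    }
}
```

Tracking such a score across successive vendor drops gives the vendee an objective trend line even without access to the vendor's internal data.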
  • Improving software testing via ODC: Three case studies

    Page(s): 31 - 44

    Orthogonal Defect Classification (ODC) is a methodology used to classify software defects. When combined with a set of data analysis techniques designed to suit the software development process, ODC provides a powerful way to evaluate the development process and software product. In this paper, three case studies demonstrate the use of ODC to improve software testing. The first case study illustrates how a team developing a high-quality, mature product arrived at specific testing strategies aimed at reducing field defects. The second is a middleware project that identified the areas of system test that needed to be strengthened. The third describes how a very small team with an inadequate testing strategy recognized its risk in trying to meet the scheduled release and made the product more stable by postponing the release date and adding badly needed testing scenarios. All three case studies highlight how technical teams can use ODC data for objective feedback on their development processes and the evolution of their products. This feedback facilitates the identification of actions to increase the efficiency and effectiveness of development and test, resulting in improved resource management and enhanced software quality.

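ODC's feedback comes from tallying defects along orthogonal attributes and reading the resulting distribution. The two-attribute profile below is a deliberately tiny sketch with invented attribute values, not the full ODC taxonomy:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class OdcFeedback {
    // A defect reduced to two ODC-style attributes: the trigger that
    // exposed it and the phase in which it was found. The attribute
    // values used here are illustrative, not the real ODC scheme.
    record Defect(String trigger, String phase) {}

    // Triggers whose defects surface mostly in the field, rather than
    // in test, point at areas where the test strategy is weak.
    static Map<String, long[]> profile(List<Defect> defects) {
        Map<String, long[]> byTrigger = new TreeMap<>();
        for (Defect d : defects) {
            long[] counts = byTrigger.computeIfAbsent(d.trigger(), k -> new long[2]);
            counts[d.phase().equals("field") ? 1 : 0]++;
        }
        return byTrigger;  // trigger -> {found in test, escaped to field}
    }

    public static void main(String[] args) {
        List<Defect> defects = List.of(
            new Defect("workload/stress", "field"),
            new Defect("workload/stress", "field"),
            new Defect("coverage", "test"),
            new Defect("recovery", "test"));
        profile(defects).forEach((trigger, c) ->
            System.out.println(trigger + ": test=" + c[0] + " field=" + c[1]));
    }
}
```

In this toy data set, every stress-triggered defect escaped to the field, which is the kind of skew that would prompt a team to strengthen its stress testing.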
  • A metric for predicting the performance of an application under a growing workload

    Page(s): 45 - 54

    A new software metric, designed to predict the likelihood that the system will fail to meet its performance goals when the workload is scaled, is introduced. Known as the PNL (Performance Nonscalability Likelihood) metric, it is applied to a study of a large industrial system, and used to predict at what workloads bottlenecks are likely to appear when the presented workload is significantly increased. This allows for intelligent planning in order to minimize disruption of acceptable performance for customers. The case study also outlines our performance testing approach and presents the major steps required to identify current production usage and to assess the software performance under current and future workloads.

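The abstract does not reproduce the PNL formula, but the underlying idea, extrapolating measured behavior to a larger workload and checking it against a performance goal, can be sketched. The linear fit and the numbers below are illustrative assumptions, not the metric from the paper:

```java
public class ScalePredictor {
    // Least-squares line through (load, responseTime) measurements;
    // returns the extrapolated load at which the fitted response time
    // crosses the performance goal. Linear extrapolation is only an
    // illustration of the idea, not the paper's PNL formula.
    static double loadAtGoal(double[] load, double[] resp, double goal) {
        int n = load.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += load[i];
            sy += resp[i];
            sxx += load[i] * load[i];
            sxy += load[i] * resp[i];
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;
        return (goal - intercept) / slope;
    }

    public static void main(String[] args) {
        double[] load = {100, 200, 300};          // concurrent users
        double[] resp = {1.0, 2.0, 3.0};          // seconds (exactly linear here)
        System.out.println(loadAtGoal(load, resp, 5.0));  // → 500.0
    }
}
```

With measurements this clean, a 5-second response-time goal is predicted to be breached at roughly 500 users, the kind of early-warning number the planning in the paper is after.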
  • Testing z/OS: The premier operating system for IBM's zSeries server

    Page(s): 55 - 73

    The “z” in zSeries™ stands for zero down time. As businesses have come to rely more and more on the continuous availability of their largest systems, the verification techniques used by IBM in developing those systems have had to evolve. Methodologies, techniques, and tools need continuous enhancements to develop the necessary verification processes that support development for a “zero down time” system. This paper describes the verification methodologies used in z/OS™ development, as well as test technologies and techniques. Special attention is paid to tool and test case reuse, and to techniques for testing for data integrity and system recovery. We also explain how these methodologies can be used for both traditional on-line transaction processing and newer Web-based or distributed applications.

  • The STCL test tools architecture

    Page(s): 74 - 88

    The Software Test Community Leaders (STCL) group is an IBM-wide initiative focused on improving software test and quality practices within the corporation. In 1999, we began working to develop an architecture to integrate both new and existing test tools into solutions for use in the testing organizations across IBM. This paper discusses the requirements for the architecture, as well as the issues associated with developing a solution architecture for a large base of tools that span a variety of platforms and domains. The architecture is being designed and developed to address three concerns for integrating testing tools: integration of the data across tools and repositories, integration of the control across tools, and integration to provide a single graphical user interface into the tool set. Because of the heterogeneous nature of the platforms and domains the architecture must support, extensibility is essential. We address each of these three integration concerns using an open-source framework that operates on a set of standardized but extensible entities.

  • Using a model-based test generator to test for standard conformance

    Page(s): 89 - 110

    In this paper we describe two experiments in the verification of software standard conformance. In our experiments, we use a model-based test generator to create a test suite for parts of the POSIX™ standard and another test suite for the specification of Java™ exception handling. We demonstrate that models derived from specifications produce better test suites than the suites specified by standards. In particular, our test suites achieved higher levels of code coverage with complete test requirements coverage. Moreover, the test suite for the Java study found code defects that were not exposed by other benchmark test suites. The effort involved in producing these models and test suites was comparable to the effort involved in developing a test suite by more conventional methods. We avoid the state space explosion problem by modeling only the external behavior of a specific feature of the standard, without modeling the details of any particular implementation.

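A model-based generator in miniature: the state machine, events, and coverage goal below are invented for illustration, but they show the core step of deriving a transition-covering test suite from a behavioral model rather than from a standard's prose:

```java
import java.util.*;

public class ModelTestGen {
    // A toy behavioral model of a feature: state -> (event -> next state).
    // States and events are invented; a real model would capture the
    // external behavior of a feature of the standard under test.
    static final Map<String, Map<String, String>> MODEL = Map.of(
        "closed", Map.of("open", "open"),
        "open",   Map.of("read", "open", "close", "closed"));

    // One test per transition: a shortest event path from the initial
    // state to the transition's source state, plus the event itself.
    static List<List<String>> transitionTests(String init) {
        List<List<String>> tests = new ArrayList<>();
        for (var src : MODEL.entrySet())
            for (String event : src.getValue().keySet()) {
                List<String> path = shortestPath(init, src.getKey());
                path.add(event);
                tests.add(path);
            }
        return tests;
    }

    // Breadth-first search over the model graph, recording the event
    // sequence that reaches each state.
    static List<String> shortestPath(String from, String to) {
        Map<String, List<String>> seen = new HashMap<>();
        seen.put(from, new ArrayList<>());
        Deque<String> queue = new ArrayDeque<>(List.of(from));
        while (!queue.isEmpty()) {
            String s = queue.poll();
            if (s.equals(to)) return new ArrayList<>(seen.get(s));
            for (var e : MODEL.getOrDefault(s, Map.of()).entrySet())
                if (!seen.containsKey(e.getValue())) {
                    List<String> p = new ArrayList<>(seen.get(s));
                    p.add(e.getKey());
                    seen.put(e.getValue(), p);
                    queue.add(e.getValue());
                }
        }
        throw new IllegalStateException("unreachable state: " + to);
    }

    public static void main(String[] args) {
        transitionTests("closed").forEach(System.out::println);
    }
}
```

Because the model covers only the feature's external behavior, the state space stays small, which is the same trick the paper uses to sidestep state space explosion.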
  • Multithreaded Java program test generation

    Page(s): 111 - 125

    We describe ConTest, a tool for detecting synchronization faults in multithreaded Java™ programs. The program under test is seeded with a sleep(), yield(), or priority() primitive at shared memory accesses and synchronization events. At run time, ConTest makes random or coverage-based decisions as to whether the seeded primitive is to be executed. Thus, the probability of finding concurrent faults is increased. A replay algorithm facilitates debugging by saving the order of shared memory accesses and synchronization events.

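The seeding idea can be sketched in plain Java. The noise() method and the shared counter below are a hand-written illustration of the technique, not ConTest's actual instrumentation; ConTest injects such calls automatically and can also consult coverage data when deciding whether a seeded primitive fires:

```java
import java.util.Random;

public class NoiseSeeding {
    static int counter = 0;                 // shared, deliberately unsynchronized
    static final Random RNG = new Random();

    // A seeded "noise" point: before each shared access, the program may
    // yield, perturbing the thread interleaving so that rare orderings
    // (and the faults they hide) become more likely to occur.
    static void noise() {
        if (RNG.nextInt(4) == 0) Thread.yield();
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                noise();
                counter++;                  // racy read-modify-write
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        // With the race exposed, the total often falls short of 2000.
        System.out.println("counter = " + counter + " (expected 2000)");
    }
}
```

The output is nondeterministic by design; runs that lose updates are exactly the interleavings a plain test would almost never hit, which is why the seeding raises the probability of finding the fault.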
  • The Software Testing Automation Framework

    Page(s): 126 - 139

    Software testing is an integral, costly, and time-consuming activity in the software development life cycle. As is true for software development in general, reuse of common artifacts can provide a significant gain in productivity. In addition, because testing involves running the system being tested under a variety of configurations and circumstances, automation of execution-related activities offers another potential source of savings in the testing process. This paper explores the opportunities for reuse and automation in one test organization, describes the shortcomings of potential solutions that are available “off the shelf,” and introduces a new solution for addressing the questions of reuse and automation: the Software Testing Automation Framework (STAF), a multiplatform, multilanguage approach to reuse. It is based on the concept of reusable services that can be used to automate major activities in the testing process. The design of STAF is described. Also discussed is how it was employed to automate a resource-intensive test suite used by an actual testing organization within IBM.

  • FLAVERS: A finite state verification technique for software systems

    Page(s): 140 - 165

    Software systems are increasing in size and complexity and, consequently, are becoming ever more difficult to validate. Finite state verification (FSV) has been gaining credibility and attention as an alternative to testing and to formal verification approaches based on theorem proving. There has recently been a great deal of excitement about the potential for FSV approaches to prove properties about hardware descriptions but, for the most part, these approaches do not scale adequately to handle the complexity usually found in software. In this paper, we describe an FSV approach that creates a compact and conservative, but imprecise, model of the system being analyzed, and then assists the analyst in adding additional details as guided by previous analysis results. This paper describes this approach and a prototype implementation called FLAVERS, presents a detailed example, and then provides some experimental results demonstrating scalability.

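The core of such an analysis can be sketched as propagating the states of a small property automaton over a control-flow graph until a fixed point is reached. The graph, events, and property below are invented for illustration; FLAVERS' actual models are far richer and let the analyst add constraints to rule out infeasible paths:

```java
import java.util.HashSet;
import java.util.Set;

public class FsvSketch {
    // Toy control-flow graph: node -> successor nodes; each node emits
    // one event. Both are invented for this example.
    static final int[][] SUCC = {{1, 2}, {3}, {3}, {}};
    static final String[] EVENT = {"start", "open", "skip", "read"};

    // Property automaton for "read only after open":
    // state 0 = not opened, 1 = opened, 2 = violation (read before open).
    static int step(int s, String ev) {
        if (s == 2) return 2;                        // violation is absorbing
        if (ev.equals("open")) return 1;
        if (ev.equals("read") && s == 0) return 2;
        return s;
    }

    // Conservatively propagate the set of reachable property states
    // through the graph; the property may be violated iff the violation
    // state reaches the final node.
    static boolean canViolate() {
        Set<Integer>[] states = new Set[SUCC.length];
        for (int i = 0; i < SUCC.length; i++) states[i] = new HashSet<>();
        states[0].add(step(0, EVENT[0]));
        boolean changed = true;
        while (changed) {                            // iterate to a fixed point
            changed = false;
            for (int n = 0; n < SUCC.length; n++)
                for (int m : SUCC[n])
                    for (int s : states[n])
                        if (states[m].add(step(s, EVENT[m]))) changed = true;
        }
        return states[SUCC.length - 1].contains(2);
    }

    public static void main(String[] args) {
        // The path start -> skip -> read reads before opening.
        System.out.println("violation possible: " + canViolate());  // → true
    }
}
```

The analysis is conservative: it reports every path the graph allows, including infeasible ones, which is why FLAVERS lets the analyst add detail to sharpen the imprecise model when a reported violation turns out to be spurious.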

Aims & Scope

Throughout its history, the IBM Systems Journal has been devoted to software, software systems, and services, focusing on concepts, architectures, and the uses of software.

Meet Our Editors

Editor-in-Chief
John J. Ritsko
IBM T. J. Watson Research Center