IEEE Transactions on Software Engineering

Issue 7 • July 1993

  • Capacity of voting systems

    Publication Year: 1993 , Page(s): 698 - 706
    Cited by:  Papers (2)  |  Patents (2)

    Data replication is often used to increase the availability of data in a database system. Voting schemes can be used to manage this replicated data. The authors use a simple model to study the capacity of systems using voting schemes for data management. The capacity of a system is defined as the number of operations the system can perform successfully, on average, per unit time. The capacity of a system using voting is examined and compared with the capacity of a system using a single node. It is shown that the maximum increase in capacity from the use of majority voting is bounded by 1/p, where p is the steady-state probability of a node being alive. It is also shown that for a system employing majority voting, if the reliability of nodes is high, increasing the number of nodes beyond three gives only a marginal increase in capacity. Similar analyses are performed for three other voting schemes.
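
    The 1/p bound and the diminishing return beyond three nodes can be illustrated with a small calculation. The sketch below only computes the probability that a majority of n independently failing nodes (each alive with steady-state probability p) is up, alongside the stated 1/p ceiling; it is a simplified illustration, not the authors' capacity model.

        from math import comb

        def majority_alive_prob(n: int, p: float) -> float:
            """Probability that a strict majority of n independent nodes
            (each alive with steady-state probability p) is alive."""
            need = n // 2 + 1
            return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

        for p in (0.90, 0.99):
            print(f"p = {p}:  stated capacity-gain bound 1/p = {1/p:.3f}")
            for n in (1, 3, 5, 7):
                print(f"  n = {n}: P(majority alive) = {majority_alive_prob(n, p):.6f}")

    With p = 0.99, the majority-alive probability already exceeds 0.9997 at n = 3, so going to five or seven nodes buys very little, which is consistent with the abstract's claim.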

  • Data structures for parallel resource management

    Publication Year: 1993 , Page(s): 672 - 686
    Cited by:  Papers (1)  |  Patents (1)

    The problem of resource management for many-processor architectures can be viewed as the problem of simultaneously updating data structures that hold system state. An approach that examines the possibility of using structures with weakened specifications is presented. Specifically, data structures that weaken the specification of a priority queue, permitting it to be updated simultaneously by multiple processes, are introduced. Two structures, the concurrent heap and the software banyan, are proposed, along with their associated update algorithms. The algorithms are shown to possess attractive properties of simultaneous update and throughput. The results of simulations and actual implementations show that such data structures can improve the execution times of parallel algorithms quite significantly. These structures are proposed as possible basic building blocks for implementing resource allocation in operating systems.
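
    As a rough illustration of a priority queue with a deliberately weakened specification, the sketch below splits the queue into independently locked sub-heaps so that concurrent updates rarely contend; delete_min returns a small element rather than the guaranteed global minimum. This is a hypothetical simplification in the same spirit, not the paper's concurrent heap or software banyan algorithm.

        import heapq, random, threading

        class RelaxedPriorityQueue:
            """A weakened priority queue: k independently locked sub-heaps.
            delete_min returns *some* small key (the smaller top of two randomly
            chosen sub-heaps), trading strict ordering for concurrent updates."""

            def __init__(self, k: int = 8):
                self._heaps = [[] for _ in range(k)]
                self._locks = [threading.Lock() for _ in range(k)]

            def insert(self, key) -> None:
                i = random.randrange(len(self._heaps))
                with self._locks[i]:
                    heapq.heappush(self._heaps[i], key)

            def delete_min(self):
                i, j = random.sample(range(len(self._heaps)), 2)
                # Lock in index order to avoid deadlock, then pop the smaller top.
                for a in sorted((i, j)):
                    self._locks[a].acquire()
                try:
                    cands = [(self._heaps[a][0], a) for a in (i, j) if self._heaps[a]]
                    if not cands:
                        return None  # both sampled sub-heaps empty: weakened semantics
                    _, a = min(cands)
                    return heapq.heappop(self._heaps[a])
                finally:
                    for a in sorted((i, j), reverse=True):
                        self._locks[a].release()

    Inserts and deletions that land on different sub-heaps proceed in parallel, which is the kind of throughput gain the paper measures for its structures.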

  • Layout appropriateness: a metric for evaluating user interface widget layout

    Publication Year: 1993 , Page(s): 707 - 719
    Cited by:  Papers (8)  |  Patents (4)

    Numerous methods for evaluating user interfaces have been investigated in order to develop a metric, incorporating simple task descriptions, that can assist designers in organizing their user interface. The metric, Layout Appropriateness (LA), requires a description of the sequences of actions users perform and how frequently each sequence is used. This task description can come either from observations of an existing system or from a simplified task analysis. The appropriateness of a given layout is computed by weighting the cost of each sequence of actions by how frequently the sequence is performed, which emphasizes frequent methods of accomplishing tasks while still incorporating less frequent methods in the design. In addition to providing a comparison of proposed or existing layouts, an LA-optimal layout can be presented to the designer. The designer can compare the LA-optimal and existing layouts, or start with the LA-optimal layout and modify it to take additional factors into consideration.
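
    The weighting described can be written down directly. The sketch below assumes a simple Euclidean-distance cost between consecutively used widgets, which is one plausible cost function rather than the paper's exact formulation; the layouts, positions, and task frequencies are hypothetical.

        from math import dist  # Euclidean distance, Python 3.8+

        def layout_appropriateness(layout, sequences):
            """layout: widget name -> (x, y) position.
            sequences: list of (frequency, [widget, widget, ...]) pairs.
            Lower scores mean less pointer travel for the observed task mix."""
            total = 0.0
            for freq, widgets in sequences:
                cost = sum(dist(layout[a], layout[b])
                           for a, b in zip(widgets, widgets[1:]))
                total += freq * cost
            return total

        layout_a = {"open": (0, 0), "edit": (10, 0), "save": (20, 0)}
        layout_b = {"open": (0, 0), "save": (5, 0), "edit": (20, 0)}
        tasks = [(0.7, ["open", "edit", "save"]), (0.3, ["open", "save"])]
        print(layout_appropriateness(layout_a, tasks),
              layout_appropriateness(layout_b, tasks))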

  • Software performance engineering: a case study including performance comparison with design alternatives

    Publication Year: 1993 , Page(s): 720 - 741
    Cited by:  Papers (20)  |  Patents (5)

    Software performance engineering (SPE) provides an approach to constructing systems to meet performance objectives. The authors illustrate the application of SPE to an example with some real-time properties and demonstrate how to compare performance characteristics of design alternatives. They show how SPE can be integrated with design methods and demonstrate that performance requirements can be achieved without sacrificing other desirable design qualities such as understandability, maintainability, and reusability.
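
    SPE is a methodology rather than a single formula, but the software execution model at its core, aggregating estimated resource demands along a usage scenario in order to compare design alternatives, can be sketched as below. All component names and demand figures are hypothetical, and contention effects are ignored.

        # Hypothetical resource demands (ms of CPU, count of I/Os) per component call.
        DEMANDS = {
            "parse":    {"cpu_ms": 2.0, "io": 0},
            "validate": {"cpu_ms": 1.0, "io": 1},
            "store":    {"cpu_ms": 0.5, "io": 2},
        }

        def scenario_demand(calls, io_ms=8.0):
            """Aggregate demands over a scenario given as (component, repetitions)."""
            cpu = sum(DEMANDS[c]["cpu_ms"] * n for c, n in calls)
            io = sum(DEMANDS[c]["io"] * n for c, n in calls)
            return cpu + io * io_ms   # crude best-case response time, no contention

        design_a = [("parse", 1), ("validate", 1), ("store", 1)]
        design_b = [("parse", 1), ("validate", 1), ("store", 3)]  # finer-grained writes
        print(scenario_demand(design_a), scenario_demand(design_b))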

  • Block access estimation for clustered data using a finite LRU buffer

    Publication Year: 1993 , Page(s): 641 - 660
    Cited by:  Papers (1)

    Data access cost evaluation is fundamental in the design and management of database systems. When some data items have duplicates, a clustering effect that can heavily influence access costs is observed. The availability of only a finite amount of buffer memory in real systems has an even more dramatic impact. A comprehensive cost model for clustered data retrieval by an index using a finite buffer is presented. The approach combines and extends previous models based either on finite-buffer or on uniform-data-clustering assumptions. The computational costs of the proposed formulas are independent of the data size and of the query cardinality, and require only a single statistic per search key, the clustering factor, to be maintained by the system. The predictive power and accuracy of the model are shown by comparison with actual costs resulting from simulations.
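
    For orientation, a classical starting point for such models is Yao's formula for the expected number of distinct blocks touched when k of n uniformly distributed records (stored n/m per block in m blocks) are selected. The sketch below computes only that uniform-clustering, unbounded-buffer baseline; the paper's model additionally accounts for the clustering factor and a finite LRU buffer.

        def yao_blocks(n: int, m: int, k: int) -> float:
            """Yao's estimate of distinct blocks accessed when k of n uniformly
            distributed records (m blocks, n/m records each) are selected."""
            per_block = n / m
            miss = 1.0
            for i in range(1, k + 1):
                miss *= (n - per_block - i + 1) / (n - i + 1)
            return m * (1.0 - miss)

        # Example: 100,000 records in 5,000 blocks, query selecting 1,000 records.
        print(round(yao_blocks(100_000, 5_000, 1_000), 1))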

  • Clarifying some fundamental concepts in software testing

    Publication Year: 1993 , Page(s): 742 - 746
    Cited by:  Papers (5)

    A software test data adequacy criterion is a means for determining whether a test set is sufficient, or adequate, for testing a given program. A set of properties that useful adequacy criteria should satisfy has been previously proposed (E. Weyuker, 1986; 1988). The authors identify some additional properties of useful adequacy criteria that are appropriate under certain realistic models of testing. They discuss modifications to the formal definitions of certain popular adequacy criteria to make the criteria consistent with these additional properties.
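
    As a concrete example of what an adequacy criterion is (though not of the additional properties the paper discusses), branch-coverage adequacy can be phrased as a predicate over a program's branches and a test set. The toy sketch below treats each test as the set of branches it exercises; names and data are illustrative only.

        def branch_coverage_adequate(all_branches, executed_by_test):
            """A toy adequacy criterion: a test set is adequate iff the branches it
            exercises, taken together, cover every branch of the program."""
            covered = set().union(*executed_by_test) if executed_by_test else set()
            return covered >= set(all_branches)

        program_branches = {"b1", "b2", "b3"}
        tests = [{"b1", "b2"}, {"b2"}]
        print(branch_coverage_adequate(program_branches, tests))              # False: b3 uncovered
        print(branch_coverage_adequate(program_branches, tests + [{"b3"}]))   # True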

  • On some reliability estimation problems in random and partition testing

    Publication Year: 1993 , Page(s): 687 - 697
    Cited by:  Papers (25)

    Studies have shown that random testing can be an effective testing strategy. One of the goals of testing is to estimate the reliability of the program from the test outcomes. The authors extend the Thayer-Lipow-Nelson reliability model (R. Thayer et al., 1978) to account for the cost of errors. They also compare random testing with partition testing by examining upper confidence bounds for the cost-weighted performance of the two strategies.
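
    The basic quantities are easy to state: a Nelson-style point estimate of reliability from test outcomes, and an exact upper confidence bound on the per-test failure probability (shown here only for the zero-failure case, the standard 1 - alpha**(1/n) bound). The cost-weighted extension developed in the paper is not reproduced in this sketch.

        def reliability_estimate(tests_run: int, failures: int) -> float:
            """Nelson-style point estimate: reliability = 1 - observed failure rate."""
            return 1.0 - failures / tests_run

        def upper_bound_failure_rate_zero_failures(tests_run: int, confidence: float = 0.95) -> float:
            """Exact upper confidence bound on the per-test failure probability
            when no failures were observed: theta_U = 1 - (1 - confidence)**(1/n)."""
            alpha = 1.0 - confidence
            return 1.0 - alpha ** (1.0 / tests_run)

        print(reliability_estimate(1000, 2))                   # 0.998
        print(upper_bound_failure_rate_zero_failures(1000))    # ~0.003 at 95% confidence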

  • Simulation and comparison of Albrecht's function point and DeMarco's function bang metrics in a CASE environment

    Publication Year: 1993 , Page(s): 661 - 671
    Cited by:  Papers (3)  |  Patents (1)

    Software size estimates provide a basis for software cost estimation during software development; hence, it is important to measure system size reliably as early as possible. Two of the best-known specification-level metrics, Albrecht's function points (A.J. Albrecht, 1979) and DeMarco's function bang (T. DeMarco, 1982), are compared in a simulation study in which automatically generated, randomized dataflow diagrams (DFDs) were used as a statistical sample for automatically counting function points and function bang in a purpose-built CASE environment. The counts were then related statistically using correlation coefficients and regression analysis. The simulation study permits sufficient variation in the base material to cover most types of system specifications, and it allows sample sizes large enough for statistical analysis of the data. The results show that in certain cases there is a relatively good statistical correlation between the two metrics.
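
    The statistical machinery mentioned, correlation coefficients and simple linear regression between the two size counts, can be reproduced with textbook formulas. The (function point, function bang) pairs below are made up for illustration and are not the study's data.

        def pearson_and_regression(xs, ys):
            """Pearson correlation of xs, ys and least-squares fit y ~ a + b*x."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            sxx = sum((x - mx) ** 2 for x in xs)
            syy = sum((y - my) ** 2 for y in ys)
            sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            r = sxy / (sxx * syy) ** 0.5
            b = sxy / sxx
            a = my - b * mx
            return r, a, b

        # Hypothetical (function point, function bang) counts for a few generated DFDs.
        fp   = [120, 200, 310, 450, 510]
        bang = [14.0, 22.5, 31.0, 47.5, 55.0]
        print(pearson_and_regression(fp, bang))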


Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:

  • development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
  • assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurement and evaluation of various aspects of the process and product;
  • software project management, e.g., productivity factors, cost models, schedule and organizational issues, and standards;
  • tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues;
  • system issues, e.g., hardware-software trade-offs; and
  • state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.


Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org