IEEE Transactions on Computers

Issue 5 • May 2004

Displaying Results 1 - 13 of 13
  • Design verification by test vectors and arithmetic transform universal test set

    Publication Year: 2004 , Page(s): 628 - 640
    Cited by:  Papers (10)  |  Patents (1)
    PDF (1048 KB) | HTML

    We investigate a methodology for simulation-based verification under a fault model. Since it is currently not feasible to describe a comprehensive explicit model of design errors, we propose an implicit fault model based on the arithmetic transform (AT) spectral representation of faults. Verification of circuits against errors that are small in the spectral domain is then performed using the universal test set (UTS) approach to test vector generation. The major result shows that, for errors whose AT has at most t nonzero coefficients, there exists a UTS of size O(n^(2 log t)). Consequently, verification confidence can be parameterized by the error size t: at most O(n^(2 log t)) verification vectors need to be simulated to verify the absence of faults belonging to such an implicitly defined fault class. We present experimental confirmation of the feasibility of verification with such a UTS, together with relations between the arithmetic and Walsh-Hadamard spectra that bound the AT error spectrum and show that a class of small-error circuits has a small error spectrum. The proposed approach has the advantage of being compatible with formal verification and testing methods.
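
    As a toy illustration of the arithmetic transform itself (not of the paper's UTS construction), the sketch below computes AT coefficients of a small Boolean function from its truth vector via the standard Kronecker construction with base matrix [[1, 0], [-1, 1]]; the function name and the XOR example are illustrative only.

        import numpy as np

        def arithmetic_transform(truth_vector):
            # Kronecker construction of the arithmetic-transform matrix:
            # A_1 = [[1, 0], [-1, 1]], A_n = A_1 (x) A_(n-1), AT = A_n @ f.
            f = np.asarray(truth_vector, dtype=int)
            n = f.size.bit_length() - 1               # number of input variables
            A1 = np.array([[1, 0], [-1, 1]])
            A = np.array([[1]])
            for _ in range(n):
                A = np.kron(A, A1)
            return A @ f

        # XOR over (x1, x2), truth vector in input order 00, 01, 10, 11:
        # the result [0, 1, 1, -2] encodes x2 + x1 - 2*x1*x2, i.e., only 3 nonzero terms.
        print(arithmetic_transform([0, 1, 1, 0]))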

  • Implications of clock distribution faults and issues with screening them during manufacturing testing

    Publication Year: 2004 , Page(s): 531 - 546
    Cited by:  Papers (16)
    PDF (2437 KB) | HTML

    Based on real process data for a reference microprocessor, fault models are derived for the manufacturing defects most likely to affect signals of the clock distribution network. Their probability is estimated with inductive fault analysis performed on the actual layout of the reference microprocessor, and the effects of the most likely faults have been evaluated by electrical-level simulations. We have found that, contrary to common assumptions, only a small percentage of such faults result in catastrophic failures that are easily detected during manufacturing testing. Instead, the majority of such faults lead to local failures that are not likely to be detected during manufacturing testing, even though they can compromise the microprocessor's operation and reliability. In particular, we have found that clock faults can be detected during manufacturing testing in only 12 percent of cases. Even more surprisingly, we have also found that, in 10 percent of cases, the undetected clock faults also invalidate the testing procedure itself.

  • Deriving deadlines and periods for real-time update transactions

    Publication Year: 2004 , Page(s): 567 - 583
    Cited by:  Papers (26)
    PDF (624 KB)

    Typically, the temporal validity of real-time data is maintained by periodic update transactions. We examine the problem of period and deadline assignment for these update transactions such that 1) the transactions can be guaranteed to complete by their deadlines and 2) the imposed CPU workload is minimized. To this end, we propose a novel approach, named the More-Less approach: updates occur with a period that is longer than the one obtained through traditional approaches, but with a deadline that is shorter than the traditional period. We show that the More-Less approach is better than existing approaches in terms of schedulability and imposed load. We also examine how to determine the order in which transactions are considered for period and deadline assignment so that the resulting CPU workload is minimized. The More-Less approach is first examined in a restricted case, where the shortest validity first (SVF) order is shown to be optimal; we then relax some of the restrictions and show that SVF is an approximate solution whose CPU workloads are close to optimal. Our analysis and experiments show that More-Less is an effective design approach that provides better schedulability and reduces update-transaction CPU workload while guaranteeing data validity constraints.
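
    A minimal sketch of the period/deadline assignment idea as described in the abstract, assuming deadline-monotonic scheduling, integer worst-case execution times C, validity intervals V, and transactions already sorted shortest-validity-first; the helper name more_less and its feasibility checks are illustrative, not the paper's exact algorithm.

        import math

        def more_less(transactions):
            # transactions: list of (C, V) = (worst-case execution time, validity interval),
            # pre-sorted by shortest validity first (SVF).  Returns a list of (deadline, period).
            assigned = []                             # (C, D, P) of higher-priority transactions
            result = []
            for C, V in transactions:
                # Fixed-point iteration for the worst-case response time R under the
                # periods already assigned to higher-priority transactions.
                R = C
                while True:
                    R_next = C + sum(math.ceil(R / P) * Cj for Cj, _, P in assigned)
                    if R_next == R:
                        break
                    if R_next > V:                    # cannot finish within the validity interval
                        raise ValueError("transaction set not schedulable under this sketch")
                    R = R_next
                D, P = R, V - R                       # deadline "less" and period "more" than V/2
                if D > P:                             # More-Less requires D <= P
                    raise ValueError("transaction set not schedulable under this sketch")
                assigned.append((C, D, P))
                result.append((D, P))
            return result

        # Example: two update transactions with (C, V) = (1, 10) and (2, 12)
        print(more_less([(1, 10), (2, 12)]))          # [(1, 9), (3, 9)]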

  • EPIC: profiling the propagation and effect of data errors in software

    Publication Year: 2004 , Page(s): 512 - 530
    Cited by:  Papers (12)
    PDF (1630 KB) | HTML

    We present an approach for analyzing the propagation and effect of data errors in modular software. It enables profiling the vulnerability of the software to find 1) the modules and signals most likely to be exposed to propagating errors and 2) the modules and signals which, when subjected to error, tend to cause more damage than others from a system-operation point of view. We discuss how to use the obtained profiles to identify where dependability structures and mechanisms are likely to be most effective, i.e., how to perform a cost-benefit analysis for dependability. A fault-injection-based method for estimating the various measures is described, and the software of a real embedded control system is profiled to show the type of results obtainable with the analysis framework.
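
    The sketch below only illustrates the general fault-injection idea mentioned in the abstract: a golden run is compared against a run with a single bit flipped in one module's output signal, and the fraction of injections that reach the final output is reported. The toy pipeline, signal encoding, and propagation criterion are hypothetical and are not EPIC's actual measures.

        import random

        def propagated(pipeline, value, inject_after, bit):
            # Run the pipeline once clean and once with a single bit flipped in the
            # output of module `inject_after`; report whether the final outputs differ.
            def run(flip):
                x = value
                for i, stage in enumerate(pipeline):
                    x = stage(x)
                    if flip and i == inject_after:
                        x ^= 1 << bit                 # single-bit error in this signal
                return x
            return run(False) != run(True)

        # Toy three-stage "software"; estimate a per-module propagation probability.
        stages = [lambda v: 3 * v + 1, lambda v: v % 16, lambda v: v + 7]
        for m in range(len(stages)):
            hits = sum(propagated(stages, random.randrange(256), m, random.randrange(8))
                       for _ in range(1000))
            print(f"module {m}: error propagated in {hits / 1000:.2f} of injections")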

  • Efficient and accurate analytical modeling of whole-program data cache behavior

    Publication Year: 2004 , Page(s): 547 - 566
    Cited by:  Papers (7)
    PDF (923 KB) | HTML

    Data caches are a key hardware means of bridging the gap between processor and memory speeds, but only for programs that exhibit sufficient data locality in their memory accesses. A method for evaluating cache performance is therefore required, both to quantify cache misses and to guide data cache optimizations. Existing analytical models for data cache optimizations mainly target isolated perfect loop nests. We present an analytical model that can statically analyze not only loop-nest fragments but also complete numerical programs with regular, compile-time-predictable memory accesses. Central to the whole-program approach are abstract call inlining, memory access vectors, and parametric reuse analysis, which allow reuse and interference, both within and across loop nests, to be quantified precisely in a unified framework. Based on this framework, the cache misses of a program are specified by mathematical formulas, and the miss ratio is predicted from these formulas using statistical sampling techniques. Our experimental results on kernels and whole programs indicate accurate cache-miss estimates obtained in substantially less time (typically, several orders of magnitude faster) than simulation.
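
    This is not the paper's model, but as a tiny illustration of predicting misses from a closed-form formula rather than from simulation, the sketch below compares a cold-miss formula for one streaming pass over an array against a toy direct-mapped cache simulator; all parameters and names are invented for the example.

        import math

        def predicted_cold_misses(n_elems, elem_bytes, line_bytes):
            # Closed-form estimate for one sequential pass over an array:
            # each cache line touched for the first time costs one (cold) miss.
            return math.ceil(n_elems * elem_bytes / line_bytes)

        def simulated_misses(addresses, line_bytes, n_lines):
            # Toy direct-mapped cache: one tag per set, count the misses.
            tags = [None] * n_lines
            misses = 0
            for a in addresses:
                line = a // line_bytes
                idx = line % n_lines
                if tags[idx] != line:
                    tags[idx] = line
                    misses += 1
            return misses

        # Streaming read of 1000 eight-byte elements, 64-byte lines, 32 KB cache (512 lines)
        addrs = [i * 8 for i in range(1000)]
        print(predicted_cold_misses(1000, 8, 64),            # 125
              simulated_misses(addrs, 64, 32 * 1024 // 64))  # 125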

  • A performance model for wormhole-switched interconnection networks under self-similar traffic

    Publication Year: 2004 , Page(s): 601 - 613
    Cited by:  Papers (23)
    PDF (907 KB) | HTML

    Many recent studies have convincingly demonstrated that network traffic exhibits a noticeable self-similar nature, which has a considerable impact on queuing performance. However, the networks used in current multicomputers have been primarily designed and analyzed under the assumption of the traditional Poisson arrival process, which is inherently unable to capture traffic self-similarity. It is therefore crucial to reexamine the performance properties of multicomputer networks under more realistic traffic models before practical implementations reveal their potential shortcomings. As a step toward this end, we propose the first analytical model for wormhole-switched k-ary n-cubes in the presence of self-similar traffic. Simulation experiments demonstrate that the proposed model exhibits a good degree of accuracy for various system sizes and under different operating conditions. The analytical model is then used to investigate the implications of traffic self-similarity for network performance. We reveal that the network suffers considerable performance degradation when subjected to self-similar traffic, underlining the need to improve network performance to ensure efficient support for this type of traffic.
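
    One widely used way to produce approximately self-similar arrivals (not necessarily the traffic model adopted in the paper) is to superpose ON/OFF sources whose ON and OFF periods are heavy-tailed, e.g., Pareto distributed, as in the sketch below; the source count and shape parameters are illustrative.

        import random

        def pareto_on_off_source(alpha_on=1.4, alpha_off=1.2, horizon=10_000.0):
            # One ON/OFF source: emits one packet per time unit while ON, nothing while OFF.
            # Heavy-tailed (Pareto) period lengths are what induce long-range dependence.
            t, arrivals = 0.0, []
            while t < horizon:
                on = random.paretovariate(alpha_on)   # ON period length (>= 1)
                arrivals.extend(t + k for k in range(int(on)))
                t += on
                t += random.paretovariate(alpha_off)  # OFF period length
            return arrivals

        def aggregate_traffic(n_sources=32, horizon=10_000.0):
            # Superposing many such sources approximates self-similar aggregate traffic.
            arrivals = []
            for _ in range(n_sources):
                arrivals.extend(pareto_on_off_source(horizon=horizon))
            return sorted(a for a in arrivals if a < horizon)

        print(len(aggregate_traffic()), "packet arrival times generated")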

  • Experiences, strategies, and challenges in building fault-tolerant CORBA systems

    Publication Year: 2004 , Page(s): 497 - 511
    Cited by:  Papers (27)  |  Patents (4)
    PDF (828 KB) | HTML

    It has been almost a decade since the earliest reliable CORBA implementation and, despite the adoption of the Fault-Tolerant CORBA (FT-CORBA) standard by the Object Management Group, CORBA is still not considered the preferred platform for building dependable distributed applications. Among the obstacles to FT-CORBA's widespread deployment are the complexity of the new standard, the lack of understanding of how to implement and deploy reliable CORBA applications, and the fact that current FT-CORBA implementations do not lend themselves readily to complex, real-world applications. We candidly share our independent experiences as developers of two distinct reliable CORBA infrastructures (OGS and Eternal) and as contributors to the FT-CORBA standardization process. Our objective is to reveal the intricacies, challenges, and strategies in developing fault-tolerant CORBA systems, including our own. Starting with an overview of the new FT-CORBA standard, we discuss its limitations, along with techniques for best exploiting it. We reflect on the difficulties we encountered in building dependable CORBA systems, the solutions we developed to address these challenges, and the lessons we learned. Finally, we highlight some of the open issues, such as nondeterminism and partitioning, that remain to be resolved.

  • Power-aware scheduling for periodic real-time tasks

    Publication Year: 2004 , Page(s): 584 - 600
    Cited by:  Papers (155)  |  Patents (2)
    PDF (1486 KB) | HTML

    We address power-aware scheduling of periodic tasks to reduce CPU energy consumption in hard real-time systems through dynamic voltage scaling. Our intertask voltage-scheduling solution includes three components: 1) a static (offline) solution that computes the optimal speed, assuming the worst-case workload for each arrival; 2) an online speed-reduction mechanism that reclaims energy by adapting to the actual workload; and 3) an online, adaptive, and speculative speed-adjustment mechanism that anticipates early completions of future executions using average-case workload information. All of these solutions still guarantee that every deadline is met. Our simulation results show that the reclaiming algorithm alone outperforms other recently proposed intertask voltage-scheduling schemes, and that the speculative techniques provide additional gains, coming within about 10 percent of the theoretical lower bound.
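
    As a rough illustration of the static, worst-case component only (assuming EDF scheduling and a continuously scalable, normalized speed; not the paper's full three-part algorithm), the sketch below sets the processor speed to the total worst-case utilization, which in that setting is the slowest constant speed that still meets every deadline.

        def static_speed(tasks):
            # tasks: list of (wcet_at_full_speed, period).  Under EDF on a continuously
            # scalable processor, running at U = sum(C_i / T_i) is the slowest constant
            # (normalized) speed that still meets every deadline for the worst-case workload.
            U = sum(c / t for c, t in tasks)
            if U > 1.0:
                raise ValueError("task set not schedulable even at full speed")
            return U

        # Example: three periodic tasks given as (C, T) pairs
        print(static_speed([(1, 8), (2, 10), (3, 25)]))   # 0.445 -> run at ~44.5% speed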

  • A method enabling feasible conformance test sequence generation for EFSM models

    Publication Year: 2004 , Page(s): 614 - 627
    Cited by:  Papers (39)
    PDF (1171 KB) | HTML

    A formal description of an implementation under test (IUT), such as its VHDL behavioral description, is required to automatically generate feasible test sequences for the IUT. Although finite-state machines (FSMs) can describe the control structures of communication protocols, the data portion can only be modeled by extended finite-state machines (EFSMs). However, infeasible paths caused by conflicts among the condition and action variables of EFSMs complicate the test generation process. We introduce a method enabling the automatic generation of realizable test sequences from a class of EFSMs, and present algorithms to detect and eliminate conflicts caused by the interdependences among the variables of such EFSM models. After all conflicts are eliminated from the EFSM graph, existing FSM-based automated test generation methods can be used to generate feasible test sequences. These algorithms have been implemented in a software package called INDEEL. The methodology has been applied to generate feasible tests for protocols such as ACA and MIL-STD 188-220; current applications include IETF protocols and ASAP.
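
    As a toy illustration of why condition/action conflicts make EFSM paths infeasible (far simpler than the INDEEL algorithms), the sketch below propagates constant assignments along a candidate path and rejects the path when a later guard contradicts an earlier action; the transition encoding and variable names are invented for the example.

        import operator

        OPS = {"==": operator.eq, "<": operator.lt, ">": operator.gt}

        def path_feasible(path):
            # path: list of transitions, each {"guard": (var, op, const) or None,
            #                                  "action": (var, const) or None}.
            # Constant propagation only: a guard over a variable whose value is known
            # and evaluates to False makes the whole path infeasible.
            env = {}
            for t in path:
                if t.get("guard"):
                    var, op, const = t["guard"]
                    if var in env and not OPS[op](env[var], const):
                        return False                  # earlier action contradicts this guard
                if t.get("action"):
                    var, const = t["action"]
                    env[var] = const
            return True

        # t1 sets x := 0 and t2 requires x > 5, so the path t1 -> t2 is infeasible.
        t1 = {"guard": None, "action": ("x", 0)}
        t2 = {"guard": ("x", ">", 5), "action": None}
        print(path_feasible([t1, t2]))                # False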

  • IEEE Trans. on Computers - Table of Contents

    Publication Year: 2004 , Page(s): 0_1
    PDF (275 KB)
    Freely Available from IEEE
  • IEEE Computer Society - Staff List

    Publication Year: 2004 , Page(s): 0_2
    PDF (206 KB)
    Freely Available from IEEE
  • TC: Information for authors

    Publication Year: 2004 , Page(s): 641
    PDF (206 KB)
    Freely Available from IEEE
  • IEEE Computer Society Information

    Publication Year: 2004 , Page(s): 642
    PDF (275 KB)
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field.


Meet Our Editors

Editor-in-Chief
Paolo Montuschi
Politecnico di Torino
Dipartimento di Automatica e Informatica
Corso Duca degli Abruzzi 24 
10129 Torino - Italy
e-mail: pmo@computer.org