IEEE Transactions on Computers

Volume 51, Issue 7 • July 2002

  • Editor's note

    Publication Year: 2002, Page(s): 737 - 739
    Freely Available from IEEE
  • Efficient online and offline testing of embedded DRAMs

    Publication Year: 2002, Page(s): 801 - 809
    Cited by: Papers (15)

    This paper presents an integrated approach to both built-in online and offline testing of embedded DRAMs. It is based on a new technique for output data compression which offers the same benefits as signature analysis during offline test, but also supports efficient online consistency checking. The initial fault-free memory contents are compressed to a reference characteristic, and test characteristics are periodically computed and compared against it. The reference characteristic depends on the memory contents but, unlike similar characteristics based on signature analysis, it can easily be updated concurrently with WRITE operations. This way, changes in memory do not require a time-consuming recomputation. The respective test characteristics can be computed efficiently during the periodic refresh operations of the dynamic RAM. Experiments show that the proposed technique significantly reduces the time between the occurrence of an error and its detection. Compared to error-detecting codes (EDC), it also achieves significantly higher error coverage at lower hardware cost. It therefore complements standard online checking approaches relying on EDC, where the concurrent detection of certain types of errors is guaranteed, but only during READ operations accessing the erroneous data.
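
    A minimal sketch (Python) of why such a characteristic can be updated on WRITEs; the additive modular checksum and the contribution() weighting are illustrative assumptions standing in for the paper's actual compression scheme:

        MOD = 2**32  # width of the characteristic register

        def contribution(addr, word):
            # hypothetical address-dependent weight of one memory word
            return (word * (2 * addr + 1)) % MOD

        def reference(mem):
            # compress the fault-free contents into the reference characteristic
            return sum(contribution(a, w) for a, w in enumerate(mem)) % MOD

        def on_write(ref, addr, old_word, new_word):
            # update concurrently with a WRITE: no full recomputation needed
            return (ref - contribution(addr, old_word) + contribution(addr, new_word)) % MOD

        def refresh_check(ref, mem):
            # recompute the test characteristic during refresh and compare
            return reference(mem) == ref

    An LFSR signature lacks the on_write property: a single changed word would force re-reading the entire memory.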

  • Multivariate statistical analysis of audit trails for host-based intrusion detection

    Publication Year: 2002, Page(s): 810 - 820
    Cited by: Papers (61) | Patents (1)

    Intrusion detection complements prevention mechanisms, such as firewalls, cryptography, and authentication, by capturing intrusions into an information system while they are acting on it. Our study investigates a multivariate quality control technique that detects intrusions by building a long-term profile of normal activities in information systems (the norm profile) and using that profile to detect anomalies. The technique is based on Hotelling's T² test, which detects both counterrelationship anomalies and mean-shift anomalies. The performance of Hotelling's T² test is examined on two sets of computer audit data: a small data set and a large multiday data set, each containing sessions of normal and intrusive activities. For the small data set, Hotelling's T² test signals all the intrusion sessions and produces no false alarms for the normal sessions. For the large data set, it signals 92 percent of the intrusion sessions while producing no false alarms for the normal sessions. Its performance is also compared with that of a more scalable multivariate technique, the chi-squared distance test.
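
    As a rough illustration of the statistic itself (a minimal sketch, not the authors' system; the Poisson-generated "audit event counts" are synthetic stand-ins):

        import numpy as np

        def hotelling_t2(x, mu, S):
            # squared Mahalanobis-type distance of observation x from the norm profile (mu, S)
            d = x - mu
            return float(d @ np.linalg.solve(S, d))

        rng = np.random.default_rng(0)
        normal = rng.poisson(5.0, size=(500, 8)).astype(float)  # normal sessions
        mu, S = normal.mean(axis=0), np.cov(normal, rowvar=False)

        score = hotelling_t2(rng.poisson(9.0, size=8), mu, S)  # a shifted session
        # signal an intrusion when score exceeds a threshold fit on normal data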

  • Design, implementation, and performance evaluation of a detection-based adaptive block replacement scheme

    Publication Year: 2002, Page(s): 793 - 800
    Cited by: Papers (9) | Patents (1)

    A new buffer replacement scheme, called DEAR (detection-based adaptive replacement), is presented for effective caching of disk blocks in the operating system. The proposed DEAR scheme automatically detects the block reference patterns of applications and applies different replacement policies to different applications depending on the detected reference pattern. The detection is made by a periodic process and is based on the relationship between block attribute values, such as backward distance and frequency gathered in a period, and the forward distance observed in the next period. This paper also describes an implementation and performance measurement of the DEAR scheme in FreeBSD. The results from performance measurements of several real applications show that, compared with the LRU scheme, the proposed scheme reduces the number of disk I/Os by up to 51 percent and the response time by up to 35 percent for single-application executions. For multiple-application executions, the results show that the proposed scheme reduces the number of disk I/Os by up to 20 percent and the overall response time by up to 18 percent.
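
    A toy version of the detection step (a sketch only; the correlation thresholds and the attribute-to-policy mapping are illustrative assumptions, not the FreeBSD implementation):

        def detect_pattern(samples):
            # samples: (backward_distance, frequency, forward_distance) per block,
            # attributes from the last period paired with the next period's outcome
            def corr(xs, ys):
                n = len(xs)
                mx, my = sum(xs) / n, sum(ys) / n
                cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                vx = sum((x - mx) ** 2 for x in xs) ** 0.5
                vy = sum((y - my) ** 2 for y in ys) ** 0.5
                return cov / (vx * vy) if vx and vy else 0.0

            bd, fr, fd = (list(col) for col in zip(*samples))
            if corr(bd, fd) > 0.5:
                return "LRU"   # recently used blocks reused soon: temporal locality
            if corr(fr, fd) < -0.5:
                return "LFU"   # frequently used blocks reused soon
            return "MRU"       # no correlation: treat as sequential/looping references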

  • Concurrency control for mixed transactions in real-time databases

    Publication Year: 2002, Page(s): 821 - 834
    Cited by: Papers (3)

    Many recent studies have suggested that optimistic concurrency control (OCC) protocols outperform locking-based protocols in real-time database systems (RTDBS). However, OCC protocols suffer from unnecessary transaction restarts, which are detrimental to transactions meeting their deadlines. The problem is intensified in mixed transaction environments, where firm transactions are more vulnerable to restarts when they conflict with hard transactions on data access. In this paper, we address the problem and devise an effective OCC protocol with dynamic adjustment of serialization order, called OCC-DA, for RTDBS with mixed transactions. The protocol avoids unnecessary transaction restarts by dynamically adjusting the serialization order of conflicting transactions with respect to the validating transaction. As a result, many resources can be saved and more firm transactions can meet their deadlines without affecting the execution of hard transactions. The characteristics of the OCC-DA protocol are examined in detail by simulation. The results show that its performance is consistently better than that of two other popular protocols, OCC with forward validation and OCC with Wait-50, over a wide range of system settings. In particular, OCC-DA provides a more significant performance gain in mixed transaction environments.
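
    A much-simplified sketch of the dynamic-adjustment idea (the bookkeeping below is an assumption for illustration, and the hard/firm distinction of the actual protocol is not modeled):

        class Txn:
            def __init__(self, tid):
                self.tid = tid
                self.read_set, self.write_set = set(), set()
                self.after = set()  # transactions this one must be serialized after

        def validate(t, active):
            # forward validation of committing transaction t: instead of
            # restarting every conflicting transaction, adjust its position in
            # the serialization order and restart only on a contradiction
            restarts = set()
            for u in active:
                if u is t:
                    continue
                if t.write_set & u.read_set:   # u read data t now overwrites
                    if t in u.after:
                        restarts.add(u)        # u already ordered after t
                    else:
                        t.after.add(u)         # adjust: serialize u before t
                if t.read_set & u.write_set:   # u overwrites data t has read
                    if u in t.after:
                        restarts.add(u)
                    else:
                        u.after.add(t)         # adjust: serialize u after t
            return restarts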

  • Bit-parallel finite field multiplier and squarer using polynomial basis

    Publication Year: 2002, Page(s): 750 - 758
    Cited by: Papers (51) | Patents (4)

    Bit-parallel finite field multiplication using a polynomial basis can be realized in two steps: polynomial multiplication and reduction modulo the irreducible polynomial. In this article, we present an upper complexity bound for the modular polynomial reduction. When the field is generated with an irreducible trinomial, closed-form expressions for the coefficients of the product are derived in terms of the coefficients of the multiplicands. The complexity of the multiplier architectures and their critical path lengths are evaluated and shown to be comparable to previous proposals for the same class of fields. An analytical form for the bit-parallel squaring operation is also presented, and the complexity of the bit-parallel squarer is derived when an irreducible trinomial is used. Consequently, it is argued that computing the multiplicative inverse using a polynomial basis can be at least as good as using a normal basis.
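
    A bit-level Python sketch of the two steps for a trinomial x^m + x^k + 1 (for illustration; the paper derives closed-form hardware expressions rather than software):

        def gf2_mul(a, b, m, k):
            # polynomials over GF(2) encoded as integers, bit i = coefficient of x^i
            p = 0
            while b:                      # step 1: carry-less multiplication
                if b & 1:
                    p ^= a
                a <<= 1
                b >>= 1
            for i in range(2 * m - 2, m - 1, -1):
                if (p >> i) & 1:          # step 2: x^i = x^(i-m+k) + x^(i-m)
                    p ^= (1 << i) | (1 << (i - m + k)) | (1 << (i - m))
            return p

    For example, gf2_mul(a, b, 233, 74) multiplies in GF(2^233) with the NIST trinomial x^233 + x^74 + 1. Squaring is gf2_mul(a, a, m, k) and is cheap because a(x)^2 = a(x^2) over GF(2), which merely interleaves zero coefficients before reduction.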

  • Data hiding in 2-color images

    Publication Year: 2002, Page(s): 873 - 878
    Cited by: Papers (24) | Patents (2)

    In an earlier paper (Chen et al., 2000), we proposed a steganography scheme for hiding a piece of critical information in a host binary image. That scheme ensures that, in each m × n block of the host image, as many as [log2(mn + 1)] bits can be hidden by changing at most 2 bits in the block. We propose a new scheme that improves on the earlier one in its ability to maintain higher quality of the host image after data hiding, at the cost of some data hiding space. The new scheme still offers a good data hiding ratio. It ensures that any bit modified in the host image is adjacent to another bit whose value equals the modified bit's new value. Thus, the hiding effect is nearly invisible.
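
    The invisibility condition is easy to state in code (a sketch of the constraint only, not the embedding algorithm):

        def can_flip(img, i, j):
            # flipping (i, j) is allowed only if a 4-neighbor already holds the
            # value the pixel would take, so the change blends into an edge
            new_val = 1 - img[i][j]
            h, w = len(img), len(img[0])
            return any(
                0 <= i + di < h and 0 <= j + dj < w and img[i + di][j + dj] == new_val
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
            )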

  • Handling execution overruns in hard real-time control systems

    Publication Year: 2002, Page(s): 835 - 849
    Cited by: Papers (17) | Patents (1)

    In many real-time control applications, task periods are typically fixed and worst-case execution times are used in schedulability analysis. With the advancement of robotics, flexible visual sensing using cameras has become a popular alternative to the use of embedded sensors. Unfortunately, the execution time of visual tracking varies greatly: in such environments, control tasks normally have a short computation time but occasionally a long one, so using the worst-case execution time is inefficient for control performance optimization. Nevertheless, to maintain control stability, the schedulability of the task set must still be guaranteed even if the worst case arises. In this paper, we propose an integrated approach to control performance optimization and task scheduling for control applications in which the execution time of each task can vary greatly. We present an innovative approach to overrun management that allows us to fully utilize the processor for optimizing control performance while still guaranteeing the schedulability of all tasks under worst-case conditions.
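
    One way to make this concrete (a sketch of an assumed mechanism in the same spirit, not the paper's algorithm): size each task's guaranteed budget for its normal execution time and let overrun cycles consume only leftover processor time.

        def budgets_schedulable(tasks):
            # tasks: list of (normal_exec_time, period); Liu-Layland test for the
            # guaranteed budgets under rate-monotonic priorities
            n = len(tasks)
            return sum(c / p for c, p in tasks) <= n * (2 ** (1 / n) - 1)

        def effective_priority(executed, budget):
            # once a job exhausts its budget, the remainder runs only in
            # background time, so other tasks' guarantees survive the overrun
            return "background" if executed >= budget else "normal"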

  • Modeling wireless local loop with general call holding times and finite number of subscribers

    Publication Year: 2002, Page(s): 775 - 786
    Cited by: Papers (2)

    This paper proposes an analytic model to compute the loss probability for a wireless local loop (WLL) with a finite number of subscribers, where the number of trunks between the WLL concentrator and the base station controller is less than the total number of radio links. The model is validated against simulation results. Its execution is efficient compared with simulation, but its time complexity is higher than that of several existing analytic models that approximate the WLL loss probability. Therefore, we design a WLL network planning procedure that is efficient in terms of both time complexity and accuracy: the approximate analytic models provide small ranges for selecting the values of system parameters, and our model is then used to accurately search for the operating points of the WLL within those ranges. This paper proves that the performance of a WLL with limited trunk capacity and a finite subscriber population is not affected by the call holding time distributions. Based on our model, we illustrate WLL design guidelines with several numerical examples.
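
    The finite-population, insensitivity flavor of the result can be illustrated with the classical Engset loss formula (a single-stage sketch; the paper's two-level trunk/radio-link structure is not modeled here):

        from math import comb

        def engset_loss(N, c, a):
            # call congestion for N subscribers, c trunks, offered load a per idle
            # subscriber; it depends on the holding-time distribution only
            # through its mean -- the insensitivity property
            weights = [comb(N - 1, k) * a**k for k in range(c + 1)]
            return weights[-1] / sum(weights)

        print(engset_loss(50, 8, 0.1))  # e.g., 50 subscribers sharing 8 trunks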

  • Enumeration of test sequences in increasing chronological order to improve the levels of compaction achieved by vector omission

    Publication Year: 2002, Page(s): 866 - 872

    We describe a method to improve the levels of compaction achievable by static compaction procedures based on vector omission. Such procedures are used to reduce the lengths of test sequences for synchronous sequential circuits without reducing the fault coverage. The proposed procedure enumerates, in increasing chronological order, test sequences consisting of subsets of the vectors included in a given test sequence that needs to be compacted. The unique feature of this approach is that test vectors omitted from the test sequence at an earlier iteration can be reintroduced at a later iteration. This results in a less greedy procedure and helps reduce the compacted test sequence length beyond the length that can be achieved if vectors are omitted permanently, as in earlier procedures.
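
    For contrast, a sketch of the permanent-omission baseline that the enumeration improves on (fault_coverage is a hypothetical stand-in for a fault simulator):

        def compact(seq, fault_coverage):
            # greedily drop vectors whose omission keeps coverage unchanged; an
            # omitted vector is never reconsidered -- exactly the greediness the
            # proposed enumeration avoids by allowing reintroduction
            target = fault_coverage(seq)
            i = 0
            while i < len(seq):
                trial = seq[:i] + seq[i + 1:]
                if fault_coverage(trial) == target:
                    seq = trial     # omit vector i permanently
                else:
                    i += 1          # keep vector i and move on
            return seq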

  • Hybrid load-value predictors

    Publication Year: 2002, Page(s): 759 - 774
    Cited by: Papers (8)

    Load instructions diminish processor performance in two ways. First, due to the continuously widening gap between CPU and memory speed, the relative latency of load instructions grows constantly and slows program execution. Second, memory reads limit the available instruction-level parallelism, since instructions that use the result of a load must wait for the memory access to complete before they can start executing. Load-value predictors alleviate both problems by allowing the CPU to continue processing speculatively without having to wait for load instructions, which can significantly improve execution speed. In this paper, we investigate the performance of all hybrids that can be built out of a register value, a last value, a stride 2-delta, a last four values, and a finite context method predictor. Our analysis shows that hybrids can deliver 25 percent more speedup than the best single-component predictors. Our hybridization study identifies the register value + stride 2-delta predictor as one of the best two-component hybrids; it matches or exceeds the speedup of two-component hybrids from the literature despite its substantially smaller and simpler design. Of all the predictors we studied, the register value + stride 2-delta + last four values hybrid performs best.
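
    A sketch of the stride 2-delta component (per static load, a new stride is committed only after the same delta occurs twice in a row; the table layout is an illustrative assumption):

        class Stride2Delta:
            def __init__(self):
                self.table = {}  # load PC -> (last value, stride, last delta)

            def predict(self, pc):
                last, stride, _ = self.table.get(pc, (0, 0, None))
                return last + stride

            def update(self, pc, value):
                last, stride, last_delta = self.table.get(pc, (value, 0, None))
                delta = value - last
                if delta == last_delta:   # same delta twice in a row: commit it
                    stride = delta
                self.table[pc] = (value, stride, delta)

    A hybrid typically runs several such components side by side with per-component confidence counters and, for each load, uses the prediction of the currently most reliable component.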

  • Randomized algorithms: a system-level, poly-time analysis of robust computation

    Publication Year: 2002, Page(s): 740 - 749
    Cited by: Papers (9)

    This paper provides a methodology for analyzing the performance degradation of a computation once it has been affected by perturbations. The suggested methodology, by relaxing all assumptions made in the related literature, provides design guidelines for the subsequent implementation of complex computations in physical devices. Implementation issues, such as finite-precision representation, fluctuations of the production parameters, and aging effects, can be studied directly at the system level, independent of any technological aspect and quantization technique. Only the behavioral description of the computational flow, which is assumed to be Lebesgue-measurable, and the architecture to be investigated are needed. The analysis is based on the theory of randomized algorithms, which transforms the computationally intractable problem of robustness investigation into a polynomial-time algorithm by resorting to probability.
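
    The core mechanism is simple to sketch (illustrative only; run and perturb stand in for the behavioral description of the computation and for the perturbation model):

        import math, random

        def sample_size(eps, delta):
            # Hoeffding/Chernoff bound: this many i.i.d. samples estimate a
            # probability to within +/- eps with confidence 1 - delta
            return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

        def robustness(run, nominal, perturb, loss_threshold, eps=0.02, delta=0.01):
            n = sample_size(eps, delta)   # polynomial in 1/eps and ln(1/delta)
            bad = sum(run(perturb(nominal)) > loss_threshold for _ in range(n))
            return 1 - bad / n            # estimated P(degradation acceptable)

        run = lambda a: abs(a * 3.0 - 3.0)                   # toy computation y = a*x
        perturb = lambda a: a * random.uniform(0.95, 1.05)   # +/- 5 percent drift
        print(robustness(run, 1.0, perturb, loss_threshold=0.1))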

  • Evaluation of a diagnosis algorithm for regular structures

    Publication Year: 2002, Page(s): 850 - 865
    Cited by: Papers (5)

    The problem of identifying the faulty units in regularly interconnected systems is addressed. The diagnosis is based on mutual tests of units that are adjacent in the "system graph" describing the interconnection structure. This paper evaluates an algorithm named EDARS (Efficient Diagnosis Algorithm for Regular Structures). The diagnosis provided by this algorithm is provably correct and almost complete with high probability. Diagnosis correctness is guaranteed if the cardinality of the actual fault set is below a "syndrome-dependent bound," asserted by the algorithm itself along with the diagnosis. The evaluation of EDARS relies on extensive simulations covering grids, hypercubes, and cube-connected cycles (CCCs). The simulation experiments show that the degree of the system graph has a strong impact on diagnosis completeness and affects the "syndrome-dependent bound" ensuring correctness. Furthermore, a comparative analysis of the performance of EDARS, with hypercubes and CCCs on one side and grids of the same size and degree on the other, shows that the diameter and bisection width of the system graph also influence diagnosis correctness and completeness.
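
    A sketch of the test setting only (a PMC-style invalidation model, assumed here for illustration; EDARS itself is not reproduced): adjacent units test each other, and only a fault-free tester's verdict can be trusted.

        import random

        def syndrome(edges, faulty):
            # edges: adjacent unit pairs of the system graph (grid, hypercube, CCC)
            s = {}
            for u, v in edges:
                for tester, tested in ((u, v), (v, u)):
                    if tester in faulty:
                        s[(tester, tested)] = random.randint(0, 1)  # arbitrary verdict
                    else:
                        s[(tester, tested)] = int(tested in faulty)
            return s

        edges = [((0, 0), (0, 1)), ((0, 0), (1, 0)), ((0, 1), (1, 1)), ((1, 0), (1, 1))]
        print(syndrome(edges, faulty={(1, 1)}))  # 2 x 2 grid, one faulty unit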

  • Remote device command and resource sharing over the Internet: a new approach based on a distributed layered architecture

    Publication Year: 2002, Page(s): 787 - 792
    Cited by: Papers (1) | Patents (3)

    In addition to the remote access and computer-augmented functionality brought about by the earliest modalities of distance operation, technical advances in the form of telematics have opened up a whole new range of applications, one of which, resource sharing, deserves special attention. Nonetheless, while distance operation through computer networks, and particularly over the Internet, has attracted a great deal of attention in recent years, there is still a noticeable lack of systematic treatment of the essential issues in this field. This paper presents an overview of current trends in this emerging interdisciplinary area and briefly comments on the fundamentals of telematics-supported distance operation. A case study is used to report on an experience involving methodological investigations in this area.


Aims & Scope

The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field.


Meet Our Editors

Editor-in-Chief
Paolo Montuschi
Politecnico di Torino
Dipartimento di Automatica e Informatica
Corso Duca degli Abruzzi 24 
10129 Torino - Italy
e-mail: pmo@computer.org