
Computers and Digital Techniques, IEE Proceedings E

Issue 6 • Nov 1988

  • Scale based algorithm for recognition of blurred planar objects

    Publication Year: 1988, Page(s): 307 - 311
    Cited by: Papers (1)
    PDF (432 KB)

    The paper presents an algorithm, based on scale-space analysis, for the recognition of blurred planar objects. Apart from satisfying the usual requirements of invariance under translation, rotation and scaling, the algorithm is also invariant under blurring, that is, across all levels of detail or scales. The technique makes use of the spatial coincidence of the inflexion points on the object contour at all scales, and of the fact that no new such points are created as the object becomes more blurred. The algorithm therefore searches for the best match of these points at a single scale in the scale-space image. The algorithm was implemented on an IBM/AT in the Modula-2 programming language, and was tested on a group of 20 geographical maps of different sizes and at varying distances from the camera. A recognition rate of 95 to 100% and an average recognition time of 2.5 seconds were obtained through an efficient organisation of the template dictionary. (A sketch of the inflexion-point detection follows below.)

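    The core step of the method is locating curvature zero crossings (the inflexion points) of a contour smoothed at increasing scales. The following Python fragment is a minimal sketch of that step under stated assumptions (a closed contour sampled as coordinate arrays); it is an illustration only, not the authors' Modula-2 implementation, and the matching stage is omitted.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def inflexion_points(x, y, sigma):
            # Smooth the closed contour at scale sigma ("wrap" preserves closure).
            xs = gaussian_filter1d(x, sigma, mode="wrap")
            ys = gaussian_filter1d(y, sigma, mode="wrap")
            # Numerator of the signed curvature: x'y'' - y'x''.
            dx, dy = np.gradient(xs), np.gradient(ys)
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            curvature = dx * ddy - dy * ddx
            # Inflexion points are the sign changes of the curvature.
            s = np.sign(curvature)
            return np.where(s * np.roll(s, 1) < 0)[0]

    Tracking these indices as sigma grows yields the scale-space image the abstract refers to; because blurring creates no new zero crossings, coarse-scale points can then be matched at a single scale.
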
  • Techniques for improving stability rate of linear predictive image coding schemes

    Publication Year: 1988, Page(s): 298 - 306
    PDF (724 KB)

    Following the considerable success that linear predictive coding (LPC) has had in speech compression, the technique has been applied to the coding of two-dimensional (2-D) signals such as natural images. Unlike its one-dimensional (1-D) counterpart, the 2-D technique is not guaranteed to be stable. It is found that too much correlation in the signal causes a significant proportion of the analysis frames to produce unstable prediction filters, rendering the decoded image unintelligible. The paper introduces two methods for systematically reducing the signal correlation, and hence improving the so-called 'stability rate' of a 2-D LPC system. The first method is based on the 2-D Fourier transform, and the second on the 2-D Hadamard transform. The effectiveness of each method is illustrated, followed by a cost analysis based on algorithm complexity and bit-rate overhead. (A sketch of the 2-D Hadamard step follows below.)

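    As a rough illustration of the second method, the fragment below applies a 2-D Walsh-Hadamard transform to an image block, the kind of decorrelating step the abstract describes. The block size and normalisation are assumptions; the paper's bit allocation and stability measurements are not reproduced.

        import numpy as np

        def hadamard_2d(block):
            # Build a Hadamard matrix by the Sylvester construction;
            # the block side must be a power of two.
            n = block.shape[0]
            h = np.array([[1.0]])
            while h.shape[0] < n:
                h = np.block([[h, h], [h, -h]])
            # Separable 2-D transform; dividing by n makes it orthonormal.
            return h @ block @ h.T / n
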
  • ECL fault modelling

    Publication Year: 1988, Page(s): 312 - 317
    Cited by: Papers (6)
    PDF (464 KB)

    A procedure for describing an ECL circuit at the gate level is proposed. All voltages and currents which switch during circuit operation are described by logic variables, and thus the 'stuck line' model can be applied effectively to describe circuit failures. Faults resulting from open connections and short circuits between transistor terminals are considered in detail. (A toy fault-injection sketch follows below.)

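    A toy illustration of the stuck-line idea is given below, assuming a NOR-based netlist (NOR being the natural ECL gate); the nets and gates are invented, and the paper's treatment of switching currents is not modelled.

        def simulate(netlist, inputs, stuck=None):
            # netlist: list of (output_net, gate_function, input_nets);
            # stuck: optional (net, value) forcing one line to 0 or 1.
            values = dict(inputs)
            if stuck and stuck[0] in values:
                values[stuck[0]] = stuck[1]
            for out, fn, ins in netlist:
                values[out] = fn(*(values[i] for i in ins))
                if stuck and out == stuck[0]:
                    values[out] = stuck[1]
            return values

        NOR = lambda a, b: int(not (a or b))
        netlist = [("n1", NOR, ("a", "b")), ("y", NOR, ("n1", "c"))]
        good = simulate(netlist, {"a": 0, "b": 0, "c": 0})
        bad = simulate(netlist, {"a": 0, "b": 0, "c": 0}, stuck=("n1", 0))
        print(good["y"], bad["y"])  # prints "0 1": this pattern detects n1 stuck-at-0
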
  • Fault tolerance: step towards WSI

    Publication Year: 1988, Page(s): 289 - 297
    PDF (896 KB)

    Since the early 1960s, semiconductor chips have matured from single-transistor devices to chips carrying a million transistors. This increase in integration has been achieved by reducing transistor dimensions and by increasing chip sizes. However, physical limits are being reached as submicrometre dimensions are approached, so there is a need for larger chip, and even wafer, sizes. Manufacturers use defect tolerance to maintain the increase in chip complexity: acceptable yields are achieved even with larger chip sizes. In the paper, various aspects of defect-tolerant design are investigated. It is suggested that yield improvement is only one of the many possible gains offered by defect tolerance. The advantages and disadvantages of the many approaches to defect tolerance are discussed. Wafer-scale integration appears to be the target size for defect-tolerant IC chips, but as wafer size continues to increase, the constraints of a wafer-scale implementation of an architecture must be weighed against the advantages that such devices offer. The paper also examines a particular subsystem implemented in VLSI, ULSI and WSI, and considers the relative merits of each implementation. (A worked yield example follows below.)

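    For a sense of why defect tolerance pays off at large areas, here is a worked example using the classical Poisson yield model Y = exp(-A*D) with module sparing. The defect density and areas are invented numbers for illustration, not the paper's data.

        import math
        from math import comb

        def yield_poisson(area_cm2, defects_per_cm2):
            # Probability that a circuit of the given area has no fatal defect.
            return math.exp(-area_cm2 * defects_per_cm2)

        def yield_with_spares(n_needed, n_total, module_area, d0):
            # At least n_needed of n_total identical modules must be defect-free.
            p = yield_poisson(module_area, d0)
            return sum(comb(n_total, k) * p**k * (1 - p)**(n_total - k)
                       for k in range(n_needed, n_total + 1))

        # 100 modules of 0.05 cm^2 each, at 1 defect/cm^2:
        print(yield_poisson(100 * 0.05, 1.0))          # ~0.7% monolithic yield
        print(yield_with_spares(100, 110, 0.05, 1.0))  # ~99% with ten spare modules
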
  • Wafer scale integration

    Publication Year: 1988, Page(s): 281 - 288
    PDF (832 KB)

    The history of integrated circuit development has been one of continually decreasing feature size, accompanied by a much more gradual increase in chip dimensions. There are obvious limits to both of these trends. In the case of chip size, clearly, the limit is the whole wafer: a 15 cm wafer offers a circuit area of π(7.5 cm)² ≈ 177 cm², roughly 175 times larger than current VLSI chips of around 1 cm². Such a step function in chip size offers a massive increase in potential functional complexity, but also implies a radical rethink of design, fabrication, test and packaging philosophy. The article discusses the problems associated with the development of WSI technology and reviews some of the strategies that have been adopted in the pursuit of WSI sub-systems.

  • Algorithms for multiplication in Galois field for implementation using systolic arrays

    Publication Year: 1988, Page(s): 336 - 339
    PDF (372 KB)

    Operations in finite fields find diverse applications, and circuits have been designed for carrying out such operations. In the paper, two circuits that carry out multiplication in GF(2^p) are presented. These circuits are suitable for implementation using VLSI techniques and are simpler than existing circuits. The architecture used is that of systolic arrays, consisting of a regular interconnection of simple cells. (A bit-serial multiplication sketch follows below.)

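    The work each systolic cell does reduces to shift-and-add multiplication modulo an irreducible polynomial. The fragment below is a bit-serial software sketch of GF(2^p) multiplication; the array structure and cell timing of the paper's circuits are not reproduced, and the GF(2^4) modulus is only an example.

        def gf_mult(a, b, p, modulus):
            # Field elements are integers whose bits are polynomial coefficients;
            # modulus is an irreducible polynomial of degree p.
            result = 0
            for _ in range(p):
                if b & 1:
                    result ^= a   # add (XOR) the current multiple of a
                b >>= 1
                a <<= 1
                if a >> p:        # degree overflow: reduce by the field polynomial
                    a ^= modulus
            return result

        # GF(2^4) with modulus x^4 + x + 1:
        print(gf_mult(0b0110, 0b0101, 4, 0b10011))  # prints 13, i.e. 0b1101
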
  • Towards effective nonlinear cryptosystem design

    Publication Year: 1988, Page(s): 325 - 335
    Cited by: Papers (1)
    PDF (756 KB)

    The paper describes the design of nonlinear Boolean functions. The first part reviews the case of Boolean functions of n variables. The second part addresses the generation of Boolean permutations from which the collection of nonlinear Boolean functions is obtained. (A sketch for measuring nonlinearity follows below.)

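    A standard figure of merit for such designs is nonlinearity: the Hamming distance from a Boolean function to the nearest affine function, computable with the Walsh-Hadamard transform. The sketch below measures it; this is a conventional tool, not necessarily the paper's own construction.

        import numpy as np

        def nonlinearity(truth_table):
            # truth_table: outputs in +/-1 form (+1 for 0, -1 for 1), length 2^n.
            w = np.array(truth_table, dtype=float)
            n = int(np.log2(w.size))
            h = 1
            while h < w.size:  # in-place fast Walsh-Hadamard transform
                for i in range(0, w.size, 2 * h):
                    a, b = w[i:i+h].copy(), w[i+h:i+2*h].copy()
                    w[i:i+h], w[i+h:i+2*h] = a + b, a - b
                h *= 2
            # Distance to the nearest affine function.
            return int(2**(n - 1) - np.max(np.abs(w)) / 2)

        # f(x1, x2) = x1 AND x2:
        print(nonlinearity([1, 1, 1, -1]))  # prints 1
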
  • Simplified procedure for correcting both errors and erasures of Reed-Solomon code using Euclidean algorithm

    Publication Year: 1988, Page(s): 318 - 324
    Cited by: Papers (5)
    PDF (440 KB)

    It is well known that the Euclidean algorithm, or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation for decoding a Reed-Solomon (RS) code. In the paper, a simplified procedure is developed, and proved to correct erasures as well as errors, by replacing the initial conditions of the Euclidean algorithm with the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained simultaneously and simply, by the Euclidean algorithm alone. With this improved technique, the complexity of time-domain Reed-Solomon decoders for correcting both errors and erasures is reduced substantially compared with previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code. (A sketch of the key-equation step follows below.)

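    The heart of the procedure is an extended Euclidean iteration whose initial conditions carry the erasure information. The sketch below shows that loop over GF(2^4), the field of the (15, 9) example; the polynomial helpers are minimal stand-ins, the stopping threshold follows the usual floor((d - 1 + e)/2) rule, and syndrome computation and errata-value evaluation are omitted.

        def gf_mul(a, b, mod=0b10011, p=4):  # GF(16), modulus x^4 + x + 1
            r = 0
            for _ in range(p):
                if b & 1:
                    r ^= a
                b >>= 1
                a <<= 1
                if a >> p:
                    a ^= mod
            return r

        def gf_inv(a):
            return next(x for x in range(1, 16) if gf_mul(a, x) == 1)

        def deg(f):  # coefficient lists are lowest-degree first
            return max((i for i, c in enumerate(f) if c), default=-1)

        def poly_add(f, g):  # addition and subtraction coincide in GF(2^p)
            m = max(len(f), len(g))
            return [(f[i] if i < len(f) else 0) ^ (g[i] if i < len(g) else 0)
                    for i in range(m)]

        def poly_mul(f, g):
            out = [0] * (len(f) + len(g) - 1)
            for i, a in enumerate(f):
                for j, b in enumerate(g):
                    out[i + j] ^= gf_mul(a, b)
            return out

        def poly_divmod(f, g):
            q, r = [0] * len(f), f[:]
            while deg(r) >= deg(g) >= 0:
                shift = deg(r) - deg(g)
                c = gf_mul(r[deg(r)], gf_inv(g[deg(g)]))
                q[shift] ^= c
                r = poly_add(r, poly_mul([0] * shift + [c], g))
            return q, r

        def solve_key_equation(forney_syndrome, erasure_locator, d_minus_1, e):
            # Extended Euclid on x^(d-1) and the Forney syndrome T(x); seeding
            # the accumulator with the erasure locator makes the loop deliver
            # the errata locator and errata evaluator simultaneously.
            r_prev, r_cur = [0] * d_minus_1 + [1], forney_syndrome
            s_prev, s_cur = [0], erasure_locator
            while deg(r_cur) >= (d_minus_1 + e) // 2:
                q, rem = poly_divmod(r_prev, r_cur)
                r_prev, r_cur = r_cur, rem
                s_prev, s_cur = s_cur, poly_add(s_prev, poly_mul(q, s_cur))
            return s_cur, r_cur  # errata locator, errata evaluator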