
IEEE Transactions on Circuits and Systems for Video Technology

Issue 3 • June 1998


13 articles in this issue
  • A modular high-throughput architecture for logarithmic search block-matching motion estimation

    Publication Year: 1998 , Page(s): 299 - 315
    Cited by:  Papers (13)  |  Patents (3)
    PDF (832 KB)

    A high-throughput modular architecture for a logarithmic search block-matching algorithm is presented. The design effort focuses on exploiting the search-area data dependencies through special data-input ordering constraints. The input bandwidth problem is solved by a random-access on-chip memory, and a simple address generation procedure is described. Furthermore, the architecture can handle a large search range with unequal horizontal and vertical spans using a technique called pipeline interleaving. Compared to existing architectures for the three-step search block-matching algorithm (BMA), this architecture delivers a high throughput rate with fewer input lines and is linearly scalable.

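    The logarithmic (three-step) search that this architecture accelerates evaluates nine candidate displacements around the current best match and then halves the step size. Below is a minimal NumPy sketch of the search itself, not the paper's pipelined architecture; the 4/2/1 step schedule and the `sad` cost are the conventional choices, not values taken from the paper.

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and a reference patch."""
    h, w = block.shape
    patch = ref[y:y + h, x:x + w]
    return np.abs(block.astype(int) - patch.astype(int)).sum()

def three_step_search(block, ref, y0, x0, step=4):
    """Classic three-step (logarithmic) search around (y0, x0)."""
    h, w = block.shape
    by, bx = y0, x0                      # best displacement found so far
    while step >= 1:
        candidates = [(by + dy * step, bx + dx * step)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        # keep candidates whose patch stays inside the reference frame
        valid = [(y, x) for y, x in candidates
                 if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w]
        by, bx = min(valid, key=lambda p: sad(block, ref, *p))
        step //= 2                       # halve the step: 4 -> 2 -> 1
    return by - y0, bx - x0              # motion vector for this block
```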
  • An object-oriented coder using block-based motion vectors and residual image compensation

    Publication Year: 1998 , Page(s): 316 - 327
    Cited by:  Papers (3)
    PDF (388 KB)

    This paper proposes a two-stage global motion parameter estimation method using block-based motion vectors, together with a model-failure (MF) object compensation algorithm that applies object-oriented fractal mapping to the residual image, for an object-oriented coder. In the first stage, coarse motion parameters are estimated by fitting hierarchically computed block-based motion vectors to a six-parameter model; in the second stage, the estimated parameters are refined by the gradient method using an image reconstructed from the first-stage parameters. The prediction error of the six-parameter model is locally reduced by blockwise motion parameter correction using the residual image. Finally, the MF object is compensated by object-oriented fractal mapping of the previous residual image into the current one, in which geometric affine mapping is followed by the massic transformation. For MF object compensation, the choice between motion parameter compensation and fractal mapping is made by a validity test. Computer simulation results show that the proposed method outperforms conventional ones in terms of peak signal-to-noise ratio (PSNR) and compression ratio (CR).

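    The first stage amounts to a least-squares fit of a six-parameter (affine) motion model to the block motion field. A minimal sketch under the assumption that block centers `(xs, ys)` and their measured motion vectors `(us, vs)` are given as NumPy arrays; the names and the plain least-squares solver are illustrative.

```python
import numpy as np

def fit_affine_motion(xs, ys, us, vs):
    """Least-squares fit of the six-parameter affine motion model
       u = a1*x + a2*y + a3,  v = a4*x + a5*y + a6
       to block centers (xs, ys) with motion vectors (us, vs)."""
    A = np.column_stack([xs, ys, np.ones_like(xs, dtype=float)])
    a123, *_ = np.linalg.lstsq(A, us, rcond=None)
    a456, *_ = np.linalg.lstsq(A, vs, rcond=None)
    return np.concatenate([a123, a456])   # (a1, ..., a6)
```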
  • A new affine transformation: its theory and application to image coding

    Publication Year: 1998 , Page(s): 269 - 274
    Cited by:  Papers (12)
    PDF (252 KB)

    Fractal image coding has attracted interest for its low bit rates, but the reconstructed images are only of medium quality, which has kept the technique from practical use. To improve compression fidelity, a new affine transformation is proposed; its contractivity requirement is analyzed, and the optimal parameters are derived by the least-squares method. The new affine transformation has been applied to image coding in practice. Experiments show that the PSNR can reach 28.7 dB at a compression ratio (CR) of 16.4 for the 256×256×8 “Lena” image. Comparison with other fractal coding schemes shows that the new affine transformation effectively improves reconstructed image quality.

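    For context, conventional fractal coding fits the grey-level affine map s·d + o from a domain block d to a range block r by least squares; the paper's new transformation generalizes this map. A sketch of the standard fit only, not the paper's extension:

```python
import numpy as np

def optimal_affine_params(domain, rng):
    """Least-squares contrast s and brightness o so that s*domain + o best
       approximates the range block (the classic fractal-coding fit)."""
    d, r = domain.astype(float).ravel(), rng.astype(float).ravel()
    n = d.size
    denom = n * (d * d).sum() - d.sum() ** 2
    if denom == 0:                        # flat domain block: use mean only
        return 0.0, r.mean()
    s = (n * (d * r).sum() - d.sum() * r.sum()) / denom
    o = (r.sum() - s * d.sum()) / n
    return s, o
```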
  • Programmable H.263-based wireless video transceivers for interference-limited environments

    Publication Year: 1998 , Page(s): 275 - 286
    Cited by:  Papers (25)  |  Patents (4)
    PDF (288 KB)

    In order to exploit the nonuniformly distributed channel capacity over the cell area, an intelligent 7.3-kB programmable videophone transceiver is proposed that exploits the higher channel capacity of uninterfered, high-channel-quality cell areas while supporting more robust, lower bit-rate operation in more heavily interfered areas. The system employs an enhanced H.263-compatible video codec. Since most existing wireless systems operate at a constant bit rate, the video codec's bit-rate fluctuation is smoothed by a novel adaptive packetization algorithm, which can support automatic repeat request (ARQ)-assisted operation in wireless distributive video transmission; in the proposed low-latency interactive videophone transceiver, however, ARQ is not used. Instead, corrupted packets are dropped by both the local and remote decoders to prevent error propagation. The minimum required channel signal-to-interference-plus-noise ratio (SINR) was in the range of 8-28 dB for the various transmission scenarios, while the corresponding video peak signal-to-noise ratio (PSNR) was in the range of 32-39 dB. The main system features are summarized.

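    The reconfigurable operation reduces to choosing the highest-rate transceiver mode the current channel supports. A toy sketch of SINR-threshold mode selection; the thresholds and mode descriptions are invented for illustration and are not the paper's configurations.

```python
# Illustrative mode table: (minimum SINR in dB, transceiver configuration).
# Thresholds and mode names are assumptions, not the paper's values.
MODES = [
    (28.0, "high-rate mode: dense modulation, high video bit rate"),
    (18.0, "medium-rate mode"),
    (8.0,  "robust mode: sparse modulation, low video bit rate"),
]

def select_mode(sinr_db):
    """Pick the highest-rate mode whose SINR requirement is met."""
    for threshold, mode in MODES:
        if sinr_db >= threshold:
            return mode
    return None  # channel below all thresholds: defer transmission
```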
  • Approximation of calculations for forward discrete cosine transform

    Publication Year: 1998 , Page(s): 264 - 268
    Cited by:  Papers (11)  |  Patents (1)
    PDF (128 KB)

    This paper presents new schemes to reduce the computation of the discrete cosine transform (DCT) with negligible peak signal-to-noise ratio (PSNR) degradation. The methods can be used in software implementations of current video standard encoders, for example, H.26x and MPEG. We investigated the relationship between the quantization parameters and the position of the last nonzero DCT coefficient after quantization, and use that information to decide adaptively whether to calculate all 8×8 DCT coefficients or only a subset. To reduce computation further, instead of computing the exact DCT coefficients, we propose a method to approximate them, which leads to significant computation savings. The results show that, in practical situations, significant computation reductions can be achieved at negligible PSNR cost. The proposed method also saves computation in the quantization stage.

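    The pruning idea can be illustrated directly: if quantization will zero everything outside the top-left k×k coefficients, only those need computing. A minimal sketch; `coeffs_to_keep` is an invented stand-in for the paper's measured relationship between the quantizer and the last nonzero coefficient.

```python
import numpy as np

def dct_basis(n=8):
    """Orthonormal DCT-II basis C, so that coefficients = C @ block @ C.T."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos((2 * k[None, :] + 1) * k[:, None] * np.pi / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def coeffs_to_keep(qp):
    """Invented rule: a coarser quantizer zeroes more coefficients,
       so fewer need to be computed."""
    return 8 if qp < 8 else (4 if qp < 16 else 2)

def pruned_dct(block, qp):
    """Compute only the top-left k x k DCT coefficients of an 8x8 block."""
    k = coeffs_to_keep(qp)
    C = dct_basis(8)
    return C[:k, :] @ block.astype(float) @ C[:k, :].T
```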
  • Shape-adaptive DCT with block-based DC separation and ΔDC correction

    Publication Year: 1998 , Page(s): 237 - 242
    Cited by:  Papers (37)  |  Patents (5)
    PDF (188 KB)

    This paper refers to the shape-adaptive DCT algorithm (SA-DCT) originally proposed by Sikora and Makai (see ibid., vol. 5, pp. 59-62, 1995). The SA-DCT has been developed in the framework of the ongoing MPEG-4 standardization phase of ISO/IEC and has been included in the video verification model of MPEG-4. In this context, the focus of the paper is to highlight a systematic performance limitation of the conventional SA-DCT and to propose an extended version (ΔDC-SA-DCT) that avoids this restriction in general. The modification considerably improves the efficiency of the SA-DCT and can easily be implemented in existing SA-DCT tools.

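    In the SA-DCT, the object pixels of each column are shifted to the top of the block and transformed with a DCT whose length matches the column's pixel count; the same is then done along the rows. A sketch of the vertical stage only, under the assumption of a boolean object `mask`; the block-based DC separation and ΔDC correction that this paper adds are deliberately omitted.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n (n >= 1)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos((2 * k[None, :] + 1) * k[:, None] * np.pi / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def sa_dct_vertical(block, mask):
    """Vertical SA-DCT stage: shift each column's object pixels to the top
       and transform them with a DCT of matching length."""
    out = np.zeros(block.shape)
    for j in range(block.shape[1]):
        col = block[mask[:, j], j].astype(float)   # object pixels only
        if col.size:
            out[:col.size, j] = dct_matrix(col.size) @ col
    return out
```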
  • Reduction of blocking artifact in block-coded images using wavelet transform

    Publication Year: 1998 , Page(s): 253 - 257
    Cited by:  Papers (22)  |  Patents (3)
    PDF (168 KB)

    We propose a simple yet efficient method that reduces the blocking artifact in block-coded images by using a wavelet transform. An image is treated as a set of one-dimensional signals, so all processing, including the wavelet transform, is executed one-dimensionally. The artifact reduction operation is applied only to the neighborhood of each block boundary, in the wavelet domain at the first and second scales. The key idea is to remove the blocking component that appears as stepwise discontinuities at block boundaries. Each block boundary is classified as a shade region, a smooth-edge region, or a step-edge region, with classification thresholds selected adaptively for each coded image. The performance is evaluated on 512×512 images JPEG-coded at 30:1 and 40:1 compression ratios. Experimental results show that the proposed method yields not only a PSNR improvement of about 0.69-1.06 dB, but also a subjective quality nearly free of blocking artifacts and edge blur.

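    A toy version of the mechanism, using a single-scale Haar transform and zeroing the detail coefficients that flank each 8-pixel block boundary. The paper instead classifies each boundary and adapts its thresholds per image, so this illustrates the idea rather than the method; an even-length row is assumed.

```python
import numpy as np

def haar_step(x):
    """One level of a 1-D Haar transform: approximation and detail bands."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def inverse_haar_step(a, d):
    """Perfect-reconstruction inverse of haar_step."""
    y = np.empty(2 * a.size)
    y[0::2] = (a + d) / np.sqrt(2.0)
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y

def deblock_row(row, block=8):
    """Zero the scale-1 detail coefficients flanking each block boundary,
       where a blocking step would show up (row length must be even)."""
    a, d = haar_step(np.asarray(row, float))
    half = block // 2                    # boundary spacing at scale 1
    for b in range(half, d.size, half):
        d[b - 1] = 0.0
        d[b] = 0.0
    return inverse_haar_step(a, d)
```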
  • Disparity and occlusion estimation in multiocular systems and their coding for the communication of multiview image sequences

    Publication Year: 1998 , Page(s): 328 - 344
    Cited by:  Papers (35)  |  Patents (2)
    PDF (592 KB)

    An efficient disparity estimation and occlusion detection algorithm for multiocular systems is presented. A dynamic programming algorithm, using a multiview matching cost as well as pure geometrical constraints, is used to estimate disparity and to identify the occluded areas in the extreme left and right views. A significant advantage of the proposed approach is that the exact number of views in which each point appears (is not occluded) can be determined. The disparity and occlusion information obtained may then be used to create virtual images from intermediate viewpoints. Furthermore, techniques are developed for the coding of occlusion and disparity information, which is needed at the receiver for the reproduction of a multiview sequence from the two encoded extreme views. Experimental results illustrate the performance of the proposed techniques.

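    The dynamic-programming core can be shown in miniature on a single two-view scanline: a Viterbi pass over candidate disparities with a data cost and a smoothness penalty. This is a toy stand-in; the paper's cost aggregates all views and handles occlusions with geometric constraints, and the `smooth` and `occ` penalties here are invented.

```python
import numpy as np

def scanline_disparity(left, right, max_d, smooth=5.0, occ=1e3):
    """Viterbi-style DP over disparities along one scanline: squared-
       difference data cost, linear smoothness penalty, and a flat cost
       for matches that fall outside the right scanline."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    n, D = left.size, max_d + 1
    data = np.full((n, D), occ)          # data[x, d]: cost of disparity d
    for d in range(D):
        data[d:, d] = (left[d:] - right[:n - d]) ** 2
    penalty = smooth * np.abs(np.arange(D)[:, None] - np.arange(D)[None, :])
    cost, back = data[0].copy(), np.zeros((n, D), dtype=int)
    for x in range(1, n):
        total = cost[None, :] + penalty  # total[d, d_prev]
        back[x] = np.argmin(total, axis=1)
        cost = total.min(axis=1) + data[x]
    disp = np.zeros(n, dtype=int)        # backtrack the best path
    disp[-1] = int(np.argmin(cost))
    for x in range(n - 1, 0, -1):
        disp[x - 1] = back[x, disp[x]]
    return disp
```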
  • A novel algorithm for low-power image and video coding

    Publication Year: 1998 , Page(s): 258 - 263
    Cited by:  Papers (1)
    PDF (124 KB)

    A novel scheme for low-power image and video coding and decoding is presented. It is based on vector quantization (VQ) and reduces VQ's memory requirements, which are a major disadvantage in terms of power consumption. The main innovation is the use of small codebooks combined with simple but efficient transformations applied to the codewords during coding, which compensate for the quality degradation introduced by the small codebook size. In this way, the small codebooks are computationally extended, and the coding task becomes computation based rather than memory based, leading to significant power reduction. The parameters of the transformations depend on the image block under coding, so the small codebook is dynamically adapted to each specific image block, yielding image quality comparable to or better than that of classical vector quantization. The algorithm yields power savings of at least a factor of 10 in coding and a factor of 3 in decoding compared to classical full-search vector quantization. Both image quality and power consumption depend strongly on the size of the codebook used.

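    One common way to "computationally extend" a small codebook is to normalize each block by its mean and contrast before the codebook search and let the decoder invert the transform. The sketch below illustrates that flavor of per-block adaptation; it is an assumed scheme in the spirit of the abstract, not the paper's exact transformations.

```python
import numpy as np

def encode_block(block, codebook):
    """Full search of a small codebook after removing the block's mean and
       contrast; the index plus (mean, gain) let the decoder adapt the
       codeword back. codebook: (K, N) array of normalized codewords."""
    x = block.astype(float).ravel()
    m, s = x.mean(), x.std()
    xn = (x - m) / s if s > 0 else np.zeros_like(x)
    errs = ((codebook - xn) ** 2).sum(axis=1)
    return int(np.argmin(errs)), m, s

def decode_block(index, m, s, codebook, shape):
    """Invert the transform: scale and shift the chosen codeword."""
    return (s * codebook[index] + m).reshape(shape)
```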
  • On the POCS-based postprocessing technique to reduce the blocking artifacts in transform coded images

    Publication Year: 1998 , Page(s): 358 - 367
    Cited by:  Papers (67)  |  Patents (10)
    PDF (356 KB)

    We propose a novel postprocessing technique, based on the theory of projections onto convex sets (POCS), to reduce the blocking artifacts in transform-coded images. Our approach assumes that the original image is highly correlated, so the global frequency characteristics of two adjacent blocks are similar to the local characteristics of each block. High-frequency components present in the global characteristics of a decoded image but absent from the local ones are therefore attributed to the blocking artifact. We employ an N-point discrete cosine transform (DCT) to obtain the local characteristics and a 2N-point DCT to obtain the global ones, and derive the relation between the N-point and 2N-point DCT coefficients. A careful comparison of the N-point with the 2N-point DCT coefficients makes it possible to detect the undesired high-frequency components caused mainly by the blocking artifact. We then propose novel convex sets and their projection operators in the DCT domain. The performance of the proposed and conventional techniques is compared on still images decoded by JPEG. The results show that, regardless of image content, the proposed technique yields significantly better performance than conventional techniques in terms of objective quality, subjective quality, and convergence behavior.

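    The detection step can be mimicked in one dimension: take two adjacent N-point blocks, compare their individual N-point DCTs with the 2N-point DCT of their concatenation, and blame any extra high-frequency energy on the boundary step. A sketch of one such smoothing projection; the paper derives the exact N-/2N-point coefficient relation and proves the sets convex, neither of which is reproduced here.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: coefficients = dct_matrix(n) @ signal."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos((2 * k[None, :] + 1) * k[:, None] * np.pi / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def smooth_boundary(a, b, keep):
    """If neither N-point DCT of a or b has energy at index >= keep, any
       2N-point DCT energy of [a, b] above 2*keep is attributed to the
       boundary discontinuity and removed (assumes keep < len(a))."""
    n = a.size
    Cn, C2n = dct_matrix(n), dct_matrix(2 * n)
    if (np.abs(Cn @ a)[keep:] < 1e-6).all() and (np.abs(Cn @ b)[keep:] < 1e-6).all():
        g = C2n @ np.concatenate([a, b])
        g[2 * keep:] = 0.0               # drop boundary-induced high freqs
        y = C2n.T @ g                    # inverse 2N-point DCT
        return y[:n], y[n:]
    return a, b
```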
  • A motion-compensated spatio-temporal filter for image sequences with signal-dependent noise

    Publication Year: 1998 , Page(s): 287 - 298
    Cited by:  Papers (30)  |  Patents (1)
    PDF (504 KB)

    A novel spatio-temporal filter is described for monochrome image sequences with either signal-independent or signal-dependent noise, exploiting both spatial and temporal correlations. Under the assumptions of spatio-temporal separability and temporal stationarity, it is shown that motion-compensated groups of frames can be decorrelated using the Karhunen-Loève transform. Practical filters that work well on a variety of image sequences are developed by first applying the Hadamard transform along the temporal direction and then applying the parametric adaptive Wiener filter to each of the resulting approximately decorrelated transformed images. These transformed images consist of one average image and a set of residual images, which provide useful interpretations of the type of image sequence. The filter's performance is evaluated on different types of image sequences in the database. The procedure, advanced for monochrome sequences, can be generalized to multispectral images, a possibility currently under detailed investigation.

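    A sketch of the transform-then-Wiener pipeline on a motion-compensated group of frames, using SciPy's Hadamard matrix and a simple Lee-type local-statistics Wiener filter. A constant `noise_var` is assumed here, whereas the paper's point is to handle signal-dependent noise; the window size 5 is also arbitrary.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.ndimage import uniform_filter

def wiener_image(img, noise_var):
    """Parametric adaptive (Lee-type) Wiener filter from local statistics."""
    mean = uniform_filter(img, 5)
    var = uniform_filter(img * img, 5) - mean ** 2
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)

def filter_group(frames, noise_var):
    """Hadamard transform along time over a motion-compensated frame group
       (group length must be a power of two), Wiener filtering of each
       transformed image, then the inverse temporal transform."""
    frames = frames.astype(float)            # shape (T, H, W)
    T = frames.shape[0]
    H = hadamard(T) / np.sqrt(T)             # orthonormal Hadamard matrix
    trans = np.tensordot(H, frames, axes=(1, 0))
    filt = np.stack([wiener_image(im, noise_var) for im in trans])
    return np.tensordot(H.T, filt, axes=(1, 0))
```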
  • Evaluation of mesh-based motion estimation in H.263-like coders

    Publication Year: 1998 , Page(s): 243 - 252
    Cited by:  Papers (18)
    PDF (232 KB)

    We present two mesh-based motion estimation algorithms and evaluate their performance when incorporated in an H.263-like block-based video coder. Both algorithms compute nodal motions hierarchically. Within each hierarchy level, the first algorithm (HMMA) minimizes the prediction error in the four elements surrounding each node, with the prediction accomplished by a bilinear mapping; the optimal solution is obtained by a full search within a range defined by the topology of the mesh. The second algorithm (HBMA) minimizes the error in a block surrounding each node, assuming the motion in the block is constant. In both cases, bilinear mapping is used for motion-compensated prediction based on nodal displacements. The two algorithms are compared with an exhaustive block-matching algorithm (EBMA) by evaluating their performance in temporal prediction and in an H.263/TMN4 coder. For prediction alone, HMMA and HBMA yield visually more satisfactory results, even though the PSNRs of the predicted images are on average lower; the coded images also have lower PSNRs at similar bit rates. The coding artifacts differ: the block-based method leads to more severe block distortions, while the mesh-based methods exhibit some warping artifacts. HMMA slightly outperforms HBMA for certain sequences, at the expense of higher computational complexity.

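    The bilinear mapping both algorithms use for prediction interpolates a pixel's displacement from the four nodal motion vectors of its mesh element. A minimal sketch for one square element (size >= 2); nearest-pixel sampling is used for simplicity, and the function and argument names are illustrative.

```python
import numpy as np

def bilinear_warp_element(ref, y0, x0, size, corner_mv):
    """Predict one square mesh element: the displacement at each pixel is
       bilinearly interpolated from the four nodal motion vectors, and the
       reference frame is sampled at the displaced position.
       corner_mv = [[tl, tr], [bl, br]], each a (dy, dx) pair."""
    (tl, tr), (bl, br) = corner_mv
    pred = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            v, u = i / (size - 1), j / (size - 1)   # local coords in [0, 1]
            dy = (1-v)*(1-u)*tl[0] + (1-v)*u*tr[0] + v*(1-u)*bl[0] + v*u*br[0]
            dx = (1-v)*(1-u)*tl[1] + (1-v)*u*tr[1] + v*(1-u)*bl[1] + v*u*br[1]
            yy = int(round(float(np.clip(y0 + i + dy, 0, ref.shape[0] - 1))))
            xx = int(round(float(np.clip(x0 + j + dx, 0, ref.shape[1] - 1))))
            pred[i, j] = ref[yy, xx]
    return pred
```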
  • Blocking artifacts reduction in image compression with block boundary discontinuity criterion

    Publication Year: 1998 , Page(s): 345 - 357
    Cited by:  Papers (25)  |  Patents (23)
    PDF (424 KB)

    This paper proposes a novel blocking artifacts reduction method based on the observation that blocking artifacts are caused by heavy accuracy loss in the transform coefficients during quantization. We define the block boundary discontinuity measure as the sum of the squared differences of pixel values along the block boundary. The proposed method compensates selected transform coefficients so that the resulting image has minimum block boundary discontinuity. The method does not require a particular transform domain in which the compensation must take place; an appropriate transform domain can therefore be selected at the user's discretion. In the experiments, the scheme is applied to DCT-based compressed images to demonstrate its performance.

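    The discontinuity measure itself is straightforward to compute. A minimal sketch over all 8×8 block boundaries of an image, following the abstract's definition; the coefficient-compensation step that minimizes it is not shown.

```python
import numpy as np

def boundary_discontinuity(img, block=8):
    """Sum of squared pixel differences across every block boundary: the
       measure the method minimizes, per the abstract's definition."""
    x = img.astype(float)
    d = 0.0
    for r in range(block, x.shape[0], block):   # horizontal boundaries
        d += ((x[r, :] - x[r - 1, :]) ** 2).sum()
    for c in range(block, x.shape[1], block):   # vertical boundaries
        d += ((x[:, c] - x[:, c - 1]) ** 2).sum()
    return d
```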

Aims & Scope

The emphasis is on, but not limited to:
1. Video A/D and D/A
2. Video Compression Techniques and Signal Processing
3. Multi-Dimensional Filters and Transforms
4. High-Speed Real-Time Circuits
5. Multi-Processor Systems: Hardware and Software
6. VLSI Architecture and Implementation for Video Technology 


Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Dan Schonfeld
Multimedia Communications Laboratory
ECE Dept. (M/C 154)
University of Illinois at Chicago (UIC)
Chicago, IL 60607-7053
tcsvt-eic@tcad.polito.it

Managing Editor
Jaqueline Zelkowitz
tcsvt@tcad.polito.it