
Circuits and Systems for Video Technology, IEEE Transactions on

Issue 11 • Date Nov. 2011


Displaying Results 1 - 24 of 24
  • Table of contents

    Publication Year: 2011 , Page(s): C1
    PDF (71 KB)
    Freely Available from IEEE
  • IEEE Transactions on Circuits and Systems for Video Technology publication information

    Publication Year: 2011 , Page(s): C2
    PDF (41 KB)
    Freely Available from IEEE
  • One-Sided ρ-GGD Source Modeling and Rate-Distortion Optimization in Scalable Wavelet Video Coder

    Publication Year: 2011 , Page(s): 1557 - 1570
    Cited by:  Papers (1)
    PDF (700 KB) | HTML

    We develop an accurate source model, one-sided ρ-generalized Gaussian distribution (GGD), for approximating the residual signals in scalable wavelet video coding. An efficient piecewise linear expression is suggested to estimate the shape parameter of the one-sided ρ-GGD. We also improve the model accuracy in matching the real data by modifying the ρ parameter estimation formula. Continuing our previous work on developing the motion information gain metric to measure the motion information efficiency, we now incorporate the one-sided ρ-GGD model in the cost function, which is used for deciding the motion vectors and motion estimation mode in scalable wavelet video coding. Compared with the conventional Lagrangian optimization, our simulation results show that the new mode decision method generally improves the peak signal-to-noise ratio performance in the combined signal-to-noise ratio and temporal scalability cases.

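For reference, the one-sided generalized Gaussian family the authors build on has a simple closed-form density. The sketch below evaluates the standard one-sided GGD (an illustration of the distribution family only, not the paper's ρ-parameterized estimator), where `alpha` is the scale and `beta` the shape parameter:

```python
import math

def one_sided_ggd_pdf(x, alpha, beta):
    """One-sided GGD density on x >= 0:
    f(x) = beta / (alpha * Gamma(1/beta)) * exp(-(x/alpha)**beta).
    beta = 2 gives a half-Gaussian; beta = 1 gives a one-sided Laplacian."""
    return beta / (alpha * math.gamma(1.0 / beta)) * math.exp(-((x / alpha) ** beta))

# The paper fits the shape parameter to residual statistics; here we only
# sanity-check that the density integrates to one (midpoint rule on [0, 30]).
area = sum(one_sided_ggd_pdf(0.01 * i + 0.005, 1.0, 2.0) * 0.01 for i in range(3000))
```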
  • Learning to Extract Focused Objects From Low DOF Images

    Publication Year: 2011 , Page(s): 1571 - 1580
    Cited by:  Papers (5)
    PDF (506 KB) | HTML

    This paper proposes an approach to extract focused objects (i.e., attention objects) from low depth-of-field images. To recognize the focused object, we decompose the image into multiple regions, which are described by using three types of visual descriptors. Each descriptor is extracted from a representation of some aspects of local appearance, e.g., a spatially localized texture, color, or geometrical property. Therefore, the focus detection of a region can be achieved by the classification of extracted visual descriptors based on a binary classifier. We employ a boosting algorithm to learn the classifier with a cascaded decision structure. Given a test image, initial segmentation can be achieved using the obtained classification results. Finally, we apply a post-processing technique to improve the results by incorporating region grouping and pixel-level segmentation. Experimental evaluation on a number of images demonstrates the performance advantages of the proposed method when compared with state-of-the-art methods.

  • A Lossless Color Image Compression Architecture Using a Parallel Golomb-Rice Hardware CODEC

    Publication Year: 2011 , Page(s): 1581 - 1587
    Cited by:  Papers (2)
    PDF (471 KB) | HTML

    In this paper, a high-performance lossless color image compression and decompression architecture that reduces both memory requirements and bandwidth is proposed. The proposed architecture consists of differential-differential pulse coded modulation (DDPCM) and Golomb-Rice coding. The original image frame is organized as m × n sub-window arrays, to which DDPCM is applied to produce one seed and m × n - 1 pieces of differential data. The differential data are then encoded using the Golomb-Rice algorithm to produce losslessly compressed data. According to experimental results on benchmark images, the proposed architecture guarantees a compression rate and throughput high enough to perform real-time lossless CODEC operations with a reasonable hardware area.

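The coding chain described above can be sketched in software (an illustration, not the paper's hardware design): DDPCM keeps the first sample of each sub-window as a seed and Rice-codes the successive differences. The Rice parameter `k` below is a free choice, not a value from the paper:

```python
def zigzag(d):
    """Map a signed DDPCM residual to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (d << 1) if d >= 0 else (-d << 1) - 1

def rice_encode(value, k):
    """Golomb-Rice codeword: unary-coded quotient, '0' stop bit, k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    rem = format(r, "b").zfill(k) if k > 0 else ""
    return "1" * q + "0" + rem

def ddpcm_rice_block(block, k=2):
    """Encode a flattened sub-window: the first sample is the seed (sent verbatim);
    each later sample is Rice-coded as a difference from its predecessor."""
    seed, prev = block[0], block[0]
    codes = []
    for x in block[1:]:
        codes.append(rice_encode(zigzag(x - prev), k))
        prev = x
    return seed, codes

seed, codes = ddpcm_rice_block([100, 102, 101, 105], k=2)
# differences 2, -1, 4 map to 4, 1, 8 before Rice coding
```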
  • Adaptive Object Tracking by Learning Hybrid Template Online

    Publication Year: 2011 , Page(s): 1588 - 1599
    Cited by:  Papers (9)
    PDF (623 KB) | HTML

    This paper presents an adaptive tracking algorithm that learns hybrid object templates online in video. The templates consist of multiple types of features, each of which describes one specific appearance structure, such as flatness, texture, or edge/corner. Our proposed solution consists of three aspects. First, in order to make the features of different types comparable with each other, a unified statistical measure is defined to select the most informative features to construct the hybrid template. Second, we propose a simple yet powerful generative model for representing objects. This model is characterized by its simplicity, since it can be efficiently learned from the currently observed frames. Last, we present an iterative procedure to learn the object template from the currently observed frames, and to locate every feature of the object template within the observed frames. The former step is referred to as feature pursuit, and the latter as feature alignment, both of which are performed over a batch of observations. We fuse the results of feature alignment to locate objects within frames. The proposed solution to object tracking is in essence robust against various challenges, including background clutter, low resolution, scale changes, and severe occlusions. Extensive experiments are conducted over several publicly available databases, and the results with comparisons show that our tracking algorithm clearly outperforms the state-of-the-art methods.

  • A Low-Cost High-Quality Adaptive Scalar for Real-Time Multimedia Applications

    Publication Year: 2011 , Page(s): 1600 - 1611
    Cited by:  Papers (5)
    PDF (1171 KB) | HTML

    A novel scaling algorithm is proposed for the implementation of a 2-D image scalar. The algorithm consists of a bilinear interpolation, a clamp filter, and a sharpening spatial filter. The bilinear interpolation algorithm is selected due to its low complexity and high quality. The clamp and sharpening spatial filters are added as pre-filters to counteract the blurring and aliasing effects produced by bilinear interpolation. Furthermore, an adaptive technique is used to enhance the effects of the clamp and sharpening spatial filters. To reduce memory buffers and computing resources for the very large scale integration (VLSI) implementation, the clamp and sharpening spatial filters, each convolved with a 3 × 3 coefficient kernel, are combined into a single 5 × 5 convolution filter. The bilinear interpolation is simplified by the co-operation and hardware sharing technique to reduce computing resources and hardware costs. The VLSI architecture in this paper can achieve 280 MHz with 9.28-K gate counts, and its chip area is 46 418 μm2 synthesized by a 0.13 μm CMOS process. Compared with previous techniques, this design not only reduces gate counts by more than 46.6% and power consumption by 24.2%, but also improves average quality by over 0.42 dB.

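The 5 × 5 merge works because applying two linear filters in sequence is itself a convolution: convolving the two 3 × 3 kernels once offline yields a single 5 × 5 kernel, so the hardware applies one pass instead of two. A minimal sketch with illustrative coefficients (the paper's actual clamp/sharpen coefficients are adaptive and not reproduced here):

```python
def conv2d_full(a, b):
    """Full 2-D convolution of two kernels; two 3x3 kernels yield a 5x5 kernel."""
    ra, ca, rb, cb = len(a), len(a[0]), len(b), len(b[0])
    out = [[0] * (ca + cb - 1) for _ in range(ra + rb - 1)]
    for i in range(ra):
        for j in range(ca):
            for m in range(rb):
                for n in range(cb):
                    out[i + m][j + n] += a[i][j] * b[m][n]
    return out

# Hypothetical coefficients: a smoothing (clamp-like) kernel and a sharpening kernel.
clamp = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
sharpen = [[0, -1, 0], [-1, 8, -1], [0, -1, 0]]
combined = conv2d_full(clamp, sharpen)  # one 5x5 kernel replacing two 3x3 passes
```

Because convolution is linear, the combined kernel's coefficient sum is the product of the two originals, which is a quick correctness check.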
  • Time Multiplexed VLSI Architecture for Real-Time Barrel Distortion Correction in Video-Endoscopic Images

    Publication Year: 2011 , Page(s): 1612 - 1621
    Cited by:  Papers (3)
    PDF (930 KB) | HTML

    A low-cost very large scale integration (VLSI) implementation of real-time correction of barrel distortion for video-endoscopic images is presented in this paper. The correcting mathematical model is based on least-squares estimation. To decrease the computing complexity, we use an odd-order polynomial to approximate the back-mapping expansion polynomial. By algebraic transformation, the approximated polynomial becomes a monomial form which can be solved by Horner's algorithm. With the iterative characteristic of Horner's algorithm, the hardware cost and memory requirement can be conserved by a time-multiplexed design. In addition, a simplified architecture for the linear interpolation is used to further reduce computing resources and silicon area. The VLSI architecture of this design contains 13.9-K gates using a 0.18 μm CMOS process. Compared with some existing distortion correction techniques, this design reduces hardware cost by at least 69% and memory requirements by 75%.

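Horner's rule is what makes the time-multiplexed datapath small: evaluating an nth-degree polynomial needs only one multiply-accumulate per iteration, so the same multiplier and adder can be reused every cycle. A minimal sketch (the coefficient values below are hypothetical, not the paper's calibration):

```python
def horner(coeffs, x):
    """Evaluate c[0] + c[1]*x + ... + c[n]*x**n with one multiply-add per step."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def back_map_radius(r_d, k):
    """Odd-order back-mapping polynomial r_u = r_d*(k[0] + k[1]*r_d**2 + ...),
    rewritten in s = r_d**2 so Horner's rule applies directly."""
    return r_d * horner(k, r_d * r_d)
```

Each loop iteration maps onto the same multiply-accumulate unit, which is exactly the hardware reuse a time-multiplexed design exploits.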
  • An Efficient Intensity Correction Algorithm for High Definition Video Surveillance Applications

    Publication Year: 2011 , Page(s): 1622 - 1630
    PDF (1170 KB) | HTML

    The video surveillance market is increasingly moving toward cheaper, more efficient, portable, and high-resolution systems. A typical video surveillance system consists of several cameras, which issue warnings or initiate smart reactions according to the analysis results of the captured video data. Many algorithms in video surveillance systems assume fixed lighting conditions for the monitored area, and their performance is severely affected by illumination changes in that area. This paper presents an efficient intensity correction algorithm for high definition video surveillance applications. The new algorithm corrects both global and local intensity changes. It uses an apparent gain factor to correct the global intensity changes. In addition, it corrects the local intensity changes using the local intensity mean and standard deviation. The proposed algorithm shows a promising performance when compared with other intensity correction algorithms. It has a low computational cost that makes it a suitable choice for real-time hardware implementation. This paper also presents a hardware implementation of the proposed algorithm using the Xilinx Spartan3A digital signal processor (DSP) XC3SD3400A device. The targeted resolution is 1920 × 1080 at 30 f/s. The hardware prototype utilizes 17% of the slices, 21% of the block random access memories, and 24% of the DSP48As available in the Spartan3A DSP XC3SD3400A device.

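A minimal sketch of the two correction steps the abstract describes, a global gain factor followed by local mean/standard-deviation matching (the exact formulation and block partitioning in the paper may differ):

```python
import statistics

def intensity_correct(cur, ref, eps=1e-6):
    """Correct a block `cur` against a reference block `ref` (lists of intensities)."""
    # Global step: apparent gain factor from the ratio of mean intensities.
    gain = statistics.mean(ref) / max(statistics.mean(cur), eps)
    g = [p * gain for p in cur]
    # Local step: remap so the block's mean and standard deviation match the reference.
    mu_g, sd_g = statistics.mean(g), statistics.pstdev(g)
    mu_r, sd_r = statistics.mean(ref), statistics.pstdev(ref)
    return [(p - mu_g) * (sd_r / max(sd_g, eps)) + mu_r for p in g]
```

When the current block is a uniformly dimmed copy of the reference, the global step alone recovers it and the local step leaves it unchanged.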
  • Transform Kernel Selection Strategy for the H.264/AVC and Future Video Coding Standards

    Publication Year: 2011 , Page(s): 1631 - 1645
    Cited by:  Papers (4)
    PDF (976 KB) | HTML

    In this paper, we propose a new discrete cosine transform (DCT)-like kernel IK(5, 7, 3) and revitalize another DCT-like kernel IK(13, 17, 7) for the transform coding process of hybrid video coding. Making use of one of these kernels together with the H.264/AVC kernel IK(1, 2, 1), we are able to design new multiple-kernel schemes which give better coding performance than the conventional approaches. All these schemes make use of the adaptive kernel mechanism at macroblock level (MB-AKM), which requires heavy computation during the encoding process. We subsequently discovered that a rate-distortion feature extracted from a pair of kernels gives an intrinsic property that can be used to select a better kernel for a two-kernel MB-AKM system. This is a powerful tool with theoretical interest and practical uses. In order to reduce computation substantially, we use this tool to analyze and design a frame-level adaptive kernel mechanism and arrive at a simple solution: the kernel IK(1, 2, 1) is used for I-frame and P-frame coding, and the kernel IK(5, 7, 3) for B-frame coding. This proposed frame-based AKM gives performance similar to, or even better than, that of the proposed macroblock-based AKM. Furthermore, it substantially reduces computation and gives a good improvement in PSNR and bitrate compared to the H.264/AVC default scheme and other MB-AKM schemes available in the literature.

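The abstract identifies IK(1, 2, 1) as the H.264/AVC kernel; at order 4 this is the familiar integer core transform shown below (our reading of the notation, shown at order 4 for brevity since the paper's new kernels are order 16). The defining property worth checking for any such integer kernel is row orthogonality:

```python
# H.264/AVC order-4 integer core transform: entries drawn from {1, 2}.
IK_1_2_1 = [
    [1,  1,  1,  1],
    [2,  1, -1, -2],
    [1, -1, -1,  1],
    [1, -2,  2, -1],
]

def gram(m):
    """M * M^T: all-zero off-diagonal entries confirm mutually orthogonal rows."""
    return [[sum(a * b for a, b in zip(r1, r2)) for r2 in m] for r1 in m]
```

The unequal diagonal norms (4 and 10) are absorbed into the quantization scaling tables, which is why integer kernels need no floating-point normalization in the transform itself.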
  • High Efficiency Architecture Design of Real-Time QFHD for H.264/AVC Fast Block Motion Estimation

    Publication Year: 2011 , Page(s): 1646 - 1658
    Cited by:  Papers (6)
    PDF (1780 KB) | HTML

    Motion estimation (ME) in the MPEG-4 AVC/JVT/H.264 video coding standard employs seven permitted block sizes to improve the rate-distortion performance. This novel feature achieves significant coding gain over coding a macroblock with a fixed block size. However, ME is computationally intensive, with complexity increasing linearly with the number of allowed block sizes. This paper presents an architecture for a combined fast ME algorithm with the predict hexagon search (PHS) and the edge information mode decision (EIMD). The EIMD algorithm utilizes edge information to predict the best block size quickly and precisely. The PHS algorithm searches for the best motion vector efficiently. The analytical results reveal that the EIMD+PHS algorithm is 2.4-25 times faster than other popular fast ME algorithms. Additionally, the EIMD+PHS algorithm is 600-2000 times faster than JM10.2, and the peak signal-to-noise ratio degradation is less than 0.15 dB. The proposed architecture supports a large search range at a low operating frequency compared with other popular ME architectures. It needs only a 19.4 MHz operating frequency to achieve real-time execution for the general specification of standard-definition television (720 × 480) with four reference frames and a search range of 256 × 256, and only 116.6 MHz for the ultrahigh specification of quad full high definition (3840 × 2160) with one reference frame and a search range of 256 × 256. The gate count of the proposed architecture is 300 K, and the memory usage is 12.6 kB.

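The hexagon-search core of PHS can be sketched as follows. This is a generic hexagon search (PHS additionally seeds the start point from neighboring predicted motion vectors), and `cost` stands in for a SAD over candidate blocks:

```python
LARGE_HEX = [(0, 0), (-2, 0), (2, 0), (-1, -2), (1, -2), (-1, 2), (1, 2)]
SMALL_PAT = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def hexagon_search(cost, start):
    """Move to the cheapest point on the large hexagon until the center wins,
    then refine once with the small cross pattern."""
    cx, cy = start
    while True:
        dx, dy = min(LARGE_HEX, key=lambda d: cost(cx + d[0], cy + d[1]))
        if (dx, dy) == (0, 0):
            break
        cx, cy = cx + dx, cy + dy
    dx, dy = min(SMALL_PAT, key=lambda d: cost(cx + d[0], cy + d[1]))
    return cx + dx, cy + dy

# Toy cost surface with its minimum at motion vector (5, 4):
mv = hexagon_search(lambda x, y: (x - 5) ** 2 + (y - 4) ** 2, (0, 0))
```

Only a handful of candidate points are evaluated per step, which is where the speedup over exhaustive search comes from.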
  • Fast Mode Decision for Multiview Video Coding Using Mode Correlation

    Publication Year: 2011 , Page(s): 1659 - 1666
    Cited by:  Papers (11)
    PDF (585 KB) | HTML

    Exhaustive mode decision has been exploited in multiview video coding for effectively improving the coding efficiency, but at the expense of much higher computational complexity. In this paper, a fast mode decision algorithm, called mode correlation-based mode decision (MCMD), is proposed to speed up the encoding process by reducing the number of modes that must be checked. In our approach, all the prediction modes are first categorized into five motion-activity classes, and only one of them will be chosen to identify the optimal mode in a hierarchical manner, as follows. For each macroblock (MB), the proposed MCMD algorithm always begins by checking whether the rate-distortion cost computed at the SKIP mode (i.e., Class 1) is below an adaptive threshold, providing a possible early termination. If this early termination condition is not met, one of the remaining four motion-activity classes will be chosen for further mode checking according to the analysis of the predicted motion vector (PMV) of the current MB. The above-mentioned adaptive threshold and PMV are derived by exploiting the mode correlation between the current MB and a set of adjacent MBs (i.e., region of support) in the current view and its neighboring view. Experimental results show that, compared with exhaustive mode decision, which is the default approach in the joint multiview video model (JMVM) reference software, the proposed MCMD algorithm reduces computational complexity by 73.39% on average, while incurring only 0.07 dB loss in peak signal-to-noise ratio (PSNR) and a 2.22% increase in total bit rate.

  • Perceptually Scalable Extension of H.264

    Publication Year: 2011 , Page(s): 1667 - 1678
    Cited by:  Papers (1)
    PDF (1045 KB) | HTML

    We propose a novel visual scalable video coding (VSVC) framework, named VSVC H.264/AVC. In this approach, the non-uniform sampling characteristic of the human eye is used to modify scalable video coding (SVC) H.264/AVC. We exploit the visibility of video content and the scalability of the video codec to achieve optimal subjective visual quality given limited system resources. To achieve the largest coding gain with controlled perceptual quality degradation, a perceptual weighting scheme is deployed wherein the compressed video is weighted as a function of visual saliency and of the non-uniform distribution of retinal photoreceptors. We develop a resource allocation algorithm emphasizing both efficiency and fairness by controlling the size of the salient region in each quality layer. Efficiency is emphasized on the low quality layer of the SVC. The bits saved by eliminating perceptual redundancy in regions of low interest are allocated to lower block-level distortions in salient regions. Fairness is enforced on the higher quality layers by enlarging the size of the salient regions. The simulation results show that the proposed VSVC framework significantly improves the subjective visual quality of compressed videos.

  • Channel Distortion Modeling for Multi-View Video Transmission Over Packet-Switched Networks

    Publication Year: 2011 , Page(s): 1679 - 1692
    Cited by:  Papers (9)
    PDF (960 KB) | HTML

    Channel distortion modeling for generic multi-view video transmission remains an unfilled gap, even though intensive research efforts have been devoted to modeling traditional 2-D video transmission. This paper aims to fill this gap by developing a recursive distortion model for multi-view video transmission over lossy packet-switched networks. Based on a study of the characteristics of multi-view video coding and the propagation behavior of transmission errors due to random frame losses, a recursive mathematical model is derived to estimate the expected channel-induced distortion at both the frame and sequence levels. The model we develop explicitly considers both temporal and inter-view dependencies, induced by motion-compensated and disparity-compensated coding, respectively. The derived model is applicable to all multi-view video encoders using the classical block-based motion-/disparity-compensated prediction framework. Both objective and subjective experimental results are presented to demonstrate that the proposed model effectively captures channel-induced distortion for multi-view video.

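For intuition, the single-view ancestor of such recursive models can be sketched as below (an illustrative simplification; the paper's multi-view model adds inter-view terms for disparity-compensated prediction, and the parameter names here are our own):

```python
def expected_distortion(p, d_conceal, leak, n_frames):
    """Frame-level recursion D[t] = p * d_conceal + leak * D[t-1]:
    each frame is lost with probability p, adding concealment distortion,
    while motion-compensated prediction carries a fraction `leak` of the
    previous frame's expected distortion forward."""
    d, out = 0.0, []
    for _ in range(n_frames):
        d = p * d_conceal + leak * d
        out.append(d)
    return out
```

The sequence converges geometrically toward p * d_conceal / (1 - leak), which is why mechanisms that reduce the leakage term (e.g., intra refresh) bound error propagation.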
  • User-Friendly Random-Grid-Based Visual Secret Sharing

    Publication Year: 2011 , Page(s): 1693 - 1703
    Cited by:  Papers (8)
    PDF (749 KB) | HTML

    Recently, the visual secret sharing (VSS) technique based on a random-grid algorithm (RGVSS), proposed by Kafri and Keren in 1987, has drawn attention in academia again. However, Kafri and Keren's scheme is not participant-friendly: the generated shares are meaningless noise-like images, so a large collection of them is hard to manage. The literature has introduced the concept of meaningful shares, on which some shape or information appears to ease management, for the VSS technique based on visual cryptography (VCVSS). Those friendly VCVSS schemes are not directly applicable to RGVSS; instead, a new friendly RGVSS must be designed. Moreover, most friendly VCVSS schemes aggravate the pixel expansion problem, in which the shared images are larger than the original secret image, in order to achieve meaningful shares. In this paper, we therefore propose a novel RGVSS scheme that distinguishes different light transmissions on the shared images according to the pixel values of a logo image, with two primary advantages: no pixel expansion and user-friendliness. A formal analysis establishes the correctness of the scheme, and experimental results show that it works in practice.

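The base random-grid scheme of Kafri and Keren that RGVSS builds on is tiny; the paper's contribution layers a logo-driven light-transmission rule on top of it to make the shares meaningful. A sketch of the base scheme only:

```python
import random

def rg_encrypt(secret, rng=random):
    """Kafri-Keren random-grid VSS. secret: 2-D list, 0 = white, 1 = black.
    Returns two noise-like shares."""
    share1 = [[rng.randrange(2) for _ in row] for row in secret]
    # White secret pixel: copy the grid value; black pixel: complement it.
    share2 = [[s1 if s == 0 else 1 - s1
               for s, s1 in zip(row, r1)] for row, r1 in zip(secret, share1)]
    return share1, share2

def stack(a, b):
    """Superimpose transparencies: a printed black dot stays black (logical OR)."""
    return [[x | y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
```

Stacking reproduces every black secret pixel as black, while white secret pixels remain black only half the time, which is what makes the secret visible by eye with no computation.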
  • Maximum Frame Rate Video Acquisition Using Adaptive Compressed Sensing

    Publication Year: 2011 , Page(s): 1704 - 1718
    Cited by:  Papers (12)
    PDF (583 KB) | HTML

    Compressed sensing is a novel technology to acquire and reconstruct sparse signals below the Nyquist rate. It has great potential in image and video acquisition to exploit data redundancy and to significantly reduce the amount of collected data. In this paper, we explore the temporal redundancy in videos, and propose a block-based adaptive framework for compressed video sampling. To address independent movement of different regions in a video, the proposed framework classifies blocks into different types depending on their inter-frame correlation, and adjusts the sampling and reconstruction strategy accordingly. Our framework also considers the diverse texture complexity of different regions, and adaptively adjusts the number of measurements collected for each region. The proposed framework also includes a frame rate selection module that selects the maximum achievable frame rate from a list of candidate frame rates under the hardware sampling rate and the perceptual quality constraints. Our simulation results show that, compared to traditional raster scan, the proposed framework can increase the frame rate by up to six times depending on the scene complexity and the video quality constraint. We also observe a 1.5-7.8 dB gain in the average peak signal-to-noise ratio of the reconstructed frames when compared with prior works on compressed video sensing.

  • A Two-Level Classification-Based Approach to Inter Mode Decision in H.264/AVC

    Publication Year: 2011 , Page(s): 1719 - 1732
    Cited by:  Papers (3)
    PDF (839 KB) | HTML

    The H.264/AVC standard achieves a high coding efficiency compared to previous standards. However, this gain is accomplished at great computational cost, with mode decision being one of the most demanding subsystems. In this paper, a two-level classification-based approach to the inter mode decision problem is proposed. A first classifier detects SKIP/Direct modes, while a second one is able to decide whether to use a large (16 × 16, 16 × 8, and 8 × 16) or a small mode (8 × 8, 8 × 4, 4 × 8, and 4 × 4). The suggested classifiers are binary and linear, and the input features in the classifiers have been carefully selected. A novel cost function that pays more attention to the most critical samples during the classifier training process has been designed. The experimental results show an average computational savings of 60% of the total encoding time with respect to JM10.2 over a comprehensive variety of sequences and formats. This is achieved with negligible degradation in rate-distortion performance and compares favorably with state-of-the-art fast mode decision methods. Furthermore, the proposed method has been successfully assessed at different levels of complexity reduction.

  • A Selective Protection Scheme for Scalable Video Coding

    Publication Year: 2011 , Page(s): 1733 - 1746
    PDF (1114 KB) | HTML

    Selective protection can exploit dependency coding properties to effectively perform partial protection on scalable video coding (SVC) bitstreams, since protecting frames in lower scalability layers affects the visual quality of the reconstructed frames in higher scalability layers. In this paper, we propose a selective protection scheme that maximizes the protection effect with the minimum number of protected encoded frames in the SVC bitstream domain. We first model the SVC dependency coding structure as a directed acyclic graph in which each node carries an estimated visual quality value as an attribute. In addition, a visual quality estimation model is proposed based on the proportions of intra-predicted and inter-predicted MBs, the amounts of residues, and the estimated visual quality of reference frames. The proposed selective protection scheme traverses the dependency graph to find the optimal protection paths that give the maximum visual quality degradation. Experimental results show that, compared to existing protection schemes, the proposed selective protection scheme reduces the number of protected frames, the amount of protected data, and the protection time. The SVC file format specification supports the carriage of selectively protected bitstreams based on the concept of our selective protection in the dependency coding structure of SVC.

  • Real-Time, Adaptive, and Locality-Based Graph Partitioning Method for Video Scene Clustering

    Publication Year: 2011 , Page(s): 1747 - 1759
    Cited by:  Papers (1)
    PDF (661 KB) | HTML

    We propose in this paper an efficient, adaptive, and locality-based graph partitioning method for video scene clustering. First, a graph partitioning method is proposed to group video shots into scenes, and a peer-group filtering (PGF) scheme is used to identify all the shots similar to each particular shot based on Fisher's discriminant analysis. To work with computable shot similarity measures that have only limited discriminating power, we develop a graph partitioning scheme that clusters the shots by maximizing the likeness of shots within the same cluster and minimizing that between different clusters. Second, considering that video data are normally obtained and viewed sequentially, we propose to perform locality-based PGF and graph partitioning on video segments of 50 shots, 100 shots, and so on. This locality-based method has the advantage that the number of scene clusters is not required to be known a priori, and it can achieve performance comparable to that of processing the whole video sequence. Experimental results are presented to demonstrate the effectiveness and efficiency of the proposed method.

  • Progressive Visual Cryptography With Unexpanded Shares

    Publication Year: 2011 , Page(s): 1760 - 1764
    Cited by:  Papers (4)
    PDF (528 KB) | HTML

    The basic (k, n)-threshold visual cryptography (VC) scheme shares a secret image among n participants. The secret image can be recovered by stacking k or more of the shares, but nothing is revealed if fewer than k shares are overlapped. In contrast, progressive VC recovers the secret image gradually by superimposing more and more shares: with only a few shares, an outline of the secret image emerges; as the number of stacked shares increases, the details of the hidden information are revealed progressively. Previous schemes, such as those of Jin (2005) and Fang and Lin (2006), were based on pixel expansion, which not only wastes storage space and transmission time but also yields poor visual quality on the stacked image. Furthermore, Fang and Lin's scheme has a severe security problem that discloses the secret information on each share. In this letter, we propose a new progressive VC sharing scheme that produces pixel-unexpanded shares. In our scheme, the probability that either a black or a white pixel of the secret image appears as a black pixel on a share is the same, approximately 1/n; therefore, no one can obtain any hidden information from a single share, which ensures security. When superimposing k shares, the probability that white pixels are stacked into black pixels remains 1/n, while the probability rises to k/n for black pixels, which sharpens the contrast of the stacked image so that the hidden information becomes more and more obvious. After superimposing all of the shares, the contrast rises to (n-1)/n, which is clearly better than the 50% contrast obtainable by traditional methods, so a clearer recovered image can be achieved.

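The stated probabilities admit a simple construction: per pixel, draw r uniformly from {1..n}; a black secret pixel puts ink only on share r, while a white pixel puts ink on every share iff r = 1. This is our own illustration consistent with the abstract's numbers (1/n for white, k/n for black, contrast (n-1)/n), not necessarily the authors' exact algorithm. The sketch checks those numbers by enumerating r:

```python
from fractions import Fraction

def stacked_black_prob(n, k, pixel_black):
    """Probability that a pixel is black after stacking shares 1..k, enumerating
    the uniform per-pixel choice r in {1..n} made at sharing time."""
    hits = 0
    for r in range(1, n + 1):
        if pixel_black:
            hits += (r <= k)   # ink only on share r, so stacking 1..k hits iff r <= k
        else:
            hits += (r == 1)   # ink on all shares iff r == 1, regardless of k
    return Fraction(hits, n)
```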
  • Comments on "2-D Order-16 Integer Transforms for HD Video Coding"

    Publication Year: 2011 , Page(s): 1765 - 1767
    PDF (69 KB)

    In a recent paper, Dong et al. proposed a set of order-16 nonorthogonal integer cosine transforms (NICTs). They proved that the reconstruction error caused by the nonorthogonality is negligible compared to the error caused by quantization. However, we would like to point out three problems found in their derivations and also give two comments. Nevertheless, the problems are minor defects and do not affect the overall justification of the proposed NICTs. This letter enhances and clarifies the proof in Dong et al.'s work.

  • Special issue on circuits, systems and algorithms for compressive sensing

    Publication Year: 2011 , Page(s): 1768
    PDF (102 KB)
    Freely Available from IEEE
  • IEEE Circuits and Systems Society Information

    Publication Year: 2011 , Page(s): C3
    PDF (33 KB)
    Freely Available from IEEE
  • IEEE Transactions on Circuits and Systems for Video Technology Information for authors

    Publication Year: 2011 , Page(s): C4
    PDF (33 KB)
    Freely Available from IEEE

Aims & Scope

The emphasis is on, but not limited to:
1. Video A/D and D/A
2. Video Compression Techniques and Signal Processing
3. Multi-Dimensional Filters and Transforms
4. High-Speed Real-Time Circuits
5. Multi-Processor Systems—Hardware and Software
6. VLSI Architecture and Implementation for Video Technology

 

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Dan Schonfeld
Multimedia Communications Laboratory
ECE Dept. (M/C 154)
University of Illinois at Chicago (UIC)
Chicago, IL 60607-7053
tcsvt-eic@tcad.polito.it

Managing Editor
Jaqueline Zelkowitz
tcsvt@tcad.polito.it