2012 IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC)

Date: 12-15 Aug. 2012

Displaying Results 1 - 25 of 173
  • Improved space-time-frequency block code for MIMO-OFDM wireless communications

    Publication Year: 2012 , Page(s): 538 - 541
    Cited by:  Papers (1)

    This paper presents an improved space-time-frequency (STF) coded MIMO-OFDM (multiple-input multiple-output, orthogonal frequency-division multiplexing) system for two transmit antennas. In this system, we devise an interleaving method that maximizes diversity gain and achieves optimal system performance with moderate decoding complexity. A complete characterization of the proposed STF-OFDM scheme is provided, and its BER (bit error rate), BLER (block error rate) and spectral-efficiency performance is evaluated. Simulation results show that the proposed scheme significantly improves system performance.

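    The abstract does not spell out the interleaver, but the core idea, Alamouti-style pairs spread over well-separated subcarriers of a two-antenna OFDM symbol to capture frequency diversity, can be illustrated with a minimal sketch. The QPSK source, the half-band interleaver, and the function name below are illustrative assumptions, not the authors' code design:

```python
import numpy as np

def alamouti_space_frequency_map(symbols, interleaver):
    """Map complex symbols onto two transmit antennas with an Alamouti
    space-frequency code.  Each symbol pair (s1, s2) occupies a pair of
    interleaved subcarriers:
        antenna 0 : [ s1, -conj(s2) ]
        antenna 1 : [ s2,  conj(s1) ]
    `interleaver` permutes subcarrier indices so that paired symbols land on
    well-separated subcarriers and pick up frequency diversity.
    """
    n_sc = len(interleaver)
    assert len(symbols) == n_sc and n_sc % 2 == 0
    tx = np.zeros((2, n_sc), dtype=complex)           # (antenna, subcarrier)
    for p in range(n_sc // 2):
        s1, s2 = symbols[2 * p], symbols[2 * p + 1]
        k1, k2 = interleaver[2 * p], interleaver[2 * p + 1]
        tx[0, k1], tx[0, k2] = s1, -np.conj(s2)
        tx[1, k1], tx[1, k2] = s2, np.conj(s1)
    return tx

# toy usage: 64 QPSK symbols, each pair spread half the band apart
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, (2, 64))
qpsk = ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)
interleaver = np.arange(64).reshape(2, 32).T.ravel()  # pairs (k, k + 32)
tx_grid = alamouti_space_frequency_map(qpsk, interleaver)
```
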
  • Placement of wide nulls in the radiation pattern of a linear array antenna using iterative Fast Fourier transform

    Publication Year: 2012 , Page(s): 552 - 555

    Obtaining a wide null over a certain angular range together with low side lobe levels has extensive applications in communication and radar engineering, since high side lobe levels can significantly degrade system performance and antenna power efficiency. In this paper, we present an iterative fast Fourier transform technique to generate wide nulls while keeping the side lobe level at its minimum. The main advantage of this technique is its very high computational speed, because the core calculations are direct and inverse fast Fourier transforms (FFTs). Simulation results show that the technique finds a suitable weight vector that reduces the side lobe power for a given null depth.

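    The iterative FFT synthesis loop described above is simple enough to sketch. The array size, mask levels, and the pattern-bin ranges chosen for the wide-null sector and the main beam below are illustrative assumptions for a half-wavelength-spaced uniform linear array, not values from the paper:

```python
import numpy as np

def iterative_fft_synthesis(n_elem=32, k_fft=512, sll_db=-30.0, null_db=-60.0,
                            null_bins=range(330, 360), main_halfwidth=8,
                            n_iter=200):
    """Iterative FFT pattern synthesis: alternate between the element-weight
    domain and the pattern (u = sin(theta)) domain, clipping the pattern
    against a side-lobe mask and a deeper wide-null mask, then truncating
    back to n_elem element weights."""
    w = np.ones(n_elem, dtype=complex)                   # start from uniform taper
    mask = np.full(k_fft, 10 ** (sll_db / 20.0))         # side-lobe ceiling
    mask[list(null_bins)] = 10 ** (null_db / 20.0)       # deeper ceiling in null sector
    protect = np.zeros(k_fft, dtype=bool)                # main beam at broadside (bin 0)
    protect[:main_halfwidth + 1] = True
    protect[-main_halfwidth:] = True

    for _ in range(n_iter):
        af = np.fft.fft(w, k_fft)                        # pattern samples
        af /= np.abs(af).max()                           # normalise to beam peak
        mag, phase = np.abs(af), np.angle(af)
        clip = (mag > mask) & ~protect
        mag[clip] = mask[clip]                           # enforce the mask
        w = np.fft.ifft(mag * np.exp(1j * phase))[:n_elem]  # back to N weights
    return w

weights = iterative_fft_synthesis()
```
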
  • Two-dimensional frequency estimation using a modified version of the two-stage separated virtual steering vector-based algorithm

    Publication Year: 2012 , Page(s): 509 - 512

    In this paper, we propose a modified version of the two-stage separated virtual steering vector-based (SVSV) algorithm. The proposed algorithm solves a problem encountered in SVSV without a significant increase in computational load: SVSV cannot resolve multiple two-dimensional (2D) frequency components that share the same difference f_i2 - f_i1 (i = 1, 2, ..., Iw). Computer simulations demonstrate the effectiveness of the proposed algorithm over the SVSV algorithm, and the noise performance of the modified SVSV is also analyzed through Monte Carlo simulations.

  • A study of Voice Onset Time for Modern Standard Arabic and Classical Arabic

    Publication Year: 2012 , Page(s): 691 - 695

    Voice Onset Time (VOT) is an important feature of the speech signal that appears only in stop sounds. VOT is used by the human auditory system to distinguish between voiced and unvoiced stops, such as /p/ and /b/ in the case of English. Similarly, VOT can be adopted by digital systems to classify and recognize stop sounds and the syllables and words that carry them. This paper focuses on computing and analyzing the VOT of the two main standard forms of Arabic, Modern Standard Arabic (MSA) and Classical Arabic (CA), and comparing them with other languages. We built a database using a CV-CV-CV carrier-word structure and used it to conduct our experiments. One of the main outcomes is that the VOT is always positive regardless of stop voicing. In addition, we found that the VOT is short for voiced sounds and long for unvoiced sounds, and that VOT values differ across Arabic dialects.

  • Energy level performance of multi hop CDMA wireless sensor network with error control

    Publication Year: 2012 , Page(s): 521 - 526

    The energy-level performance of a multi-hop CDMA wireless sensor network is analyzed incorporating two popular error-control schemes. Decoding is considered only at the sink, which has enough power to run complex decoding algorithms. One case uses a simple end-to-end ARQ, while the other is based on hybrid ARQ type I (HARQ-I) using a BCH code. The performance of the network is assessed in terms of the energy consumption and energy efficiency involved in successfully transmitting a packet from source to sink via a number of hops. Two kinds of interference, multiple access interference (MAI) and node interference (NI), are considered. The effects of several network parameters, such as node density, packet length and error-correcting capability, on packet error rate (PER), energy consumption and energy efficiency for successful reception of a packet at the sink are investigated and compared across the error-control strategies. The performance of a network using a hop-by-hop ARQ mechanism, where decoding is performed at every node, is also compared with the proposed schemes.

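    As a rough back-of-the-envelope illustration of how such an energy comparison can be set up (this is not the paper's analytical model: interference, ACK overhead and receive energy are ignored, and the BCH(1023, 923) code with t = 10 is an assumed example), one can compute the expected transmit energy per successfully delivered packet for the three strategies:

```python
from scipy.stats import binom

def energy_per_delivered_packet(p_bit, hops, e_bit=1.0, k=923, n=1023, t=10):
    """Expected transmit energy (in units of e_bit) to deliver one k-bit
    payload from source to sink over `hops` hops, decoding only at the sink."""
    p_net = 1.0 - (1.0 - p_bit) ** hops            # net per-bit error after relaying
    # end-to-end ARQ, uncoded: retransmit over the whole multi-hop path on failure
    p_ok_arq = (1.0 - p_net) ** k
    e_arq = hops * k * e_bit / p_ok_arq
    # end-to-end HARQ-I with a BCH(1023, 923) code correcting up to t = 10 errors
    p_ok_harq = binom.cdf(t, n, p_net)
    e_harq = hops * n * e_bit / p_ok_harq
    # hop-by-hop ARQ, uncoded: each hop retransmits independently
    e_hbh = hops * k * e_bit / (1.0 - p_bit) ** k
    return e_arq, e_harq, e_hbh

for p in (1e-5, 1e-4, 1e-3):
    print(p, energy_per_delivered_packet(p, hops=5))
```
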
  • Video-based biometric identification using eye tracking technique

    Publication Year: 2012 , Page(s): 728 - 733

    Recently, biometric identification techniques have attracted great attention due to the increasing demand for high-performance security systems. Compared with conventional identification methods, biometric techniques provide more reliable and robust solutions. In this paper, a novel video-based biometric identification model based on an eye-tracking technique is proposed. Inspired by visual attention, video clips are designed for subjects to view in order to capture eye-tracking data reflecting their physiological and behavioral characteristics. Various visual-attention characteristics, including acceleration, geometric and muscle properties, are extracted from eye-gaze data and used as biometric features to identify persons. An algorithm based on the mutual information of features is adopted to evaluate features and obtain a set of the most discriminative ones for biometric identification. Experiments are conducted using two types of classifiers, a Back-Propagation (BP) neural network and a Support Vector Machine (SVM). Experimental results show that using video-based eye-tracking data for biometric identification is feasible. In particular, eye tracking can be used as an additional biometric modality to enhance the performance of current biometric person identification systems.

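    The feature-evaluation step can be sketched with off-the-shelf tools. The snippet below uses scikit-learn's mutual_info_classif as a stand-in for the paper's mutual-information measure, with a synthetic feature matrix standing in for real gaze-derived features; the feature count, subject count and SVM settings are all assumptions:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def select_and_score(X, y, n_keep=20):
    """Rank gaze-derived features by mutual information with the subject
    identity, keep the n_keep most informative ones, and report SVM accuracy."""
    mi = mutual_info_classif(X, y, random_state=0)
    keep = np.argsort(mi)[::-1][:n_keep]
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    acc = cross_val_score(clf, X[:, keep], y, cv=5).mean()
    return keep, acc

# toy stand-in data: 40 subjects x 10 recordings, 60 features per recording
rng = np.random.default_rng(0)
y = np.repeat(np.arange(40), 10)
X = rng.normal(size=(400, 60)) + y[:, None] * rng.normal(size=60) * 0.05
print(select_and_score(X, y))
```
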
  • A fast dynamic range compression with local contrast preservation algorithm for low dynamic range image enhancement

    Publication Year: 2012 , Page(s): 456 - 461

    This paper presents a new fast dynamic-range compression with local-contrast-preservation (FDRCLCP) algorithm to efficiently solve the low-dynamic-range (LDR) image-enhancement problem for natural color images. The proposed FDRCLCP algorithm can be combined with any continuously differentiable intensity-transfer function to achieve LDR image enhancement. In combination with the FDRCLCP algorithm, a new intensity-transfer function is proposed that achieves satisfactory dynamic-range compression while preventing over-enhancement in dark regions of the image. Experimental results validate that the proposed method provides better visual representation than two existing methods.

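    The exact FDRCLCP formulation is not reproduced here, but the general recipe (apply a continuously differentiable global transfer function to compress the dynamic range, then restore local contrast from a low-pass base layer) can be sketched as follows; the gamma curve, Gaussian base layer and parameter values are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_ldr(img, gamma=0.6, alpha=0.8, sigma=10.0):
    """Global dynamic-range compression followed by local-contrast restoration.

    img   : float array in [0, 1] (luminance of the LDR image)
    gamma : exponent of the (continuously differentiable) transfer function
    alpha : how much of the original local contrast to restore
    sigma : scale (in pixels) of the Gaussian local mean
    """
    eps = 1e-6
    compressed = img ** gamma                        # brightens dark regions
    local_mean = gaussian_filter(img, sigma) + eps   # low-pass "base" layer
    detail = (img + eps) / local_mean                # local contrast ratio
    out = compressed * detail ** alpha               # re-apply local detail
    return np.clip(out, 0.0, 1.0)
```
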
  • Enhancing image objects resolution with Walsh-coded spatial light modulators embedded into Mach-Zehnder interferometry

    Publication Year: 2012 , Page(s): 479 - 483

    In this paper, we use a Mach-Zehnder interferometry (MZI) technique to enhance the resolution of imaged objects. We exploit the sample's correlation characteristics over a Walsh-Hadamard (WH)-coded spatial light modulator (SLM) to boost the image resolution of sample-object detection. Comparing the spectral interferences with and without the WH-coded SLM, we obtain a larger contrast difference in the correlation magnitudes. Through computer simulations, we find that the interference correlations under the coded SLM have a contrast ratio of 2.0, larger than the contrast ratio of 1.49 obtained for sample measurement without SLM coding. The enhancement of image-object resolution is thus confirmed.

  • Handwritten signature verification using weighted fractional distance classification

    Publication Year: 2012 , Page(s): 212 - 217

    Signatures are a behavioural biometric trait widely used as a means of personal verification, and therefore require efficient and accurate authentication methods. The use of a single distance-based classification technique normally results in lower accuracy than supervised learning techniques. This paper investigates the use of a combination of multiple distance-based classification techniques, namely individually optimized re-sampling, weighted Euclidean distance, fractional distance and weighted fractional distance. Results are compared to a similar system that uses support vector machines. It is shown that competitive levels of accuracy can be obtained using distance-based classification; the best accuracy obtained is 89.2%.

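    The weighted fractional distance itself is straightforward: a Minkowski distance with exponent p < 1 and per-feature weights. A minimal sketch follows; the inverse-variance weighting and the decision threshold are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def weighted_fractional_distance(x, y, w, p=0.5):
    """Weighted fractional (Minkowski with p < 1) distance between feature
    vectors x and y; w holds per-feature weights."""
    return np.sum(w * np.abs(x - y) ** p) ** (1.0 / p)

def verify(query, references, w, p=0.5, threshold=1.0):
    """Accept the query signature if its distance to the closest enrolled
    reference signature falls below the decision threshold."""
    d = min(weighted_fractional_distance(query, r, w, p) for r in references)
    return d < threshold, d

# toy usage: five enrolled reference feature vectors, weights from feature stability
refs = [np.random.rand(32) for _ in range(5)]
w = 1.0 / (np.std(refs, axis=0) + 1e-6)   # stable features weigh more (assumption)
print(verify(np.random.rand(32), refs, w))
```
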
  • Enhanced adaptive intra mode bit skip for H.264/AVC

    Publication Year: 2012 , Page(s): 98 - 103

    Adaptive intra mode bit skip (AIMBS) can improve the coding efficiency of intra coding in H.264/AVC. However, it has two shortcomings: DC mode is not handled effectively, as it is duplicated in both the Single-Prediction and Multiple-Prediction processing, and AIMBS does not match well with the most probable mode (MPM) estimation scheme of H.264/AVC. To tackle these shortcomings, an enhanced AIMBS (EAIMBS) scheme with larger bit-rate reduction and better QP robustness is presented, while maintaining almost the same low computational complexity as AIMBS. The proposed technique introduces distance-based weighted prediction (DWP) to replace DC mode in Multiple-Prediction and calculates the MPM by a simplified L-shaped most probable mode estimation (SLMPME) method. Experimental results show that EAIMBS gives an average 3.97% bit-rate reduction and 51% computation reduction at QP=32 compared with H.264/AVC.

  • Phased-MIMO radar with frequency diversity for increased system flexibility

    Publication Year: 2012 , Page(s): 16 - 19

    Phased-multiple-input multiple-output (MIMO) radar enjoys the diversity gain of MIMO radar without sacrificing the main advantage of phased-array radar, namely coherent transmit gain. However, a limitation of phased-MIMO radar is that the beam steering is fixed at one angle for all ranges, which limits the system's ability to mitigate undesirable range-dependent interference. To overcome this disadvantage, this paper proposes a flexible phased-MIMO radar with frequency diversity. The approach divides the transmit antenna array into multiple subarrays that are allowed to overlap; each subarray coherently transmits a distinct waveform, orthogonal to the waveforms transmitted by the other subarrays, at a distinct transmit frequency. Each subarray forms a range-dependent beam, and all beams may be steered to different ranges or directions. The subarrays jointly offer flexible operating modes such as MIMO radar, phased-array radar, and phased-MIMO radar.

  • Reduced-complexity equalizer for two-dimensional modulations with coherent demodulation

    Publication Year: 2012 , Page(s): 447 - 450

    A reduced-complexity equalizer capable of correcting channel distortion for two-dimensional modulations with coherent demodulation is presented. Based on the fact that the phase of the in-phase and quadrature sinusoids in the receiver must be synchronized to that of the received carrier, a structure using a pair of real-valued equalizers, one for the in-phase and one for the quadrature channel, is shown to be appropriate. Conventional approaches use structures with complex-valued arithmetic and thus incur much higher computational complexity. Because only real-valued arithmetic is needed in the new structure, a 50% saving in multiplications compared with its complex-valued counterpart is achieved, along with better symbol-error-rate performance. Computer simulation results justify these assertions.

  • Visualization of the sound pressure distribution in monitoring system of wireless sensor network

    Publication Year: 2012 , Page(s): 258 - 261

    To display the sound pressure distribution in a wireless sensor network monitoring system, a two-dimensional visualization method is proposed based on the positions and pressures of the sensor nodes and the target. An expansion and smoothing method for the convex hull is presented and used to calculate the positions of the boundary points of the drawing region. The positions and pressures of the points within the region are calculated through bi-cubic interpolation. Delaunay triangulation is then performed on these points, and the pressures are mapped to colors. Finally, the sound-pressure-distribution plane is drawn using OpenGL triangle primitives. Application results show that the visualization method meets the monitoring system's requirements for real-time interaction.

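    The interpolate-triangulate-colour pipeline can be sketched with SciPy and Matplotlib standing in for the OpenGL renderer; the convex-hull expansion and smoothing step is omitted, and the node layout below is synthetic:

```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as mtri
from scipy.interpolate import griddata

def draw_pressure_map(node_xy, node_spl, grid_step=0.5):
    """Interpolate sensor-node sound-pressure readings onto a dense grid,
    triangulate the points, and render a colour-mapped pressure plane."""
    x0, y0 = node_xy.min(axis=0)
    x1, y1 = node_xy.max(axis=0)
    gx, gy = np.meshgrid(np.arange(x0, x1, grid_step),
                         np.arange(y0, y1, grid_step))
    gz = griddata(node_xy, node_spl, (gx, gy), method='cubic')
    ok = ~np.isnan(gz)                                # drop points outside the hull
    tri = mtri.Triangulation(gx[ok], gy[ok])          # Delaunay triangulation
    plt.tripcolor(tri, gz[ok], shading='gouraud', cmap='jet')
    plt.colorbar(label='sound pressure level (dB)')
    plt.scatter(node_xy[:, 0], node_xy[:, 1], c='k', s=10)  # sensor nodes
    plt.show()

# toy usage: 12 nodes in a 20 m x 20 m region, a pressure peak near the centre
nodes = np.random.rand(12, 2) * 20.0
spl = 60.0 + 20.0 * np.exp(-np.sum((nodes - 10.0) ** 2, axis=1) / 50.0)
draw_pressure_map(nodes, spl)
```
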
  • Watershed and Random Walks based depth estimation for semi-automatic 2D to 3D image conversion

    Publication Year: 2012 , Page(s): 84 - 87
    Cited by:  Papers (2)

    Depth map estimation from a single image is the key problem in 2D to 3D image conversion. Many 2D to 3D conversion processes, either automatic or semi-automatic, have been proposed. The quality of depth maps from automatic methods is low, and incorrect depth values arise from estimation errors in depth-cue extraction. Semi-automatic approaches can generate better-quality depth maps from user-defined labels, which give a rough estimate of depth values in the scene, and use them to generate the remaining depth values and reconstruct the stereoscopic image; however, they require complex systems and are computationally intensive. A simplified approach combines the depth maps from Graph Cuts and Random Walks to preserve sharp boundaries and fine detail inside objects, but its drawback is the time-consuming energy minimization in Graph Cuts. In this paper, a fast Watershed segmentation based on a priority queue, which encodes the neighbor-distance relationship, replaces Graph Cuts to generate the hard-constraint depth map. This is appended to the neighbor cost in Random Walks to generate the final depth map, with hard constraints around object boundaries and fine detail inside objects. Watershed and Random Walks have low computational cost and achieve approximately real-time estimation, resulting in a fast stereoscopic conversion process. Experimental results demonstrate that the method produces good-quality stereoscopic images in a very short time.

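    The paper's exact cost-augmentation scheme is not shown, but the same watershed-then-Random-Walks idea can be sketched with scikit-image: watershed expands the user scribbles into hard-constraint regions, an eroded version of those regions seeds the random walker, and the per-label probabilities are blended into a continuous depth map. The erosion depth and beta value below are assumptions:

```python
import numpy as np
from scipy.ndimage import binary_erosion
from skimage.filters import sobel
from skimage.segmentation import watershed, random_walker

def estimate_depth(gray, scribble_mask, scribble_depth, erode_iter=5):
    """Semi-automatic depth from sparse user scribbles.

    gray           : 2-D float image in [0, 1]
    scribble_mask  : int array, 0 = unlabeled, 1..K = scribble label
    scribble_depth : array of K depth values, one per scribble label
    """
    # 1) fast Watershed on the gradient image, flooded from the scribbles
    regions = watershed(sobel(gray), markers=scribble_mask)
    # 2) keep only the confident interior of each region as hard constraints,
    #    leaving a band around object boundaries for Random Walks to decide
    seeds = np.zeros_like(scribble_mask)
    for lab in range(1, scribble_depth.size + 1):
        core = binary_erosion(regions == lab, iterations=erode_iter)
        seeds[core] = lab
    seeds[scribble_mask > 0] = scribble_mask[scribble_mask > 0]
    # 3) Random Walks probabilities per label at every pixel -> soft depth map
    prob = random_walker(gray, seeds, beta=130, mode='bf',
                         return_full_prob=True)          # shape (K, H, W)
    return np.tensordot(scribble_depth, prob, axes=1)
```
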
  • A TDI CCD based method for light target location in large field of view

    Publication Year: 2012 , Page(s): 517 - 520
    Cited by:  Papers (1)

    In this paper, a method based on the intersection of two TDI (Time Delay Integration) CCDs is proposed to locate a light target in a large field of view. A frame-difference method combined with the Otsu threshold-segmentation algorithm is used to identify images containing the light target, and the target image is binarized to reduce storage requirements. To obtain the position of the light target in the target image, a grayscale centroid algorithm is presented. TDI CCDs with high speed and high sensitivity are used as image sensors, and MATLAB is used as the image-processing software. In our test, 8192×32-pixel images acquired from a 320 m × 240 m field by two TDI CCD cameras are processed in MATLAB using the above algorithms. With the two-TDI-CCD intersection method, the location error is within 5% of the result obtained with a total station instrument.

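    The frame-difference / Otsu / grayscale-centroid chain is easy to illustrate with OpenCV. The sketch below assumes 8-bit grayscale frames and a static background frame, and leaves out the two-camera intersection geometry that produces the final location:

```python
import numpy as np
import cv2

def detect_and_locate(frame, background):
    """Frame differencing with Otsu thresholding to decide whether a light
    target is present, then a grayscale centroid for its sub-pixel position."""
    diff = cv2.absdiff(frame, background)                     # frame difference
    _, mask = cv2.threshold(diff, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu split
    if cv2.countNonZero(mask) == 0:
        return None                                           # no target found
    roi = frame.astype(np.float64) * (mask > 0)               # binarised target
    ys, xs = np.nonzero(mask)
    weights = roi[ys, xs]
    cx = np.sum(xs * weights) / np.sum(weights)               # grayscale centroid
    cy = np.sum(ys * weights) / np.sum(weights)
    return cx, cy
```
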
  • The research of identification method of crowd convergence and divergence in the surveillance video

    Publication Year: 2012 , Page(s): 618 - 622

    This paper proposes an improved method for identifying crowd convergence and divergence events based on the statistical properties of motion corners. Experimental results show that the method can effectively identify crowd convergence and divergence in video sequences.

  • Improved RLS positioning algorithm for satellite navigation systems

    Publication Year: 2012 , Page(s): 490 - 494

    The positioning algorithm used in the receiver of a satellite navigation system has an important effect on positioning accuracy. In this paper, a novel Weighted Least Squares (WLS) scheme, obtained by applying weighting to the Recursive Least Squares (RLS) algorithm, is proposed to improve the positioning accuracy of the receiver. The weights exploit the elevation angle so that data from satellites at higher elevations play a more important role in the WLS solution. A performance comparison between the proposed algorithm and RLS is given by numerical simulations. Simulation results show that the improved algorithm can effectively suppress some error factors and improve positioning accuracy.

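    The weighting idea can be illustrated with a plain (non-recursive) iterative weighted-least-squares fix. The sin²(elevation) weight below is a common choice standing in for the paper's elevation-based weight, and the single-epoch batch formulation is a simplification of the recursive algorithm:

```python
import numpy as np

def wls_position_fix(sat_pos, pseudoranges, elevations, x0=np.zeros(3), n_iter=8):
    """Iterative weighted-least-squares position fix.

    sat_pos      : (N, 3) satellite ECEF positions (m)
    pseudoranges : (N,)   measured pseudoranges (m)
    elevations   : (N,)   satellite elevation angles (rad), used for weighting
    Returns the estimated receiver position (m) and clock bias (m).
    """
    W = np.diag(np.sin(elevations) ** 2)          # high-elevation satellites trusted more
    state = np.append(x0, 0.0)                    # [x, y, z, c*dt]
    for _ in range(n_iter):
        vec = sat_pos - state[:3]
        dist = np.linalg.norm(vec, axis=1)
        predicted = dist + state[3]
        H = np.hstack([-vec / dist[:, None], np.ones((len(dist), 1))])
        residual = pseudoranges - predicted
        delta = np.linalg.solve(H.T @ W @ H, H.T @ W @ residual)
        state += delta
    return state[:3], state[3]
```
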
  • An improved time delay estimation method based on cross-power spectrum phase

    Publication Year: 2012 , Page(s): 686 - 690

    In the context of an anti-sniper acoustic detection and positioning system, an improved method based on overlap-add windowed segmentation and subsection de-noising of the cross-power spectrum phase is put forward to address the low accuracy and large relative errors that arise with common correlation methods. Experiments show that this method improves the accuracy of time delay estimation and simultaneously reduces the relative errors of the six estimated delays in a four-element array.

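    The baseline cross-power spectrum phase (GCC-PHAT) estimator that the paper builds on can be sketched directly; the overlap-add segmentation and subsection de-noising refinements are not reproduced here, and the interpolation factor is an assumption:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None, interp=16):
    """Estimate the time delay of `sig` relative to `ref` from the phase of
    their cross-power spectrum (GCC-PHAT)."""
    n = sig.size + ref.size
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12                         # keep phase only (PHAT weighting)
    cc = np.fft.irfft(R, n=interp * n)             # interpolated cross-correlation
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)              # delay in seconds
```
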
  • Boundary-aided Extreme Value Detection based pre-processing algorithm for H.264/AVC fast intra mode prediction

    Publication Year: 2012 , Page(s): 623 - 627
    Cited by:  Papers (1)

    The mode decision in the intra prediction of an H.264/AVC encoder requires complex computations and a significant amount of time to select the best mode, i.e., the one that achieves the minimum rate-distortion (RD) cost. These complex computations make real-time applications difficult, especially for software-based H.264/AVC encoders. This study proposes an efficient fast algorithm called Boundary-aided Extreme Value Detection (BEVD) to predict the best direction mode, excluding the DC mode, for fast intra-mode decision. BEVD-based edge detection predicts luma-4×4, luma-16×16, and chroma-8×8 modes effectively. The first step uses the pre-processing mode-selection algorithm to find the primary mode for fast prediction; the second step applies the few selected high-potential candidate modes to the RD-cost calculation for the mode decision. The encoding time is largely reduced while similar video quality is maintained. Simulation results show that the proposed BEVD method reduces encoding time by 63%, with a bit-rate increase of approximately 1.7% and a peak signal-to-noise ratio (PSNR) decrease of approximately 0.06 dB on QCIF and CIF sequences, compared with the H.264/AVC JM 14.2 software. The proposed method achieves less PSNR degradation and bit-rate increase than previous methods, with a greater reduction in encoding time.

  • A real-time target detection algorithm for Infrared Search and track system based on ROI extraction

    Publication Year: 2012 , Page(s): 774 - 778

    To address the difficulties in target detection for circumferential-scan infrared search systems, such as large image data volume, low detection probability for weak targets, and high false-detection rate, a real-time target detection algorithm based on region of interest (ROI) extraction is proposed. Firstly, it extracts the ROIs of suspected targets with a fast real-time algorithm over the whole panoramic image, based on the high-frequency and movement characteristics of the target pixels. Then, focusing on the ROI slices of suspected targets, finer detection and recognition are performed to exclude false alarms. Detection results on test images show that the algorithm achieves stable detection with a low false-alarm rate for distant dim targets and has been applied to an engineering prototype of a panoramic infrared search and track system.

  • A face hallucination algorithm via KPLS-eigentransformation model

    Publication Year: 2012 , Page(s): 462 - 467

    In this paper, we present a novel eigentransformation-based algorithm for face hallucination. The traditional eigentransformation method is a linear subspace approach that represents an image as a linear combination of training samples; consequently, it cannot effectively represent the relationship between low-resolution facial images and their high-resolution counterparts. In our algorithm, a Kernel Partial Least Squares (KPLS) predictor is introduced into the eigentransformation model to solve for the High-Resolution (HR) image from a Low-Resolution (LR) facial image. We compare the proposed method with several current Super-Resolution (SR) algorithms using different zooming factors. Experimental results show that our algorithm outperforms the compared methods in terms of both visual quality and numerical error.

  • An efficient and robust approach for wideband compressive spectrum sensing

    Publication Year: 2012 , Page(s): 499 - 502

    Compressive sensing (CS), which exploits the spectral sparsity of wideband signals, is a powerful approach to wideband spectrum acquisition at sub-Nyquist sampling rates. In this paper, we propose a modified CoSaMP method for fast and accurate wideband compressive spectrum sensing in noisy environments. We exploit a priori knowledge of the noise level to enhance the robustness of spectrum reconstruction to noise. In addition, to improve the efficiency of spectrum sensing, we use the change in the residual to adjust the halting conditions and introduce fast solutions to the essential least-squares estimation step. Simulation results show that the modified CoSaMP algorithm outperforms the original CoSaMP and the generic convex-relaxation algorithm basis pursuit denoising (BPDN) in approximation performance and robustness, and executes much faster than both.

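    A plain CoSaMP iteration with a noise-aware halting condition conveys the flavour of the modification. This is a generic sketch (the fast least-squares substitutes and the residual-change rule from the paper are not reproduced), assuming the noise level is known:

```python
import numpy as np

def cosamp(Phi, y, s, noise_level=0.0, max_iter=50):
    """CoSaMP recovery of an s-sparse vector x from y = Phi @ x + noise,
    halting once the residual norm drops to the (assumed known) noise level."""
    m, n = Phi.shape
    x = np.zeros(n, dtype=y.dtype)
    residual = y.copy()
    for _ in range(max_iter):
        proxy = Phi.conj().T @ residual                   # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]        # 2s largest entries
        support = np.union1d(omega, np.flatnonzero(x))
        sol = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        b = np.zeros(n, dtype=y.dtype)
        b[support] = sol
        keep = np.argsort(np.abs(b))[-s:]                 # prune to s largest
        x = np.zeros(n, dtype=y.dtype)
        x[keep] = b[keep]
        residual = y - Phi @ x
        if np.linalg.norm(residual) <= noise_level:       # noise-aware halting
            break
    return x

# toy usage: recover a 10-sparse spectrum from 80 random measurements
rng = np.random.default_rng(1)
Phi = rng.normal(size=(80, 256)) / np.sqrt(80)
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = rng.normal(size=10)
y = Phi @ x_true + 0.01 * rng.normal(size=80)
x_hat = cosamp(Phi, y, s=10, noise_level=0.01 * np.sqrt(80))
```
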
  • Three-step-approach with validation for face hallucination

    Publication Year: 2012 , Page(s): 468 - 473

    In this paper, we propose a novel face hallucination framework based on a three-step approach with validation. To improve the performance of facial-image reconstruction, validation is included in the framework to correct errors, so that the final result is more accurate than the result before validation. A 2D framework is applied, which means images can be processed directly without vectorization and spatial information is preserved. In the first step, the error of face-image reconstruction is learnt from the training data set by Bilateral Two-Dimensional Principal Component Analysis (B2DPCA); the validation is obtained from the Low-Resolution (LR) error and the High-Resolution (HR) error. In the second step, the global image is reconstructed using a Maximum a Posteriori (MAP) estimator, and in the final step a Regression Model for Tensors (RM-T) learns from the sample data set by applying error regression analysis. Experimental results on a well-known face database demonstrate that the proposed method improves face reconstruction, enhancing resolution and improving the quality of the face hallucination in comparison with the conventional method.

  • Edge information based effective intra mode decision algorithm

    Publication Year: 2012 , Page(s): 628 - 633

    In this paper, we present simple yet efficient intra mode decision algorithms for AVS video coding based on edge detection and a neural network. The technique uses the Sobel operator to extract edge information from the whole image before intra prediction; the prediction mode of each 8×8 block is then decided from its edges. We design two schemes to decide the intra mode. In the first method, the intra mode is determined by comparing the total number of feature points in each sub-block. It saves between 30% and 40% of the AVS Part 2 encoding time; it is very fast, but the bit rate increases correspondingly. To reduce this bit-rate increase, we design a second method that employs a neural network classifier trained on the edge points to decide the mode of each block for AVS intra prediction. Because only one prediction mode is chosen for the RDO calculation, simulation results show that the proposed scheme achieves up to 9% computational saving with no video-quality degradation compared with the existing method.

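    The Sobel-based pre-selection can be sketched as follows. The mapping from dominant edge orientation to a mode index, the edge threshold, and the four-way direction binning are illustrative assumptions rather than the paper's exact scheme:

```python
import numpy as np
from scipy.ndimage import sobel

# Toy mapping from dominant edge orientation to an intra mode index (assumption).
MODE_BY_ORIENTATION = {'vertical': 0, 'horizontal': 1, 'diag_down_left': 2,
                       'diag_down_right': 3, 'dc': 4}

def pick_intra_mode(block, edge_thresh=30.0):
    """Choose a single candidate intra mode for an 8x8 block from its Sobel
    edge statistics: the dominant gradient orientation selects a directional
    mode, and nearly flat blocks fall back to DC."""
    gx = sobel(block.astype(float), axis=1)           # horizontal gradient
    gy = sobel(block.astype(float), axis=0)           # vertical gradient
    mag = np.hypot(gx, gy)
    strong = mag > edge_thresh                        # "feature points"
    if not strong.any():
        return MODE_BY_ORIENTATION['dc']
    # edge direction is perpendicular to the gradient direction
    angle = (np.degrees(np.arctan2(gy[strong], gx[strong])) + 90.0) % 180.0
    hist, _ = np.histogram(angle, bins=[0, 22.5, 67.5, 112.5, 157.5, 180])
    hist[0] += hist[4]                                # wrap: 0 deg == 180 deg
    dominant = np.argmax(hist[:4])
    return [MODE_BY_ORIENTATION['horizontal'],        #   0 deg edge
            MODE_BY_ORIENTATION['diag_down_right'],   #  45 deg
            MODE_BY_ORIENTATION['vertical'],          #  90 deg
            MODE_BY_ORIENTATION['diag_down_left']][dominant]  # 135 deg
```
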
  • Quality-efficient de-interlacing for H.264-coded videos

    Publication Year: 2012 , Page(s): 92 - 97

    In this paper, we propose an efficient de-interlacing method for H.264-coded video sequences of different resolutions. Using the syntax elements (SEs) in the H.264 bitstream, two new strategies are employed to improve de-interlacing quality. The first strategy is based on the intra mode and improves the quality of regions with skewed edges; the second is based on the inter mode and refines the quality of de-interlaced videos while alleviating error-propagation side effects. Experimental results on popular test video sequences at common intermediate format (CIF), quarter CIF (QCIF), standard-definition (SD), and full high-definition (HD) resolutions demonstrate the superiority of the proposed method in both objective video-quality measures and subjective visual effect compared with the state-of-the-art method by Dong and Ngan.
