IET Image Processing

Issue 3 • June 2008

  • Editorial - Visual information engineering

    Page(s): 105 - 106


  • Statistical, DCT and vector quantisation-based video codec

    Page(s): 107 - 115

    The authors present a novel hybrid statistical, DCT and vector quantisation-based video-coding technique. In intra mode of operation, an input frame is divided into a number of non-overlapping pixel blocks. A discrete cosine transform then converts each block into the frequency domain. Coefficients with the same frequency index across blocks are grouped together, generating a number of matrices, where each matrix contains the coefficients of a particular frequency index. The matrix containing the DC coefficients is losslessly coded. Matrices containing high-frequency coefficients are coded using a novel statistical encoder. In inter mode of operation, overlapped block motion estimation/compensation is employed to exploit temporal redundancy between successive frames, generating a displaced frame difference (DFD) for each inter-frame. A wavelet transform then decomposes the DFD frame into its frequency subbands. Coefficients in the detail subbands are vector quantised, while coefficients in the baseband are losslessly coded. To evaluate the performance of the codec, the proposed codec and the adaptive subband vector quantisation (ASVQ) video codec, which has been shown to outperform H.263 at all bitrates, were applied to a number of test sequences. Results indicate that the proposed codec outperforms the ASVQ video codec both subjectively and objectively at all bitrates.
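The intra-mode grouping step described in the abstract can be sketched in a few lines of numpy. This is an illustrative sketch only, assuming 8 x 8 blocks and an orthonormal 2D DCT-II; the function names are not from the paper.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of a square block (no external dependencies)."""
    B = block.shape[0]
    k = np.arange(B)
    C = np.sqrt(2.0 / B) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * B))
    C[0, :] /= np.sqrt(2.0)
    return C @ block @ C.T

def group_by_frequency(frame, B=8):
    """Split a frame into B x B blocks, DCT each, and regroup so that
    entry (u, v) of the result collects coefficient (u, v) from every
    block -- the per-frequency matrices the codec then encodes."""
    H, W = frame.shape
    blocks = frame.reshape(H // B, B, W // B, B).swapaxes(1, 2).reshape(-1, B, B)
    coeffs = np.stack([dct2(b) for b in blocks])   # (n_blocks, B, B)
    return np.transpose(coeffs, (1, 2, 0))         # (B, B, n_blocks)
```

Under this layout, `result[0, 0]` is the matrix of DC coefficients (losslessly coded in the scheme above), while the high-frequency matrices would go to the statistical encoder.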

  • Influence of downsampling filter characteristics on compression performance in wavelet-based scalable video coding

    Page(s): 116 - 129

    The application of different downsampling filters in video coding directly models visual information at lower resolutions and influences the compression performance of the chosen coding system. In wavelet-based scalable video coding, spatial scalability is achieved by applying wavelets as downsampling filters; however, the characteristics of different wavelets affect performance at the targeted spatio-temporal decoding points. An analysis of different downsampling filters in popular wavelet-based scalable video coding schemes is presented. Evaluation is performed for both intra- and inter-coding schemes using wavelets and standard downsampling strategies. On the basis of the obtained results, a new concept of inter-resolution prediction is proposed, which maximises the average performance using a combination of standard downsampling filters and wavelet-based coding.
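The distinction between a wavelet low-pass and a "standard" downsampling filter can be made concrete with the simplest case, the Haar wavelet, versus plain 2 x 2 averaging. This is an illustrative sketch, not taken from the paper:

```python
import numpy as np

def haar_downsample(frame):
    """One level of the 2D Haar analysis low-pass: each 2x2
    neighbourhood maps to (sum of its four pixels) / 2, giving the
    approximation subband used as the half-resolution layer."""
    return (frame[0::2, 0::2] + frame[0::2, 1::2] +
            frame[1::2, 0::2] + frame[1::2, 1::2]) / 2.0

def mean_downsample(frame):
    """A standard downsampling filter for comparison: 2x2 block
    averaging (sum of the four pixels / 4)."""
    return (frame[0::2, 0::2] + frame[0::2, 1::2] +
            frame[1::2, 0::2] + frame[1::2, 1::2]) / 4.0
```

Even in this trivial case the two half-resolution signals differ by a factor of 2 (the orthonormal Haar gain), one small illustration of why wavelet-derived layers and conventionally downsampled references are not directly interchangeable.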

  • Residue-free video coding with pixelwise adaptive spatio-temporal prediction

    Page(s): 131 - 138

    The authors introduce residue-free video coding, in which motion-compensated predictions from surrounding frames and spatial predictions from the current frame are combined adaptively on a pixel-by-pixel basis. The consequence is that residue frames, blocks or regions are never explicitly formed. The authors describe a practical embodiment of a residue-free coder, temporal prediction trees, in which the local adaptation is conditioned frame to frame by a control parameter derived from global motion statistics. Using fixed-block-size motion compensation, the resulting coder is competitive with conventional residue-based compression, and at higher data rates is able to outperform H.264/AVC for high-activity sequences.
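The key idea, a per-pixel choice between temporal and spatial prediction that a decoder can reproduce without side information, can be sketched as a toy predictor. This is not the authors' temporal prediction trees, only a minimal illustration of a causally adaptive predictor:

```python
import numpy as np

def predict_frame(prev, cur):
    """Toy pixelwise adaptive spatio-temporal predictor (illustrative,
    not the authors' scheme). At each pixel, pick either the temporal
    predictor (co-located pixel of the previous frame) or a simple
    spatial predictor (left neighbour of the current frame), choosing
    whichever was more accurate at the two causal neighbours -- a
    decision the decoder can replicate, so no residue is sent."""
    H, W = cur.shape
    pred = np.empty_like(cur)
    for i in range(H):
        for j in range(W):
            if i == 0 or j == 0:
                pred[i, j] = prev[i, j]   # border: temporal fallback
                continue
            # accuracy of each predictor at the left and upper neighbours
            e_t = abs(cur[i, j-1] - prev[i, j-1]) + abs(cur[i-1, j] - prev[i-1, j])
            e_s = abs(cur[i, j-1] - cur[i, j-2 if j > 1 else j-1]) + \
                  abs(cur[i-1, j] - cur[i-1, j-1])
            pred[i, j] = prev[i, j] if e_t <= e_s else cur[i, j-1]
    return pred
```

Because the decision uses only already-decoded pixels, encoder and decoder stay in lockstep, which is what allows the residue to be dropped entirely.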

  • Image-based facial recognition in the domain of high-order polynomial one-way mapping

    Page(s): 139 - 149

    The authors present a secure facial recognition system. The biometric data are transformed to a cancellable domain using high-order polynomial functions and co-occurrence matrices. The proposed method provides both high recognition accuracy and biometric data protection. Protection of the data relies on the polynomial functions: a newly reissued cancellable biometric can be obtained simply by changing the polynomial parameters. Beyond data protection, the reconstructed co-occurrence matrices also contribute to the accuracy enhancement. The Hadamard product is used to reconstruct the new measure and has shown high flexibility in providing a new relationship between two independent covariance matrices. The proposed cancellable biometric is treated in the same manner as the original biometric data, which enables the original data to be replaced by the cancellable version with no change to the authentication system. The two-dimensional principal component analysis recognition algorithm is used at the authentication stage. Results show high non-reversibility of the data, improved accuracy over the original data, and a recognition rate of 97%.
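The revocability property, reissuing a template by changing polynomial parameters, can be sketched generically. This toy mapping is an assumption for illustration and not the authors' exact scheme:

```python
import numpy as np

def cancellable_transform(features, coeffs):
    """Toy high-order polynomial one-way mapping (illustrative only).
    Each feature x is replaced by the polynomial sum_k coeffs[k] * x**k
    with user-specific coefficients; if a template is compromised, a
    new one is reissued simply by drawing fresh coefficients."""
    powers = np.vander(np.asarray(features, dtype=float),
                       N=len(coeffs), increasing=True)  # columns x^0 .. x^(n-1)
    return powers @ np.asarray(coeffs, dtype=float)
```

Matching is then performed entirely in the transformed domain, so the stored template never exposes the original biometric features.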

  • Low-delay video control in a personal area network for augmented reality

    Page(s): 150 - 162

    A personal area network (PAN) is a feature of an augmented reality system, transmitting modified video for real-time display. Low-delay communication of encoded video over a Bluetooth wireless PAN is achieved in favourable channel conditions by combining dynamic packetisation of video slices with centralised, predictive rate control. The result is minimised packet delay (below 0.05 s) and high-quality 40 dB video, with packet loss from radio-frequency noise limited to 4. Where channel conditions result in error bursts, dynamic rate change is introduced to reduce the need for packet retransmission and improve power efficiency.
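The delay-bounding role of predictive rate control can be illustrated with a minimal buffer-driven budget rule. This is a generic sketch under assumed parameter names, not the authors' controller:

```python
def frame_bit_budget(buffer_bits, buffer_capacity, target_bps, frame_rate):
    """Toy centralised rate controller (illustrative sketch): shrink the
    per-frame bit budget as the transmit buffer fills, so queued
    packets -- and hence delay -- stay bounded. All parameter names
    are assumptions for illustration."""
    nominal = target_bps / frame_rate                 # budget at an empty buffer
    fullness = min(1.0, buffer_bits / buffer_capacity)
    # keep a small floor so the encoder never stalls completely
    return max(0.1 * nominal, nominal * (1.0 - fullness))
```

A fuller buffer yields a smaller budget, which is the basic mechanism by which rate control trades momentary quality for bounded packet delay.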

  • SimBIL: appearance-based simulation of burst-illumination laser sequences

    Page(s): 165 - 174

    A novel appearance-based simulator of burst-illumination laser (BIL) sequences, SimBIL, is presented, and the sequences it generates are compared with those of a physical model-based simulator that the authors have developed concurrently. SimBIL uses a database of 3D geometric object models represented as faceted meshes, and attaches example-based representations of material appearance to each model surface. The representation is based on examples of intensity-time profiles for a set of orientations and materials. The dimensionality of the large set of profile examples (called a profile eigenspace) is reduced by principal component analysis. Depth and orientation of the model facets are used to simulate time gating, deciding which object parts are imaged in every frame of the sequence. Model orientation and material type are used to index the profile eigenspaces and assign an intensity-time profile to frame pixels. To assess the practical merit of SimBIL sequences comparatively, the authors compare range images reconstructed by a reference algorithm using sequences from SimBIL, from the physics-based simulator, and from real BIL sequences.
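The PCA reduction of profile examples mentioned above can be sketched with a standard SVD-based implementation. The data layout (one intensity-time profile per row) and the choice of `k` are assumptions for illustration:

```python
import numpy as np

def profile_eigenspace(profiles, k):
    """Reduce a set of intensity-time profiles (one per row) to a
    k-dimensional eigenspace via PCA, along the lines of SimBIL's
    profile eigenspaces (sketch only)."""
    mean = profiles.mean(axis=0)
    _, _, Vt = np.linalg.svd(profiles - mean, full_matrices=False)
    basis = Vt[:k]                        # top-k principal directions
    coords = (profiles - mean) @ basis.T  # low-dimensional coordinates
    return mean, basis, coords

def reconstruct(mean, basis, coords):
    """Approximate the original profiles from eigenspace coordinates."""
    return mean + coords @ basis
```

At render time the simulator would index such an eigenspace by orientation and material, then reconstruct a profile to drive the per-pixel intensity over the gated frames.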


Aims & Scope

The range of topics covered by IET Image Processing includes areas related to the generation, processing and communication of visual information.


Publisher
IET Research Journals
iet_ipr@theiet.org