
IEE Proceedings - Vision, Image and Signal Processing

Issue 4 • August 1997


Displaying Results 1 - 12 of 12
  • Closed-loop motion compensation for video coding standards

    Publication Year: 1997 , Page(s): 227 - 232
    Cited by:  Patents (3)

    In the classical block-matching motion-estimation approach, the motion vectors that result in minimum distortion between the estimated and the actual image block are chosen. However, these motion vectors may not be optimal in terms of coding efficiency. An analysis-by-synthesis method is presented that selects the optimal motion vectors using the resulting bit rate and distortion. A significant reduction in bit rate is achieved with virtually no degradation in objective image quality. H.263 is used in simulation experiments to test the algorithm. A short illustrative sketch of this rate-distortion selection follows the entry.

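A minimal sketch of the rate-distortion idea behind such analysis-by-synthesis motion estimation, assuming a simple SAD distortion and a hypothetical differential bit-cost model for the motion vectors (the paper's actual H.263 rate and distortion measurements are not reproduced):

```python
import numpy as np

def sad(block, ref_block):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block.astype(np.int64) - ref_block.astype(np.int64)).sum()

def mv_bit_cost(mv, predictor):
    """Hypothetical stand-in for the bits needed to code a motion vector
    differentially against its predictor (real codecs use VLC tables)."""
    dx, dy = mv[0] - predictor[0], mv[1] - predictor[1]
    return 1 + 2 * (abs(dx) + abs(dy))

def rd_motion_search(cur, ref, x, y, bs=16, rng=7, lam=4.0, predictor=(0, 0)):
    """Pick the motion vector minimising J = D + lambda * R rather than D alone."""
    block = cur[y:y + bs, x:x + bs]
    best_mv, best_cost = (0, 0), np.inf
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + bs > ref.shape[0] or rx + bs > ref.shape[1]:
                continue
            d = sad(block, ref[ry:ry + bs, rx:rx + bs])
            j = d + lam * mv_bit_cost((dx, dy), predictor)
            if j < best_cost:
                best_cost, best_mv = j, (dx, dy)
    return best_mv
```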
  • Bayesian approach to parameter estimation and interpolation of time-varying autoregressive processes using the Gibbs sampler

    Publication Year: 1997 , Page(s): 249 - 256
    Cited by:  Papers (10)  |  Patents (6)

    A nonstationary time series is one in which the statistics of the process are a function of time; this time dependency makes it impossible to utilise standard analytically defined statistical estimators to parameterise the process. To overcome this difficulty, the time series is considered within a finite time interval and is modelled as a time-varying autoregressive (AR) process. The AR coefficients that characterise this process are functions of time, represented by a family of basis vectors. The corresponding basis coefficients are invariant over the time window and have stationary statistical properties. A method is described for applying a Markov chain Monte Carlo method known as the Gibbs sampler to the problem of estimating the parameters of such a time-varying autoregressive (TVAR) model, whose time-dependent coefficients are modelled by basis functions. The Gibbs sampling scheme is then extended to include a stage which may be used for interpolation. Results on synthetic and real audio signals show that the model is flexible, and that a Gibbs sampling framework is a reasonable scheme for estimating and characterising a time-varying AR process. A simplified sketch of the basis-expansion idea follows this entry.

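A simplified sketch of the basis-expansion idea: the time-varying AR coefficients are expressed as fixed combinations of basis functions of time, and the stationary basis coefficients are estimated. For brevity, an ordinary least-squares fit and a simple polynomial basis stand in for the paper's Gibbs sampling scheme and basis vectors:

```python
import numpy as np

def tvar_basis_ls(x, p=2, q=3):
    """Fit a time-varying AR(p) model
        x[t] = sum_k a_k(t) * x[t-k] + e[t],   a_k(t) = sum_j c[k, j] * f_j(t),
    estimating the stationary basis coefficients c by least squares
    (a simplified stand-in for the Gibbs sampler)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    t = np.linspace(-1.0, 1.0, N)
    F = np.vstack([t ** j for j in range(q)]).T        # N x q polynomial basis f_j(t)
    rows, targets = [], []
    for n in range(p, N):
        # regression row: past samples modulated by the basis evaluated at time n
        rows.append(np.concatenate([x[n - k] * F[n] for k in range(1, p + 1)]))
        targets.append(x[n])
    A, b = np.array(rows), np.array(targets)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c.reshape(p, q), F                          # a_k(t) is then F @ c[k]
```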
  • Adaptive resampling algorithm for image zooming

    Publication Year: 1997 , Page(s): 207 - 212
    Cited by:  Papers (11)  |  Patents (7)

    Applying an interpolation function indiscriminately to an image, to resample it, will generally result in aliasing, edge blurring and other artefacts. The authors present an adaptive resampling algorithm for zooming up images. The algorithm is based on analysing the local structure of the image and applying a near-optimal, least time-consuming resampling function that preserves edge locations and their contrast. This is done by segmenting the image dynamically into homogeneous areas as it is scanned or received. Based on the location of the pixel to be computed (whether it lies within a homogeneous area, on its edge, or is an isolated feature), interpolation, extrapolation or pixel replication is chosen. The algorithm's performance, in terms of both quality and computational complexity, is compared with different methods and functions previously reported in the literature. The advantage of the algorithm is quite apparent at edges and for large zooming factors. A toy sketch of the adaptive switching follows this entry.

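A toy sketch of the adaptive switching, using a local-variance test to decide between bilinear interpolation (homogeneous areas) and pixel replication (edges). The 3x3 test window, the variance threshold and the fixed 2x zoom factor are illustrative assumptions, not the authors' dynamic segmentation rule:

```python
import numpy as np

def adaptive_zoom2x(img, var_thresh=25.0):
    """Zoom an image by 2 in each direction: bilinear interpolation inside
    homogeneous areas, pixel replication near edges to keep them sharp."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.empty((2 * h, 2 * w))
    padded = np.pad(img, 1, mode='edge')
    for y in range(h):
        for x in range(w):
            nb = padded[y:y + 3, x:x + 3]              # 3x3 neighbourhood around (y, x)
            p = img[y, x]
            r = img[y, min(x + 1, w - 1)]
            d = img[min(y + 1, h - 1), x]
            rd = img[min(y + 1, h - 1), min(x + 1, w - 1)]
            if nb.var() < var_thresh:                  # homogeneous area: interpolate
                out[2 * y, 2 * x] = p
                out[2 * y, 2 * x + 1] = (p + r) / 2
                out[2 * y + 1, 2 * x] = (p + d) / 2
                out[2 * y + 1, 2 * x + 1] = (p + r + d + rd) / 4
            else:                                      # edge or isolated feature: replicate
                out[2 * y:2 * y + 2, 2 * x:2 * x + 2] = p
    return out
```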
  • Direct reconstruction method for wavelet transform extrema representation

    Publication Year: 1997 , Page(s): 193 - 198
    Cited by:  Papers (5)

    In contrast to the iterative projections-onto-convex-sets reconstruction algorithm, a noniterative method that completely solves the problem of reconstructing from the wavelet transform extrema representation is presented for the first time. The solution obtained by the proposed method is mathematically consistent and is indistinguishable from the true solution, i.e. both give the same representation. The proposed method consists of first finding a least-squares solution in the space spanned by the wavelet sampling bases. An orthogonal component, which is added to the least-squares solution to form a consistent solution, is then found by solving a set of linear inequalities specified by the a priori information in the representation using the linear programming technique. Numerical results presented show that the reconstructions are of good quality.

  • Multiobjective neural network for image reconstruction

    Publication Year: 1997 , Page(s): 233 - 236
    Cited by:  Papers (2)

    The authors propose a multiobjective neural network model and algorithm for image reconstruction from projections. The model combines the Hopfield model with a multiobjective decision-making approach, and a neural network algorithm based on weighted-sum optimisation is developed. The dynamic process of the net is based on minimisation of a weighted-sum energy function using Euler's iteration, and the algorithm is applied to image reconstruction from computer-generated noisy projections and from Siemens Somatom DR scanner data. Reconstructions based on this method are shown to be superior, in terms of reconstruction accuracy, to those based on conventional iterative reconstruction algorithms such as MART (multiplicative algebraic reconstruction technique) and convolution. Computer simulation using the multiobjective method shows a significant improvement in image quality and convergence behaviour over conventional algorithms. A minimal sketch of a weighted-sum reconstruction iteration follows this entry.

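A minimal sketch in the same spirit: reconstruct an image from projections by minimising a weighted sum of a data-fidelity term and a smoothness term with explicit Euler steps. The dense projection matrix A, the Laplacian regulariser, the weights, the step size and the non-negativity clip are illustrative assumptions rather than the authors' Hopfield-type network:

```python
import numpy as np

def laplacian(img):
    """Discrete Laplacian with periodic boundaries."""
    return (-4 * img
            + np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1))

def weighted_sum_reconstruct(A, p, shape, w=(1.0, 0.1), lr=1e-4, iters=500):
    """Minimise E(f) = w0 * ||A f - p||^2 + w1 * ||L f||^2 by explicit Euler steps,
    where A maps the flattened image f to the projection data p."""
    f = np.zeros(shape[0] * shape[1])
    for _ in range(iters):
        grad_data = 2 * A.T @ (A @ f - p)              # gradient of the data term
        lf = laplacian(f.reshape(shape))
        grad_smooth = 2 * laplacian(lf).ravel()        # gradient of ||L f||^2 (L symmetric)
        f -= lr * (w[0] * grad_data + w[1] * grad_smooth)
        np.clip(f, 0, None, out=f)                     # keep intensities non-negative
    return f.reshape(shape)
```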
  • Illuminant-tilt estimation from images of isotropic texture

    Publication Year: 1997 , Page(s): 213 - 219
    Cited by:  Papers (1)

    The authors present a new illuminant-tilt-angle estimator that works with isotropic textures. It has been developed from Kube and Pentland's (1988) frequency-domain model of images of three-dimensional texture, and is compared with Knill's (1990) spatial-domain estimator. The frequency-domain and spatial-domain theory behind the two estimators is related via an alternative proof of the basic phenomenon that both exploit: namely, that the variance of the partial derivative of the image is at a maximum when the derivative is taken in the direction of the illuminant's tilt. Results obtained using both simulated and real textures suggest that the frequency-domain estimator is more accurate. A compact sketch of this property follows the entry.

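A compact sketch of the property both estimators exploit: for zero-mean derivatives, the direction that maximises the variance of the directional derivative is the principal direction of the gradient second-moment matrix, which gives the tilt up to a 180-degree ambiguity. The finite-difference gradients and closed form below are a generic illustration, not either of the compared estimators:

```python
import numpy as np

def estimate_tilt(img):
    """Estimate the illuminant tilt (radians, modulo pi) as the direction that
    maximises the variance of the directional derivative of the image."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    gx -= gx.mean()
    gy -= gy.mean()
    # Directional derivative along angle t is gx*cos(t) + gy*sin(t); its variance
    # is maximised by the leading eigenvector of the 2x2 second-moment matrix.
    cxx, cyy, cxy = (gx * gx).mean(), (gy * gy).mean(), (gx * gy).mean()
    return 0.5 * np.arctan2(2 * cxy, cxx - cyy)
```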
  • Similarity measure for superquadrics

    Publication Year: 1997 , Page(s): 237 - 243
    Cited by:  Papers (1)

    Superquadrics with parametric deformations are suitable models for use as solid primitives in describing a complicated 3-D object. Several methods for recovering superquadric primitives from range data have been proposed, but there is still no effective similarity measure for matching two superquadrics in a 3-D object recognition system. The authors propose a similarity measure that evaluates the degree of shape similarity between two superquadric-based objects. The measure is defined in terms of the volume of the regions bounded by the surfaces of the two 3-D objects, and it has been proved to be a metric. Its value is computed by Monte Carlo integration. Experimental results illustrate that the proposed similarity measure is effective in matching a recovered superquadric against a set of superquadrics in a model database. A sketch of the Monte Carlo volume computation follows this entry.

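A sketch of the Monte Carlo volume computation, using the standard superquadric inside-outside function to test point membership. Treating the dissimilarity as the volume enclosed by exactly one of two axis-aligned, origin-centred superquadrics, and the choice of sampling box, are simplifying assumptions for illustration (the paper handles posed, deformed primitives):

```python
import numpy as np

def inside(pts, a, e):
    """Vectorised superquadric inside-outside test for an axis-aligned,
    origin-centred superquadric with semi-axes a = (a1, a2, a3) and
    shape exponents e = (e1, e2)."""
    x, y, z = (np.abs(pts) / np.asarray(a, dtype=float)).T
    f = (x ** (2 / e[1]) + y ** (2 / e[1])) ** (e[1] / e[0]) + z ** (2 / e[0])
    return f <= 1.0

def symmetric_difference_volume(a1, e1, a2, e2, n=200_000, box=2.0, seed=0):
    """Monte Carlo estimate of the volume enclosed by exactly one of the two
    superquadrics (0 means identical shapes); the box must enclose both shapes."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-box, box, size=(n, 3))
    hit1, hit2 = inside(pts, a1, e1), inside(pts, a2, e2)
    return np.mean(hit1 ^ hit2) * (2 * box) ** 3
```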
  • Generalised method for pruning an FFT type of transform

    Publication Year: 1997 , Page(s): 189 - 192
    Cited by:  Papers (7)

    A new pruning method for an FFT type of transform structure is proposed. Its novelty lies in the fact that, besides being able to prune the transform, it can complete a previously pruned transform or progress from one level of pruning to another. The method can be applied directly to fast progressive image coding.

  • Enhanced interframe coding based on morphological segmentation

    Publication Year: 1997 , Page(s): 220 - 226

    Morphological segmentation has been proposed as an attractive alternative to transform-based compression for interframe coding of digital video. Being a spatial approach, segmentation-based coding eliminates the artefacts commonly associated with transform coding, such as ringing around sharp edges. One disadvantage of the method is that it can introduce spurious edges in the reconstructed video sequence, associated with the boundaries of the transmitted regions. The authors present a statistically derived smoothing algorithm that reduces this problem. In addition, a single-stage entropy coder for the update signal is proposed in place of the conventional two-stage algorithm. Comparisons are made between the performance of a traditional motion-compensated DCT coder and segmentation-based codecs (with and without smoothing) for CIF sequences at bit rates between 64 and 256 kbps. It is concluded that, at the bit rates under investigation, the segmentation-based method yields improved subjective quality of the reconstructed video.

  • Multiple resolution image restoration

    Publication Year: 1997 , Page(s): 199 - 206
    Cited by:  Papers (2)

    A new algorithm for solving the deconvolution problem is proposed. The algorithm uses the wavelet transform to induce a multiresolution approach to deconvolving a blurred signal or image. The low-resolution part of the signal/image is restored first, and higher-resolution information is then added successively into the estimation process. Two different ways of incorporating the image-space positivity constraint, namely loosely and strictly, are discussed. In contrast to most restoration algorithms, the positivity constraint is applied directly in the transformed domain. The performance of the algorithm in the presence of noise is also investigated. A rough sketch of the coarse-to-fine idea follows this entry.

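A rough sketch of the coarse-to-fine idea, assuming a known blur kernel, dyadic image and kernel sizes, and plain projected Landweber iterations with a positivity clip at each resolution; the paper's wavelet-domain machinery and its loose/strict positivity variants are not reproduced:

```python
import numpy as np
from scipy.signal import fftconvolve

def downsample(img, s):
    """Average s x s blocks (dimensions assumed divisible by s)."""
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def projected_landweber(y, psf, x0, step=1.0, iters=50):
    """Iterate x <- x + step * H^T (y - H x), clipping to x >= 0 after each step."""
    x = x0.copy()
    psf_flip = psf[::-1, ::-1]                         # adjoint of convolution with psf
    for _ in range(iters):
        resid = y - fftconvolve(x, psf, mode='same')
        x = np.clip(x + step * fftconvolve(resid, psf_flip, mode='same'), 0, None)
    return x

def multires_deconvolve(y, psf, levels=3, iters=50):
    """Restore the blurred image y coarse-to-fine: solve a downsampled problem
    first, then upsample that estimate to initialise the next finer level."""
    x = None
    for lev in range(levels - 1, -1, -1):
        s = 2 ** lev
        y_l, psf_l = downsample(y, s), downsample(psf, s)
        psf_l = psf_l / psf_l.sum()                    # keep the kernel normalised
        x0 = y_l.copy() if x is None else np.kron(x, np.ones((2, 2)))
        x = projected_landweber(y_l, psf_l, x0, iters=iters)
    return x
```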
  • Block algorithms for the parametric estimation of signals and systems

    Publication Year: 1997 , Page(s): 257 - 259

    Existing recursive parameter estimation methods approximate the covariance and gradient matrices, which should be computed as functions of the present parameter vector θ̂(t), by matrices computed as functions of all previous parameter estimates θ̂(i) for i ⩽ t. By reducing these approximations considerably, modified versions of the recursive identification algorithms are obtained. By considering local averages of the covariance and the gradient, and combining them conveniently with the block nature of the estimators, efficient block versions of these algorithms are obtained.

  • Efficient weighted least-squares algorithm for the design of FIR filters

    Publication Year: 1997 , Page(s): 244 - 248
    Cited by:  Papers (5)

    The weighted least-squares (WLS) technique has been widely used for the design of digital FIR filters. In the conventional WLS approach, the filter coefficients are obtained by performing a matrix inversion, which requires O(N³) computation. The authors present a new WLS algorithm that introduces an extra frequency response which implicitly includes the weight function. In the new algorithm, the filter coefficients can be obtained by a single matrix-vector multiplication, reducing the computational complexity from O(N³) to O(N²). A minimal sketch of the conventional WLS design follows this entry.

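For reference, a minimal sketch of the conventional WLS design the paper improves upon: set up the weighted normal equations on a dense frequency grid and solve for the cosine-basis coefficients of a linear-phase (Type I) low-pass filter. The O(N³) linear solve below is the step the proposed algorithm replaces with a matrix-vector product; the band edges, weights and don't-care transition band are illustrative choices:

```python
import numpy as np

def wls_fir_lowpass(num_taps, wp, ws, weight_stop=10.0, grid=512):
    """Conventional WLS design of an odd-length, linear-phase (Type I) low-pass FIR.
    wp and ws are the passband and stopband edges in radians (0..pi)."""
    M = (num_taps - 1) // 2
    w = np.linspace(0, np.pi, grid)
    D = (w <= wp).astype(float)                        # desired amplitude response
    W = np.where(w >= ws, weight_stop, 1.0)            # heavier weighting in the stopband
    W[(w > wp) & (w < ws)] = 0.0                       # don't-care transition band
    C = np.cos(np.outer(w, np.arange(M + 1)))          # amplitude response = C @ a
    Q = C.T @ (W[:, None] * C)                         # weighted normal equations
    b = C.T @ (W * D)
    a = np.linalg.solve(Q, b)                          # the O(N^3) step
    return np.concatenate([a[M:0:-1] / 2, [a[0]], a[1:] / 2])   # symmetric impulse response
```

For example, wls_fir_lowpass(41, 0.3 * np.pi, 0.4 * np.pi) returns a 41-tap filter with its transition band between 0.3π and 0.4π.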