IEEE Transactions on Image Processing

Issue 4 • April 2007

Displaying Results 1 - 25 of 28
  • Table of contents

    Publication Year: 2007, Page(s): C1 - C4
  • IEEE Transactions on Image Processing publication information

    Publication Year: 2007, Page(s): C2
  • A New Orientation-Adaptive Interpolation Method

    Publication Year: 2007, Page(s): 889 - 900
    Cited by: Papers (46) | Patents (3)

    We propose an isophote-oriented, orientation-adaptive interpolation method. The proposed method employs an interpolation kernel that adapts to the local orientation of isophotes, and the pixel values are obtained through an oriented, bilinear interpolation. We show that, by doing so, the curvature of the interpolated isophotes is reduced, and, thus, zigzagging artifacts are largely suppressed. Analysis and experiments show that images interpolated using the proposed method are visually pleasing and almost artifact free.

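A toy way to see the benefit of interpolating along isophotes rather than across them: for each new diagonal sample of a 2x upscale, average along the diagonal with the smaller intensity difference. This is a minimal sketch of the orientation-adaptive idea, not the authors' kernel; the function name and test image are illustrative assumptions.

```python
import numpy as np

def edge_directed_upscale2x(img):
    """Toy 2x upscaler that adapts to local edge orientation.

    Not the paper's method; it only illustrates that interpolating along
    (rather than across) isophotes reduces zigzag artifacts compared with
    plain bilinear interpolation.
    """
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=float)
    out[::2, ::2] = img                                # keep original samples

    # New "diagonal" pixels: average along the diagonal with the smaller
    # intensity difference, i.e. along the likely isophote direction.
    d45 = np.abs(img[1:, :-1] - img[:-1, 1:])
    d135 = np.abs(img[1:, 1:] - img[:-1, :-1])
    avg45 = 0.5 * (img[1:, :-1] + img[:-1, 1:])
    avg135 = 0.5 * (img[1:, 1:] + img[:-1, :-1])
    out[1::2, 1::2] = np.where(d45 < d135, avg45, avg135)

    # Remaining new pixels: average of the two nearest original samples.
    out[1::2, ::2] = 0.5 * (out[:-2:2, ::2] + out[2::2, ::2])
    out[::2, 1::2] = 0.5 * (out[::2, :-2:2] + out[::2, 2::2])
    return out

img = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))        # simple test ramp
print(edge_directed_upscale2x(img).shape)              # (15, 15)
```
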
  • State-Space Analysis of Cardiac Motion With Biomechanical Constraints

    Publication Year: 2007, Page(s): 901 - 917
    Cited by: Papers (3)

    Quantitative estimation of nonrigid motion from image sequences has important technical and practical significance. State-space analysis provides powerful and convenient ways to construct and incorporate the physically meaningful system dynamics of an object, the image-derived observations, and the process and measurement noise disturbances. In this paper, we present a biomechanical-model constrained state-space analysis framework for the multiframe estimation of the periodic cardiac motion and deformation. The physical constraints act as a spatial regulator of the myocardial behavior and a spatial filter/interpolator of the data measurements, while techniques from statistical filtering theory impose spatiotemporal constraints to facilitate the incorporation of multiframe information to generate optimal estimates of the heart kinematics. Physiologically meaningful results have been achieved from estimated displacement fields and strain maps using in vivo left ventricular magnetic resonance tagging and phase contrast image sequences, which provide the tag-tag and tag-boundary displacement inputs, and the mid-wall instantaneous velocity information and boundary displacement measures, respectively.

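At its core, the statistical-filtering machinery used here is a state-space predict/correct recursion. Below is a generic linear Kalman step as a minimal sketch of that machinery; the matrices F, H, Q, and R are placeholders, not the paper's biomechanical model.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter (generic sketch)."""
    # Predict: propagate the state estimate and covariance through the dynamics F.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement z via the gain K.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```
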
  • Multidimensional Directional Filter Banks and Surfacelets

    Publication Year: 2007, Page(s): 918 - 931
    Cited by: Papers (26) | Patents (2)

    In 1992, Bamberger and Smith proposed the directional filter bank (DFB) for an efficient directional decomposition of 2-D signals. Due to the nonseparable nature of the system, extending the DFB to higher dimensions while still retaining its attractive features is a challenging and previously unsolved problem. We propose a new family of filter banks, named NDFB, that can achieve the directional decomposition of arbitrary N-dimensional (N ≥ 2) signals with a simple and efficient tree-structured construction. In 3-D, the ideal passbands of the proposed NDFB are rectangular-based pyramids radiating out from the origin at different orientations and tiling the entire frequency space. The proposed NDFB achieves perfect reconstruction via an iterated filter bank with a redundancy factor of N in N-D. The angular resolution of the proposed NDFB can be iteratively refined by invoking more levels of decomposition through a simple expansion rule. By combining the NDFB with a new multiscale pyramid, we propose the surfacelet transform, which can be used to efficiently capture and represent surface-like singularities in multidimensional data.

  • Tomographic Reconstruction of Dynamic Cardiac Image Sequences

    Publication Year: 2007, Page(s): 932 - 942
    Cited by: Papers (20) | Patents (1)

    In this paper, we propose an approach for the reconstruction of dynamic images from a gated cardiac data acquisition. The goal is to obtain an image sequence that can show simultaneously both cardiac motion and time-varying image activities. To account for the cardiac motion, the cardiac cycle is divided into a number of gate intervals, and a time-varying image function is reconstructed for each gate. In addition, to cope with the under-determined nature of the problem, the time evolution at each pixel is modeled by a B-spline function. The dynamic images for the different gates are then jointly determined using maximum a posteriori estimation, in which a motion-compensated smoothing prior is introduced to exploit the similarity among the different gates. The proposed algorithm is evaluated using a dynamic version of the 4-D gated mathematical cardiac torso phantom simulating a gated single photon emission computed tomography perfusion acquisition with Technetium-99m labeled Teboroxime. We thoroughly evaluated the performance of the proposed algorithm using several quantitative measures, including signal-to-noise ratio analysis, bias-variance plots, and time activity curves. Our results demonstrate that the proposed joint reconstruction approach can significantly improve the accuracy of the reconstruction.

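The temporal model mentioned above represents each pixel's time activity with a B-spline. The sketch below fits a small cubic B-spline basis to a noisy time-activity curve by least squares; the basis size, knot placement, and toy data are illustrative assumptions, not the paper's reconstruction.

```python
import numpy as np

def cubic_bspline(t):
    """Centered cubic B-spline kernel (support [-2, 2])."""
    t = np.abs(t)
    return np.where(t < 1, 2 / 3 - t**2 + t**3 / 2,
           np.where(t < 2, (2 - t)**3 / 6, 0.0))

def fit_time_activity(samples, times, n_knots=6):
    """Least-squares fit of a time-activity curve with shifted cubic B-splines."""
    knots = np.linspace(times.min(), times.max(), n_knots)
    spacing = knots[1] - knots[0]
    # Design matrix: one shifted B-spline per knot, evaluated at the sample times.
    B = cubic_bspline((times[:, None] - knots[None, :]) / spacing)
    coeffs, *_ = np.linalg.lstsq(B, samples, rcond=None)
    return knots, coeffs, B @ coeffs

# Toy usage: noisy periodic activity over one cardiac cycle.
times = np.linspace(0.0, 1.0, 16)
samples = 1.0 + 0.3 * np.sin(2 * np.pi * times) + 0.05 * np.random.randn(16)
_, _, fitted = fit_time_activity(samples, times)
```
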
  • Quality Evaluation of Motion-Compensated Edge Artifacts in Compressed Video

    Publication Year: 2007, Page(s): 943 - 956
    Cited by: Papers (11)

    Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.

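A crude proxy for the effect described above is the share of horizontal gradient energy that lies away from the 8x8 block grid, where motion compensation tends to push blocking edges. The sketch below is only a stand-in for the proposed metric, which also exploits bitstream information.

```python
import numpy as np

def offgrid_hf_share(frame, block=8):
    """Fraction of horizontal gradient energy located away from block boundaries."""
    gx = np.diff(frame.astype(float), axis=1) ** 2     # horizontal gradient energy
    cols = np.arange(gx.shape[1])
    on_grid = (cols + 1) % block == 0                  # differences straddling block edges
    e_total = gx.sum()
    e_off = gx[:, ~on_grid].sum()
    return float(e_off / max(e_total, 1e-12))
```
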
  • A New Divergence Measure for Medical Image Registration

    Publication Year: 2007, Page(s): 957 - 966
    Cited by: Papers (2)

    A new type of divergence measure for the registration of medical images is introduced that exploits the properties of the modified Bessel functions of the second kind. The properties of the proposed divergence coefficient are analysed and compared with those of the classic measures, including Kullback-Leibler, Rényi, and Iα divergences. To ensure its effectiveness and widespread applicability to any arbitrary set of data types, the performance of the new measure is analysed for Gaussian, exponential, and other advanced probability density functions. The results verify its robustness. Finally, the new divergence measure is used in the registration of CT to MR medical images to validate the improvement in registration accuracy.

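For reference, one of the classic measures the new divergence is compared against, the Kullback-Leibler divergence, can be computed between two intensity distributions as below. The Bessel-function-based measure itself is not reproduced, and practical registration objectives are usually built on joint histograms of the image pair.

```python
import numpy as np

def kl_divergence(img_a, img_b, bins=64, eps=1e-12):
    """Kullback-Leibler divergence between the grey-level histograms of two images."""
    lo = float(min(img_a.min(), img_b.min()))
    hi = float(max(img_a.max(), img_b.max()))
    p, _ = np.histogram(img_a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(img_b, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps                              # normalize and avoid log(0)
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))
```
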
  • Efficient Entropy Estimation Based on Doubly Stochastic Models for Quantized Wavelet Image Data

    Publication Year: 2007, Page(s): 967 - 981
    Cited by: Papers (5)

    Under a rate constraint, wavelet-based image coding involves strategic discarding of information such that the remaining data can be described with a given amount of rate. In a practical coding system, this task requires knowledge of the relationship between quantization step size and compressed rate for each group of wavelet coefficients, the R-Q curve. A common approach to this problem is to fit each subband with a scalar probability distribution and compute entropy estimates based on the model. This approach is not effective at rates below 1.0 bits-per-pixel because the distributions of quantized data do not reflect the dependencies in coefficient magnitudes. These dependencies can be addressed with doubly stochastic models, which have been previously proposed to characterize more localized behavior, though there are tradeoffs between storage, computation time, and accuracy. Using a doubly stochastic generalized Gaussian model, it is demonstrated that the relationship between step size and rate is accurately described by a low-degree polynomial in the logarithm of the step size. Based on this observation, an entropy estimation scheme is presented which offers an excellent tradeoff between speed and accuracy; after a simple data-gathering step, estimates are computed instantaneously by evaluating a single polynomial for each group of wavelet coefficients quantized with the same step size. These estimates are on average within 3% of a desired target rate for several state-of-the-art coders.

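The key empirical relationship, rate as a low-degree polynomial in the logarithm of the step size, can be reproduced on synthetic data as sketched below. A plain Laplacian source stands in for a wavelet subband; the doubly stochastic generalized Gaussian model itself is not implemented.

```python
import numpy as np

def empirical_entropy(data, step):
    """First-order entropy (bits/sample) of uniformly quantized data, a proxy for rate."""
    q = np.round(data / step).astype(int)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
subband = rng.laplace(scale=4.0, size=100_000)          # stand-in for one subband
steps = np.geomspace(0.5, 32.0, 12)
rates = np.array([empirical_entropy(subband, s) for s in steps])

poly = np.polyfit(np.log(steps), rates, deg=3)          # one data-gathering/fit step
rate_at = lambda step: np.polyval(poly, np.log(step))   # instantaneous estimates afterwards
print(rate_at(2.0), empirical_entropy(subband, 2.0))    # fitted vs. measured rate
```
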
  • Ordering for Embedded Coding of Wavelet Image Data Based on Arbitrary Scalar Quantization Schemes

    Publication Year: 2007, Page(s): 982 - 996
    Cited by: Papers (4)

    Many modern wavelet quantization schemes specify wavelet coefficient step sizes as continuous functions of an input step-size selection criterion; rate control is achieved by selecting an appropriate set of step sizes. In embedded wavelet coders, however, rate control is achieved simply by truncating the coded bit stream at the desired rate. The order in which wavelet data are coded implicitly controls quantization step sizes applied to create the reconstructed image. Since these step sizes are effectively discontinuous, piecewise-constant functions of rate, this paper examines the problem of designing a coding order for such a coder, guided by a quantization scheme where step sizes evolve continuously with rate. In particular, it formulates an optimization problem that minimizes the average relative difference between the piecewise-constant implicit step sizes associated with a layered coding strategy and the smooth step sizes given by a quantization scheme. The solution to this problem implies a coding order. Elegant, near-optimal solutions are presented to optimize step sizes over a variety of regions of rates, either continuous or discrete. This method can be used to create layers of coded data using any scalar quantization scheme combined with any wavelet bit-plane coder. It is illustrated using a variety of state-of-the-art coders and quantization schemes. In addition, the proposed method is verified with objective and subjective testing.

  • Lossless Video Sequence Compression Using Adaptive Prediction

    Publication Year: 2007, Page(s): 997 - 1007
    Cited by: Papers (16)

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiency.

  • A Maximum Entropy Approach to Unsupervised Mixed-Pixel Decomposition

    Publication Year: 2007, Page(s): 1008 - 1021
    Cited by: Papers (24)

    Due to the wide existence of mixed pixels, the derivation of constituent components (endmembers) and their fractional proportions (abundances) at the subpixel scale has been given a lot of attention. The entire process is often referred to as mixed-pixel decomposition or spectral unmixing. Although various algorithms have been proposed to solve this problem, two potential issues still need to be further investigated. First, assuming the endmembers are known, the abundance estimation is commonly performed by employing a least-squares error criterion, which, however, makes the estimation sensitive to noise and outliers. Second, the mathematical intractability of the abundance non-negative constraint results in computationally expensive numerical approaches. In this paper, we propose an unsupervised decomposition method based on the classic maximum entropy principle, termed the gradient descent maximum entropy (GDME), aiming at robust and effective estimates. We address the importance of the maximum entropy principle for mixed-pixel decomposition from a geometric point of view and demonstrate that when the given data present strong noise or when the endmember signatures are close to each other, the proposed method has the potential of providing more accurate estimates than the popular least-squares methods (e.g., fully constrained least squares). We apply the proposed GDME to the subject of unmixing multispectral and hyperspectral data. The experimental results obtained from both simulated and real images show the effectiveness of the proposed method.

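For a single pixel, the maximum-entropy idea can be sketched as gradient descent on a least-squares data term minus an entropy bonus, with abundances kept nonnegative and sum-to-one through a softmax parameterization. The weight, step size, and iteration count below are illustrative assumptions; this is not the GDME algorithm itself.

```python
import numpy as np

def unmix_maxent(x, E, lam=0.05, lr=0.1, iters=2000):
    """Estimate abundances a (a >= 0, sum(a) = 1) for one pixel x,
    given the endmember matrix E (bands x endmembers), by gradient
    descent on ||x - E a||^2 - lam * H(a)."""
    z = np.zeros(E.shape[1])                          # softmax parameters
    for _ in range(iters):
        a = np.exp(z - z.max()); a /= a.sum()
        r = x - E @ a                                 # reconstruction residual
        dL_da = -2 * E.T @ r + lam * (np.log(a) + 1)  # gradient of data term minus entropy
        dL_dz = a * dL_da - a * (a @ dL_da)           # chain rule through the softmax
        z -= lr * dL_dz
    a = np.exp(z - z.max())
    return a / a.sum()

# Toy usage with two synthetic endmembers over three bands.
E = np.array([[0.1, 0.9], [0.4, 0.6], [0.8, 0.2]])
x = E @ np.array([0.3, 0.7]) + 0.01 * np.random.randn(3)
print(unmix_maxent(x, E))                             # roughly [0.3, 0.7]
```
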
  • Combined Error Concealment and Error Correction in Rate-Distortion Analysis for Multiple Substream Transmissions

    Publication Year: 2007, Page(s): 1022 - 1035
    Cited by: Patents (2)

    We propose a new framework for multiple scalable bitstream video communications over lossy channels. The major feature of the framework is that the encoder estimates the effects of postprocessing concealment and includes those effects in the rate-distortion analysis. Based on the framework, we develop a rate-distortion optimization algorithm to generate multiple scalable bitstreams. The algorithm maximizes the expected peak signal-to-noise ratio by optimally assigning forward error control codes and transmission schemes in a constrained bandwidth. The framework is a general approach motivated by previous methods that perform concealment at the decoder, which it includes as a special case. Simulations show that the proposed approach can be implemented efficiently and that it outperforms previous methods by more than 2 dB.

  • Vertex-Based Diffusion for 3-D Mesh Denoising

    Publication Year: 2007, Page(s): 1036 - 1045
    Cited by: Papers (6)

    We present a vertex-based diffusion for 3-D mesh denoising by solving a nonlinear discrete partial differential equation. The core idea behind our proposed technique is to use geometric insight in helping construct an efficient and fast 3-D mesh smoothing strategy to fully preserve the geometric structure of the data. Illustrative experimental results demonstrate a much improved performance of the proposed approach in comparison with existing methods currently used in 3-D mesh smoothing.

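In its simplest linear form, vertex-based diffusion is iterative umbrella-operator (Laplacian) smoothing, sketched below. The paper's nonlinear, structure-preserving scheme is not reproduced, and lam/iters are arbitrary illustration values.

```python
import numpy as np

def umbrella_smooth(vertices, faces, lam=0.5, iters=10):
    """Iterative umbrella-operator smoothing of a triangle mesh (linear diffusion)."""
    v = np.asarray(vertices, dtype=float).copy()
    neighbors = [set() for _ in range(len(v))]
    for a, b, c in faces:                              # vertex adjacency from faces
        neighbors[a] |= {b, c}
        neighbors[b] |= {a, c}
        neighbors[c] |= {a, b}
    for _ in range(iters):
        new_v = v.copy()
        for i, nb in enumerate(neighbors):
            if nb:                                     # move toward the neighbour centroid
                new_v[i] = v[i] + lam * (v[list(nb)].mean(axis=0) - v[i])
        v = new_v
    return v
```
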
  • Interactive Image Segmentation via Adaptive Weighted Distances

    Publication Year: 2007, Page(s): 1046 - 1057
    Cited by: Papers (34) | Patents (3)

    An interactive algorithm for soft segmentation of natural images is presented in this paper. The user first roughly scribbles different regions of interest, and from them, the whole image is automatically segmented. This soft segmentation is obtained via fast, linear complexity computation of weighted distances to the user-provided scribbles. The adaptive weights are obtained from a series of Gabor filters, and are automatically computed according to the ability of each single filter to discriminate between the selected regions of interest. We present the underlying framework and examples showing the capability of the algorithm to segment diverse images.

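The weighted-distance mechanism can be sketched as plain Dijkstra propagation from the scribbles, with step costs that grow with intensity difference; the Gabor-derived adaptive weights and linear-complexity solver of the paper are replaced here by a simple |intensity difference| weight.

```python
import heapq
import numpy as np

def scribble_segment(img, seeds, beta=10.0):
    """Label each pixel with the seed reachable at the smallest weighted distance.

    seeds is a list of (row, col) scribble pixels, one label per seed;
    step costs are 1e-3 + beta * |intensity difference| (an assumed weight).
    """
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    label = np.full((h, w), -1, dtype=int)
    heap = []
    for lab, (y, x) in enumerate(seeds):
        dist[y, x] = 0.0
        label[y, x] = lab
        heapq.heappush(heap, (0.0, y, x, lab))
    while heap:
        d, y, x, lab = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = 1e-3 + beta * abs(float(img[ny, nx]) - float(img[y, x]))
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    label[ny, nx] = lab
                    heapq.heappush(heap, (d + step, ny, nx, lab))
    return label

img = np.zeros((32, 32)); img[:, 16:] = 1.0            # two-region test image
print(np.unique(scribble_segment(img, [(5, 3), (5, 28)])))
```
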
  • Perceptual Color Correction Through Variational Techniques

    Publication Year: 2007, Page(s): 1058 - 1072
    Cited by: Papers (17)

    In this paper, we present a discussion about perceptual-based color correction of digital images in the framework of variational techniques. We propose a novel image functional whose minimization produces a perceptually inspired color enhanced version of the original. The variational formulation permits a more flexible local control of contrast adjustment and attachment to data. We show that a numerical implementation of the gradient descent technique applied to this energy functional coincides with the equation of automatic color enhancement (ACE), a particular perceptual-based model of color enhancement. Moreover, we prove that a numerical approximation of the Euler-Lagrange equation reduces the computational complexity of ACE from O(N²) to O(N log N), where N is the total number of pixels in the image.

  • Fast and Stable Bayesian Image Expansion Using Sparse Edge Priors

    Publication Year: 2007, Page(s): 1073 - 1084
    Cited by: Papers (3)

    Smoothness assumptions in traditional image expansion cause blurring of edges and other high-frequency content that can be perceptually disturbing. Previous edge-preserving approaches are either ad hoc, statistically untenable, or computationally unattractive. We propose a new edge-driven stochastic prior image model and obtain the maximum a posteriori (MAP) estimate under this model. The MAP estimate is computationally challenging since it involves the inversion of very large matrices. An efficient algorithm is presented for expansion by dyadic factors. The technique exploits diagonalization of convolutional operators under the Fourier transform, and the sparsity of our edge prior, to speed up processing. Visual and quantitative comparison of our technique with other popular methods demonstrates its potential and promise.

  • Binary Weighted Averaging of an Ensemble of Coherently Collected Image Frames

    Publication Year: 2007, Page(s): 1085 - 1100
    Cited by: Papers (1)

    Recent interest in the collection of remote laser radar imagery has motivated novel systems that process temporally contiguous frames of collected imagery to produce an average image that reduces laser speckle, increases image SNR, decreases the deleterious effects of atmospheric distortion, and enhances image detail. This research seeks an algorithm based on Bayesian estimation theory to select those frames from an ensemble that increase spatial resolution compared to simple unweighted averaging of all frames. The resulting binary weighted motion-compensated frame average is compared to the unweighted average using simulated and experimental data collected from a fielded laser vision system. Image resolution is significantly enhanced, as quantified by the estimation of the atmospheric seeing parameter through which the average image was formed.

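Once per-frame weights are chosen, binary weighted averaging is straightforward. The sketch below ranks frames by a crude gradient-energy sharpness score instead of the paper's Bayesian frame selection, which it does not reproduce.

```python
import numpy as np

def binary_weighted_average(frames, keep_fraction=0.5):
    """Average only the highest-scoring frames of an ensemble (binary weights)."""
    frames = np.asarray(frames, dtype=float)           # shape: (n_frames, H, W)
    gy, gx = np.gradient(frames, axis=(1, 2))
    sharpness = (gx**2 + gy**2).sum(axis=(1, 2))       # crude per-frame sharpness score
    n_keep = max(1, int(round(keep_fraction * len(frames))))
    keep = np.argsort(sharpness)[-n_keep:]             # indices of the sharpest frames
    return frames[keep].mean(axis=0)
```
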
  • Deblurring of Color Images Corrupted by Impulsive Noise

    Publication Year: 2007, Page(s): 1101 - 1111
    Cited by: Papers (20)

    We consider the problem of restoring a multichannel image corrupted by blur and impulsive noise (e.g., salt-and-pepper noise). Using the variational framework, we consider the L1 fidelity term and several possible regularizers. In particular, we use generalizations of the Mumford-Shah (MS) functional to color images and Γ-convergence approximations to unify deblurring and denoising. Experimental comparisons show that the MS stabilizer yields better results with respect to Beltrami and total variation regularizers. Color edge detection is a beneficial by-product of our methods.

  • A Detection Statistic for Random-Valued Impulse Noise

    Publication Year: 2007, Page(s): 1112 - 1120
    Cited by: Papers (32)

    This paper proposes an image statistic for detecting random-valued impulse noise. By this statistic, we can identify most of the noisy pixels in the corrupted images. Combining it with an edge-preserving regularization, we obtain a powerful two-stage method for denoising random-valued impulse noise, even for noise levels as high as 60%. Simulation results show that our method is significantly better than a number of existing techniques in terms of image restoration and noise detection.

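For context, a widely used detector for random-valued impulses is the rank-ordered absolute differences (ROAD) statistic, sketched below. It is shown only as a familiar baseline and is not the statistic proposed in this paper.

```python
import numpy as np

def road_statistic(img, m=4):
    """Sum of the m smallest absolute differences between each pixel and its
    eight neighbours; large values flag likely random-valued impulses."""
    f = np.pad(img.astype(float), 1, mode='reflect')
    center = img.astype(float)
    diffs = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = f[1 + dy:f.shape[0] - 1 + dy, 1 + dx:f.shape[1] - 1 + dx]
            diffs.append(np.abs(shifted - center))
    diffs = np.sort(np.stack(diffs, axis=0), axis=0)   # sort the 8 differences per pixel
    return diffs[:m].sum(axis=0)

# Pixels whose statistic exceeds a threshold become noise candidates, e.g.:
# candidates = road_statistic(noisy_image) > 40
```
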
  • A Class-Adaptive Spatially Variant Mixture Model for Image Segmentation

    Publication Year: 2007, Page(s): 1121 - 1130
    Cited by: Papers (35)

    We propose a new approach for image segmentation based on a hierarchical and spatially variant mixture model. According to this model, the pixel labels are random variables and a smoothness prior is imposed on them. The main novelty of this work is a new family of smoothness priors for the label probabilities in spatially variant mixture models. These Gauss-Markov random field-based priors allow all their parameters to be estimated in closed form via maximum a posteriori (MAP) estimation using the expectation-maximization methodology. Thus, it is possible to introduce priors with multiple parameters that adapt to different aspects of the data. Numerical experiments are presented where the proposed MAP algorithms were tested in various image segmentation scenarios. These experiments demonstrate that the proposed segmentation scheme compares favorably to both standard and previous spatially constrained mixture model-based segmentation methods.

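The EM machinery underlying mixture-model segmentation can be sketched with a plain one-dimensional Gaussian mixture over pixel intensities; the spatially variant, class-adaptive label priors that are the paper's contribution are omitted here.

```python
import numpy as np

def em_gmm_segment(image, k=3, iters=50, seed=0):
    """EM for a 1-D Gaussian mixture over intensities, then per-pixel hard labels."""
    rng = np.random.default_rng(seed)
    x = np.asarray(image, dtype=float).ravel()
    mu = rng.choice(x, k)                              # initial means drawn from the data
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each class for each pixel.
        lik = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = lik / (lik.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: update mixing weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return resp.argmax(axis=1).reshape(np.shape(image))
```
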
  • Binary Partition Tree Analysis Based on Region Evolution and Its Application to Tree Simplification

    Publication Year: 2007, Page(s): 1131 - 1138
    Cited by: Papers (5)

    Pyramid image representations via tree structures are recognized methods for region-based image analysis. Binary partition trees can be applied, which document the merging process with small details found at the bottom levels and larger ones close to the root. Hindsight of the merging process is stored within the tree structure and provides the change histories of an image property from the leaf to the root node. In this work, the change histories are modelled by evolvement functions and their second-order statistics are analyzed using a knee function. Knee values indicate the reluctance of each merge. We systematically formulate these findings into a novel framework for binary partition tree analysis, which is demonstrated here on tree simplification. Based on an evolvement function, for each upward path in a tree, the tree node associated with the first reluctant merge is considered as a pruning candidate. The result is a simplified version that provides a reduced solution space while still complying with the definition of a binary tree. The experiments show that image details are preserved whilst the number of nodes is dramatically reduced. An image filtering tool also results, which preserves object boundaries and has applications in segmentation.

  • A Coupled Statistical Model for Face Shape Recovery From Brightness Images

    Publication Year: 2007, Page(s): 1139 - 1151
    Cited by: Papers (13)

    We focus on the problem of developing a coupled statistical model that can be used to recover facial shape from brightness images of faces. We study three alternative representations for facial shape. These are the surface height function, the surface gradient, and a Fourier basis representation. We jointly capture variations in intensity and the surface shape representations using a coupled statistical model. The model is constructed by performing principal components analysis on sets of parameters describing the contents of the intensity images and the facial shape representations. By fitting the coupled model to intensity data, facial shape is implicitly recovered from the shape parameters. Experiments show that the coupled model is able to generate accurate shape from out-of-training-sample intensity images.

  • A New Family of Nonredundant Transforms Using Hybrid Wavelets and Directional Filter Banks

    Publication Year: 2007, Page(s): 1152 - 1167
    Cited by: Papers (29)

    We propose a new family of nonredundant geometrical image transforms that are based on wavelets and directional filter banks. We convert the wavelet basis functions in the finest scales to a flexible and rich set of directional basis elements by employing directional filter banks, thereby forming a nonredundant transform family that exhibits both directional and nondirectional basis functions. We demonstrate the potential of the proposed transforms using nonlinear approximation. In addition, we employ the proposed family in two key image processing applications, image coding and denoising, and show its efficiency for these applications.

  • Semantic-Based Surveillance Video Retrieval

    Publication Year: 2007, Page(s): 1168 - 1181
    Cited by: Papers (37)

    Visual surveillance produces large amounts of video data. Effective indexing and retrieval from surveillance video databases are very important. Although there are many ways to represent the content of video clips in current video retrieval algorithms, there still exists a semantic gap between users and retrieval systems. Visual surveillance systems supply a platform for investigating semantic-based video retrieval. In this paper, a semantic-based video retrieval framework for visual surveillance is proposed. A cluster-based tracking algorithm is developed to acquire motion trajectories. The trajectories are then clustered hierarchically using the spatial and temporal information, to learn activity models. A hierarchical structure of semantic indexing and retrieval of object activities, where each individual activity automatically inherits all the semantic descriptions of the activity model to which it belongs, is proposed for accessing video clips and individual objects at the semantic level. The proposed retrieval framework supports various queries including queries by keywords, multiple object queries, and queries by sketch. For multiple object queries, succession and simultaneity restrictions, together with depth- and breadth-first orders, are considered. For sketch-based queries, a method for matching trajectories drawn by users to spatial trajectories is proposed. The effectiveness and efficiency of our framework are tested in a crowded traffic scene.

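Matching a user-drawn query trajectory against stored object trajectories can be illustrated with a generic dynamic-time-warping distance, as below; this is a stand-in for, not a reproduction of, the matching method proposed in the paper.

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Dynamic-time-warping distance between two 2-D trajectories (lists of (x, y))."""
    a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Retrieval sketch: rank stored trajectories by distance to the sketched query.
# ranked = sorted(trajectory_database, key=lambda t: dtw_distance(query, t))
```
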

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.

Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003