IEEE Transactions on Medical Imaging

Issue 4 • April 2004

Contents (19 items)

  • Table of contents

    Page(s): c1 - c4
  • IEEE Transactions on Medical Imaging publication information

    Page(s): c2
  • Iterative tomographic image reconstruction using Fourier-based forward and back-projectors

    Page(s): 401 - 412

    Iterative image reconstruction algorithms play an increasingly important role in modern tomographic systems, especially in emission tomography. As tomographic data sets grow rapidly in size, reducing the computational demands of the reconstruction algorithms is of great importance. Fourier-based forward and back-projection methods have the potential to considerably reduce the computation time in iterative reconstruction. Additional substantial speed-up of those approaches can be obtained utilizing powerful and cheap off-the-shelf fast Fourier transform (FFT) processing hardware. The Fourier reconstruction approaches are based on the relationship between the Fourier transform of the image and the Fourier transform of the parallel-ray projections. The two critical steps are the estimation of the samples of the projection transform, on the central section through the origin of Fourier space, from the samples of the transform of the image, and vice versa for back-projection. Interpolation errors are a limitation of Fourier-based reconstruction methods. We have applied min-max optimized Kaiser-Bessel interpolation within the nonuniform FFT (NUFFT) framework and devised ways of incorporating resolution models into the Fourier-based iterative approaches. Numerical and computer simulation results show that the min-max NUFFT approach provides substantially lower approximation errors in tomographic forward and back-projection than conventional interpolation methods. Our studies have further confirmed that Fourier-based projectors using the NUFFT approach provide accurate approximations to their space-based counterparts but with about ten times faster computation, and that they are viable candidates for fast iterative image reconstruction.

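    The projection/Fourier-space relationship this abstract relies on (the central-section theorem) can be illustrated with a minimal numpy sketch for the axis-aligned case, where the central slice falls exactly on FFT sample points and no NUFFT interpolation is needed; the image contents and sizes below are arbitrary illustrations, not the paper's method:

```python
import numpy as np

# Toy image (e.g., an activity map)
rng = np.random.default_rng(0)
f = rng.random((64, 64))

# Space-domain projection along the vertical axis (angle 0)
p_space = f.sum(axis=0)

# Fourier route: the 1-D FFT of that projection equals the central
# row (k_y = 0) of the image's 2-D FFT, so the projection can be
# recovered by an inverse 1-D FFT of that central slice.
central_slice = np.fft.fft2(f)[0, :]
p_fourier = np.fft.ifft(central_slice).real

print(np.allclose(p_space, p_fourier))  # True
```

    For general projection angles the central slice falls between FFT sample points, which is exactly where the interpolation (and hence the min-max NUFFT) matters.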
  • Fast fully 3-D image reconstruction in PET using planograms

    Page(s): 413 - 425

    We present a method of performing fast and accurate three-dimensional (3-D) backprojection using only Fourier transform operations for line-integral data acquired by planar detector arrays in positron emission tomography. This approach is a 3-D extension of the two-dimensional (2-D) linogram technique of Edholm. By using a special choice of parameters to index a line of response (LOR) for a pair of planar detectors, rather than the conventional parameters used to index a LOR for a circular tomograph, all the LORs passing through a point in the field of view (FOV) lie on a 2-D plane in the four-dimensional (4-D) data space. Thus, backprojection of all the LORs passing through a point in the FOV corresponds to integration of a 2-D plane through the 4-D "planogram." The key step is that the integration along a set of parallel 2-D planes through the planogram, that is, backprojection of a plane of points, can be replaced by a 2-D section through the origin of the 4-D Fourier transform of the data. Backprojection can be performed as a sequence of Fourier transform operations, for faster implementation. In addition, we derive the central-section theorem for planogram format data, and also derive a reconstruction filter for both backprojection-filtering and filtered-backprojection reconstruction algorithms. With software-based Fourier transform calculations we provide preliminary comparisons of planogram backprojection to standard 3-D backprojection and demonstrate a reduction in computation time by a factor of approximately 15.

  • Tracer kinetic modeling of 11C-acetate applied in the liver with positron emission tomography

    Page(s): 426 - 432

    It is well known that 40%-50% of hepatocellular carcinomas (HCC) do not show increased 18F-fluorodeoxyglucose (FDG) uptake. Recent research studies have demonstrated that 11C-acetate may be a complementary tracer to FDG in positron emission tomography (PET) imaging of HCC in the liver. Quantitative dynamic modeling is, therefore, conducted to evaluate the kinetic characteristics of this tracer in HCC and nontumor liver tissue. A three-compartment model consisting of four parameters with dual inputs is proposed and compared with that of five parameters. Twelve regions of dynamic datasets of the liver extracted from six patients are used to test the models. The adequacy of these models is assessed statistically using the Akaike Information Criterion (AIC) and the Schwarz Criterion (SC). The forward clearance K=K1*k3/(k2+k3) is estimated and defined as a new parameter called the local hepatic metabolic rate-constant of acetate (LHMRAct) using both the weighted nonlinear least squares (NLS) and the linear Patlak methods. Preliminary results show that the LHMRAct of the HCC is significantly higher than that of the nontumor liver tissue. These model parameters provide quantitative evidence and understanding of the kinetic basis of 11C-acetate for its potential role in the imaging of HCC using PET.

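    The forward clearance defined in this abstract is a one-line computation from the compartment model's rate constants; the numeric values below are hypothetical, chosen only to exercise the formula:

```python
def forward_clearance(K1, k2, k3):
    """Forward clearance K = K1 * k3 / (k2 + k3) from the rate
    constants of a three-compartment tracer kinetic model."""
    return K1 * k3 / (k2 + k3)

# Hypothetical rate constants (min^-1), for illustration only
K = forward_clearance(K1=0.8, k2=0.4, k3=0.2)
print(round(K, 4))  # 0.2667
```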
  • Object dependency of resolution in reconstruction algorithms with interiteration filtering applied to PET data

    Page(s): 433 - 446

    In this paper, we study the resolution properties of those algorithms where a filtering step is applied after every iteration. As concrete examples we take filtered preconditioned gradient descent algorithms for the Poisson log likelihood for PET emission data. For nonlinear estimators, resolution can be characterized in terms of the linearized local impulse response (LLIR). We provide analytic approximations for the LLIR for the class of algorithms mentioned above. Our expressions clearly show that when interiteration filtering (with linear filters) is used, the resolution properties are, in most cases, spatially varying, object dependent and asymmetric. These nonuniformities are solely due to the interaction between the filtering step and the Poisson noise model. This situation is similar to penalized likelihood reconstructions as studied previously in the literature. In contrast, nonregularized and postfiltered maximum-likelihood expectation maximization (MLEM) produce images with nearly "perfect" uniform resolution when convergence is reached. We use the analytic expressions for the LLIR to propose three different approaches to obtain nearly object independent and uniform resolution. Two of them are based on calculating filter coefficients on a pixel basis, whereas the third one chooses an appropriate preconditioner. These three approaches are tested on simulated data for the filtered MLEM algorithm and the filtered separable paraboloidal surrogates algorithm. The evaluation confirms that images obtained using our proposed regularization methods have nearly object independent and uniform resolution.

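    A minimal sketch of interiteration filtering on a toy problem, assuming a dense random system matrix and a simple three-tap linear filter (not the paper's PET geometry or its preconditioned gradient algorithms): the standard MLEM multiplicative update is followed by a smoothing step after every iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((30, 10))          # toy system matrix (detectors x pixels)
x_true = rng.random(10) + 0.5
y = A @ x_true                    # noise-free "measured" data

sens = A.T @ np.ones(len(y))      # sensitivity image A^T 1
x = np.ones(10)
kernel = np.array([0.25, 0.5, 0.25])   # hypothetical linear filter

for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sens        # standard MLEM update
    x = np.convolve(x, kernel, mode="same")  # filtering after every iteration

print(np.all(x >= 0))  # True: multiplicative update + nonnegative filter
```

    The coupling the paper analyzes is visible here: the filter acts on an iterate whose noise properties are shaped by the Poisson-model update, so the effective smoothing becomes object dependent.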
  • Improved watershed transform for medical image segmentation using prior information

    Page(s): 447 - 458

    The watershed transform has interesting properties that make it useful for many different image segmentation applications: it is simple and intuitive, can be parallelized, and always produces a complete division of the image. However, when applied to medical image analysis, it has important drawbacks (oversegmentation, sensitivity to noise, poor detection of thin or low signal to noise ratio structures). We present an improvement to the watershed transform that enables the introduction of prior information in its calculation. We propose to introduce this information via a prior probability calculation. Furthermore, we introduce a method to combine the watershed transform and atlas registration, through the use of markers. We have applied our new algorithm to two challenging applications: knee cartilage and gray matter/white matter segmentation in MR images. Numerical validation of the results is provided, demonstrating the strength of the algorithm for medical image segmentation.

  • Automated optimization of JPEG 2000 encoder options based on model observer performance for detecting variable signals in X-ray coronary angiograms

    Page(s): 459 - 474

    Image compression is indispensable in medical applications where inherently large volumes of digitized images are presented. JPEG 2000 has recently been proposed as a new image compression standard. The present recommendations on the choice of JPEG 2000 encoder options were based on nontask-based metrics of image quality applied to nonmedical images. We used the performance of a model observer [nonprewhitening matched filter with an eye filter (NPWE)] in a visual detection task of varying signals [signal known exactly but variable (SKEV)] in X-ray coronary angiograms to optimize JPEG 2000 encoder options through a genetic algorithm procedure. We also obtained the performance of other model observers (Hotelling, Laguerre-Gauss Hotelling, channelized-Hotelling) and human observers to evaluate the validity of the NPWE-optimized JPEG 2000 encoder settings. Compared to the default JPEG 2000 encoder settings, the NPWE-optimized encoder settings improved the detection performance of humans and the other three model observers for an SKEV task. In addition, the performance was also improved for a more clinically realistic task where the signal varied from image to image but was not known a priori to observers [signal known statistically (SKS)]. The highest performance improvement for humans was at a high compression ratio (e.g., 30:1), which resulted in approximately a 75% improvement for both the SKEV and SKS tasks.

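    The decision statistic of a nonprewhitening matched filter is essentially a template correlation with the known signal. The sketch below uses a plain NPW observer (omitting the eye filter of the NPWE variant) on a hypothetical square signal in white Gaussian noise; all sizes and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 32, 500
signal = np.zeros((n, n))
signal[12:20, 12:20] = 1.0        # hypothetical square signal

def npw_statistic(img, template):
    """NPW decision statistic: inner product of image and template."""
    return float(np.sum(img * template))

t_present = [npw_statistic(signal + rng.normal(0, 2, (n, n)), signal)
             for _ in range(trials)]
t_absent  = [npw_statistic(rng.normal(0, 2, (n, n)), signal)
             for _ in range(trials)]

# Signal-present trials score higher on average than signal-absent ones
print(np.mean(t_present) > np.mean(t_absent))  # True
```

    Detection performance (e.g., an ROC area) is then computed from the separation of the two statistic distributions, which is the quantity the genetic algorithm optimizes over encoder settings.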
  • Microwave image reconstruction from 3-D fields coupled to 2-D parameter estimation

    Page(s): 475 - 484

    An efficient Gauss-Newton iterative imaging technique utilizing a three-dimensional (3-D) field solution coupled to a two-dimensional (2-D) parameter estimation scheme (3-D/2-D) is presented for microwave tomographic imaging in medical applications. While electromagnetic wave propagation is described fully by a 3-D vector field, a 3-D scalar model has been applied to improve the efficiency of the iterative reconstruction process with apparently limited reduction in accuracy. In addition, the image recovery has been restricted to 2-D but is generalizable to three dimensions. Image artifacts related primarily to 3-D effects are reduced when compared with results from an entirely two-dimensional inversion (2-D/2-D). Important advances in terms of improving algorithmic efficiency include use of a block solver for computing the field solutions and application of the dual mesh scheme and adjoint approach for Jacobian construction. Methods which enhance the image quality such as the log-magnitude/unwrapped phase minimization were also applied. Results obtained from synthetic measurement data show that the new 3-D/2-D algorithm consistently outperforms its 2-D/2-D counterpart in terms of reducing the effective imaging slice thickness in both permittivity and conductivity images over a range of inclusion sizes and background medium contrasts.

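    The Gauss-Newton iteration underlying this kind of parameter estimation can be sketched on a toy one-dimensional problem; the exponential model, data, and starting values below are illustrative stand-ins, not the microwave forward model:

```python
import numpy as np

# Synthetic noise-free data from y = a * exp(b * t)
t = np.linspace(0, 1, 20)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * t)

theta = np.array([1.0, -1.0])     # initial guess for (a, b)
for _ in range(20):
    a, b = theta
    model = a * np.exp(b * t)
    r = y - model                                 # residual
    J = np.column_stack([np.exp(b * t),           # d model / d a
                         a * t * np.exp(b * t)])  # d model / d b
    # Gauss-Newton step: solve the normal equations (J^T J) d = J^T r
    theta = theta + np.linalg.solve(J.T @ J, J.T @ r)

print(np.allclose(theta, [a_true, b_true], atol=1e-4))  # True
```

    In the paper's setting, building the Jacobian J is the expensive part, which is why the dual mesh scheme and adjoint approach matter.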
  • Automatic generation of noise-free time-activity curve with gated blood-pool emission tomography using deformation of a reference curve

    Page(s): 485 - 491

    This paper describes a new method for assessing clinical parameters from a noisy regional time-activity curve (TAC) in tomographic gated blood-pool ventriculography. The method is based on a priori knowledge of the shape of a TAC and on shape approximation. The rejection method was used to generate different random Poisson deviates, covering standard count levels, of six representative TACs in order to test and compare the proposed method with harmonic and multiharmonic reconstruction methods. These methods were compared by evaluating four clinical parameters: time of end systole, amplitude, peak ejection and filling rates. Overall, the accuracy of assessment of these parameters was found to be better with the method described in this paper than with standard multiharmonic fits.

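    A multiharmonic fit of the kind used as the comparison baseline amounts to truncating the curve's Fourier series; the synthetic periodic TAC below is an illustrative stand-in for real gated data:

```python
import numpy as np

# Noisy periodic time-activity curve sampled over one cardiac cycle
rng = np.random.default_rng(3)
n = 64
phase = 2 * np.pi * np.arange(n) / n
tac = 100 + 30 * np.cos(phase) + 10 * np.cos(2 * phase) + rng.normal(0, 5, n)

def harmonic_fit(y, n_harmonics):
    """Keep the DC term plus the first n_harmonics Fourier components."""
    Y = np.fft.rfft(y)
    Y[n_harmonics + 1:] = 0
    return np.fft.irfft(Y, len(y))

smooth = harmonic_fit(tac, 2)
# The fit removes high-frequency noise while keeping the cycle shape
print(np.std(tac - smooth) < np.std(tac - tac.mean()))  # True
```

    The clinical parameters (time of end systole, peak ejection/filling rates) are then read off the smoothed curve and its derivative.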
  • Experimental fluorescence tomography of tissues with noncontact measurements

    Page(s): 492 - 500

    Noncontact optical measurements from diffuse media could facilitate the use of large detector arrays at multiple angles that are well suited for diffuse optical tomography applications. Such an imaging strategy could eliminate the need for individual fibers in contact with tissue, restricted geometries, and matching fluids. Thus, it could significantly improve experimental procedures and enhance our ability to visualize functional and molecular processes in vivo. In this paper, we describe the experimental implementation of this novel concept and demonstrate the capacity to perform small animal imaging.

  • Ridge-based vessel segmentation in color images of the retina

    Page(s): 501 - 509

    A method is presented for automated segmentation of vessels in two-dimensional color images of the retina. This method can be used in computer analyses of retinal images, e.g., in automated screening for diabetic retinopathy. The system is based on extraction of image ridges, which coincide approximately with vessel centerlines. The ridges are used to compose primitives in the form of line elements. With the line elements an image is partitioned into patches by assigning each image pixel to the closest line element. Every line element constitutes a local coordinate frame for its corresponding patch. For every pixel, feature vectors are computed that make use of properties of the patches and the line elements. The feature vectors are classified using a kNN classifier and sequential forward feature selection. The algorithm was tested on a database consisting of 40 manually labeled images. The method achieves an area under the receiver operating characteristic curve of 0.952. The method is compared with two recently published rule-based methods of Hoover et al. and Jiang et al. The results show that our method is significantly better than the two rule-based methods (p<0.01). The accuracy of our method is 0.944 versus 0.947 for a second observer.

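    The kNN classification step can be sketched in a few lines of numpy; the 2-D feature vectors and labels below are hypothetical stand-ins for the paper's ridge/patch features (label 1 for "vessel", 0 for "background"):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test vector by majority vote among its k nearest
    training vectors under Euclidean distance."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]      # indices of k closest
    votes = y_train[nearest]                    # their labels
    return (votes.mean(axis=1) > 0.5).astype(int)

# Hypothetical feature vectors: two well-separated clusters
X_train = np.array([[0., 0.], [0., 1.], [1., 0.],
                    [5., 5.], [5., 6.], [6., 5.]])
y_train = np.array([0, 0, 0, 1, 1, 1])
X_test = np.array([[0.2, 0.2], [5.5, 5.5]])
print(knn_predict(X_train, y_train, X_test))  # [0 1]
```

    In the paper this classification is applied per pixel, with the feature set chosen by sequential forward feature selection.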
  • Anatomical-based FDG-PET reconstruction for the detection of hypo-metabolic regions in epilepsy

    Page(s): 510 - 519

    Positron emission tomography (PET) of the cerebral glucose metabolism has been shown to be useful in the presurgical evaluation of patients with epilepsy. Between seizures, PET images using fluorodeoxyglucose (FDG) show a decreased glucose metabolism in areas of the gray matter (GM) tissue that are associated with the epileptogenic region. However, detection of subtle hypo-metabolic regions is limited by noise in the projection data and the relatively small thickness of the GM tissue compared to the spatial resolution of the PET system. Therefore, we present an iterative maximum-a-posteriori based reconstruction algorithm, dedicated to the detection of hypo-metabolic regions in FDG-PET images of the brain of epilepsy patients. Anatomical information, derived from magnetic resonance imaging data, and pathophysiological knowledge were included in the reconstruction algorithm. Two Monte Carlo based brain software phantom experiments were used to examine the performance of the algorithm. In the first experiment, we used perfect, and in the second, imperfect anatomical knowledge during the reconstruction process. In both experiments, we measured signal-to-noise ratio (SNR), root mean squared (rms) bias and rms standard deviation. For both experiments, bias was reduced at matched noise levels, when compared to post-smoothed maximum-likelihood expectation-maximization (ML-EM) and maximum a posteriori reconstruction without anatomical priors. The SNR was similar to that of ML-EM with optimal post-smoothing, although the parameters of the prior distributions were not optimized. We can conclude that the use of anatomical information combined with prior information about the underlying pathology is very promising for the detection of subtle hypo-metabolic regions in the brain of patients with epilepsy.

  • Theoretical and numerical aspects of transmit SENSE

    Page(s): 520 - 525

    The ideas of parallel imaging techniques, designed to shorten the acquisition time by the simultaneous use of multiple receive coils, can be adapted for parallel transmission of a spatially selective multidimensional RF pulse. In analogy to data acquisition, a multidimensional RF pulse follows a certain trajectory in k-space. Shortening this trajectory shortens the pulse duration. The use of multiple transmit coils, each with its own time-dependent waveform and spatial sensitivity, compensates for the missing parts of k-space. This results in a maintained spatial definition of the pulse profile while its duration is reduced. This paper describes the basic equations of parallel transmission with arbitrarily shaped transmit coils ("Transmit SENSE") focusing on two-dimensional RF pulses. Results of numerical studies are presented demonstrating the theoretical feasibility of the approach.

  • Three-gamma annihilation imaging in positron emission tomography

    Page(s): 525 - 529

    It is argued that positron annihilation into three photons, although quite rare, could still be used as a new imaging modality of positron emission tomography. The information gained when the three decay photons are detected is significantly higher than in the case of 511 keV two-gamma annihilation. The performance of three-gamma imaging in terms of the required detector properties, spatial resolution and counting rates is discussed. A simple proof-of-principle experiment confirms the feasibility of the new imaging method.

  • Medical Imaging Conference (MIC 2004)

    Page(s): 530
  • Special issue on vascular imaging

    Page(s): 531
  • Special issue on molecular imaging

    Page(s): 532
  • IEEE Transactions on Medical Imaging Information for authors

    Page(s): c3

Aims & Scope

IEEE Transactions on Medical Imaging (T-MI) encourages the submission of manuscripts on imaging of body structures, morphology and function, and imaging of microscopic biological entities. The journal publishes original contributions on medical imaging achieved by various modalities, such as ultrasound, X-rays (including CT), magnetic resonance, radionuclides, microwaves, and light, as well as medical image processing and analysis, visualization, pattern recognition, and related methods. Studies involving highly technical perspectives are most welcome. The journal focuses on a unified common ground where instrumentation, systems, components, hardware and software, mathematics and physics contribute to the studies.


Meet Our Editors

Editor-in-Chief
Milan Sonka
Iowa Institute for Biomedical Imaging
3016B SC, Department of Electrical and Computer Engineering
The University of Iowa
Iowa City, IA 52242 USA
milan-sonka@uiowa.edu