
IEEE Transactions on Medical Imaging

Issue 9 • September 2011

  • Table of contents

    Publication Year: 2011, Page(s): C1
    Freely Available from IEEE
  • IEEE Transactions on Medical Imaging publication information

    Publication Year: 2011, Page(s): C2
    Freely Available from IEEE
  • Multidimensional X-Space Magnetic Particle Imaging

    Publication Year: 2011, Page(s): 1581 - 1590
    Cited by: Papers (17)

    Magnetic particle imaging (MPI) is a promising new medical imaging tracer modality with potential applications in human angiography, cancer imaging, in vivo cell tracking, and inflammation imaging. Here we demonstrate both theoretically and experimentally that multidimensional MPI is a linear shift-invariant imaging system with an analytic point spread function. We also introduce a fast image reconstruction method that obtains the intrinsic MPI image with high signal-to-noise ratio via a simple gridding operation in x-space. We further demonstrate a method to reconstruct large field-of-view (FOV) images using partial FOV scanning, despite the loss of first-harmonic image information due to direct feedthrough contamination. We conclude with the first experimental test of multidimensional x-space MPI. (An illustrative gridding sketch appears after the end of this listing.)

  • Unmixing Dynamic Fluorescence Diffuse Optical Tomography Images With Independent Component Analysis

    Publication Year: 2011, Page(s): 1591 - 1604
    Cited by: Papers (2)

    Dynamic fluorescence diffuse optical tomography (D-FDOT) is important for drug delivery research. However, the low spatial resolution of FDOT and the complex kinetics of drugs limit the ability of D-FDOT to resolve the metabolic processes of a drug throughout the whole body of small animals. In this paper, we propose an independent component analysis (ICA)-based method to perform D-FDOT studies. When applied to D-FDOT images, ICA not only generates a set of independent components (ICs) that illustrate functional structures with different kinetic behaviors, but also provides a set of associated time courses (TCs) that represent the normalized time courses of the drug in the corresponding functional structures. Further, the drug concentration in a specific functional structure at different time points can be recovered by an inverse ICA transformation. To evaluate the performance of the proposed algorithm for studying drug kinetics at the whole-body level, a simulation study and a phantom experiment are both performed on a full-angle FDOT imaging system with a line-shaped excitation pattern. In the simulation study, the nanoparticle delivery of indocyanine green (ICG) throughout the whole body of a digital mouse is simulated and imaged. In the phantom experiment, four tubes containing different ICG concentrations are imaged and used to imitate the uptake and excretion of ICG in organs. The results suggest that, when ICA is applied to D-FDOT images, we can not only illustrate ICG distributions in different functional structures but also recover ICG concentrations in a specific functional structure at different time points. (An illustrative ICA unmixing sketch appears after the end of this listing.)

  • A Statistical Model for Quantification and Prediction of Cardiac Remodelling: Application to Tetralogy of Fallot

    Publication Year: 2011, Page(s): 1605 - 1616
    Cited by: Papers (3)

    Cardiac remodelling plays a crucial role in heart diseases. Analyzing how the heart grows and remodels over time can provide valuable insights into pathological mechanisms, eventually resulting in quantitative metrics for disease evaluation and therapy planning. This study aims to quantify the regional impacts of valve regurgitation and heart growth upon the end-diastolic right ventricle (RV) in patients with tetralogy of Fallot, a severe congenital heart defect. The ultimate goal is to determine, among clinical variables, predictors for the RV shape, from which a statistical model that predicts RV remodelling is built. Our approach relies on a forward model based on currents and a diffeomorphic surface registration algorithm to estimate an unbiased template. Local effects of RV regurgitation upon the RV shape were assessed with principal component analysis (PCA) and a cross-sectional multivariate design. A generative 3-D model of RV growth was then estimated using partial least squares (PLS) and canonical correlation analysis (CCA). Applied to a retrospective population of 49 patients, cross-effects between growth and pathology could be identified. Qualitatively, the statistical findings were judged realistic by cardiologists. Ten-fold cross-validation demonstrated promising generalization and stability of the growth model. Compared to PCA regression, PLS was more compact and more precise, and provided better predictions. (An illustrative PLS prediction sketch appears after the end of this listing.)

  • Robust Brain Extraction Across Datasets and Comparison With Publicly Available Methods

    Publication Year: 2011, Page(s): 1617 - 1634
    Cited by: Papers (7)

    Automatic whole-brain extraction from magnetic resonance images (MRI), also known as skull stripping, is a key component in most neuroimaging pipelines. As the first element in the chain, its robustness is critical for the overall performance of the system. Many skull stripping methods have been proposed, but the problem is not yet considered completely solved. Many systems in the literature have good performance on certain datasets (mostly the datasets they were trained/tuned on), but fail to produce satisfactory results when the acquisition conditions or study populations are different. In this paper we introduce a robust, learning-based brain extraction system (ROBEX). The method combines a discriminative and a generative model to achieve the final result. The discriminative model is a Random Forest classifier trained to detect the brain boundary; the generative model is a point distribution model that ensures that the result is plausible. When a new image is presented to the system, the generative model is explored to find the contour with the highest likelihood according to the discriminative model. Because the target shape is in general not perfectly represented by the generative model, the contour is refined using graph cuts to obtain the final segmentation. Both models were trained using 92 scans from a proprietary dataset, but they achieve a high degree of robustness on a variety of other datasets. ROBEX was compared with six other popular, publicly available methods (BET, BSE, FreeSurfer, AFNI, BridgeBurner, and GCUT) on three publicly available datasets (IBSR, LPBA40, and OASIS; 137 scans in total) that include a wide range of acquisition hardware and a highly variable population (different age groups, healthy/diseased). The results show that ROBEX provides significantly improved performance measures for almost every method/dataset combination. (An illustrative boundary-classification sketch appears after the end of this listing.)

  • Structural Analysis of Articular Cartilage Using Multiphoton Microscopy: Input for Biomechanical Modeling

    Publication Year: 2011, Page(s): 1635 - 1648

    The 3-D morphology of chicken articular cartilage was quantified using multiphoton microscopy (MPM) for use in continuum-mechanical modeling. To motivate this morphological study we propose aspects of a new 3-D finite strain constitutive model for articular cartilage focusing on the essential load-bearing morphology: an inhomogeneous, poro-(visco)elastic solid matrix reinforced by an anisotropic, (visco)elastic dispersed fiber fabric, which is saturated by an incompressible fluid residing in strain-dependent pores. Samples of fresh chicken cartilage were sectioned in three orthogonal planes and imaged using MPM, specifically imaging the collagen fibers using second harmonic generation. Employing image analysis techniques based on Fourier analysis, we derived the principal directionality and dispersion of the collagen fiber fabric in the superficial layer. In the middle layer, objective thresholding techniques were used to extract the volume fraction occupied by the extracellular collagen matrix. In conjunction with information available in the literature, or additional experimental testing, we show how these data can be used to derive a 3-D map of the initial solid volume fraction and Darcy permeability. (An illustrative Fourier-based fiber-orientation sketch appears after the end of this listing.)

  • A Fast Wavelet-Based Reconstruction Method for Magnetic Resonance Imaging

    Publication Year: 2011, Page(s): 1649 - 1660
    Cited by: Papers (10)

    In this work, we exploit the fact that wavelets can represent magnetic resonance images well with relatively few coefficients. We use this property to improve magnetic resonance imaging (MRI) reconstructions from undersampled data with arbitrary k-space trajectories. Reconstruction is posed as an optimization problem that could be solved with the iterative shrinkage/thresholding algorithm (ISTA), which, unfortunately, converges slowly. To make the approach more practical, we propose a variant that combines recent improvements in convex optimization and that can be tuned to a given specific k-space trajectory. We present a mathematical analysis that explains the performance of the algorithms. Using simulated and in vivo data, we show that our nonlinear method is fast, as it accelerates ISTA by almost two orders of magnitude. We also show that it remains competitive with TV regularization in terms of image quality. (An illustrative wavelet-thresholding reconstruction sketch appears after the end of this listing.)

  • Partitioning Histopathological Images: An Integrated Framework for Supervised Color-Texture Segmentation and Cell Splitting

    Publication Year: 2011, Page(s): 1661 - 1677
    Cited by: Papers (12)

    For quantitative analysis of histopathological images, such as in lymphoma grading systems, quantification of features is usually carried out on single cells before categorizing them by classification algorithms. To this end, we propose an integrated framework consisting of a novel supervised cell-image segmentation algorithm and a new touching-cell splitting method. For the segmentation part, we segment the cell regions from the other areas by classifying the image pixels into either the cell or the extra-cellular category. Instead of using pixel color intensities, the color-texture extracted at the local neighborhood of each pixel is utilized as the input to our classification algorithm. The color-texture at each pixel is extracted by a local Fourier transform (LFT) from a new color space, the most discriminant color space (MDC). The MDC color space is optimized to be a linear combination of the original RGB color space so that the extracted LFT texture features in the MDC color space can achieve the most discrimination in terms of classification (segmentation) performance. To speed up the texture feature extraction process, we develop an efficient LFT extraction algorithm based on image shifting and image integrals. For the splitting part, given a connected component of the segmentation map, we initially differentiate whether it is a touching-cell clump or a single nontouching cell. The differentiation is mainly based on the distance between the most likely radial-symmetry center and the geometrical center of the connected component. The boundaries of touching-cell clumps are smoothed out by a Fourier shape descriptor before carrying out an iterative, concave-point and radial-symmetry based splitting algorithm. To test the validity, effectiveness, and efficiency of the framework, it is applied to follicular lymphoma pathological images, which exhibit complex background and extracellular texture with nonuniform illumination conditions. For comparison purposes, the results of the proposed segmentation algorithm are evaluated against the outputs of superpixel, graph-cut, mean-shift, and two state-of-the-art pathological image segmentation methods using ground truth that was established by manual segmentation of cells in the original images. Our segmentation algorithm achieves better results than the other compared methods. The results of splitting are evaluated in terms of under-splitting, over-splitting, and encroachment errors. By summing up the three types of errors, we achieve a total error rate of 5.25% per image.

  • Sensitivity of Photon-Counting Based K-Edge Imaging in X-ray Computed Tomography

    Publication Year: 2011, Page(s): 1678 - 1690
    Cited by: Papers (4)

    The feasibility of K-edge imaging using energy-resolved, photon-counting transmission measurements in X-ray computed tomography (CT) has been demonstrated by simulations and experiments. The method is based on probing the discontinuities of the attenuation coefficient of heavy elements above and below the K-edge energy by using energy-sensitive, photon-counting X-ray detectors. In this paper, we investigate the dependence of the sensitivity of K-edge imaging on the atomic number Z of the contrast material, on the object diameter D, on the spectral response of the X-ray detector, and on the X-ray tube voltage. We assume a photon-counting detector equipped with six adjustable energy thresholds. Physical effects leading to a degradation of the energy resolution of the detector are taken into account using the concept of a spectral response function R(E,U), for which we assume four different models. As a validation of our analytical considerations, and in order to investigate the influence of elliptically shaped phantoms, we provide CT simulations of an anthropomorphic Forbild-Abdomen phantom containing a gold contrast agent. The dependence on the values of the energy thresholds is taken into account by optimizing the achievable signal-to-noise ratios (SNR) with respect to the threshold values. We find that for a given X-ray spectrum and object size, the SNR in the heavy element's basis-material image peaks for a certain atomic number Z. The dependence of the SNR in the high-Z basis-material image on the object diameter is the natural exponential decrease, with particularly deteriorating effects in the case where the attenuation from the object itself causes a total signal loss below the K-edge. The influence of the energy response of the detector is very important. We observed that the optimal SNR values obtained with an ideal detector and with a CdTe pixel detector whose response, showing significant tailing, has been determined at a synchrotron differ by factors of about two to three. The potentially very important impact of scattered X-ray radiation and of pulse pile-up occurring at high photon rates on the sensitivity of the technique is qualitatively discussed.

  • BOLD Contrast and Noise Characteristics of Densely Sampled Multi-Echo fMRI Data

    Publication Year: 2011, Page(s): 1691 - 1703

    Blood oxygenation level dependent (BOLD) contrast in functional magnetic resonance imaging (fMRI) can be enhanced using multi-echo imaging and postprocessing techniques that combine the echoes in a weighted summation. Here, existing echo-weighting methods are reassessed in the context of an explicit physiological noise model, and a new method is introduced: weights that scale linearly with echo time. Additionally, a method using data-driven weights defined using principal component analysis (PCA) is included for comparison. Differences in BOLD contrast enhancement between methods were compared analytically where possible, and using Monte Carlo simulations for different noise conditions and different combinations of acquisition parameters. The comparisons were also validated through densely sampled (256-echo) multi-echo fMRI experimental data acquired at 1.5 T and 3.0 T. Results indicated that the contrast-to-noise ratio (CNR) of the studied weighting methods has a strong dependence on the physiological noise, the echo spacing, and the width of the sampling window. With low noise correlations between echoes, the contrast gain for all weighting methods was shown to have a square-root dependence on the echo sampling density, and in typical experimental noise conditions, increasing the sampling window beyond 3·T2* produced marginal additional benefit. Simulations and experiments also emphasized that noise correlations between echoes are likely the main factor limiting the potential CNR gains achievable by densely sampled multi-echo fMRI. (An illustrative echo-weighting sketch appears after the end of this listing.)

  • Bias Field Inconsistency Correction of Motion-Scattered Multislice MRI for Improved 3D Image Reconstruction

    Publication Year: 2011, Page(s): 1704 - 1712
    Cited by: Papers (1)

    A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multislice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed that retrospectively correct large-scale 3D motion between individual slices, allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge in the final reconstruction process, however, is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to the imaging coils. As a result, slices that cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion-corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.

  • A Reduced Order Explicit Dynamic Finite Element Algorithm for Surgical Simulation

    Publication Year: 2011, Page(s): 1713 - 1721

    Reduced order modelling, in which a full system response is projected onto a subspace of lower dimensionality, has been used previously to accelerate finite element solution schemes by reducing the size of the involved linear systems. In the present work we take advantage of a secondary effect of such reduction for explicit analyses, namely that the stable integration time step is increased far beyond that of the full system. This phenomenon alleviates one of the principal drawbacks of explicit methods compared with implicit schemes. We present an explicit finite element scheme in which time integration is performed in a reduced basis. Furthermore, we present a simple procedure for imposing inhomogeneous essential boundary conditions, thus overcoming one of the principal deficiencies of such approaches. The computational benefits of the procedure within a GPU-based execution framework are examined, and an assessment of the errors introduced is given. It is shown that speedups approaching an order of magnitude are feasible, without the introduction of prohibitive errors and without hardware modifications. The procedure may have applications in interactive simulation and medical image-guidance problems, in which both speed and accuracy are vital. (An illustrative reduced-basis time-stepping sketch appears after the end of this listing.)

  • Special issue on medical imaging in computational physiology

    Publication Year: 2011, Page(s): 1722
    Freely Available from IEEE
  • 2011 IEEE membership form

    Publication Year: 2011, Page(s): 1723 - 1724
    Freely Available from IEEE
  • IEEE Transactions on Medical Imaging Information for authors

    Publication Year: 2011, Page(s): C3
    Freely Available from IEEE
  • Blank page [back cover]

    Publication Year: 2011, Page(s): C4
    Freely Available from IEEE
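
Illustrative Code Sketches

The sketches below are hypothetical Python illustrations of general techniques named in several of the abstracts above. None of them is code from the corresponding papers, and all data, parameters, and variable names are placeholder assumptions.

For "Multidimensional X-Space Magnetic Particle Imaging", this sketch shows the general idea of a 1-D x-space-style gridding reconstruction: the received signal is divided by the instantaneous field-free-point (FFP) speed and accumulated at the FFP position. The sinusoidal trajectory, the Gaussian stand-in for the point spread function, and the simple sample averaging are assumptions; the paper's multidimensional and partial-FOV handling is not reproduced.

    import numpy as np

    def grid_xspace(signal, ffp_pos, n_bins, fov):
        """Grid a velocity-normalized 1-D MPI signal onto the FFP position."""
        ffp_vel = np.gradient(ffp_pos)                      # FFP speed per sample
        ok = np.abs(ffp_vel) > 1e-9                         # skip turnaround samples
        bins = np.clip(((ffp_pos + fov / 2) / fov * n_bins).astype(int), 0, n_bins - 1)
        img = np.zeros(n_bins)
        hits = np.zeros(n_bins)
        np.add.at(img, bins[ok], signal[ok] / ffp_vel[ok])  # velocity normalization
        np.add.at(hits, bins[ok], 1.0)
        return img / np.maximum(hits, 1.0)                  # average overlapping samples

    # Toy usage: sinusoidal FFP drive over a 4 cm field of view and a point-like
    # source at x0, blurred by a Gaussian stand-in for the MPI point spread function.
    t = np.linspace(0.0, 1.0, 4000)
    fov, x0, width = 0.04, 0.005, 0.002
    ffp = 0.5 * fov * np.sin(2 * np.pi * 5 * t)
    sig = np.gradient(ffp) * np.exp(-((ffp - x0) / width) ** 2)
    image = grid_xspace(sig, ffp, n_bins=128, fov=fov)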
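
For "Unmixing Dynamic Fluorescence Diffuse Optical Tomography Images With Independent Component Analysis", this sketch applies FastICA to a synthetic (frames x voxels) matrix of dynamic images. The three simulated kinetics, the temporal-ICA arrangement, and the per-component "inverse ICA" reconstruction are assumptions; the paper's data model and preprocessing are not reproduced.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Synthetic stand-in for reconstructed D-FDOT frames: three "functional
    # structures" with different kinetics, mixed into a (frames x voxels) matrix.
    rng = np.random.default_rng(0)
    n_frames, n_voxels = 40, 500
    t = np.linspace(0.0, 1.0, n_frames)
    true_tcs = np.stack([np.exp(-3 * t),            # fast washout
                         t * np.exp(-2 * t),        # uptake followed by washout
                         1 - np.exp(-4 * t)]).T     # slow accumulation
    true_maps = rng.random((3, n_voxels))
    frames = true_tcs @ true_maps + 0.01 * rng.standard_normal((n_frames, n_voxels))

    # Temporal ICA: columns of `tcs` are recovered time courses; the columns of
    # ica.mixing_ are the associated spatial maps (one per structure).
    ica = FastICA(n_components=3, random_state=0, max_iter=1000)
    tcs = ica.fit_transform(frames)                 # shape (n_frames, 3)
    maps = ica.mixing_.T                            # shape (3, n_voxels)

    # "Inverse ICA": the contribution of one component at every frame and voxel,
    # i.e. a recovered concentration history for that structure (up to the
    # removed mean and ICA's usual sign/scale ambiguity).
    k = 0
    recovered = np.outer(tcs[:, k], ica.mixing_[:, k])   # (n_frames, n_voxels)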
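
For "A Statistical Model for Quantification and Prediction of Cardiac Remodelling: Application to Tetralogy of Fallot", this sketch runs a partial-least-squares regression from clinical variables to a stacked-coordinate shape vector, checked with 10-fold cross-validation. The synthetic cohort, the number of clinical variables, and the shape parameterization are assumptions; the currents-based template estimation and CCA analysis are not sketched.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Hypothetical cohort: 49 patients, a handful of clinical variables, and a
    # shape vector of stacked 3-D vertex coordinates per patient.
    rng = np.random.default_rng(0)
    n_patients, n_clinical, n_vertices = 49, 6, 300
    clinical = rng.standard_normal((n_patients, n_clinical))
    true_modes = rng.standard_normal((n_clinical, 3 * n_vertices))
    shapes = clinical @ true_modes + 0.1 * rng.standard_normal((n_patients, 3 * n_vertices))

    # PLS regression from clinical variables to shape, with 10-fold
    # cross-validated predictions as a crude generalization check.
    pls = PLSRegression(n_components=4)
    predicted = cross_val_predict(pls, clinical, shapes, cv=10)
    rmse = np.sqrt(np.mean((predicted - shapes) ** 2))
    print("cross-validated shape RMSE:", rmse)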
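
For "Robust Brain Extraction Across Datasets and Comparison With Publicly Available Methods", this sketch shows only the discriminative ingredient: a Random Forest producing a boundary-probability score per voxel from a placeholder feature matrix. The features, labels, and data are synthetic assumptions; ROBEX's point distribution model and graph-cut refinement are only mentioned in a comment.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Placeholder per-voxel features (e.g. intensity, gradient magnitude,
    # normalized position) and labels marking voxels on the brain boundary.
    rng = np.random.default_rng(0)
    n_voxels, n_features = 5000, 4
    X = rng.standard_normal((n_voxels, n_features))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n_voxels) > 1.0).astype(int)

    # Discriminative part only: a Random Forest scores how likely each voxel is
    # to lie on the boundary. In ROBEX this score drives a point distribution
    # model and a graph-cut refinement, which are not sketched here.
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X, y)
    X_new = rng.standard_normal((1000, n_features))
    boundary_prob = rf.predict_proba(X_new)[:, 1]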
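
For "Structural Analysis of Articular Cartilage Using Multiphoton Microscopy: Input for Biomechanical Modeling", this sketch estimates the dominant in-plane fiber direction and a crude dispersion index of an image patch from the second moments of its 2-D Fourier power spectrum. It is a generic Fourier-orientation approach under simple assumptions, not the authors' analysis pipeline.

    import numpy as np

    def principal_fiber_direction(patch):
        """Dominant in-plane fiber angle (radians) and a crude dispersion index."""
        f = np.fft.fftshift(np.fft.fft2(patch - patch.mean()))
        p = np.abs(f) ** 2                               # power spectrum
        h, w = patch.shape
        ky, kx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
        # Second moments of the spectrum: its major axis follows the wave-vector
        # direction, which is perpendicular to the stripes (fibers) in the image.
        mxx = np.sum(p * kx * kx)
        myy = np.sum(p * ky * ky)
        mxy = np.sum(p * kx * ky)
        theta_spec = 0.5 * np.arctan2(2 * mxy, mxx - myy)
        fiber_angle = theta_spec + np.pi / 2
        # Eigenvalue ratio of the moment tensor: ~1 means isotropic (dispersed),
        # ~0 means strongly aligned.
        tr, det = mxx + myy, mxx * myy - mxy ** 2
        half = np.sqrt(max(tr * tr / 4 - det, 0.0))
        return fiber_angle, (tr / 2 - half) / max(tr / 2 + half, 1e-12)

    # Toy test: sinusoidal stripes with the wave vector at 60 degrees, so the
    # recovered fiber direction should be about 150 degrees (modulo 180).
    yy, xx = np.mgrid[0:128, 0:128]
    alpha = np.pi / 3
    stripes = np.sin(2 * np.pi * (xx * np.cos(alpha) + yy * np.sin(alpha)) / 8.0)
    angle, dispersion = principal_fiber_direction(stripes)
    print(np.degrees(angle) % 180.0, dispersion)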
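
For "A Fast Wavelet-Based Reconstruction Method for Magnetic Resonance Imaging", this sketch implements plain ISTA with a FISTA-style momentum step for wavelet-regularized reconstruction from undersampled Cartesian k-space. The random sampling mask, Haar wavelet, unit step size, and regularization weight are assumptions; the paper's trajectory-tuned acceleration is not reproduced.

    import numpy as np
    import pywt

    def forward(x, mask):                       # undersampled, orthonormal 2-D FFT
        return mask * np.fft.fft2(x, norm="ortho")

    def adjoint(y, mask):
        return np.fft.ifft2(mask * y, norm="ortho")

    def wavelet_shrink(u, lam, wavelet="haar", level=3):
        """Soft-threshold the wavelet coefficients of a real-valued image."""
        coeffs = pywt.wavedec2(u, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = np.sign(arr) * np.maximum(np.abs(arr) - lam, 0.0)
        return pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                             wavelet)

    def fista_wavelet(y, mask, lam=0.01, n_iter=100):
        x = adjoint(y, mask)
        z, t = x.copy(), 1.0
        for _ in range(n_iter):
            # Gradient step on 0.5*||A z - y||^2 (Lipschitz constant 1 for a
            # unitary FFT restricted by a 0/1 mask).
            g = z - adjoint(forward(z, mask) - y, mask)
            # Proximal step: threshold real and imaginary parts separately.
            x_new = wavelet_shrink(g.real, lam) + 1j * wavelet_shrink(g.imag, lam)
            # FISTA momentum update.
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x

    # Toy usage: 30% random Cartesian sampling of a random 64x64 "image".
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    mask = rng.random((64, 64)) < 0.3
    recon = fista_wavelet(forward(img, mask), mask)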
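
For "BOLD Contrast and Noise Characteristics of Densely Sampled Multi-Echo fMRI Data", this sketch combines synthetic multi-echo time series with echo-time-linear weights and with data-driven weights from a first principal component, then compares a simple contrast-to-noise proxy. The mono-exponential decay model, echo times, noise level, and block design are assumptions, and physiological noise correlations are not modeled.

    import numpy as np

    # Synthetic mono-exponential multi-echo decay with a toy block-design BOLD
    # modulation and uncorrelated thermal noise (all values are assumptions).
    rng = np.random.default_rng(1)
    tes = np.linspace(0.005, 0.100, 32)              # echo times in seconds
    t2star, n_vols = 0.040, 200
    task = np.tile(np.repeat([0.0, 1.0], 10), 10)    # on/off block design
    s0 = 1000.0 * (1.0 + 0.01 * task)                # ~1% BOLD-like signal change
    echoes = s0[:, None] * np.exp(-tes[None, :] / t2star)
    echoes = echoes + 4.0 * rng.standard_normal(echoes.shape)

    # (a) Weights that scale linearly with echo time.
    w_lin = tes / tes.sum()
    ts_lin = echoes @ w_lin

    # (b) Data-driven weights from the first principal component across echoes.
    centered = echoes - echoes.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    w_pca = vt[0] * np.sign(vt[0].sum())
    w_pca = w_pca / w_pca.sum()
    ts_pca = echoes @ w_pca

    def cnr(ts):
        """Simple contrast-to-noise proxy: task effect over baseline variability."""
        on, off = ts[task == 1.0], ts[task == 0.0]
        return (on.mean() - off.mean()) / off.std()

    print("TE-linear CNR:", cnr(ts_lin), " PCA-weighted CNR:", cnr(ts_pca))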
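
For "A Reduced Order Explicit Dynamic Finite Element Algorithm for Surgical Simulation", this sketch projects a toy spring-chain model onto its lowest vibration modes, reports how much the stable explicit time step grows after reduction, and performs central-difference time stepping in the reduced coordinates. The 1-D chain, the modal basis, and the parameter values are assumptions, not the paper's solver or its boundary-condition treatment.

    import numpy as np
    from scipy.linalg import eigh

    # Toy full model: a 1-D chain of linear springs with lumped masses, grounded
    # at the first node (parameter values are arbitrary assumptions).
    n, m = 200, 10                                   # full / reduced dimensions
    k_s, mass = 1.0e4, 1.0
    K = np.zeros((n, n))
    for i in range(n - 1):
        K[i:i + 2, i:i + 2] += k_s * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K[0, 0] += k_s                                   # grounding spring
    M = mass * np.eye(n)

    # Reduced basis: the m lowest vibration modes (one possible choice of basis).
    w2, modes = eigh(K, M)
    Phi = modes[:, :m]
    Kr, Mr = Phi.T @ K @ Phi, Phi.T @ M @ Phi

    # The stable explicit (central-difference) step scales as 2/omega_max, so the
    # reduced system tolerates a much larger time step than the full one.
    dt_full = 2.0 / np.sqrt(w2.max())
    dt_red = 2.0 / np.sqrt(eigh(Kr, Mr, eigvals_only=True).max())
    print("time-step increase factor:", dt_red / dt_full)

    def integrate(f_ext, dt, steps):
        """Central-difference time stepping in the reduced coordinates q."""
        q_prev, q = np.zeros(m), np.zeros(m)
        Mr_inv = np.linalg.inv(Mr)
        fr = Phi.T @ f_ext                           # project the external load
        for _ in range(steps):
            q_next = 2.0 * q - q_prev + dt**2 * (Mr_inv @ (fr - Kr @ q))
            q_prev, q = q, q_next
        return Phi @ q                               # back to full nodal displacements

    u = integrate(np.r_[np.zeros(n - 1), 1.0], 0.9 * dt_red, 500)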

Aims & Scope

IEEE Transactions on Medical Imaging (T-MI) encourages the submission of manuscripts on imaging of body structures, morphology and function, and imaging of microscopic biological entities. The journal publishes original contributions on medical imaging achieved by various modalities, such as ultrasound, X-rays (including CT), magnetic resonance, radionuclides, microwaves, and light, as well as medical image processing and analysis, visualization, pattern recognition, and related methods. Studies involving highly technical perspectives are most welcome. The journal focuses on a unified common ground where instrumentation, systems, components, hardware and software, mathematics, and physics contribute to the studies.

Meet Our Editors

Editor-in-Chief
Michael Insana
Beckman Institute for Advanced Science and Technology
Department of Bioengineering
University of Illinois at Urbana-Champaign
Urbana, IL 61801 USA
m.f.i@ieee.org