Medical Imaging, IEEE Transactions on

Issue 6 • June 2013

23 items in this issue
  • Table of Contents

    Publication Year: 2013 , Page(s): C1 - C4
    Freely Available from IEEE
  • IEEE Transactions on Medical Imaging publication information

    Publication Year: 2013 , Page(s): C2
    Freely Available from IEEE
  • RubiX: Combining Spatial Resolutions for Bayesian Inference of Crossing Fibers in Diffusion MRI

    Publication Year: 2013 , Page(s): 969 - 982

    The trade-off between signal-to-noise ratio (SNR) and spatial specificity governs the choice of spatial resolution in magnetic resonance imaging (MRI); diffusion-weighted (DW) MRI is no exception. Images of lower resolution have higher SNR, but also more partial volume artifacts. We present a data-fusion approach for tackling this trade-off by combining DW MRI data acquired both at high and low spatial resolution. We combine all data into a single Bayesian model to estimate the underlying fiber patterns and diffusion parameters. The proposed model, therefore, combines the benefits of each acquisition. We show that fiber crossings at the highest spatial resolution can be inferred more robustly and accurately using such a model compared to a simpler model that operates only on high-resolution data, when both approaches are matched for acquisition time.

  • Blind Color Decomposition of Histological Images

    Publication Year: 2013 , Page(s): 983 - 994
    Cited by:  Papers (2)

    Cancer diagnosis is based on visual examination under a microscope of tissue sections from biopsies. But whereas pathologists rely on tissue stains to identify morphological features, automated tissue recognition using color is fraught with problems that stem from image intensity variations due to variations in tissue preparation, variations in spectral signatures of the stained tissue, spectral overlap and spatial aliasing in acquisition, and noise at image acquisition. We present a blind method for color decomposition of histological images. The method decouples intensity from color information and bases the decomposition only on the tissue absorption characteristics of each stain. By modeling the charge-coupled device sensor noise, we improve the method's accuracy. We extend current linear decomposition methods to include stained tissues where one spectral signature cannot be separated from all combinations of the other tissues' spectral signatures. We demonstrate both qualitatively and quantitatively that our method results in more accurate decompositions than methods based on non-negative matrix factorization and independent component analysis. The result is one density map for each stained tissue type that classifies portions of pixels into the correct stained tissue, allowing accurate identification of morphological features that may be linked to cancer.

    Open Access
  • Segmentation and Shape Tracking of Whole Fluorescent Cells Based on the Chan–Vese Model

    Publication Year: 2013 , Page(s): 995 - 1006
    Multimedia

    We present a fast and robust approach to tracking the evolving shape of whole fluorescent cells in time-lapse series. The proposed tracking scheme involves two steps. First, coherence-enhancing diffusion filtering is applied on each frame to reduce the amount of noise and enhance flow-like structures. Second, the cell boundaries are detected by minimizing the Chan-Vese model in the fast level set-like and graph cut frameworks. To allow simultaneous tracking of multiple cells over time, both frameworks have been integrated with a topological prior exploiting the object indication function. The potential of the proposed tracking scheme and the advantages and disadvantages of both frameworks are demonstrated on 2-D and 3-D time-lapse series of rat adipose-derived mesenchymal stem cells and human lung squamous cell carcinoma cells, respectively.

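For reference, the Chan–Vese model that this entry minimizes is a standard two-phase segmentation functional. In its usual form (symbols follow the common textbook notation, not necessarily this article's), a curve C and two region means c1, c2 minimize:

```latex
E(C, c_1, c_2) \;=\; \mu \,\mathrm{Length}(C)
  \;+\; \lambda_1 \int_{\mathrm{inside}(C)} \lvert I(\mathbf{x}) - c_1 \rvert^2 \, d\mathbf{x}
  \;+\; \lambda_2 \int_{\mathrm{outside}(C)} \lvert I(\mathbf{x}) - c_2 \rvert^2 \, d\mathbf{x}
```

The level set-like and graph cut frameworks mentioned in the abstract are two alternative ways of driving this energy toward a minimum.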
  • Segmenting the Etiological Agent of Schistosomiasis for High-Content Screening

    Publication Year: 2013 , Page(s): 1007 - 1018
    Cited by:  Papers (1)

    Schistosomiasis is a parasitic disease with a global health impact second only to malaria. The World Health Organization has classified schistosomiasis as an illness for which new therapies are urgently needed. However, the causative parasite is refractory to current high-throughput drug screening due to the diversity and complexity of shape, appearance and movement-based phenotypes exhibited in response to putative drugs. Currently, there is no automated image-based approach capable of relieving this deficiency. We propose and validate an image segmentation algorithm designed to overcome the distinct challenges posed by schistosomes and macroparasites in general, including irregular shapes and sizes, dense groups of touching parasites and the unpredictable effects of drug exposure. Our approach combines a region-based distributing function with a novel edge detector derived from phase congruency and grayscale thinning by threshold superposition. The method is sufficiently rapid, robust and accurate to be used for quantitative analysis of diverse parasite phenotypes in high-throughput and high-content screening.

  • Superpixel Classification Based Optic Disc and Optic Cup Segmentation for Glaucoma Screening

    Publication Year: 2013 , Page(s): 1019 - 1032
    Cited by:  Papers (4)

    Glaucoma is a chronic eye disease that leads to vision loss. As it cannot be cured, detecting the disease in time is important. Current tests using intraocular pressure (IOP) are not sensitive enough for population-based glaucoma screening. Optic nerve head assessment in retinal fundus images is a more promising and superior alternative. This paper proposes optic disc and optic cup segmentation using superpixel classification for glaucoma screening. In optic disc segmentation, histograms and center surround statistics are used to classify each superpixel as disc or non-disc. A self-assessment reliability score is computed to evaluate the quality of the automated optic disc segmentation. For optic cup segmentation, in addition to the histograms and center surround statistics, location information is also included in the feature space to boost the performance. The proposed segmentation methods have been evaluated on a database of 650 images with optic disc and optic cup boundaries manually marked by trained professionals. Experimental results show an average overlapping error of 9.5% and 24.1% in optic disc and optic cup segmentation, respectively. The results also show an increase in overlapping error as the reliability score is reduced, which justifies the effectiveness of the self-assessment. The segmented optic disc and optic cup are then used to compute the cup-to-disc ratio for glaucoma screening. Our proposed method achieves areas under curve of 0.800 and 0.822 in two data sets, higher than other methods. The methods can be used for segmentation and glaucoma screening. The self-assessment will be used as an indicator of cases with large errors and enhance the clinical deployment of the automatic segmentation and screening.

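For readers unfamiliar with the two figures of merit in this abstract, here is a minimal sketch. The abstract does not spell out its formulas; the set-based overlap error and the diameter-based cup-to-disc ratio below are common conventions, not necessarily the paper's exact definitions.

```python
def overlap_error(seg, ref):
    """Overlapping error between two pixel sets: 1 - |A ∩ B| / |A ∪ B|.
    (A common convention; the paper may define it differently.)"""
    return 1.0 - len(seg & ref) / len(seg | ref)

def cup_to_disc_ratio(cup_diameter, disc_diameter):
    """Cup-to-disc ratio used as the glaucoma screening indicator."""
    return cup_diameter / disc_diameter

# Toy masks represented as sets of (x, y) pixel coordinates.
seg = {(x, 0) for x in range(10)}      # automated segmentation
ref = {(x, 0) for x in range(1, 10)}   # manual reference
print(round(overlap_error(seg, ref), 3))   # 0.1
print(cup_to_disc_ratio(0.4, 1.0))         # 0.4
```

A larger cup-to-disc ratio suggests a higher risk of glaucoma, which is why accurate cup and disc segmentation matters for screening.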
  • Comparison of Kasai Autocorrelation and Maximum Likelihood Estimators for Doppler Optical Coherence Tomography

    Publication Year: 2013 , Page(s): 1033 - 1042
    Cited by:  Papers (1)

    In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator's performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramér-Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise.

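The Kasai estimator that this paper benchmarks against is the standard lag-one autocorrelation phase estimator. A minimal sketch (this is the textbook form of the Kasai estimator, not the paper's proposed MLE):

```python
import cmath
import math

def kasai_frequency(z, T):
    """Estimate Doppler frequency from complex samples z acquired at
    interval T via the lag-one Kasai autocorrelation estimator."""
    # Lag-one autocorrelation: R(T) = sum_n z[n+1] * conj(z[n]).
    R = sum(z[n + 1] * z[n].conjugate() for n in range(len(z) - 1))
    # The phase of R(T) equals 2*pi*f*T (modulo aliasing), so divide it out.
    return cmath.phase(R) / (2 * math.pi * T)

# Noise-free complex exponential at 100 Hz, sampled at 1 kHz.
T = 1e-3
z = [cmath.exp(2j * math.pi * 100 * n * T) for n in range(64)]
print(round(kasai_frequency(z, T), 6))  # 100.0
```

On clean data the estimator is exact; the paper's point is its behavior under realistic noise as the acquisition rate grows.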
  • Groupwise Conditional Random Forests for Automatic Shape Classification and Contour Quality Assessment in Radiotherapy Planning

    Publication Year: 2013 , Page(s): 1043 - 1057

    Radiation therapy is used to treat cancer patients around the world. High quality treatment plans maximally radiate the targets while minimally radiating healthy organs at risk. In order to judge plan quality and safety, segmentations of the targets and organs at risk are created, and the amount of radiation that will be delivered to each structure is estimated prior to treatment. If the targets or organs at risk are mislabelled, or the segmentations are of poor quality, the safety of the radiation doses will be erroneously reviewed and an unsafe plan could proceed. We propose a technique to automatically label groups of segmentations of different structures from a radiation therapy plan for the joint purposes of providing quality assurance and data mining. Given one or more segmentations and an associated image we seek to assign medically meaningful labels to each segmentation and report the confidence of that label. Our method uses random forests to learn joint distributions over the training features, and then exploits a set of learned potential group configurations to build a conditional random field (CRF) that ensures the assignment of labels is consistent across the group of segmentations. The CRF is then solved via a constrained assignment problem. We validate our method on 1574 plans, consisting of 17 579 segmentations, demonstrating an overall classification accuracy of 91.58%. Our results also demonstrate the stability of random forests with respect to tree depth and the number of splitting variables in large data sets.

  • From Complex B1 Mapping to Local SAR Estimation for Human Brain MR Imaging Using Multi-Channel Transceiver Coil at 7T

    Publication Year: 2013 , Page(s): 1058 - 1067
    Cited by:  Papers (1)

    Elevated specific absorption rate (SAR) associated with increased main magnetic field strength remains a major safety concern in ultra-high-field (UHF) magnetic resonance imaging (MRI) applications. The calculation of local SAR requires the knowledge of the electric field induced by radio-frequency (RF) excitation, and the local electrical properties of tissues. Since electric field distribution cannot be directly mapped in conventional MR measurements, SAR estimation is usually performed using numerical model-based electromagnetic simulations which, however, are highly time consuming and cannot account for the specific anatomy and tissue properties of the subject undergoing a scan. In the present study, starting from the measurable RF magnetic fields (B1) in MRI, we carried out a series of mathematical derivations to estimate the local, voxel-wise and subject-specific SAR for each single coil element using a multi-channel transceiver array coil. We first evaluated the feasibility of this approach in numerical simulations including two different human head models. We further conducted an experimental study in a physical phantom and in two human subjects at 7T using a multi-channel transceiver head coil. Accuracy of the results is discussed in the context of predicting local SAR in the human brain at UHF MRI using multi-channel RF transmission.

  • Model-based registration of ex vivo and in vivo MRI of the prostate using elastography

    Publication Year: 2013 , Page(s): 1068 - 1080

    Registration of histopathology to in vivo magnetic resonance imaging (MRI) of the prostate is an important task that can be used to optimize in vivo imaging for cancer detection. Such registration is challenging due to the change in volume and deformation of the prostate during excision and fixation. One approach towards this problem involves the use of an ex vivo MRI of the excised prostate specimen, followed by in vivo to ex vivo MRI registration of the prostate. We propose a novel registration method that uses a patient-specific biomechanical model acquired using magnetic resonance elastography to deform the in vivo volume and match it to the surface of the ex vivo specimen. The forces that drive the deformations are derived from a region-based energy, with the elastic potential used for regularization. The incorporation of elastography data into the registration framework allows inhomogeneous elasticity to be assigned to the in vivo volume. We show that such inhomogeneity improves the registration results by providing a physical regularization of the deformation map. The method is demonstrated and evaluated on six clinical cases.

  • A Broadside-Split-Ring Resonator-Based Coil for MRI at 7 T

    Publication Year: 2013 , Page(s): 1081 - 1084

    A coil design, termed the broadside-coupled loop (BCL) coil and based on the broadside-coupled split ring resonator (BC-SRR), is proposed as an alternative to a conventional loop design at 7T. The BCL coil has an inherently uniform current, which assures the rotational symmetry of the radio-frequency field around the coil axis. A comparative analysis of the signal-to-noise ratio provided by BCL coils and conventional coils has been carried out by means of numerical simulations and experiments in a 7T whole-body system.

  • Computing Ischemic Regions in the Heart With the Bidomain Model—First Steps Towards Validation

    Publication Year: 2013 , Page(s): 1085 - 1096
    Cited by:  Papers (5)

    We investigate whether it is possible to use the bidomain model and body surface potential maps (BSPMs) to compute the size and position of ischemic regions in the human heart. This leads to a severely ill-posed inverse problem for a potential equation. We do not use the classical inverse problems of electrocardiography, in which the unknown sources are the epicardial potential distribution or the activation sequence. Instead we employ the bidomain theory to obtain a model that also enables identification of ischemic regions transmurally. This approach makes it possible to distinguish between subendocardial and transmural cases, only using the BSPM data. The main focus is on testing a previously published algorithm on clinical data, and the results are compared with images taken with perfusion scintigraphy. For the four patients involved in this study, the two modalities produce results that are rather similar: the relative differences in the center of mass and in the size of the ischemic regions suggested by the two modalities are 10.8% ± 4.4% and 7.1% ± 4.6%, respectively. We also present some simulations which indicate that the methodology is robust with respect to uncertainties in important model parameters. However, in contrast to what has been observed in investigations only involving synthetic data, inequality constraints are needed to obtain sound results.

  • Full-Wave Iterative Image Reconstruction in Photoacoustic Tomography With Acoustically Inhomogeneous Media

    Publication Year: 2013 , Page(s): 1097 - 1110

    Existing approaches to image reconstruction in photoacoustic computed tomography (PACT) with acoustically heterogeneous media are limited to weakly varying media, are computationally burdensome, and/or cannot effectively mitigate the effects of measurement data incompleteness and noise. In this work, we develop and investigate a discrete imaging model for PACT that is based on the exact photoacoustic (PA) wave equation and facilitates the circumvention of these limitations. A key contribution of the work is the establishment of a procedure to implement a matched forward and backprojection operator pair associated with the discrete imaging model, which permits application of a wide range of modern image reconstruction algorithms that can mitigate the effects of data incompleteness and noise. The forward and backprojection operators are based on the k-space pseudospectral method for computing numerical solutions to the PA wave equation in the time domain. The developed reconstruction methodology is investigated by use of both computer-simulated and experimental PACT measurement data.

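The "matched forward and backprojection operator pair" is the key requirement here: the backprojection must be the exact adjoint of the discrete forward model. Whether a pair is matched can be checked numerically with the standard adjoint (dot-product) test. A toy sketch, with a dense matrix standing in for the paper's k-space pseudospectral wave operators:

```python
import random

def forward(A, x):
    """Forward operator: matrix-vector product (a stand-in for the
    discrete PA forward model)."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def backproject(A, y):
    """Matched backprojection: multiplication by the transpose of A."""
    return [sum(A[i][j] * y[i] for i in range(len(A)))
            for j in range(len(A[0]))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Adjoint (dot-product) test: <A x, y> must equal <x, A^T y>.
random.seed(0)
A = [[random.random() for _ in range(4)] for _ in range(3)]
x = [random.random() for _ in range(4)]
y = [random.random() for _ in range(3)]
print(abs(dot(forward(A, x), y) - dot(x, backproject(A, y))) < 1e-12)  # True
```

If this identity fails for an operator pair, gradient-based iterative reconstruction built on that pair can diverge, which is why the matched pair is emphasized in the abstract.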
  • Burn Depth Analysis Using Multidimensional Scaling Applied to Psychophysical Experiment Data

    Publication Year: 2013 , Page(s): 1111 - 1120

    In this paper, a psychophysical experiment and a multidimensional scaling (MDS) analysis are carried out to determine the physical characteristics that physicians employ to diagnose burn depth. Subsequently, these characteristics are translated into mathematical features correlated with the physical characteristics. Finally, a study to verify the ability of these mathematical features to classify burns is performed. In this study, a space with axes correlated with the MDS axes has been developed. 74 images have been represented in this space and a k-nearest neighbor classifier has been used to classify them. A success rate of 66.2% was obtained when classifying burns into three burn depths and a success rate of 83.8% was obtained when burns were classified as those which needed grafts and those which did not. Additional studies have been performed comparing our system with a principal component analysis and a support vector machine classifier. Results validate the ability of the mathematical features extracted from the psychophysical experiment to classify burns into their depths. In addition, the method has been compared with another state-of-the-art method on the same database.

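The classification step described above is a k-nearest-neighbor vote in the MDS-derived feature space. A minimal sketch of that step (the feature values and labels below are illustrative, not taken from the paper's 74-image database):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points; `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D "MDS space" with two burn-depth classes (labels illustrative).
train = [((0.10, 0.20), "superficial"), ((0.20, 0.10), "superficial"),
         ((0.90, 0.80), "deep"), ((0.80, 0.90), "deep"),
         ((0.85, 0.85), "deep")]
print(knn_classify(train, (0.15, 0.15)))  # superficial
```

The paper's contribution is not the classifier itself but the MDS-guided choice of features that make such a simple vote effective.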
  • Attributed Relational Graphs for Cell Nucleus Segmentation in Fluorescence Microscopy Images

    Publication Year: 2013 , Page(s): 1121 - 1131

    More rapid and accurate high-throughput screening in molecular cellular biology research has become possible with the development of automated microscopy imaging, for which cell nucleus segmentation commonly constitutes the core step. Although several promising methods exist for segmenting the nuclei of monolayer isolated and less-confluent cells, it still remains an open problem to segment the nuclei of more-confluent cells, which tend to grow in overlayers. To address this problem, we propose a new model-based nucleus segmentation algorithm. This algorithm models how a human locates a nucleus by identifying the nucleus boundaries and piecing them together. In this algorithm, we define four types of primitives to represent nucleus boundaries at different orientations and construct an attributed relational graph on the primitives to represent their spatial relations. Then, we reduce the nucleus identification problem to finding predefined structural patterns in the constructed graph and also use the primitives in region growing to delineate the nucleus borders. Working with fluorescence microscopy images, our experiments demonstrate that the proposed algorithm identifies nuclei better than previous nucleus segmentation algorithms.

  • Blind Compressive Sensing Dynamic MRI

    Publication Year: 2013 , Page(s): 1132 - 1145
    Cited by:  Papers (5)

    We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme from current low-rank methods is the nonorthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity-promoting l1 prior on the coefficients. A Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of the three simpler problems. This algorithm is considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the l1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the l0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima than the K-SVD method, which relies on greedy sparse coding. Our phase transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary-aware setting. Since the overhead of additionally estimating the dictionary is low, this method can be very useful in dynamic magnetic resonance imaging applications, where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast-enhanced dynamic data. We observe superior reconstruction performance with the BCS scheme in comparison to existing low-rank and compressed sensing schemes.

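Schematically, the constrained optimization the abstract describes can be written as follows, with X = UV the space-time signal, U the sparse coefficients, V the dictionary of temporal basis functions, A the undersampled acquisition operator and b the measurements (notation here is generic and may differ from the paper's):

```latex
\min_{U,\,V}\; \bigl\lVert \mathcal{A}(UV) - b \bigr\rVert_2^2
  \;+\; \lambda \,\lVert U \rVert_{\ell_1}
\quad \text{subject to} \quad \lVert V \rVert_F^2 \le c
```

The Frobenius norm bound on V removes the scale ambiguity between U and V, since otherwise any solution (U, V) could be rescaled to (U/α, αV) without changing the data fit.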
  • Comments on “Comparative Study With New Accuracy Metrics for Target Volume Contouring in PET Image Guided Radiation Therapy”

    Publication Year: 2013 , Page(s): 1146 - 1148
    Cited by:  Papers (1)

    Phantom measurements with glass inserts in a hot background are still used frequently for calibration and performance assessment of volume delineation algorithms in positron emission tomography (PET). Taking as an example the recent paper “Comparative study with new accuracy metrics for target volume contouring in PET image guided radiation therapy” (Shepherd, 2012), we demonstrate that this is not a valid approach due to the discontinuity introduced in the background by the cold walls of the glass inserts. We, moreover, emphasize that in order to define a sensible ground truth for performance assessment of contouring algorithms in patient data it is necessary to average over a sizable number of experienced observers and lesions in order to compensate for the substantial inter-observer variability.

  • Reply to the Comments on “Comparative Study with New Accuracy Metrics for Target Volume Contouring in PET Image Guided Radiation Therapy”

    Publication Year: 2013 , Page(s): 1148 - 1149

    This communication is submitted in response to the letter of van den Hoff and Hofheinz (2013). Based on findings in their earlier study (Hofheinz, 2010), the letter criticizes the use of a physical positron emission tomography (PET) phantom with “cold wall” volumes of interest in part of the evaluation of PET segmentation tools in our experiment reported in this issue (Shepherd, 2012). In addition, the letter raises concerns about the low number of independent expert (manual) delineations used in Shepherd (2012) to assess accuracy of tumor segmentation in patient images, and disambiguates the details of one of the segmentation methods involved in Shepherd (2012).

  • Open Access

    Publication Year: 2013 , Page(s): 1150
    Freely Available from IEEE
  • IEEE Xplore Digital Library

    Publication Year: 2013 , Page(s): 1151
    Save to Project icon | Request Permissions | PDF file iconPDF (1372 KB)  
    Freely Available from IEEE
  • IEEE Global History Network

    Publication Year: 2013 , Page(s): 1152
    Save to Project icon | Request Permissions | PDF file iconPDF (3171 KB)  
    Freely Available from IEEE
  • IEEE Transactions on Medical Imaging information for authors

    Publication Year: 2013 , Page(s): C3
    Save to Project icon | Request Permissions | PDF file iconPDF (95 KB)  
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Medical Imaging (T-MI) encourages the submission of manuscripts on imaging of body structures, morphology and function, and imaging of microscopic biological entities. The journal publishes original contributions on medical imaging achieved by various modalities, such as ultrasound, X-rays (including CT), magnetic resonance, radionuclides, microwaves, and light, as well as medical image processing and analysis, visualization, pattern recognition, and related methods. Studies involving highly technical perspectives are most welcome. The journal focuses on a unified common ground where instrumentation, systems, components, hardware and software, mathematics and physics contribute to the studies.


Meet Our Editors

Editor-in-Chief
Michael Insana
Beckman Institute for Advanced Science and Technology
Department of Bioengineering
University of Illinois at Urbana-Champaign
Urbana, IL 61801 USA
m.f.i@ieee.org