
IEEE Transactions on Medical Imaging

Issue 7 • July 2014


Displaying Results 1 - 18 of 18
  • Table of contents

    Page(s): C1 - C4
    Freely Available from IEEE
  • IEEE Transactions on Medical Imaging publication information

    Page(s): C2
    Freely Available from IEEE
  • Development and Application of a Suite of 4-D Virtual Breast Phantoms for Optimization and Evaluation of Breast Imaging Systems

    Page(s): 1401 - 1409

    Mammography is currently the most widely utilized tool for the detection and diagnosis of breast cancer. However, in women with dense breast tissue, tissue overlap may obscure lesions. Digital breast tomosynthesis can reduce tissue overlap. Furthermore, imaging with contrast enhancement can provide additional functional information about lesions, such as morphology and kinetics, which in turn may improve lesion identification and characterization. The performance of these imaging techniques is strongly dependent on the structural composition of the breast, which varies significantly among patients. Therefore, imaging system and imaging technique optimization should take patient variability into consideration. Furthermore, the optimization of imaging techniques that employ contrast agents should account for the temporally varying breast composition governed by contrast agent uptake kinetics. To these ends, we have developed a suite of 4-D virtual breast phantoms that incorporate the kinetics of contrast agent propagation in different tissues and can realistically model normal breast parenchyma as well as benign and malignant lesions. This development presents a new approach to performing simulation studies using truly anthropomorphic models. To demonstrate the utility of the proposed 4-D phantoms, we present a simplified example study comparing the performance of 14 imaging paradigms qualitatively and quantitatively.

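    As an illustration of the kind of time-varying tissue model such a 4-D phantom needs, the Python sketch below attaches a simple bi-exponential wash-in/wash-out enhancement curve to each tissue class of a labeled volume. The curve shape, rate constants, and tissue labels are illustrative assumptions, not the phantom's actual kinetic model.

    import numpy as np

    def uptake_curve(t, a=1.0, k_in=0.8, k_out=0.1):
        """Simple bi-exponential wash-in/wash-out curve (arbitrary units).

        This is NOT the phantom's actual kinetic model, just a common
        stand-in for contrast-agent concentration over time t (minutes)."""
        return a * (np.exp(-k_out * t) - np.exp(-k_in * t))

    rng = np.random.default_rng(0)

    # Assign a time-varying enhancement to each tissue class of a
    # (hypothetical) labeled phantom volume: 0=background, 1=parenchyma, 2=lesion.
    labels = rng.integers(0, 3, size=(16, 16, 16))
    times = np.linspace(0, 10, 21)                      # minutes after injection
    kinetics = {0: dict(a=0.0), 1: dict(a=0.2, k_in=0.5), 2: dict(a=1.0, k_in=1.5)}

    frames = []
    for t in times:
        enhancement = np.zeros_like(labels, dtype=float)
        for cls, params in kinetics.items():
            enhancement[labels == cls] = uptake_curve(t, **params)
        frames.append(enhancement)                      # one 3-D frame per time point

    phantom_4d = np.stack(frames)                       # shape: (time, z, y, x)
    print(phantom_4d.shape)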
  • Numerical Characterization of Quasi-Static Ultrasound Elastography for the Detection of Deep Tissue Injuries

    Page(s): 1410 - 1421

    Deep tissue injuries are subcutaneous regions of tissue breakdown associated with excessive mechanical pressure applied for an extended period of time. These wounds are currently clinically undetectable in their early stages and place severe burdens not only on the patients who suffer from them, but on the health care system as well. The goal of this work was to numerically characterize the use of quasi-static ultrasound elastography for detecting formative and progressive deep tissue injuries. To numerically characterize the technique, finite-element models of sonographic B-mode imaging and tissue deformation were created. These models were fed into a local strain-estimation algorithm to determine the detection sensitivity of the technique to various parameters. Our work showed that quasi-static ultrasound elastography was able to detect and characterize deep tissue injuries over a range of lesion parameters. The simulations were validated using a physical phantom model. This work represents a step along the path to developing a clinically relevant technique for detecting and diagnosing early deep tissue injuries.

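    The core of quasi-static elastography is estimating local tissue displacement between pre- and post-compression echo signals and differentiating it to obtain strain. The sketch below does this for a synthetic 1-D RF line using windowed normalized cross-correlation; the window sizes, search range, and 1% compression are illustrative assumptions, and the paper's finite-element B-mode simulation is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic pre-compression RF line (scatterer-like noise) and a
    # post-compression copy produced by a known 1% axial compression.
    n = 4000
    pre = rng.standard_normal(n)
    true_strain = 0.01
    depth = np.arange(n)
    post = np.interp(depth * (1 + true_strain), depth, pre)  # compressed signal

    # Windowed normalized cross-correlation: local displacement per window.
    win, hop, search = 128, 64, 60
    centers, shifts = [], []
    for start in range(search, n - win - search, hop):
        ref = pre[start:start + win]
        best, best_score = 0, -np.inf
        for s in range(-search, search + 1):
            seg = post[start + s:start + s + win]
            score = np.dot(ref, seg) / (np.linalg.norm(ref) * np.linalg.norm(seg))
            if score > best_score:
                best, best_score = s, score
        centers.append(start + win // 2)
        shifts.append(best)

    # Compressive strain = -d(displacement)/d(depth), via a least-squares slope.
    strain = -np.polyfit(centers, shifts, 1)[0]
    print(f"estimated compressive strain ~ {strain:.4f} (true value {true_strain})")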
  • Synthetic Generation of Myocardial Blood–Oxygen-Level-Dependent MRI Time Series Via Structural Sparse Decomposition Modeling

    Page(s): 1422 - 1433

    This paper aims to identify approaches that generate appropriate synthetic (computer-generated) data for cardiac phase-resolved blood-oxygen-level-dependent (CP-BOLD) MRI. CP-BOLD MRI is a new contrast-agent- and stress-free approach for examining changes in myocardial oxygenation in response to coronary artery disease. However, since signal intensity changes are subtle, rapid visualization is not possible with the naked eye. Quantifying and visualizing the extent of disease relies on myocardial segmentation and registration to isolate the myocardium and establish temporal correspondences, and on ischemia detection algorithms to identify temporal differences in BOLD signal intensity patterns. If the transmurality of the defect is of interest, pixel-level analysis is necessary and thus higher precision in registration is required. Such precision is currently not available, which affects the design and performance of ischemia detection algorithms. In this work, to enable algorithmic development of ischemia detection irrespective of registration accuracy, we propose an approach that generates synthetic pixel-level myocardial time series. We do this by 1) modeling the temporal changes in BOLD signal intensity based on sparse multi-component dictionary learning, whereby segmentally derived myocardial time series are extracted from canine experimental data to learn the model; and 2) demonstrating the resemblance between real and synthetic time series for validation purposes. We envision that the proposed approach has the capacity to accelerate the development of tools for ischemia detection while markedly reducing experimental costs, so that cardiac BOLD MRI can be rapidly translated into the clinical arena for the noninvasive assessment of ischemic heart disease.

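    A minimal sketch of the general idea, sparse dictionary learning on time series followed by resynthesis from resampled sparse codes, is given below using scikit-learn. The toy "real" series, number of atoms, and code-perturbation scheme are assumptions for illustration; the paper's structured multi-component model and canine CP-BOLD data are not reproduced.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(1)

    # Stand-in "real" myocardial time series: n_series curves over n_phases
    # cardiac phases (the paper uses segmentally derived canine CP-BOLD data).
    n_series, n_phases = 200, 30
    phase = np.linspace(0, 2 * np.pi, n_phases)
    real = (1.0
            + 0.05 * np.sin(phase) * rng.uniform(0.5, 1.5, (n_series, 1))
            + 0.02 * rng.standard_normal((n_series, n_phases)))

    # 1) Learn a small sparse dictionary from the real series.
    dl = DictionaryLearning(n_components=6, alpha=0.5, max_iter=300, random_state=0)
    codes = dl.fit_transform(real)            # sparse coefficients, (n_series, 6)
    atoms = dl.components_                    # dictionary atoms, (6, n_phases)

    # 2) Synthesize new series by resampling the sparse codes (bootstrap plus a
    #    small perturbation) and recombining them with the learned atoms.
    idx = rng.integers(0, n_series, size=100)
    new_codes = codes[idx] * rng.normal(1.0, 0.05, size=(100, codes.shape[1]))
    synthetic = new_codes @ atoms + 0.01 * rng.standard_normal((100, n_phases))

    print(real.shape, synthetic.shape)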
  • FMT-PCCT: Hybrid Fluorescence Molecular Tomography—X-Ray Phase-Contrast CT Imaging of Mouse Models

    Page(s): 1434 - 1446
    Multimedia

    The implementation of hybrid fluorescence molecular tomography (FMT) and X-ray computed tomography (CT) has been shown to be a necessary development, not only for combining anatomical with functional and molecular contrast, but also for generating optical images of high accuracy. FMT affords highly sensitive 3-D imaging of fluorescence bio-distribution, but in stand-alone form it offers images of low resolution. It has been shown that FMT accuracy significantly improves when anatomical priors from CT are considered. Conversely, CT generally suffers from low soft tissue contrast. Therefore, the utilization of CT data as prior information in FMT inversion is challenging when different internal organs are not clearly differentiated. Instead, we herein combine FMT with emerging X-ray phase-contrast CT (PCCT). PCCT relies on phase shift differences in tissue to achieve soft tissue contrast superior to conventional CT. We demonstrate for the first time FMT-PCCT imaging of different animal models, where FMT and PCCT scans were performed in vivo and ex vivo, respectively. The results show that FMT-PCCT expands the potential of FMT in imaging lesions with otherwise low or no CT contrast, while retaining the cost benefits of CT and the simplicity of hybrid device realizations. The results point to the most accurate FMT performance to date.

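    The benefit of anatomical priors in FMT inversion can be illustrated with a generic regularized least-squares reconstruction in which smoothness is enforced only within segmented regions (such as those a PCCT scan would provide). The 1-D toy forward model and regularization weight below are assumptions; this is not the authors' reconstruction method.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy 1-D "tissue" with two anatomical regions (e.g., from a PCCT
    # segmentation) and a fluorescent inclusion inside region 1.
    n = 60
    regions = np.array([0] * 30 + [1] * 30)
    x_true = np.zeros(n)
    x_true[40:50] = 1.0

    # Random ill-conditioned forward model and noisy measurements.
    A = rng.standard_normal((40, n)) * np.exp(-0.05 * np.arange(n))
    y = A @ x_true + 0.01 * rng.standard_normal(40)

    # Smoothness prior that respects anatomy: penalize differences between
    # neighbouring unknowns only if they lie in the same labelled region.
    rows = []
    for i in range(n - 1):
        if regions[i] == regions[i + 1]:
            r = np.zeros(n)
            r[i], r[i + 1] = 1.0, -1.0
            rows.append(r)
    L = np.array(rows)

    # Solve the stacked regularized least-squares problem
    #   min ||A x - y||^2 + lam * ||L x||^2
    lam = 10.0
    A_aug = np.vstack([A, np.sqrt(lam) * L])
    y_aug = np.concatenate([y, np.zeros(L.shape[0])])
    x_hat = np.linalg.lstsq(A_aug, y_aug, rcond=None)[0]
    print("reconstruction error:", np.linalg.norm(x_hat - x_true))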
  • Metric Optimization for Surface Analysis in the Laplace-Beltrami Embedding Space

    Page(s): 1447 - 1463

    In this paper, we present a novel approach for the intrinsic mapping of anatomical surfaces and its application in brain mapping research. Using the Laplace-Beltrami eigen-system, we represent each surface with an isometry-invariant embedding in a high-dimensional space. The key idea in our system is that we realize surface deformation in the embedding space via the iterative optimization of a conformal metric without explicitly perturbing the surface or its embedding. By minimizing a distance measure in the embedding space with metric optimization, our method generates a conformal map directly between surfaces with highly uniform metric distortion and the ability to align salient geometric features. Besides pairwise surface maps, we also extend the metric optimization approach to group-wise atlas construction and multi-atlas cortical label fusion. In experimental results, we demonstrate the robustness and generality of our method by applying it to map both cortical and hippocampal surfaces in population studies. For cortical labeling, our method achieves excellent performance in a cross-validation experiment with 40 manually labeled surfaces, and successfully models localized brain development in a pediatric study of 80 subjects. For hippocampal mapping, our method produces much more significant results than two popular tools in a multiple sclerosis study of 109 subjects.

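    A minimal sketch of a spectral surface embedding is shown below, using a uniform graph Laplacian of a toy mesh as a stand-in for the cotangent Laplace-Beltrami operator; the paper's conformal metric optimization in the embedding space is not reproduced.

    import numpy as np
    from scipy.sparse import coo_matrix, diags

    def graph_laplacian(n_vertices, faces):
        """Uniform graph Laplacian of a triangle mesh -- a simple stand-in
        for the cotangent Laplace-Beltrami operator used in the paper."""
        i = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2],
                            faces[:, 1], faces[:, 2], faces[:, 0]])
        j = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0],
                            faces[:, 0], faces[:, 1], faces[:, 2]])
        W = coo_matrix((np.ones(len(i)), (i, j)),
                       shape=(n_vertices, n_vertices)).tocsr()
        W.data[:] = 1.0                              # binarize duplicate edges
        return diags(np.asarray(W.sum(axis=1)).ravel()) - W

    # Tiny toy mesh: a tetrahedron (4 vertices, 4 triangular faces).
    faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
    L = graph_laplacian(4, faces)

    # Eigen-decomposition; on real cortical meshes a sparse solver such as
    # scipy.sparse.linalg.eigsh would be used instead of a dense one.
    vals, vecs = np.linalg.eigh(L.toarray())

    # Drop the constant eigenvector and scale by 1/sqrt(eigenvalue) to obtain
    # an isometry-invariant spectral embedding of each vertex.
    embedding = vecs[:, 1:4] / np.sqrt(vals[1:4])
    print(embedding.shape)                           # (n_vertices, 3)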
  • Surface-Constrained Nonrigid Registration for Dose Monitoring in Prostate Cancer Radiotherapy

    Page(s): 1464 - 1474

    This paper addresses the issue of cumulative dose estimation from cone beam computed tomography (CBCT) images in prostate cancer radiotherapy. It focuses on the dose received by the surfaces of the main organs at risk, namely the bladder and rectum. We have proposed both a surface-constrained dose accumulation approach and its extensive evaluation. Our approach relied on the nonrigid registration (NRR) of daily acquired CBCT images onto the planning CT image. The proposed NRR method was based on a Demons-like algorithm implemented in combination with a mutual information metric. It allowed different levels of geometrical constraints to be considered, ensuring better point-to-point correspondence, especially when large deformations occurred or in high dose gradient areas. Three implementations were compared: 1) full iconic NRR; 2) iconic NRR constrained with landmarks (LCNRR); and 3) NRR constrained with full delineation of organs (DBNRR). To obtain reference data, we designed a numerical phantom based on finite-element modeling and image simulation. The methods were assessed on both the numerical phantom and real patient data in order to quantify uncertainties in terms of dose accumulation. The LCNRR method appeared to constitute a good compromise for dose monitoring in clinical practice.

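    For orientation, the sketch below runs a plain Demons registration with SimpleITK on two synthetic 2-D images and warps the moving image with the resulting displacement field. The paper's method additionally uses a mutual information criterion and surface/landmark constraints, none of which appear here; the images, iteration count, and smoothing level are illustrative assumptions.

    import numpy as np
    import SimpleITK as sitk

    # Synthetic 2-D test images: the same blob, shifted a few pixels, standing
    # in for a planning CT and a daily CBCT (real use would load DICOM volumes).
    grid = np.indices((64, 64)).astype(float)
    fixed_np = np.exp(-((grid[0] - 32) ** 2 + (grid[1] - 32) ** 2) / 50.0)
    moving_np = np.exp(-((grid[0] - 36) ** 2 + (grid[1] - 30) ** 2) / 50.0)
    fixed = sitk.GetImageFromArray(fixed_np.astype(np.float32))
    moving = sitk.GetImageFromArray(moving_np.astype(np.float32))

    # Classic Demons registration (intensity-difference driven).
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)
    demons.SetStandardDeviations(1.5)        # Gaussian smoothing of the field
    displacement = demons.Execute(fixed, moving)

    # Warp the moving image with the estimated dense displacement field.
    transform = sitk.DisplacementFieldTransform(
        sitk.Cast(displacement, sitk.sitkVectorFloat64))
    resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    print(sitk.GetArrayFromImage(resampled).shape)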
  • Identifying the Neuroanatomical Basis of Cognitive Impairment in Alzheimer's Disease by Correlation- and Nonlinearity-Aware Sparse Bayesian Learning

    Page(s): 1475 - 1487

    Predicting the cognitive performance of subjects from their magnetic resonance imaging (MRI) measures and identifying relevant imaging biomarkers are important research topics in the study of Alzheimer's disease. Traditionally, this task is performed by formulating a linear regression problem. Recently, it has been found that using a linear sparse regression model can achieve better prediction accuracy. However, most existing studies focus only on exploiting the sparsity of the regression coefficients, ignoring useful structure information in the regression coefficients. Also, these linear sparse models may not capture more complicated, and possibly nonlinear, relationships between cognitive performance and MRI measures. Motivated by these observations, in this work we build a sparse multivariate regression model for this task and propose an empirical sparse Bayesian learning algorithm. Different from existing sparse algorithms, the proposed algorithm models the response as a nonlinear function of the predictors by extending the predictor matrix with block structures. Further, it exploits not only inter-vector correlation among regression coefficient vectors, but also intra-block correlation in each regression coefficient vector. Experiments on the Alzheimer's Disease Neuroimaging Initiative database showed that the proposed algorithm not only achieved better prediction performance than state-of-the-art competing methods, but also effectively identified biologically meaningful patterns.

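    Plain sparse Bayesian regression via automatic relevance determination (ARD), which the proposed algorithm extends with nonlinearity and correlation modeling, can be run in a few lines with scikit-learn; the simulated design matrix and cognitive scores below are assumptions for illustration.

    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(3)

    # Stand-in design: 150 subjects x 90 imaging measures (e.g., regional
    # volume/thickness features), of which only a handful drive the score.
    X = rng.standard_normal((150, 90))
    w_true = np.zeros(90)
    w_true[[3, 17, 42, 60]] = [1.5, -2.0, 1.0, 0.8]
    y = X @ w_true + 0.5 * rng.standard_normal(150)     # cognitive score

    # Sparse Bayesian learning via automatic relevance determination. The
    # paper's algorithm additionally models nonlinearity and correlation
    # structure among coefficients; plain ARD is shown only as a baseline.
    model = ARDRegression()
    model.fit(X, y)

    selected = np.flatnonzero(np.abs(model.coef_) > 0.1)
    print("selected predictors:", selected)             # ideally {3, 17, 42, 60}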
  • Automated Polyp Detection in Colon Capsule Endoscopy

    Page(s): 1488 - 1502

    Colorectal polyps are important precursors to colon cancer, a major health problem. Colon capsule endoscopy is a safe and minimally invasive examination procedure, in which images of the intestine are obtained via digital cameras on board a small capsule ingested by the patient. The video sequence is then analyzed for the presence of polyps. We propose an algorithm that relieves the labor of a human operator analyzing the frames in the video sequence. The algorithm acts as a binary classifier, which labels each frame as either containing polyps or not, based on geometrical analysis and the texture content of the frame. We assume that polyps are characterized as protrusions that are mostly round in shape. Thus, a best-fit ball radius is used as the decision parameter of the classifier. We present a statistical performance evaluation of our approach on a data set containing over 18 900 frames from the endoscopic video sequences of five adult patients. The algorithm achieves 47% sensitivity per frame and 81% sensitivity per polyp at a specificity level of 90%. On average, with a video sequence length of 3747 frames, only 367 false positive frames need to be inspected by an operator.

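    The decision rule described, thresholding a best-fit ball radius and reporting frame-level sensitivity at a chosen specificity, can be sketched as follows; the radius distributions, polyp prevalence, and operating point below are made-up stand-ins, not the paper's data.

    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical per-frame feature: radius (mm) of the best-fit ball to the
    # most protruding structure, plus a binary ground-truth polyp label.
    n_frames = 2000
    has_polyp = rng.random(n_frames) < 0.05
    radius = np.where(has_polyp,
                      rng.normal(4.0, 1.5, n_frames),    # polyp frames
                      rng.normal(1.5, 1.0, n_frames))    # normal frames

    def evaluate(threshold):
        """Frame-level sensitivity/specificity of the radius-threshold rule."""
        predicted = radius >= threshold
        sens = np.mean(predicted[has_polyp])
        spec = np.mean(~predicted[~has_polyp])
        return sens, spec

    # Pick the threshold giving ~90% specificity, the paper's operating point.
    thresholds = np.linspace(0.0, 8.0, 200)
    specs = np.array([evaluate(t)[1] for t in thresholds])
    t90 = thresholds[np.argmin(np.abs(specs - 0.90))]
    print("threshold %.2f -> sensitivity %.2f, specificity %.2f"
          % ((t90,) + evaluate(t90)))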
  • Multi-Dimensional Tumor Detection in Automated Whole Breast Ultrasound Using Topographic Watershed

    Page(s): 1503 - 1511

    Automated whole breast ultrasound (ABUS) is becoming a popular screening modality for whole breast examination. Compared to conventional handheld ultrasound, ABUS is operator-independent and feasible for mass screening. However, reviewing hundreds of slices in an ABUS image volume is time-consuming. A computer-aided detection (CADe) system based on the watershed transform is proposed in this study to accelerate the review. The watershed transform was applied to gather similar tissues around local minima into homogeneous regions. The likelihood of each region being a tumor was estimated using quantitative morphology, intensity, and texture features in the 2-D/3-D false positive reduction (FPR). The collected database comprised 68 benign and 65 malignant tumors. As a result, the proposed system achieved sensitivities of 100% (133/133), 90% (121/133), and 80% (107/133) with 9.44, 5.42, and 3.33 FPs per pass, respectively. The figure of merit of the combination of the three feature sets is 0.46, which is significantly better than that of the other feature sets (p-value < 0.05). In summary, the proposed CADe system, based on multi-dimensional FPR with the integrated feature set, is promising for detecting tumors in ABUS images.

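    The first stage described, a watershed transform that grows homogeneous regions around local minima followed by feature-based candidate ranking, can be sketched with scikit-image as below; the speckle-like toy image and the crude intensity ranking stand in for the paper's ABUS volumes and its full 2-D/3-D false positive reduction.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.morphology import local_minima
    from skimage.segmentation import watershed
    from skimage.measure import regionprops

    rng = np.random.default_rng(5)

    # Toy 2-D "ultrasound slice": speckle-like background plus one darker
    # blob standing in for a hypoechoic tumor candidate.
    img = rng.normal(0.5, 0.05, (128, 128))
    yy, xx = np.mgrid[:128, :128]
    img -= 0.3 * np.exp(-((yy - 64) ** 2 + (xx - 80) ** 2) / (2 * 8.0 ** 2))
    img = ndi.gaussian_filter(img, 2)

    # Watershed from local minima gathers similar tissue around each minimum
    # into homogeneous candidate regions (as in the paper's first stage).
    markers, _ = ndi.label(local_minima(img))
    labels = watershed(img, markers)

    # Crude stand-in for the false-positive-reduction stage: rank candidate
    # regions by simple intensity/morphology features.
    candidates = sorted(regionprops(labels, intensity_image=img),
                        key=lambda reg: reg.mean_intensity)[:3]
    for reg in candidates:
        print(f"label {reg.label}: area={reg.area}, "
              f"mean intensity={reg.mean_intensity:.3f}")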
  • Volumetric Quantification of Airway Wall in CT via Collision-Free Active Surface Model: Application to Asthma Assessment

    Page(s): 1512 - 1526

    According to an emerging idea in asthma phenotyping, incorporating local morphometric information on airway wall thickness would better account for the process of airway remodeling as an indicator of pathology or therapeutic impact. It is thus important that such information be provided uniformly along the airway tree, not on a sparse (cross-section) sampling basis. The volumetric segmentation of the airway wall from CT data is the issue addressed in this paper, by exploiting a patient-specific active surface model. An original aspect taken into account in the proposed deformable model is the management of self-collisions for this complex morphology. The analysis of several solutions led to the design of a motion vector field, specific to the patient geometry, to guide the deformation. The segmentation result, presented as two embedded inner/outer surfaces of the wall, allows the quantification of tissue thickness based on a locally defined measure sensitive to even small surface irregularities. The method is validated with respect to several ground-truth simulations of pulmonary CT data with different airway geometries and acquisition protocols, showing accuracy within the CT resolution range. Results from an ongoing clinical study on moderate and severe asthma are presented and discussed.

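    Once inner and outer wall surfaces are available, a simple local thickness estimate is the distance from each inner-surface point to the nearest outer-surface point. The sketch below applies this nearest-neighbour proxy to a toy concentric-cylinder geometry; it is not the paper's locally defined measure or its collision-free deformable model.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(6)

    # Toy stand-in for the segmented airway wall: inner (lumen) and outer
    # surfaces sampled as point clouds on concentric cylinders (3 mm, 4 mm).
    theta = rng.uniform(0, 2 * np.pi, 2000)
    z = rng.uniform(0, 20, 2000)
    inner = np.column_stack([3.0 * np.cos(theta), 3.0 * np.sin(theta), z])
    outer = np.column_stack([4.0 * np.cos(theta), 4.0 * np.sin(theta), z])

    # Local wall thickness: for each inner-surface point, distance to the
    # closest outer-surface point.
    thickness, _ = cKDTree(outer).query(inner)
    print(f"mean wall thickness: {thickness.mean():.2f} mm "
          f"(expected about 1 mm for this toy geometry)")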
  • Adaptive Quantification and Longitudinal Analysis of Pulmonary Emphysema With a Hidden Markov Measure Field Model

    Page(s): 1527 - 1540

    The extent of pulmonary emphysema is commonly estimated from CT scans by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the presented model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was applied to a longitudinal data set with 87 subjects and a total of 365 scans acquired with varying imaging protocols. The resulting emphysema estimates had very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. The generated emphysema delineations promise advantages for regional analysis of emphysema extent and progression.

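    The baseline score that the paper improves upon, the proportional area of lung voxels below a fixed attenuation threshold (commonly -950 HU), is straightforward to compute; the sketch below does so on a toy HU volume. The adaptive hidden Markov measure field segmentation itself is not reproduced.

    import numpy as np

    rng = np.random.default_rng(7)

    # Toy lung CT volume in Hounsfield units: mostly normal parenchyma around
    # -860 HU with a pocket of emphysema-like voxels around -970 HU.
    lung = rng.normal(-860, 40, size=(40, 40, 40))
    lung[10:20, 10:20, 10:20] = rng.normal(-970, 15, size=(10, 10, 10))

    # Standard threshold-based score used as the baseline: the percentage of
    # lung voxels below -950 HU (often written %LAA-950).
    laa_950 = 100.0 * np.mean(lung < -950)
    print(f"%LAA-950 = {laa_950:.1f}%")

    # The paper replaces this fixed threshold with intensity-distribution
    # models and a hidden Markov measure field segmentation that adapts to
    # protocol and inspiration-level differences; that model is not shown here.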
  • Application of Tolerance Limits to the Characterization of Image Registration Performance

    Page(s): 1541 - 1550

    Deformable image registration is used increasingly in image-guided interventions and other applications. However, validation and characterization of registration performance remain areas that require further study. We propose an analysis methodology for deriving tolerance limits on the initial conditions for deformable registration that reliably lead to a successful registration. This approach results in a concise summary of the probability of registration failure, while accounting for the variability in the test data. The (β, γ) tolerance limit can be interpreted as a value of the input parameter that leads to a successful registration outcome in at least 100β% of cases with 100γ% confidence. The utility of the methodology is illustrated by summarizing the performance of a deformable registration algorithm evaluated in three different experimental setups of increasing complexity. Our examples are based on clinical data collected during MRI-guided prostate biopsy, registered using a publicly available deformable registration tool. The results indicate that the proposed methodology can be used to generate concise graphical summaries of the experiments, as well as a probabilistic estimate of the registration outcome for a future sample. Its use may facilitate improved objective assessment, comparison, and retrospective stress-testing of deformable registration algorithms.

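    One plausible reading of the (β, γ) tolerance limit, the largest initial misalignment for which the success rate is at least β with confidence γ, can be computed from binary success/failure records with a one-sided Clopper-Pearson bound, as sketched below on simulated trials. The simulated success model and the cumulative-threshold interpretation are assumptions, not necessarily the paper's exact procedure.

    import numpy as np
    from scipy.stats import beta

    rng = np.random.default_rng(8)

    # Hypothetical experiment: for each registration trial we record the
    # initial misalignment (mm) and whether the registration succeeded;
    # success becomes less likely as the initial misalignment grows.
    n_trials = 400
    init_error = rng.uniform(0, 20, n_trials)
    success = rng.random(n_trials) < 1.0 / (1.0 + np.exp(0.6 * (init_error - 12)))

    def lower_conf_bound(k, n, gamma):
        """One-sided Clopper-Pearson lower confidence bound on a success rate."""
        return 0.0 if k == 0 else beta.ppf(1 - gamma, k, n - k + 1)

    def tolerance_limit(beta_level=0.90, gamma=0.95):
        """Largest misalignment t such that, among trials starting at or below
        t, the success rate is >= beta_level with confidence gamma."""
        best = None
        for t in np.linspace(1, 20, 100):
            mask = init_error <= t
            n, k = mask.sum(), success[mask].sum()
            if n > 0 and lower_conf_bound(k, n, gamma) >= beta_level:
                best = t
        return best

    print("(0.90, 0.95) tolerance limit (mm):", tolerance_limit())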
  • A Unified Graphical Models Framework for Automated Mitosis Detection in Human Embryos

    Page(s): 1551 - 1562

    Time-lapse microscopy has emerged as an important modality for studying human embryo development, as mitosis events can provide insight into embryo health and fate. Mitosis detection can be performed through tracking of embryonic cells (tracking-based) or from low-level image features and classifiers (tracking-free). Tracking-based approaches are challenged by a high-dimensional search space, weak features, outliers, missing data, multiple deformable targets, and a weak motion model. Tracking-free approaches are data driven and complement tracking-based approaches. We pose mitosis detection as augmented simultaneous segmentation and classification in a conditional random field (CRF) framework that combines both approaches. It uses a rich set of discriminative features and their spatiotemporal context. It performs a dual-pass approximate inference that addresses the high dimensionality of tracking and combines results from both components. For 312 clinical sequences we measured division events to within 30 min and observed improvements of 25.6% and 32.9% over the purely tracking-based and tracking-free approaches, respectively, and close to an order of magnitude over a traditional particle filter. While our work was motivated by human embryo development, it can be extended to other detection problems in image sequences of evolving cell populations.

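    As a drastically simplified stand-in for the paper's CRF inference, the sketch below decodes a two-state (no-division/division) linear chain by Viterbi, combining per-frame classifier probabilities with a switching penalty; the synthetic frame scores and penalty weight are assumptions, and the full segmentation-plus-classification model is not reproduced.

    import numpy as np

    rng = np.random.default_rng(11)

    # Hypothetical per-frame evidence from a tracking-free classifier: the
    # probability that a division is visible in each frame, noisy around a
    # true event at frames 40-43.
    n_frames = 120
    frame_idx = np.arange(n_frames)
    p_div = np.clip(0.08 + 0.8 * ((frame_idx >= 40) & (frame_idx <= 43))
                    + 0.1 * rng.standard_normal(n_frames), 1e-3, 1 - 1e-3)

    # Costs per frame and state (0 = no division, 1 = division); the same
    # array is reused to accumulate total path costs during the forward pass.
    unary = np.stack([-np.log(1 - p_div), -np.log(p_div)], axis=1)
    switch_cost = 2.0                        # pairwise penalty for switching

    cost = unary[0].copy()
    back = np.zeros((n_frames, 2), dtype=int)
    for t in range(1, n_frames):
        for s in range(2):
            prev_costs = cost + switch_cost * (np.arange(2) != s)
            back[t, s] = np.argmin(prev_costs)
            unary[t, s] += prev_costs[back[t, s]]
        cost = unary[t]

    # Backtrack the minimum-cost state sequence.
    states = np.zeros(n_frames, dtype=int)
    states[-1] = np.argmin(cost)
    for t in range(n_frames - 2, -1, -1):
        states[t] = back[t + 1, states[t + 1]]
    print("detected division frames:", np.flatnonzero(states))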
  • ML-Reconstruction for TOF-PET With Simultaneous Estimation of the Attenuation Factors

    Page(s): 1563 - 1572

    In positron emission tomography (PET), attenuation correction is typically done based on information obtained from transmission tomography. Recent studies show that time-of-flight (TOF) PET emission data allow joint estimation of activity and attenuation images. Mathematical analysis revealed that the joint estimation problem is determined up to a scale factor. In this work, we propose a maximum likelihood reconstruction algorithm that jointly estimates the activity image together with the sinogram of the attenuation factors. The algorithm is evaluated with 2-D and 3-D simulations as well as clinical TOF-PET measurements of a patient scan and compared to reference reconstructions. The robustness of the algorithm to possible imperfect scanner calibration is demonstrated with reconstructions of the patient scan ignoring the varying detector sensitivities.

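    The activity update at the heart of such ML reconstruction is the classic MLEM iteration, sketched below on a tiny random system matrix; the paper's joint estimation additionally alternates this with an update of the attenuation-factor sinogram, which is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(9)

    # Tiny toy system: 30-pixel activity image, 80 detection lines of response
    # with a random nonnegative system matrix (a stand-in for real geometry).
    n_pix, n_lor = 30, 80
    A = rng.random((n_lor, n_pix))
    x_true = np.zeros(n_pix)
    x_true[10:18] = 5.0
    y = rng.poisson(A @ x_true)                      # noisy emission data

    # Plain MLEM activity update: x <- x * A^T(y / Ax) / A^T 1.
    x = np.ones(n_pix)
    sens = A.sum(axis=0)                             # sensitivity image A^T 1
    for _ in range(100):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)

    print("relative error:",
          np.linalg.norm(x - x_true) / np.linalg.norm(x_true))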
  • HEp-2 Cell Classification Using Shape Index Histograms With Donut-Shaped Spatial Pooling

    Page(s): 1573 - 1580

    We present a new method for the automatic classification of indirect immunofluorescence images of HEp-2 cells into different staining pattern classes. Our method is based on a new texture measure called shape index histograms, which captures second-order image structure at multiple scales. Moreover, we introduce a spatial decomposition scheme which is radially symmetric and suitable for cell images. The spatial decomposition is performed using donut-shaped pooling regions of varying sizes when gathering histogram contributions. We evaluate our method using both the ICIP 2013 and the ICPR 2012 competition datasets. Our results show that shape index histograms are superior to other popular texture descriptors for HEp-2 cell classification. Moreover, when compared to other automated systems for HEp-2 cell classification, we show that shape index histograms are very competitive, especially considering the relatively low complexity of the method.

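    The descriptor can be sketched with scikit-image's shape_index followed by histogramming over annular (donut-shaped) pooling regions, as below; the toy cell image, ring radii, and bin count are illustrative assumptions rather than the paper's exact configuration.

    import numpy as np
    from skimage.feature import shape_index

    rng = np.random.default_rng(10)

    # Toy stand-in for a single-cell fluorescence image: a bright cell-like
    # blob with speckled internal texture.
    yy, xx = np.mgrid[:96, :96].astype(float)
    r = np.hypot(yy - 48, xx - 48)
    cell = np.exp(-(r / 30.0) ** 2) + 0.1 * rng.standard_normal((96, 96))

    # Second-order local structure summarized by the shape index (in [-1, 1]).
    si = shape_index(cell, sigma=2)

    # Donut-shaped (annular) spatial pooling: one shape-index histogram per
    # ring around the cell centre, concatenated into the final descriptor.
    bins = np.linspace(-1, 1, 9)
    ring_edges = [0, 12, 24, 36, 48]
    descriptor = []
    for r_in, r_out in zip(ring_edges[:-1], ring_edges[1:]):
        ring = (r >= r_in) & (r < r_out) & np.isfinite(si)
        hist, _ = np.histogram(si[ring], bins=bins, density=True)
        descriptor.append(hist)

    descriptor = np.concatenate(descriptor)          # feed this to a classifier
    print(descriptor.shape)                          # (4 rings x 8 bins,) = (32,)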
  • IEEE Transactions on Medical Imaging information for authors

    Page(s): C3
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Medical Imaging (T-MI) encourages the submission of manuscripts on imaging of body structures, morphology and function, and imaging of microscopic biological entities. The journal publishes original contributions on medical imaging achieved by various modalities, such as ultrasound, X-rays (including CT), magnetic resonance, radionuclides, microwaves, and light, as well as medical image processing and analysis, visualization, pattern recognition, and related methods. Studies involving highly technical perspectives are most welcome. The journal focuses on a unified common ground where instrumentation, systems, components, hardware and software, mathematics, and physics contribute to the studies.


Meet Our Editors

Editor-in-Chief
Milan Sonka
Iowa Institute for Biomedical Imaging
3016B SC, Department of Electrical and Computer Engineering
The University of Iowa
Iowa City, IA 52242 USA
milan-sonka@uiowa.edu