
Medical Imaging, IEEE Transactions on

Issue 4 • Aug. 1998


Displaying Results 1 - 17 of 17
  • Segmentation of intrathoracic airway trees: a fuzzy logic approach

    Publication Year: 1998 , Page(s): 489 - 497
    Cited by:  Papers (36)  |  Patents (12)

    Three-dimensional (3-D) analysis of airway trees extracted from computed tomography (CT) image data can provide objective information about lung structure and function. However, manual analysis of 3-D lung CT images is tedious, time-consuming and, thus, impractical for routine clinical care. The authors have previously reported an automated rule-based method for extraction of airway trees from 3-D CT images using a priori knowledge about airway-tree anatomy. Although the method's sensitivity was quite good, its specificity suffered from a large number of falsely detected airways. The authors present a new approach to airway-tree detection based on fuzzy logic that increases the method's specificity without compromising its sensitivity. The method was validated in 32 CT image slices randomly selected from five volumetric canine electron-beam CT data sets. The fuzzy-logic method significantly outperformed the previously reported rule-based method (p<0.002).

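    A minimal sketch of the fuzzy-logic idea, for illustration only: each anatomical cue for a candidate airway is mapped to a [0, 1] membership value, and the memberships are combined with a fuzzy AND (minimum). The cue names, membership shapes, and cutoff below are assumptions, not the authors' actual rule set.

    ```python
    import numpy as np

    def trapezoid(x, a, b, c, d):
        """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
        rise = np.clip((x - a) / (b - a), 0.0, 1.0)
        fall = np.clip((d - x) / (d - c), 0.0, 1.0)
        return np.minimum(rise, fall)

    def airway_membership(mean_hu, wall_contrast, radius_mm):
        """Fuzzy AND of three illustrative airway cues (all constants are assumptions)."""
        m_lumen = trapezoid(mean_hu, -1100.0, -1000.0, -900.0, -700.0)  # air-like lumen density
        m_wall = trapezoid(wall_contrast, 50.0, 150.0, 1e4, 2e4)        # bright surrounding wall
        m_size = trapezoid(radius_mm, 0.5, 1.0, 10.0, 15.0)             # plausible airway caliber
        return min(m_lumen, m_wall, m_size)

    # Candidates scoring below some cutoff (say 0.5) would be rejected as false airways.
    print(airway_membership(mean_hu=-950.0, wall_contrast=200.0, radius_mm=2.0))
    ```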
  • Detection of microcalcifications in digital mammograms using wavelets

    Publication Year: 1998 , Page(s): 498 - 509
    Cited by:  Papers (73)  |  Patents (2)

    This paper presents an approach for detecting microcalcifications in digital mammograms employing wavelet-based subband image decomposition. The microcalcifications appear as small clusters of a few pixels with relatively high intensity compared with their neighboring pixels. These image features can be preserved by a detection system that employs a suitable image transform which can localize the signal characteristics in the original and the transform domain. Given that the microcalcifications correspond to high-frequency components of the image spectrum, detection of microcalcifications is achieved by decomposing the mammograms into different frequency subbands, suppressing the low-frequency subband, and, finally, reconstructing the mammogram from the subbands containing only high frequencies. Preliminary experiments indicate that further studies are needed to investigate the potential of wavelet-based subband image decomposition as a tool for detecting microcalcifications in digital mammograms.

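    The suppress-and-reconstruct step lends itself to a short sketch. The version below uses PyWavelets with a separable decomposition; the wavelet, depth, and final threshold are illustrative assumptions rather than the paper's exact filter bank.

    ```python
    import numpy as np
    import pywt

    def highpass_reconstruction(image, wavelet="db4", levels=3):
        """Zero the coarsest approximation subband and rebuild from the details."""
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
        coeffs[0] = np.zeros_like(coeffs[0])   # suppress the low-frequency subband
        return pywt.waverec2(coeffs, wavelet)

    # Microcalcifications survive as strong local responses in the reconstruction,
    # so a simple global threshold marks candidate pixels (assumed decision rule).
    mammo = np.random.rand(512, 512)           # stand-in for a digitized mammogram
    detail = highpass_reconstruction(mammo)
    candidates = detail > detail.mean() + 3.0 * detail.std()
    ```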
  • Automated seeded lesion segmentation on digital mammograms

    Publication Year: 1998 , Page(s): 510 - 517
    Cited by:  Papers (80)  |  Patents (5)

    Segmenting lesions is a vital step in many computerized mass-detection schemes for digital (or digitized) mammograms. The authors have developed two novel techniques for segmenting mass lesions, or other similar nodular structures, from the surrounding background: one based on a single feature, the radial gradient index (RGI), and one based on simple probabilistic models. In both methods a series of image partitions is created using gray-level information as well as prior knowledge of the shape of typical mass lesions. With the former method the partition that maximizes the RGI is selected. In the latter method, probability distributions for gray levels inside and outside the partitions are estimated and then used to determine the probability of the image given each partition. The partition that maximizes this probability is selected as the final lesion partition (contour). The authors tested these methods against a conventional region-growing algorithm using a database of biopsy-proven, malignant lesions and found that the new lesion segmentation algorithms more closely match radiologists' outlines of these lesions. At an overlap threshold of 0.30, gray-level region growing correctly delineates 62% of the lesions in the authors' database, while the RGI and probabilistic segmentation algorithms correctly segment 92% and 96% of the lesions, respectively.

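    The RGI criterion is compact enough to state in code. The sketch below scores one candidate partition as the normalized radial component of the image gradient along the partition margin, in the spirit of the usual RGI formulation; the margin extraction and sign handling are simplified assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def radial_gradient_index(image, mask, seed):
        """RGI of a binary partition `mask` grown around `seed` = (row, col)."""
        gy, gx = np.gradient(image.astype(float))
        margin = mask & ~ndimage.binary_erosion(mask)      # partition boundary pixels
        yy, xx = np.nonzero(margin)
        ry, rx = yy - seed[0], xx - seed[1]
        rnorm = np.hypot(ry, rx) + 1e-12                   # unit radial directions
        radial = (gy[yy, xx] * ry + gx[yy, xx] * rx) / rnorm
        mag = np.hypot(gy[yy, xx], gx[yy, xx])
        return radial.sum() / (mag.sum() + 1e-12)          # lies in [-1, 1]

    # The segmentation loop would evaluate this over a series of seeded partitions
    # and keep the extremal one (the sign convention depends on lesion polarity).
    ```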
  • Image contrast enhancement based on a histogram transformation of local standard deviation

    Publication Year: 1998 , Page(s): 518 - 531
    Cited by:  Papers (41)  |  Patents (13)

    The adaptive contrast enhancement (ACE) algorithm, which uses contrast gains (CGs) to adjust the high-frequency components of images, is a well-known technique for medical image processing. Conventionally, the CG is either a constant or inversely proportional to the local standard deviation (LSD). However, conventional approaches are known to entail noise overenhancement and ringing artifacts. In this paper, the authors present a new ACE algorithm that eliminates these problems. First, a mathematical model for the LSD distribution is proposed by extending Hunt's (1976) image model. Then, the CG is formulated as a function of the LSD. The function, which is nonlinear, is determined by the transformation between the LSD histogram and a desired LSD distribution. Using the authors' formulation, it can be shown that conventional ACEs use linear functions to compute the new CGs. The proposed nonlinear function produces an adequate CG, resulting in little noise overenhancement and few ringing artifacts. Finally, simulations using X-ray images are provided to demonstrate the effectiveness of the authors' new algorithm.

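    The gain-times-residual structure of ACE is easy to show. The sketch below uses the conventional inverse-LSD gain that the paper improves upon; the window size and gain cap are assumed values, and the paper's contribution would replace the `gain` line with a nonlinear, histogram-derived mapping of the LSD.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def ace(image, window=15, gain_cap=5.0):
        """Adaptive contrast enhancement with a conventional CG ~ 1/LSD."""
        x = image.astype(float)
        mean = uniform_filter(x, window)                     # low-frequency component
        var = uniform_filter(x * x, window) - mean ** 2
        lsd = np.sqrt(np.clip(var, 0.0, None))               # local standard deviation
        gain = np.clip(lsd.mean() / (lsd + 1e-6), 1.0, gain_cap)
        return mean + gain * (x - mean)                       # amplify the residual
    ```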
  • Speckle reduction and contrast enhancement of echocardiograms via multiscale nonlinear processing

    Publication Year: 1998 , Page(s): 532 - 540
    Cited by:  Papers (92)  |  Patents (5)

    This paper presents an algorithm for speckle reduction and contrast enhancement of echocardiographic images. Within a framework of multiscale wavelet analysis, the authors apply wavelet shrinkage techniques to eliminate noise while preserving the sharpness of salient features. In addition, nonlinear processing of feature energy is carried out to enhance contrast within local structures and along object boundaries. The authors show that the algorithm is capable of not only reducing speckle, but also enhancing features of diagnostic importance, such as myocardial walls in two-dimensional echocardiograms obtained from the parasternal short-axis view. Shrinkage of wavelet coefficients via soft thresholding within finer levels of scale is carried out on coefficients of logarithmically transformed echocardiograms. Enhancement of echocardiographic features is accomplished via nonlinear stretching followed by hard thresholding of wavelet coefficients within selected (midrange) spatial-frequency levels of analysis. The authors formulate the denoising and enhancement problem, introduce a class of dyadic wavelets, and describe their implementation of a dyadic wavelet transform. Their approach to speckle reduction and contrast enhancement was shown to be less affected by pseudo-Gibbs phenomena. The authors show experimentally that this technique produced superior results, both qualitatively and quantitatively, when compared with existing denoising methods alone. A study using a database of clinical echocardiographic images suggests that such denoising and enhancement may improve the consistency of expert observers in manually defining borders.

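    The denoising half of the pipeline can be sketched briefly: log-transform (so multiplicative speckle becomes approximately additive), soft-threshold the finest detail coefficients, and invert. The decimated transform, wavelet, and threshold rule below are assumptions; the paper uses an undecimated dyadic wavelet transform.

    ```python
    import numpy as np
    import pywt

    def despeckle(image, wavelet="db4", levels=4, fine_levels=2):
        logim = np.log1p(image.astype(float))                 # speckle -> ~additive noise
        coeffs = pywt.wavedec2(logim, wavelet, level=levels)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745    # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(logim.size))       # universal threshold
        for i in range(len(coeffs) - fine_levels, len(coeffs)):   # finest scales only
            coeffs[i] = tuple(pywt.threshold(c, thr, mode="soft") for c in coeffs[i])
        return np.expm1(pywt.waverec2(coeffs, wavelet))
    ```

    The enhancement half would additionally stretch and hard-threshold the midrange scales before reconstruction.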
  • Characterization of visually similar diffuse diseases from B-scan liver images using nonseparable wavelet transform

    Publication Year: 1998 , Page(s): 541 - 549
    Cited by:  Papers (33)  |  Patents (1)

    This paper describes a new approach to texture characterization, based on nonseparable wavelet decomposition, and its application to the discrimination of visually similar diffuse diseases of the liver. The proposed feature-extraction algorithm applies a nonseparable quincunx wavelet transform and uses the energies of the transformed regions to characterize textures. Classification experiments on a set of three different tissue types show that the scale/frequency approach, particularly one based on the nonseparable wavelet transform, can be a reliable method for texture characterization and analysis of B-scan liver images. Comparison between the quincunx and the traditional wavelet decomposition suggests that the quincunx transform is more appropriate for characterizing noisy data and for practical applications requiring lower rotational sensitivity.

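    Energy-per-subband features are simple to compute. For brevity the sketch below uses a separable decomposition as a stand-in; the paper's point is precisely that a nonseparable quincunx decomposition gives such features lower rotational sensitivity.

    ```python
    import numpy as np
    import pywt

    def subband_energies(region, wavelet="db2", levels=3):
        """Mean energy of every wavelet subband of a tissue region."""
        coeffs = pywt.wavedec2(region.astype(float), wavelet, level=levels)
        feats = [np.mean(coeffs[0] ** 2)]                  # approximation energy
        for detail in coeffs[1:]:
            feats.extend(np.mean(d ** 2) for d in detail)  # H, V, D detail energies
        return np.array(feats)

    # Tissue discrimination then reduces to comparing feature vectors, e.g. with a
    # nearest-neighbor rule over labeled training regions (assumed classifier).
    ```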
  • Quantitative microwave imaging with a 2.45-GHz planar microwave camera

    Publication Year: 1998 , Page(s): 550 - 561
    Cited by:  Papers (56)  |  Patents (2)

    This paper presents microwave tomographic reconstructions of the complex permittivity of lossy dielectric objects immersed in water, from experimental multiview near-field data obtained with a 2.45-GHz planar active microwave camera. An iterative reconstruction algorithm based on the Levenberg-Marquardt method was used to solve the nonlinear matrix equation which results when applying a moment method to the electric-field integral representation. The effects of uncertainties in experimental parameters, such as the complex permittivity of the exterior medium, the imaging-system geometry, and the incident field at the object location, are illustrated by means of reconstructions from synthetic data. It appears that the uncertainties in the incident field have the strongest impact on the reconstructions. A receiver calibration procedure has been implemented, and several ways of estimating the incident field at the object location have been assessed.

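    The Levenberg-Marquardt step at the heart of such reconstructions can be written generically. Here `forward` and `jacobian` stand in for the moment-method field solver and its sensitivity matrix, which are assumptions; the update itself is the standard damped Gauss-Newton form.

    ```python
    import numpy as np

    def lm_step(x, measured, forward, jacobian, damping):
        """One Levenberg-Marquardt update of the complex permittivity vector x."""
        r = measured - forward(x)                        # residual at the receivers
        J = jacobian(x)
        H = J.conj().T @ J + damping * np.eye(x.size)    # damped normal matrix
        return x + np.linalg.solve(H, J.conj().T @ r)

    # The reconstruction loop iterates lm_step, shrinking `damping` as the residual
    # norm falls; keeping x complex handles real and imaginary permittivity together.
    ```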
  • Convergence and stability assessment of Newton-Kantorovich reconstruction algorithms for microwave tomography

    Publication Year: 1998 , Page(s): 562 - 570
    Cited by:  Papers (43)  |  Patents (2)

    For the newly developed iterative Newton-Kantorovich reconstruction techniques, the quality of the final image depends on both experimental and model noise. Experimental noise is inherent to any experimental acquisition scheme, while model noise refers to the accuracy with which the numerical model used in the reconstruction process reproduces the experimental setup. This paper provides a systematic assessment of the effect of the major sources of experimental and model noise on the quality of the final image. This assessment is conducted on experimental data obtained with a microwave circular scanner operating at 2.33 GHz. Targets to be imaged include realistic biological structures, such as a human forearm, as well as calibrated samples for the sake of accuracy evaluation. The results provide a quantitative estimation of the effect of experimental factors, such as the temperature of the immersion medium, frequency, and signal-to-noise ratio, as well as of various numerical parameters.

  • Visual assessment of the accuracy of retrospective registration of MR and CT images of the brain

    Publication Year: 1998 , Page(s): 571 - 585
    Cited by:  Papers (45)  |  Patents (2)

    In a previous study (J.B. West et al., J. Comput. Assist. Tomogr., vol. 21, pp. 554-566, 1997) the authors demonstrated that automatic retrospective registration algorithms can frequently register magnetic resonance (MR) and computed tomography (CT) images of the brain with an accuracy of better than 2 mm, but in that same study the authors found that such algorithms sometimes fail, leading to errors of 6 mm or more. Before these algorithms can be used routinely in the clinic, methods must be provided for distinguishing between registration solutions that are clinically satisfactory and those that are not. One approach is to rely on a human observer to inspect the registration results and reject images that have been registered with insufficient accuracy. In this paper, the authors present a methodology for evaluating the efficacy of visual assessment of registration accuracy. Since the clinical requirements for the level of registration accuracy are likely to be application dependent, the authors evaluated the accuracy of the observer's estimate relative to 6 thresholds: 1-6 mm. The performance of the observers was evaluated relative to the registration solution obtained using external fiducial markers that are screwed into the patient's skull and that are visible in both MR and CT images. This fiducial marker system provides the gold standard for the authors' study; its accuracy is shown to be approximately 0.5 mm. Two experienced, blinded observers viewed 5 pairs of clinical MR and CT brain images, each of which had been misregistered with respect to the gold-standard solution. Fourteen misregistrations were assessed for each image pair, with misregistration errors distributed between 0 and 10 mm with approximate uniformity. For each misregistered image pair, each observer estimated the registration error (in millimeters) at each of 5 locations distributed around the head using each of 3 assessment methods. These estimated errors were compared with the errors as measured by the gold standard to determine agreement relative to each of the 6 thresholds, where agreement means that the 2 errors lie on the same side of the threshold. The effect of error in the gold standard itself is taken into account in the analysis of the assessment methods. The results were analyzed by means of the kappa statistic, the agreement rate, and the area under receiver-operating-characteristic (ROC) curves. No assessment method performed well at 1 mm, but all methods performed well at 2 mm and higher. For these 5 thresholds, 2 methods agreed with the standard at least 80% of the time and exhibited mean ROC areas greater than 0.84. One of these methods exhibited kappa statistics that indicated good agreement relative to chance (kappa>0.6) between the pooled observers and the standard for these same 5 thresholds. Further analysis demonstrates that the results depend strongly on the choice of the distribution of misregistration errors presented to the observers.

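    The agreement scoring is easy to reproduce. A sketch of Cohen's kappa as used here: both error estimates are binarized against a threshold, and chance agreement is removed (the marginal-probability estimate below is the standard one).

    ```python
    import numpy as np

    def kappa(observer_mm, standard_mm, threshold_mm):
        a = np.asarray(observer_mm) > threshold_mm
        b = np.asarray(standard_mm) > threshold_mm
        p_obs = np.mean(a == b)                            # raw agreement rate
        p_chance = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
        return (p_obs - p_chance) / (1 - p_chance + 1e-12)

    # kappa > 0.6 is the cutoff the study uses for good agreement beyond chance.
    print(kappa([1.5, 3.0, 7.0, 0.5], [1.0, 4.0, 6.5, 0.8], threshold_mm=2.0))
    ```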
  • A comparison of similarity measures for use in 2-D-3-D medical image registration

    Publication Year: 1998 , Page(s): 586 - 595
    Cited by:  Papers (231)  |  Patents (33)

    A comparison of six similarity measures for use in intensity-based two-dimensional-three-dimensional (2-D-3-D) image registration is presented. The accuracy of the similarity measures is compared to a "gold-standard" registration which has been accurately calculated using fiducial markers. The similarity measures are used to register a computed tomography (CT) scan of a spine phantom to a fluoroscopy image of the phantom. The registration is carried out within a user-defined region of interest in the fluoroscopy image that contains a single vertebra. Many of the problems involved in this type of registration are caused by features which were not modeled by a phantom image alone. More realistic "gold-standard" data sets were therefore simulated using the phantom image with clinical image features overlaid. Results show that the introduction of soft-tissue structures and interventional instruments into the phantom image can have a large effect on the performance of some similarity measures previously applied to 2-D-3-D image registration. Two measures, pattern intensity and gradient difference, were able to register accurately and robustly even when soft-tissue structures and interventional instruments were present as differences between the images. Their registration accuracy, for all the rigid-body parameters except the source-to-film translation, was within a root-mean-square (rms) error of 0.53 mm or degrees of the "gold-standard" values. No failures occurred while registering using these measures.

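    Of the measures compared, gradient difference has a particularly compact form: subtract the scaled gradients of the rendered (DRR) image from those of the fluoroscopy image and score the result with a robust 1/(1+x^2) kernel, which down-weights outlier structures such as instruments. The scale handling and constants below are simplified assumptions.

    ```python
    import numpy as np

    def gradient_difference(fluoro, drr, scale=1.0):
        fy, fx = np.gradient(fluoro.astype(float))
        dy, dx = np.gradient(drr.astype(float))
        ex, ey = fx - scale * dx, fy - scale * dy   # gradient difference images
        av, ah = fx.var(), fy.var()                 # robustness constants (assumed)
        return np.sum(av / (av + ex ** 2)) + np.sum(ah / (ah + ey ** 2))

    # Registration maximizes this value over the rigid-body pose used to render the
    # DRR from the CT volume, within the vertebra region of interest.
    ```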
  • Optimization of PET activation studies based on the SNR measured in the 3-D Hoffman brain phantom

    Publication Year: 1998 , Page(s): 596 - 605
    Cited by:  Papers (3)

    This work investigates the noise properties of O-15 water positron emission tomography (PET) images in an attempt to increase the sensitivity of activation studies. A method for computing the amount of noise within a region of interest (ROI) from the uncertainty in the raw data was implemented for three-dimensional (3-D) PET. The method was used to study the signal-to-noise ratio (SNR) of ROIs inside a 3-D Hoffman brain phantom. Saturation occurs at an activity concentration of 2.2 mCi/l, which corresponds to a 75-mCi O-15 water injection into a normal person of average weight. This establishes the upper limit for injections for human brain studies using 3-D PET on the Siemens ECAT 921 EXACT scanner. Data from human brain activation studies on four normal volunteers using two-dimensional (2-D) PET were analyzed. The biological variation was found to be 5% for 1-ml ROIs. The variance for a complete activation study was calculated, for a variety of protocols, by combining the Poisson noise propagated from the raw data in the phantom experiments with the biological variation. A protocol that is predicted to maximize the SNR in dual-condition activation experiments while remaining below the radiation safety limit is ten scans with 45 mCi per injection. The data should not be corrected for random or scatter events, since these corrections do not help in the identification of activation sites while they do add noise to the image. Due to the lower noise level of 3-D PET, the threshold for detecting a true change in activity concentration is 10%-20% lower than in 2-D PET. Because of this, a 3-D activation experiment using the Siemens 921 scanner requires fewer subjects for equal statistical power.

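    The variance bookkeeping behind the protocol choice is the standard independent-variance sum, sketched below. The 5% biological variation is the paper's measured value; the per-scan Poisson noise figure passed in is an assumed illustration.

    ```python
    import numpy as np

    def roi_cv(poisson_cv_per_scan, biological_cv=0.05, n_scans=10):
        """Coefficient of variation of an ROI mean over n repeated scans."""
        per_scan_var = poisson_cv_per_scan ** 2 + biological_cv ** 2
        return np.sqrt(per_scan_var / n_scans)

    # e.g. assuming 8% Poisson noise per scan, ten scans give a ~3% ROI noise level,
    # so a 10% activation signal sits well above the detection threshold.
    print(roi_cv(0.08))
    ```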
  • Mapping the human retina

    Publication Year: 1998 , Page(s): 606 - 619
    Cited by:  Papers (76)  |  Patents (2)

    The new therapeutic method of scotoma-based photocoagulation (SBP) developed at the Vienna Eye Clinic for diagnosis and treatment of age-related macular degeneration requires retinal maps from scanning laser ophthalmoscope images. This paper describes in detail all necessary image analysis steps for map generation. A prototype software system for fully automatic map generation has been implemented and tested on a representative dataset selected from a clinical study with 50 patients. The map required for the SBP treatment can be reliably extracted in all cases. Thus, the algorithms presented in this paper should be directly applicable in daily clinical routine without major modifications.

  • A vision-based technique for objective assessment of burn scars

    Publication Year: 1998 , Page(s): 620 - 633
    Cited by:  Papers (20)  |  Patents (2)

    In this paper a method for the objective assessment of burn scars is proposed. The quantitative measures developed in this research provide an objective way to calculate the elastic properties of burn scars relative to the surrounding areas. The approach combines range data with the mechanics and motion dynamics of human tissues. Active contours are employed to locate regions of interest and to find displacements of feature points using automatically established correspondences. Changes in strain distribution over time are evaluated. Given images at two time instants and their corresponding features, the finite element method is used to synthesize strain distributions of the underlying tissues. This results in a physically based framework for motion and strain analysis. The relative elasticity of the burn scar is then recovered using an iterative descent search for the best nonlinear finite element model that approximates the stretching behavior of the region containing the scar. The results from the skin elasticity experiments illustrate the ability to objectively detect differences in elasticity between normal and abnormal tissue. These estimated differences in elasticity are correlated with the subjective judgments of physicians, which are presently the standard practice.

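    A drastically simplified sketch of the recovery idea: search for the relative stiffness that makes a toy tissue model reproduce the observed feature-point displacements. The 1-D two-spring model below is an assumption for illustration; the paper fits a nonlinear finite element model instead.

    ```python
    import numpy as np

    def predicted_stretch(stiffness_ratio, total_stretch=1.0):
        """Scar and normal skin as two springs in series: share of stretch taken by the scar."""
        return total_stretch / (1.0 + stiffness_ratio)   # a stiffer scar stretches less

    def recover_stiffness(observed_scar_stretch, ratios=np.linspace(0.1, 10.0, 1000)):
        """Descent-by-search over the stiffness ratio minimizing the model mismatch."""
        errors = [(predicted_stretch(r) - observed_scar_stretch) ** 2 for r in ratios]
        return ratios[int(np.argmin(errors))]

    # If the scar takes 25% of the total stretch, the recovered ratio is ~3x normal skin.
    print(recover_stiffness(0.25))
    ```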
  • Geodesic deformable models for medical image analysis

    Publication Year: 1998 , Page(s): 634 - 641
    Cited by:  Papers (29)  |  Patents (2)

    In this paper implicit representations of deformable models for medical image enhancement and segmentation are considered. The advantage of implicit models over classical explicit models is that their topology can be naturally adapted to objects in the scene. A geodesic formulation of implicit deformable models is especially attractive since it has the energy-minimizing properties of classical models. The aim of this paper is twofold. First, a modification to the customary geodesic deformable model approach is introduced by considering all the level sets in the image as energy-minimizing contours. This approach is used to segment multiple objects simultaneously and to enhance and segment cardiac computed tomography (CT) and magnetic resonance images. Second, the approach is used to compare implicit and explicit models for specific tasks. This comparison highlights the complementary character of the two model classes: because the implicit approach is completely data-driven, it does not perform well on poor-contrast boundaries or gaps in boundaries, e.g., those due to partial volume effects, noise, or motion artifacts.

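    One update of such a level-set evolution can be sketched compactly: every level set of phi advances under an edge-stopping speed g built from the image gradient, which is what lets all contours evolve at once. The curvature term is omitted for brevity, so this is a simplified assumption rather than the full geodesic flow.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude

    def evolve(phi, image, balloon=1.0, dt=0.2, sigma=2.0):
        g = 1.0 / (1.0 + gaussian_gradient_magnitude(image.astype(float), sigma) ** 2)
        py, px = np.gradient(phi)
        gy, gx = np.gradient(g)
        # balloon advection of all level sets plus attraction toward edges (grad g)
        return phi + dt * (balloon * g * np.hypot(py, px) + gy * py + gx * px)

    # Iterating evolve() drives every level set of phi toward strong image edges,
    # where g -> 0 halts the motion.
    ```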
  • An objective comparison of 3-D image interpolation methods

    Publication Year: 1998 , Page(s): 642 - 652
    Cited by:  Papers (58)  |  Patents (1)

    To aid in the display, manipulation, and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through the process of interpolation. Traditional techniques consist of direct interpolation of the grey values. When user interaction is called for in image segmentation, as a consequence of these interpolation methods, the user needs to segment a much greater (typically 4-10×) amount of data. To mitigate this problem, a method called shape-based interpolation of binary data was developed. Besides significantly reducing user time, this method has been shown to provide more accurate results than grey-level interpolation. The authors previously proposed an approach for the interpolation of grey data of arbitrary dimensionality that generalized the shape-based method from binary to grey data. This method has characteristics similar to those of the binary shape-based method. In particular, the authors showed preliminary evidence that it produced more accurate results than conventional grey-level interpolation methods. In this paper, concentrating on the three-dimensional (3-D) interpolation problem, the authors statistically compare the accuracy of 8 different methods: nearest-neighbor, linear grey-level, grey-level cubic spline, grey-level modified cubic spline, the method of Goshtasby et al. (1992), and 3 methods from the grey-level shape-based class. A population of patient magnetic resonance and computed tomography images, corresponding to different parts of the human anatomy and coming from different 3-D imaging applications, is utilized for comparison. Each slice in these data sets is estimated by each interpolation method and compared to the original slice at the same location using 3 measures: mean-squared difference, number of sites of disagreement, and largest difference. The methods are statistically compared pairwise based on these measures. The shape-based methods statistically significantly outperformed all other methods, in all measures, in all applications considered here, with a statistical relevance ranging from 10% to 32% (mean=15%) for mean-squared difference.

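    The binary shape-based method that these grey-level methods generalize has a well-known two-step core: interpolate signed distance maps instead of grey values, then threshold at zero. The sketch below assumes scipy's Euclidean distance transform.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt as edt

    def signed_distance(mask):
        return edt(mask) - edt(~mask)      # positive inside the object, negative outside

    def interpolate_slice(mask_a, mask_b, t=0.5):
        """Binary slice at fraction t between two segmented slices."""
        d = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
        return d > 0

    # Unlike grey-level linear interpolation, the object boundary moves smoothly
    # between the two slice contours instead of cross-fading intensities.
    ```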
  • Standing-wave and RF penetration artifacts caused by elliptic geometry: an electrodynamic analysis of MRI

    Publication Year: 1998 , Page(s): 653 - 662
    Cited by:  Papers (82)  |  Patents (11)

    Motivated by the observation that the diagonal pattern of intensity nonuniformity usually associated with linearly polarized radio-frequency (RF) coils is often present in neurological scans using circularly polarized coils, a theoretical analysis has been conducted of the intensity nonuniformity inherent in imaging an elliptically shaped object using 1.5-T magnets and circularly polarized RF coils. This first-principles analysis clarifies, for the general case of conducting objects, the relationship between the excitation field and the reception sensitivity of circularly and linearly polarized coils. The results, validated experimentally using a standard spin-echo imaging sequence and an in vivo B1 field mapping technique, are shown to be accurate to within 1%-2% root mean square, suggesting that these electromagnetic interactions with the object account for most of the intensity nonuniformity observed.

  • Optimal CT scanning plan for long-bone 3-D reconstruction

    Publication Year: 1998 , Page(s): 663 - 666
    Cited by:  Papers (10)

    Digital computed tomographic (CT) data are widely used in three-dimensional (3-D) reconstruction of bone geometry and density features for 3-D modelling purposes. During in vivo CT data acquisition the number of scans must be limited in order to protect patients from the risks related to X-ray absorption. The aim of this work is to automatically define, given a finite number of CT slices, the scanning plan which returns the optimal 3-D reconstruction of a bone segment from in vivo acquired CT images. An optimization algorithm based on a discard-insert-exchange technique has been developed. In the proposed method the optimal scanning sequence is sought by minimizing the overall reconstruction error of a two-dimensional (2-D) prescanning image: an anterior-posterior (AP) X-ray projection of the bone segment. This approach has been validated in vitro on 3 different femurs. The 3-D reconstruction errors obtained by optimizing the scanning plan on the 2-D prescanning images and on the corresponding 3-D data sets have been compared. 2-D and 3-D data sets have been reconstructed by linear interpolation along the longitudinal axis. Results show that direct 3-D optimization yields root-mean-square reconstruction errors which are only 4%-7% lower than those of the 2-D-optimized plan, thus showing that 2-D optimization provides a good suboptimal scanning plan for 3-D reconstruction. Furthermore, the 3-D reconstruction errors given by the optimized scanning plan and by a standard radiological protocol for long bones have been compared. Results show that the optimized plan yields 20%-50% lower 3-D reconstruction errors.

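    The 2-D surrogate objective is straightforward to state in code: given candidate slice positions along the bone axis, rebuild the AP prescanning projection by linear interpolation between the sampled rows and measure the RMS error. The greedy exchange loop below is an illustrative assumption standing in for the full discard-insert-exchange search.

    ```python
    import numpy as np

    def reconstruction_cost(projection, positions):
        """RMS error of rebuilding `projection` from the rows listed in `positions`."""
        rows = np.sort(np.asarray(positions))
        all_rows = np.arange(projection.shape[0])
        recon = np.empty_like(projection, dtype=float)
        for col in range(projection.shape[1]):
            recon[:, col] = np.interp(all_rows, rows, projection[rows, col])
        return np.sqrt(np.mean((projection - recon) ** 2))

    def exchange_search(projection, positions, candidates):
        """Greedily swap single slice positions while the cost keeps dropping."""
        best = sorted(positions)
        for i in range(len(best)):
            for c in candidates:
                trial = sorted(set(best[:i] + [c] + best[i + 1:]))
                if reconstruction_cost(projection, trial) < reconstruction_cost(projection, best):
                    best = trial
        return best
    ```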

Aims & Scope

IEEE Transactions on Medical Imaging (T-MI) encourages the submission of manuscripts on imaging of body structures, morphology and function, and imaging of microscopic biological entities. The journal publishes original contributions on medical imaging achieved by various modalities, such as ultrasound, X-rays (including CT), magnetic resonance, radionuclides, microwaves, and light, as well as on medical image processing and analysis, visualization, pattern recognition, and related methods. Studies involving highly technical perspectives are most welcome. The journal focuses on a unified common ground where instrumentation, systems, components, hardware and software, mathematics, and physics contribute to the studies.


Meet Our Editors

Editor-in-Chief
Michael Insana
Beckman Institute for Advanced Science and Technology
Department of Bioengineering
University of Illinois at Urbana-Champaign
Urbana, IL 61801 USA
m.f.i@ieee.org