
IEEE Transactions on Medical Imaging

Issue 1 • February 1998


  • Cross-reference weighted least square estimates for positron emission tomography

    Page(s): 1 - 8

    An efficient new method, termed the cross-reference weighted least squares estimate (CRWLSE), is proposed to integrate incomplete local smoothness information into the reconstruction of positron emission tomography (PET) images in the presence of accidental coincidence events and attenuation. The algebraic reconstruction technique (ART) is applied to this new estimate and its convergence is proved. The numerical technique is based on row operations, so its computational complexity is only linear in the numbers of pixels and detector tubes; hence, it is efficient in storage and computation for a large, sparse system. Moreover, range limits and a spatially variant penalty can be incorporated easily without sacrificing this efficiency, which makes the method practical. An automatic, data-driven selection method based on generalized cross-validation is also studied. Monte Carlo studies demonstrate the advantages of the new method.
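
    The reconstruction is built on ART, a row-action scheme whose per-row cost depends only on the nonzeros of that row. The sketch below shows a generic ART (Kaczmarz) sweep on a tiny system; it is not the authors' cross-reference weighted estimate, and the system matrix, data, and relaxation parameter lam are made up for illustration.

```python
# A minimal sketch of a row-action ART (Kaczmarz) sweep, the kind of
# update the paper builds on.  Generic illustration only; the matrix,
# data, and relaxation parameter lam are invented.
import numpy as np

def art_sweep(A, y, x, lam=1.0):
    """One pass of Kaczmarz row updates for A x ~= y."""
    for i in range(A.shape[0]):
        a_i = A[i]                        # one detector-tube row
        denom = a_i @ a_i
        if denom > 0:
            # project x onto the hyperplane a_i . x = y[i]
            x = x + lam * (y[i] - a_i @ x) / denom * a_i
    return x

# toy consistent system: 4 "detector tubes", 4 "pixels"
A = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.]])
x_true = np.array([1., 2., 3., 4.])
y = A @ x_true

x = np.zeros(4)
for _ in range(200):
    x = art_sweep(A, y, x)
print(np.round(x, 3))                     # should approach x_true
```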

  • Noise characteristics of 3-D and 2-D PET images

    Page(s): 9 - 23

    The authors analyzed the noise characteristics of two-dimensional (2-D) and three-dimensional (3-D) images obtained from the GE Advance positron emission tomography (PET) scanner. Three phantoms were used: a uniform 20-cm phantom, a 3-D Hoffman brain phantom, and a chest phantom with heart and lung inserts. Using gated acquisition, the authors acquired 20 statistically equivalent scans of each phantom in 2-D and 3-D modes at several activity levels. From these data, they calculated pixel normalized standard deviations (NSDs), scaled to the phantom mean, across the replicate scans, which allowed them to characterize the radial and axial distributions of pixel noise. The authors also performed sequential measurements of the phantoms in 2-D and 3-D modes to measure noise (from interpixel standard deviations) as a function of activity. To compensate for the difference in axial slice width between 2-D and 3-D images (due to the septa and reconstruction effects), they developed a smoothing kernel to apply to the 2-D data. After matching the resolution, the squared ratio of image-derived NSD values, (NSD_2D/NSD_3D)^2, averaged throughout the uniform phantom was in good agreement with the noise equivalent count (NEC) ratio, NEC_3D/NEC_2D. By comparing different phantoms, the authors showed that the attenuation and emission distributions influence the spatial noise distribution. The estimates of pixel noise for 2-D and 3-D images produced here can be applied in the weighting of PET kinetic data and may be useful in the design of optimal dose and scanning requirements for PET studies. The accuracy of these phantom-based noise formulas should be validated for any given imaging situation, particularly in 3-D, if there is significant activity outside the scanner field of view.
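
    As a concrete illustration of the replicate-scan noise measure, the sketch below computes pixel-wise NSDs across simulated replicate scans and forms the (NSD_2D/NSD_3D)^2 quantity. The noise levels and the uniform phantom are synthetic stand-ins, not GE Advance data; the agreement with the NEC ratio is a result of the paper, not something this toy reproduces.

```python
# Pixel-wise normalized standard deviation (NSD) across N statistically
# equivalent scans, scaled to the phantom mean.  Synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
n_scans, ny, nx = 20, 64, 64
phantom = np.full((ny, nx), 100.0)                    # uniform phantom

# simulate replicate scans in two noise regimes ("2-D" noisier than "3-D")
scans_2d = phantom + rng.normal(0, 10.0, (n_scans, ny, nx))
scans_3d = phantom + rng.normal(0, 6.0, (n_scans, ny, nx))

def nsd(scans):
    """Pixel-wise std across replicates, scaled to the overall mean."""
    return scans.std(axis=0, ddof=1) / scans.mean()

ratio_sq = (nsd(scans_2d) / nsd(scans_3d)).mean() ** 2
print("(NSD_2D/NSD_3D)^2 averaged over the phantom:", round(ratio_sq, 2))
# the paper compares this quantity with the NEC ratio NEC_3D/NEC_2D
# after matching the axial resolution of the 2-D and 3-D images
```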

  • Projection space image reconstruction using strip functions to calculate pixels more "natural" for modeling the geometric response of the SPECT collimator

    Page(s): 24 - 44

    The spatially varying geometric response of the collimator-detector system in single photon emission computed tomography (SPECT) causes loss of resolution, shape distortions, reconstructed density nonuniformity, and quantitative inaccuracies. A projection space image reconstruction algorithm is used to correct these reconstruction artifacts. The projectors F use strip functions to calculate pixels that are more "natural" for modeling the two-dimensional (2-D) geometric response of the SPECT collimator transaxial to the axis of rotation. These projectors are defined by summing the intersections of an array of multiple strips rotated at equal angles to approximate the ideal system geometric response of the collimator. Two projection models were evaluated for modeling the system geometric response function: in one projector each strip has equal weight; in the other, a Gaussian weighting is used. Parallel-beam and fan-beam projections of a physical three-dimensional (3-D) Hoffman brain phantom and a Jaszczak cold-rod phantom were used to evaluate the geometric response correction. Reconstructions were obtained by using the singular value decomposition (SVD) method and the iterative conjugate gradient algorithm to solve for q in the imaging equation FGq = p, where p is the projection measurement. The projector F included the new models for the geometric response, whereas the backprojector G did not always model the geometric response, in order to increase computational speed. The final reconstruction was obtained by sampling the backprojection Gq at a discrete array of points. Reconstructions produced by the two proposed projectors showed improved resolution when compared against a unit-strip "natural" pixel model, the conventional pixelized image model with ray tracing to calculate the geometric response, and the filtered backprojection algorithm. When the reconstruction is displayed on fine grid points, the continuity and resolution of the image are preserved without the ring artifacts seen in the unit-strip "natural" pixel model. With present computing power, the geometric response correction using the proposed projection space reconstruction approach is not yet feasible for routine clinical use.
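
    The imaging equation FGq = p is solved iteratively; the sketch below shows a conjugate-gradient least-squares (CGLS) solver of the kind the abstract mentions, applied to small random matrices that merely stand in for the strip-function projector F and the backprojector G.

```python
# CGLS sketch for the imaging equation F G q = p.  F and G are random
# stand-ins, purely illustrative of the linear-solver step.
import numpy as np

def cgls(A, b, n_iter=100):
    """CG on the normal equations A^T A x = A^T b, without forming A^T A."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < 1e-28:             # residual gradient vanished
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(1)
F = rng.standard_normal((60, 40))         # projector stand-in
G = rng.standard_normal((40, 30))         # backprojector stand-in
q_true = rng.random(30)
p = F @ G @ q_true                         # projection measurement
q_hat = cgls(F @ G, p)
print("relative error:",
      np.linalg.norm(q_hat - q_true) / np.linalg.norm(q_true))
```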

  • Automatic detection of the boundary of the calcaneus from ultrasound parametric images using an active contour model; clinical assessment

    Page(s): 45 - 52

    The authors present a computerized method for the automated detection of the boundary of the os calcis on in vivo ultrasound parametric images, using an active dynamic contour model. The initial contour, defined without user interaction, is an iso-contour extracted from the textural feature space. The contour is deformed through the action of internal and external forces until stability is reached. The external forces, which characterize image features, are a combination of gray-level information and second-order textural features arising from local co-occurrence matrices. The broadband ultrasound attenuation (BUA) value is then averaged within the contour obtained. The method was applied to 381 clinical images, and the contour was correctly detected in the great majority of cases. In the short-term reproducibility study, the mean coefficient of variation was 1.81% for BUA values and 4.95% for the area of the detected region. Women with osteoporosis had a lower BUA than age-matched controls (p=0.0005). In healthy women, the age-related decline was -0.45 dB/MHz/yr. In the group of healthy post-menopausal women, years since menopause, weight, and age were significant predictors of BUA. These results are comparable to those obtained when averaging BUA values in a small region of interest.
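
    A minimal sketch of the active-contour mechanism is given below: a closed contour is moved by an internal smoothness force and an external image force until it settles on an object boundary. The synthetic disk image, the force weights, and the purely gradient-based external force are illustrative assumptions; the paper's external forces also include co-occurrence texture features and its initial contour comes from the textural feature space.

```python
# Toy active contour: internal (curvature) force plus an external force
# derived from the gradient of an edge map of a synthetic bright disk.
import numpy as np

# synthetic image: bright disk of radius 30 centered in a 128x128 frame
n = 128
yy, xx = np.mgrid[0:n, 0:n]
img = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)

# edge map and its spatial gradient (external force field)
gy, gx = np.gradient(img)
edge = gx ** 2 + gy ** 2
fy, fx = np.gradient(edge)

# initial contour: a circle of radius 45 (deliberately too large)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
cx, cy = 64 + 45 * np.cos(t), 64 + 45 * np.sin(t)

alpha, step = 0.2, 2.0                    # assumed smoothness / image weights
for _ in range(800):
    # internal force: discrete curvature (second difference along the contour)
    int_x = np.roll(cx, -1) - 2 * cx + np.roll(cx, 1)
    int_y = np.roll(cy, -1) - 2 * cy + np.roll(cy, 1)
    # external force: edge-gradient field sampled at the contour points
    ix = np.clip(cx.round().astype(int), 0, n - 1)
    iy = np.clip(cy.round().astype(int), 0, n - 1)
    cx += alpha * int_x + step * fx[iy, ix]
    cy += alpha * int_y + step * fy[iy, ix]

r = np.sqrt((cx - 64) ** 2 + (cy - 64) ** 2)
print("mean contour radius after fitting:", round(r.mean(), 1))
# should settle near the disk radius of 30
```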

  • Errors in biased estimators for parametric ultrasonic imaging

    Page(s): 53 - 61

    Maximum likelihood (ML) methods are widely used in acoustic parameter estimation. Although ML methods are often unbiased, the variance is unacceptably large for many applications, including medical imaging. For such cases, Bayesian estimators can reduce variance and preserve contrast at the cost of an increased bias. Consequently, including prior knowledge about object and noise properties in the estimator can improve low-contrast target detectability of parametric ultrasound images by improving the precision of the estimates. In this paper, errors introduced by biased estimators are analyzed and approximate closed-form expressions are developed. The task-specific nature of the estimator performance is demonstrated through analysis, simulation, and experimentation. A strategy for selecting object priors is proposed. Acoustic scattering from kidney tissue is the emphasis of this paper, although the results are more generally applicable.
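
    The variance-for-bias trade described above can be seen in a simple shrinkage estimator. The sketch below compares the ML sample mean with a MAP estimate under a Gaussian prior; the prior mean mu0, prior width tau, noise level, and sample size are arbitrary illustrative values, not the paper's tissue model.

```python
# ML (sample mean) vs. MAP (Gaussian-prior shrinkage) for a Gaussian mean:
# the MAP estimate has lower variance but acquires a bias.
import numpy as np

rng = np.random.default_rng(0)
theta_true, sigma, n = 2.0, 2.0, 4
mu0, tau = 1.5, 1.0                       # assumed prior mean and std
trials = 20000

samples = theta_true + sigma * rng.standard_normal((trials, n))
ml = samples.mean(axis=1)

# posterior-mean (MAP) shrinkage weight for a N(mu0, tau^2) prior
w = (n / sigma**2) / (n / sigma**2 + 1 / tau**2)
map_est = mu0 + w * (ml - mu0)

for name, est in (("ML ", ml), ("MAP", map_est)):
    bias = est.mean() - theta_true
    rmse = np.sqrt(((est - theta_true) ** 2).mean())
    print(f"{name} bias={bias:+.3f}  std={est.std():.3f}  rmse={rmse:.3f}")
```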

  • Standardization in the field of medical image management: the contribution of the MIMOSA model

    Page(s): 62 - 73

    This paper deals with the development of standards in the field of medical imaging and picture archiving and communication systems (PACSs), notably concerning the interworking between PACSs and hospital information systems (HISs). It explains, in detail, how a conceptual model of the management of medical images, such as the medical image management in an open system architecture (MIMOSA) model, can contribute to the development of standards for medical image management and PACSs. This contribution is twofold: 1) because the model lists and structures the concepts and resources involved in making the images available to users when and where they are required, and describes the interactions between PACS components and the HIS, the MIMOSA work helps by defining a reference architecture that includes an external description of the various components of a PACS and a logical structure for assembling them; 2) the model, and the implementation of a demonstrator based on it, allow the relevance of the Digital Imaging and Communications in Medicine (DICOM) standard to image management issues to be assessed, highlighting some current limitations of this standard and proposing extensions. Such a twofold action is necessary both to provide solutions, even partial ones, in the short term, and to allow for the long-term convergence of the standards developed by independent standardization groups in medical informatics (e.g., those within Technical Committee 251 of CEN: Comite Europeen de Normalisation).

  • Partial-volume Bayesian classification of material mixtures in MR volume data using voxel histograms

    Page(s): 74 - 86

    The authors present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with magnetic resonance imaging (MRI) or computed tomography (CT). Because the authors allow for mixtures of materials and treat voxels as regions, their technique reduces errors that other classification techniques can create along boundaries between materials and is particularly useful for creating accurate geometric models and renderings from volume data. It also has the potential to make volume measurements more accurate, and it classifies noisy, low-resolution data well. There are two unusual aspects to the authors' approach. First, they assume that, due to partial-volume effects, or blurring, voxels can contain more than one material, e.g., both muscle and fat; the authors compute the relative proportion of each material in the voxels. Second, they incorporate information from neighboring voxels into the classification process by reconstructing a continuous function, ρ(x), from the samples and then looking at the distribution of values that ρ(x) takes on within the region of a voxel. This distribution of values is represented by a histogram taken over the region of the voxel, and the mixture of materials within the voxel is identified using a probabilistic Bayesian approach that finds the mixture most likely to have created the histogram. The size of the regions that the authors classify is chosen to match the spacing of the samples, because the spacing is intrinsically related to the minimum feature size that the reconstructed continuous function can represent.
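
    The sketch below illustrates the partial-volume idea on a single boundary voxel: a continuous ρ(x) that transitions between two materials is sampled over the voxel region, its histogram is formed, and a two-material fraction is read off. The pure-material intensities, the noise level, and the simple mean-based fraction estimate are illustrative assumptions; the paper fits the full histogram with a Bayesian model rather than using only the mean.

```python
# Histogram of a boundary voxel and a naive two-material mixture estimate.
import numpy as np

rng = np.random.default_rng(0)
mu_a, mu_b, sigma = 50.0, 200.0, 8.0      # assumed pure-material means / noise

# continuous rho(x) over a voxel-sized region: 35% material A, 65% material B,
# blurred across the boundary, sampled on a fine sub-voxel grid
x = np.linspace(0, 1, 400)
frac_b_true = 0.65
blur = 1.0 / (1.0 + np.exp(-(x - (1 - frac_b_true)) / 0.02))   # smooth step
rho = mu_a + (mu_b - mu_a) * blur + sigma * rng.standard_normal(x.size)

hist, edges = np.histogram(rho, bins=64, range=(0, 255), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# simplest estimator consistent with a two-material mixture: the histogram
# mean lies at mu_a + f_b * (mu_b - mu_a)
f_b = (np.sum(centers * hist) / np.sum(hist) - mu_a) / (mu_b - mu_a)
print(f"estimated material-B fraction: {f_b:.2f} (true {frac_b_true})")
```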

  • A nonparametric method for automatic correction of intensity nonuniformity in MRI data

    Page(s): 87 - 97

    A novel approach to correcting for intensity nonuniformity in magnetic resonance (MR) data is described that achieves high performance without requiring a model of the tissue classes present. The method has the advantage that it can be applied at an early stage in an automated data analysis, before a tissue model is available. Described as nonparametric nonuniform intensity normalization (N3), the method is independent of pulse sequence and insensitive to pathological data that might otherwise violate model assumptions. To eliminate the dependence of the field estimate on anatomy, an iterative approach is employed to estimate both the multiplicative bias field and the distribution of the true tissue intensities. The performance of this method is evaluated using both real and simulated MR data.
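
    As a rough illustration of the multiplicative bias-field model underlying N3, the sketch below separates a smooth field from tissue intensities in the log domain. The synthetic field, the two-class phantom, and the use of plain Gaussian smoothing in place of N3's iterative histogram-sharpening step are all simplifying assumptions.

```python
# Multiplicative bias model v(x) = u(x) * f(x): work in the log domain and
# estimate the slowly varying log f by heavy smoothing.  This stands in for
# (and is much cruder than) N3's iterative estimation.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
n = 128
yy, xx = np.mgrid[0:n, 0:n] / n

# synthetic "tissue" image (two intensity classes) and a smooth bias field
true_u = np.where(rng.random((n, n)) < 0.5, 100.0, 180.0)
true_f = 1.0 + 0.4 * np.sin(2 * np.pi * xx) * np.cos(np.pi * yy)
v = true_u * true_f

log_v = np.log(v)
log_f = gaussian_filter(log_v - log_v.mean(), sigma=15)   # smooth residual
u_hat = v / np.exp(log_f)

err_before = np.abs(v / true_u - 1).mean()
err_after = np.abs(u_hat / true_u - 1).mean()
print(f"mean multiplicative error: before {err_before:.3f}, after {err_after:.3f}")
```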

  • Fully automatic segmentation of the brain in MRI

    Page(s): 98 - 107

    A robust, fully automatic method for segmenting the brain from head magnetic resonance (MR) images has been developed, which works even in the presence of radio-frequency (RF) inhomogeneities. It has been successful in segmenting the brain in every slice from head images acquired on several different MRI scanners, using different image resolutions and different echo sequences. The method uses an integrated approach that combines image-processing techniques based on anisotropic filters and "snake" contouring with a priori knowledge, which is used to remove the eyes, since they are difficult to remove on the basis of image intensity alone. It is a multistage process: the background noise is first removed to leave a head mask, a rough outline of the brain is then found, and this rough outline is finally refined to the brain mask. The paper describes the main features of the method and gives results for some brain studies.
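
    The sketch below shows a generic Perona-Malik-style anisotropic diffusion step of the kind used as the edge-preserving smoothing stage in such pipelines; the conduction function, kappa, time step, and toy "head slice" are textbook defaults rather than the authors' settings.

```python
# Edge-preserving (Perona-Malik) diffusion: smooth noise inside regions
# while keeping strong edges, which helps later contouring stages.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic edges via roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction coefficients: small across strong edges
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

# toy "head slice": bright ellipse plus noise
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
img = 100.0 * (((xx - 64) / 40.0) ** 2 + ((yy - 64) / 50.0) ** 2 < 1)
noisy = img + rng.normal(0, 20, img.shape)
smooth = anisotropic_diffusion(noisy)
print("noise std inside the ellipse:",
      round(noisy[img > 0].std(), 1), "->", round(smooth[img > 0].std(), 1))
```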

  • Derivation of optimal filters for the detection of coronary arteries

    Page(s): 108 - 120

    Optimal filters for the detection of coronary arteries with a diameter range of 0.5-6.0 mm in digital X-ray images are derived using a computational approach. This approach is based on two requirements for optimal detection. First, the filter should maximize the number of detected true edges and minimize the number of detected false edges. Second, if an edge has been detected, its position should be as close as possible to the true edge position in the image. Since the grey-value profile associated with an arterial vessel in a digital X-ray image is asymmetric, the theory of edge detection derived by Canny has been expanded with two additional boundary constraints to make it suitable for the derivation of filters for asymmetric edges. It is demonstrated that it is possible to derive optimal filters for coronary segments. The localization error, defined as the square root of the sum of the squared systematic and random errors in the assessment of the arterial diameter, depends on the size of the coronary artery and the amount of noise in the image. An evaluation study is described to assess the relationship between the localization error and the amount of noise on the vessel profile; for that purpose, an analytical description of the vessel profile in an angiographic image was derived. For the larger arteries the relation between noise and localization error was found to be linear, and no systematic over- or underestimations were observed, even if the noise level was very high. However, the smallest diameter that can be measured depends on the amount of noise present in the data. Even for images that contain only a low amount of noise, arterial diameters below 0.7 mm cannot be measured accurately, and if the noise in the image increases, the lowest measurable arterial diameter also increases. The random error also increases rapidly for vessel diameters below 1.2 mm, but with a limited amount of noise and a diameter above 0.7 mm the random error is still acceptable [0.15 mm (21%) for 0.7-mm vessels, 0.06 mm (6%) for 1-mm vessels].
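
    The localization-error definition above is easy to make concrete: it combines the bias and the spread of repeated diameter measurements. The sketch below computes it for simulated measurements; the true diameter, bias, and noise values are invented for illustration.

```python
# Localization error = sqrt(systematic^2 + random^2) of repeated
# diameter measurements (simulated values, not angiographic data).
import numpy as np

rng = np.random.default_rng(0)
true_diameter = 1.0                                           # mm
measured = true_diameter + 0.02 + 0.06 * rng.standard_normal(500)

systematic = measured.mean() - true_diameter
random_err = measured.std(ddof=1)
localization_error = np.sqrt(systematic ** 2 + random_err ** 2)
print(f"systematic {systematic:+.3f} mm, random {random_err:.3f} mm, "
      f"localization error {localization_error:.3f} mm")
```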

  • A viewpoint determination system for stenosis diagnosis and quantification in coronary angiographic image acquisition

    Page(s): 121 - 137

    This paper describes the usefulness of computer assistance in the acquisition of "good" images for stenosis diagnosis and quantification in coronary angiography. The system recommends optimal viewpoints from which stenotic lesions can be observed clearly, based on images obtained from initial viewpoints. First, the viewpoint dependency of the apparent severity of a stenotic lesion is analyzed experimentally using software phantoms in order to show the seriousness of the problem. The implementation of the viewpoint determination system is then described. The system provides user-interactive tools for the semi-automated estimation of the orientation and diameter of stenotic segments and for the three-dimensional (3-D) reconstruction of vessel structures. Using these tools, viewpoints that do not give rise to foreshortening and vessel overlap can be determined efficiently. Experiments using real coronary angiograms show the system to be capable of supporting reliable diagnosis and quantification of stenosis.
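
    A key quantity behind viewpoint selection is foreshortening: the projected length of a vessel segment shrinks as the segment aligns with the viewing direction. The sketch below grid-searches candidate gantry angles for a single made-up segment using a simplified viewing-geometry parameterization; the paper's system additionally accounts for vessel overlap and reconstructs the segments from angiograms rather than assuming them.

```python
# Foreshortening of a 3-D segment for candidate viewing directions,
# with a simplified two-angle gantry parameterization (illustrative only).
import numpy as np

segment = np.array([0.0, 3.0, 10.0])       # 3-D vessel segment vector (mm)
seg_len = np.linalg.norm(segment)

best = None
for lao_rao in range(-90, 91, 15):          # candidate gantry rotations (deg)
    for cran_caud in range(-40, 41, 10):
        a, b = np.radians([lao_rao, cran_caud])
        # unit viewing direction for this gantry position (simplified geometry)
        view = np.array([np.sin(a) * np.cos(b), np.sin(b), np.cos(a) * np.cos(b)])
        projected = seg_len * np.sqrt(1 - (segment @ view / seg_len) ** 2)
        foreshortening = 1 - projected / seg_len
        if best is None or foreshortening < best[0]:
            best = (foreshortening, lao_rao, cran_caud)

print(f"best viewpoint: LAO/RAO {best[1]} deg, cran/caud {best[2]} deg, "
      f"foreshortening {best[0]:.1%}")
```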


Aims & Scope

IEEE Transactions on Medical Imaging (T-MI) encourages the submission of manuscripts on imaging of body structures, morphology and function, and imaging of microscopic biological entities. The journal publishes original contributions on medical imaging achieved by various modalities, such as ultrasound, X-rays (including CT), magnetic resonance, radionuclides, microwaves, and light, as well as on medical image processing and analysis, visualization, pattern recognition, and related methods. Studies involving highly technical perspectives are most welcome. The journal focuses on a unified common ground where instrumentation, systems, components, hardware and software, mathematics, and physics contribute to the studies.


Meet Our Editors

Editor-in-Chief
Milan Sonka
Iowa Institute for Biomedical Imaging
3016B SC, Department of Electrical and Computer Engineering
The University of Iowa
Iowa City, IA 52242 USA
milan-sonka@uiowa.edu