
Nuclear Science Symposium and Medical Imaging Conference, 1994 IEEE Conference Record

Date: 30 Oct - 5 Nov 1994



Displaying Results 1 - 25 of 95
  • Time-evolution analysis of differential features on 3D surfaces of the heart walls

    Page(s): 1807 - 1811 vol.4

    Differential feature computation, such as curvature estimation, has proved useful in 3D shape reconstruction and segmentation, and is potentially interesting for matching and motion estimation. Several methods have been developed, generally dependent on the surface representation. The authors focus here on discrete voxel-based surface representations and use the now classical algorithm proposed by Sander and Zucker (1990), whose accuracy they assess. With the aim of analyzing and synthesizing the motion of 3D deformable models, the authors track the evolution of the curvatures through two different approaches: one based on global differential feature calculation, the other visualizing the amount of deformation locally. These methods are applied to synthetic data generated from parametric models and to a sequence of 18 3D X-ray data sets of the left ventricle of the heart. The simulations assess the temporal stability and accuracy of the curvature measures of such deformable surfaces. The experiments on real data of the left ventricular walls exhibit highly deformed regions in contrast to quasi-stationary regions, which can be used advantageously in 3D motion estimation or in a matching process.
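
    The kind of differential feature being tracked can be made concrete with a small sketch: mean and Gaussian curvature computed for a surface given as a height map z(x, y), using the standard Monge-patch formulas. This is only a simplified stand-in for the voxel-based estimator of Sander and Zucker cited above; the function name and sampling are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def mean_gaussian_curvature(z, spacing=1.0):
        """Mean (H) and Gaussian (K) curvature of a surface z(x, y) given as a
        height map, from finite-difference derivatives (Monge-patch formulas)."""
        zy, zx = np.gradient(z, spacing)        # first derivatives
        zyy, zyx = np.gradient(zy, spacing)     # second derivatives
        zxy, zxx = np.gradient(zx, spacing)
        denom = 1.0 + zx**2 + zy**2
        K = (zxx * zyy - zxy**2) / denom**2
        H = ((1.0 + zx**2) * zyy - 2.0 * zx * zy * zxy
             + (1.0 + zy**2) * zxx) / (2.0 * denom**1.5)
        return H, K

    # quick check on a spherical cap of radius 10: near the apex |H| is about 0.1 and K about 0.01
    y, x = np.mgrid[-6:7, -6:7].astype(float)
    z = np.sqrt(np.maximum(10.0**2 - x**2 - y**2, 0.0))
    H, K = mean_gaussian_curvature(z)
    ```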

  • Detailed investigation of transmission and emission data smoothing protocols and their effects on emission images

    Page(s): 1568 - 1572 vol.4

    Measured attenuation correction in PET is routinely performed using transmission scans. Acquisition time and noise considerations necessitate low-pass filtering of the transmission data before generating the attenuation correction matrix. This smoothing operation reduces noise propagation from the transmission into the emission data, but also introduces image artifacts that are most pronounced around areas of strongly varying attenuation coefficients. The source of these artifacts, which lies in the mismatch of the spatial resolutions of the emission and transmission data, was investigated in this study, along with the effects of different transmission and emission sinogram smoothing protocols on the emission images. A method is proposed that addresses the problem in coordination with the filtering step during reconstruction. Instead of the standard low-pass filtering of the emission data during reconstruction, emission and transmission sinograms can be filtered to the desired reconstructed image resolution prior to reconstruction. This operation reduces or eliminates the resolution mismatch and the consequent image artifacts, which can be significant, especially in cardiac studies. The proposed method improves the accuracy of the activity distribution in the emission images, with minimal computational and SNR cost.
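
    A minimal sketch of the central idea, pre-filtering both sinograms to a common target resolution before attenuation correction and reconstruction, assuming Gaussian blurs that add in quadrature. The filter widths and array shapes below are hypothetical, not the protocols evaluated in the paper.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def match_resolution(sinogram, intrinsic_fwhm_mm, target_fwhm_mm, bin_mm):
        """Smooth a sinogram (angles x radial bins) along the radial axis so that
        its effective resolution becomes target_fwhm_mm, assuming Gaussian
        resolutions that combine in quadrature."""
        extra_fwhm = np.sqrt(max(target_fwhm_mm**2 - intrinsic_fwhm_mm**2, 0.0))
        sigma_bins = extra_fwhm / (2.355 * bin_mm)
        return gaussian_filter1d(sinogram, sigma_bins, axis=1) if sigma_bins > 0 else sinogram.copy()

    # hypothetical example: bring 6 mm emission and 10 mm transmission data
    # to a common 12 mm resolution before forming the attenuation correction
    emission = np.random.poisson(50.0, size=(192, 128)).astype(float)
    transmission = np.random.poisson(500.0, size=(192, 128)).astype(float)
    em_matched = match_resolution(emission, 6.0, 12.0, bin_mm=3.0)
    tr_matched = match_resolution(transmission, 10.0, 12.0, bin_mm=3.0)
    ```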

  • Extraction of rounded and line objects for the improvement of medical image pattern recognition

    Page(s): 1802 - 1806 vol.4

    In the field of computer-aided diagnosis (CADx), investigators encounter a variety of disease and normal anatomical structure patterns. Two major image patterns that are often targeted for extraction prior to further analysis are rounded and line objects. Here, the authors employed an enhanced Hough transform to extract both kinds of object from pre-defined image areas. The method can also be applied to the high-frequency subbands of the wavelet domain, where line objects are more distinct. Typically, rounded objects are associated with disease and need to be analyzed further, whereas high-intensity line objects are related to normal anatomical structures. Once the line objects are extracted and eliminated, a compensation step must be applied so that the modified pixels are filled with the gray value of the surrounding area. The authors used the ellipse extraction method to search for suspected lung nodules on chest radiographs. The line extraction method was used to detect the edges of ribs, which can interfere with the final determination performed by a convolution neural network (CNN). In this experiment, the authors found that the ellipse extraction method performed slightly better than the previously proposed profile matching method. The line removal technique, however, improved the performance of the convolution neural network by 4%. Receiver operating characteristic (ROC) studies indicated that the convolution neural network can achieve a performance of Az=0.90 on the authors' database when each suspected area is processed by the line removal technique.
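
    As an illustration of the extraction step, here is a bare-bones circular Hough transform that votes for circle centres from a binary edge map. It is a simplified stand-in for the enhanced (ellipse) Hough transform used by the authors; the radius range and angular sampling are arbitrary.

    ```python
    import numpy as np

    def circle_hough(edge_mask, radii, n_theta=72):
        """Vote for circle centres at each candidate radius from a binary edge
        map; peaks in the accumulator mark likely rounded objects."""
        ny, nx = edge_mask.shape
        acc = np.zeros((len(radii), ny, nx))
        ys, xs = np.nonzero(edge_mask)
        thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        for ri, r in enumerate(radii):
            cy = np.rint(ys[:, None] - r * np.sin(thetas)).astype(int)
            cx = np.rint(xs[:, None] - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < ny) & (cx >= 0) & (cx < nx)
            np.add.at(acc[ri], (cy[ok], cx[ok]), 1)
        return acc

    # toy example: a circle of radius 8 drawn into a 64x64 edge map
    edge = np.zeros((64, 64), dtype=bool)
    t = np.linspace(0.0, 2.0 * np.pi, 200)
    edge[np.rint(32 + 8 * np.sin(t)).astype(int), np.rint(32 + 8 * np.cos(t)).astype(int)] = True
    votes = circle_hough(edge, radii=[6, 8, 10])
    ```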

  • Separation of veins from activated brain tissue in functional magnetic resonance images at 1.5 T

    Page(s): 1534 - 1536 vol.4

    A feasibility study was conducted to segment 1.5 T functional magnetic resonance images (fMRI) into grey matter and large veins, using individual pixel intensity difference and temporal phase delay as two correlated parameters in 1.5 T gradient-echo images. The time-course of each pixel in gradient-echo images acquired during visual stimulation with a checkerboard flashing at 8 Hz was correlated with the stimulation `on'-`off' sequence to identify activated pixels, and the temporal delay of each activated pixel was computed by fitting its time-course to a reference sine function. A histogram of the product (pixel intensity difference) x (temporal delay) could be fitted to a bimodal distribution, which was then used to segment the functional image into veins or activated brain tissue. The results show relatively good demarcation between large veins and activated grey matter using this method.
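
    A rough sketch of the per-pixel computation described above: the response amplitude and the temporal phase delay at the task frequency, estimated here by projecting the time-course onto sine and cosine regressors rather than the authors' explicit reference-sine fit. The timing parameters are hypothetical.

    ```python
    import numpy as np

    def amplitude_and_delay(timecourse, period_s, tr_s):
        """Estimate response amplitude and temporal delay of one pixel at the
        stimulation frequency by least-squares projection onto cos/sin."""
        t = np.arange(len(timecourse)) * tr_s
        w = 2.0 * np.pi / period_s
        c, s = np.cos(w * t), np.sin(w * t)
        ts0 = timecourse - timecourse.mean()
        a, b = (ts0 @ c) / (c @ c), (ts0 @ s) / (s @ s)
        amplitude = np.hypot(a, b)            # proportional to the on-off signal change
        delay_s = np.arctan2(b, a) / w        # temporal phase delay in seconds
        return amplitude, delay_s

    # the product amplitude * delay, histogrammed over all activated pixels, is
    # what the paper fits with a bimodal distribution to separate large veins
    # (long delay, large signal change) from activated grey matter
    ```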

  • Factor analysis for quantitation of myocardial blood flow (MBF) using 13N-ammonia dynamic PET imaging

    Page(s): 1930 - 1934 vol.4

    Regional myocardial blood flow (MBF) can be measured with 13N-ammonia dynamic PET imaging using the conventional modeling approach, which requires blood sampling, region-of-interest (ROI) drawing, and a time-consuming nonlinear regression on each time-activity curve (TAC). In this study, factor analysis of dynamic structures (FADS) was used to extract the "pure" blood pool TAC and generate a parametric image of MBF (pixel unit: ml/min/g) that maps the myocardial perfusion accurately. Ten dynamic 13N-ammonia dog PET studies (3 baseline, 5 hyperemia, and 2 occlusion) were included. Three factors (TACs) and their corresponding factor images (the right and left ventricular (RV and LV) blood pools and the myocardial activity) were extracted from each study. The LV factors matched well with the plasma TACs. The factor image of the myocardium was then converted to a parametric image of MBF using a relationship derived from a two-compartment model. The results showed that the MBF obtained from FADS correlated well with MBF obtained by two-compartment model fitting (correlation coefficient r=0.98, slope=0.83) and by the microsphere technique (r=0.98, slope=0.95). The FADS-generated MBF images have good image quality and lower noise levels compared to those generated by Patlak graphical analysis (PGA). It is concluded that regional myocardial blood flow can be measured accurately and noninvasively from 13N-ammonia dynamic PET imaging with the FADS technique. FADS provides a simple method to map the distribution and magnitude of myocardial perfusion accurately and to generate a parametric image of MBF without requiring blood sampling or spillover correction.
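
    The decomposition itself can be pictured as a non-negative factorization of the dynamic data into factor time-activity curves and factor images. The multiplicative-update code below is a generic NMF, not FADS proper, which adds further constraints to make the factors unique; the frame and pixel dimensions are hypothetical.

    ```python
    import numpy as np

    def factor_dynamic_study(dyn, n_factors=3, n_iter=300, eps=1e-9):
        """Factor a (frames x pixels) dynamic study into non-negative factor
        TACs C (frames x factors) and factor images F (factors x pixels)."""
        rng = np.random.default_rng(0)
        T, P = dyn.shape
        C = rng.random((T, n_factors)) + eps
        F = rng.random((n_factors, P)) + eps
        for _ in range(n_iter):
            F *= (C.T @ dyn) / (C.T @ C @ F + eps)
            C *= (dyn @ F.T) / (C @ (F @ F.T) + eps)
        return C, F

    # hypothetical 21-frame, 64x64-pixel study reshaped to (frames, pixels)
    dyn = np.abs(np.random.default_rng(1).normal(10.0, 2.0, size=(21, 64 * 64)))
    tacs, factor_images = factor_dynamic_study(dyn, n_factors=3)
    ```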

  • Heterogeneity of SPECT bull's-eyes in normal dogs: comparison of attenuation compensation algorithms

    Page(s): 1725 - 1729 vol.4

    In normal dogs, SPECT Tc-99m Sestamibi (MIBI) and Tl-201 myocardial perfusion images reconstructed with filtered backprojection (FBP) show a large decrease in counts in the septal wall (S) compared to the lateral wall (L). We evaluated the iterative method of Chang at 0 and 1 iterations (Chang0 and Chang1) and the maximum-likelihood expectation-maximization algorithm with attenuation compensation (ML-EM-ATN) on data acquired from 5 normal dogs and on simulated projection data generated from a homogeneous count-density model of a normal canine myocardium in the attenuation field measured in one dog. Mean counts in the S and L regions were calculated from maximum-count circumferential profile arrays. Our results demonstrate that ML-EM-ATN and Chang1 give improved uniformity, as measured by the S/L ratio.
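
    For orientation, the zero-order (multiplicative) Chang compensation referred to as Chang0 can be sketched as dividing each reconstructed pixel by its attenuation factor averaged over projection angles. The rotation-based ray integration below is illustrative only; the pixel size, angle count, and μ map are placeholders. Chang1 adds one correction iteration in which the compensated image is reprojected, compared with the measured data, and the reconstructed error is fed back.

    ```python
    import numpy as np
    from scipy.ndimage import rotate

    def chang_zero_order(recon, mu_map, pixel_cm=0.6, n_angles=60):
        """Divide each pixel by the mean over angles of exp(-line integral of mu
        from the pixel to the detector); a simplified Chang0 correction."""
        factors = np.zeros_like(recon, dtype=float)
        for ang in np.linspace(0.0, 360.0, n_angles, endpoint=False):
            mu_rot = rotate(mu_map, ang, reshape=False, order=1, mode='constant')
            path = np.cumsum(mu_rot, axis=0) * pixel_cm      # integral to the top edge
            factors += rotate(np.exp(-path), -ang, reshape=False, order=1, mode='nearest')
        factors /= n_angles
        return recon / np.maximum(factors, 1e-6)
    ```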

  • Coronary angiogram video compression

    Page(s): 1847 - 1851 vol.4

    The use of digitized information is rapidly gaining acceptance in radiological applications, and image compression plays an important role in the archiving and transmission of different digital diagnostic modalities. Currently, block-DCT-based compression schemes (e.g. JPEG, MPEG) are typically used for telephone conferencing, cable video transmission, and other non-medical applications. Such schemes are not suitable for medical video sequences (such as angiograms) because of block artifacts resulting from the block-based quantization of DCT coefficients; the image quality degrades severely with consecutive frame processing due to the accumulation of errors across frames. We have developed a compression scheme for angiogram video sequence coding based on full-frame wavelet coding. This full-frame design exploits the local characteristics of the motion-compensated difference signals (obtained via block matching) and achieves a higher coding gain. At the same compression ratio, the proposed technique outperforms the block DCT method in preserving image quality. Our algorithm not only achieves a higher compression ratio but also maintains high fidelity in the reconstructed images, which could be used in a PACS system and a telediagnosis environment.

  • Reduction of truncation artifacts in fan beam transmission by using parallel beam emission data

    Page(s): 1563 - 1567 vol.4

    We describe a method to reduce the truncation artifacts in fan-beam transmission image reconstruction for a simultaneous transmission and emission SPECT system. Parallel-hole collimation is adopted for the measurement of both the emission photopeak and Compton-scattered photons, which are also used to obtain the body and lung outlines. The lung outlines can be refined using attenuation coefficient (μ) estimates from within the fully sampled region (FSR) of the attenuation map computed from the truncated transmission data. The regions of the lungs and other soft tissues are assigned appropriate attenuation coefficients to create an attenuation map with no truncation, which is then reprojected to augment the transmission projection data. The total attenuation of the real and reprojected data is matched at each projection angle in each slice to account for the different attenuation coefficients in different slices. Finally, a reconstruction of the combined measured and augmented data is performed. We demonstrate with two phantom studies that this method can significantly reduce the truncation artifacts. When some portion of the heart falls outside the FSR, the attenuation map estimated from this method can correct the attenuation in the emission data more effectively than the truncated map.

  • Software for evaluating image reconstruction algorithms

    Page(s): 1940 - 1944 vol.4

    The proliferation of image reconstruction algorithms creates a need for an efficient and objective procedure for comparing the efficacy of different algorithms for a particular medical task. In assessing the relative task-oriented performance of reconstruction algorithms, it is desirable to assign statistical significance to claims of superiority of one algorithm over another. However, achieving statistical significance very often demands a large number of observations. Performing such an evaluation on mathematical phantoms requires a means of running the competing algorithms on projection data obtained from a large number of randomly generated phantoms. Thereafter, various numerical measures of agreement between the reconstructed images and the original phantoms can be used to reach a conclusion with statistical substance. Here, the authors describe the software SuperSNARK, which automates an evaluation methodology for assigning statistical significance to the observed differences in performance of two or more image reconstruction algorithms. In particular, the authors compare the relative efficacy of the maximum likelihood expectation maximization algorithm and the filtered backprojection method for performing three specific imaging tasks in positron emission tomography.

  • Effects of nonuniform collimator sensitivity on variance of attenuation-corrected SPECT images

    Page(s): 1898 - 1901 vol.4

    The first step in all intrinsic attenuation-correction algorithms is multiplication of each measured projection by a function that compensates for photon attenuation between a line through the center of rotation, parallel to the detector, and the (convex) external object contour nearest the detector. A nonuniform collimator sensitivity profile that accomplishes this compensation physically rather than computationally would reduce the noise in the projections and, consequently, in the reconstructed images. The authors compared the variance of reconstructed images of cylindrical phantoms of homogeneous activity concentration and attenuation coefficient for three collimator sensitivity profiles, using both the original intrinsic attenuation-correction technique of Tretiak-Metz (1980) and Gullberg-Budinger (1981) and the variant developed by Tanaka (1984). The collimator sensitivity profiles considered were the standard profile, with uniform sensitivity along the projection, and two nonuniform sensitivity profiles peaked at the center of the projection. The nonuniform collimator sensitivity profiles led to reduced variance throughout most of the image for both attenuation correction algorithms. These reductions in variance would be expected to lead to improved performance in quantitative imaging tasks.

  • Correction of cross-talk noise in high energy X-ray computed tomography

    Page(s): 1837 - 1841 vol.4

    The effect of cross-talk noise on reconstructed images in high-energy X-ray computed tomography is described. Cross-talk noise caused by secondary electrons and/or scattered photons of high-energy X-rays generated in adjacent detectors is evaluated using the EGS4 code. The distortion of the projection (Radon transform) is expressed as a set of simultaneous equations based on the cross-talk noise, and it can be corrected by solving these equations. Random noise contained in the projection, however, is amplified by the correction process. This amplification can be estimated from the determinant of the cross-talk matrix of the simultaneous equations. Noise amplification is found not to matter in practical applications when the cross-talk noise is less than 0.1.
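
    The correction step can be pictured as inverting a small linear system: if each detector picks up a fixed fraction of its neighbours' signals, the measured projection is K p_true, and solving K p = p_meas removes the cross-talk. The tridiagonal model and the 0.05 fraction below are illustrative assumptions, not the EGS4-derived values used in the paper.

    ```python
    import numpy as np

    def crosstalk_matrix(n_det, frac=0.05):
        """Tridiagonal cross-talk model: each detector receives a fraction
        `frac` of each immediate neighbour's signal."""
        K = np.eye(n_det) * (1.0 - 2.0 * frac)
        K += np.diag(np.full(n_det - 1, frac), k=1)
        K += np.diag(np.full(n_det - 1, frac), k=-1)
        K[0, 0] = K[-1, -1] = 1.0 - frac     # edge detectors have only one neighbour
        return K

    def correct_projection(p_measured, frac=0.05):
        K = crosstalk_matrix(len(p_measured), frac)
        p_corrected = np.linalg.solve(K, p_measured)
        # a small determinant of K signals strong amplification of random noise
        return p_corrected, np.linalg.det(K)
    ```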

  • A proposal of a Monte Carlo method using octree structure [SPECT application]

    Page(s): 1687 - 1690 vol.4

    Object description is important for efficiently performing photon and/or electron transport with a Monte Carlo method. The authors propose a new description method using an octree representation of an object. Even though the boundary of each medium is represented accurately, high-speed calculation of photon transport can be achieved because the number of cells is much smaller than in a voxel-based approach, which represents an object as a union of voxels of the same size. The authors' Monte Carlo code using the octree representation first establishes the simulation geometry by reading an "octree string", produced by forming an octree structure from a set of serial sections of the object before the simulation, and then transports photons in that geometry. Using the code, a user who simply prepares a set of serial sections of the object in which photon trajectories are to be simulated can run the simulation automatically, using the suboptimal geometry simplified by the octree representation, without constructing an optimal geometry by hand.
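
    A toy version of the geometry description: recursively subdivide a cubic labelled volume until each node contains a single medium. Real codes store the tree as the serialized "octree string" mentioned above; the 8x8x8 phantom here is purely illustrative.

    ```python
    import numpy as np

    def build_octree(vol, x0=0, y0=0, z0=0, size=None):
        """Return a nested octree for a cubic labelled volume: an int for a
        homogeneous (leaf) cube, otherwise a list of 8 child nodes."""
        if size is None:
            size = vol.shape[0]                  # assumes a cubic, power-of-two volume
        block = vol[x0:x0 + size, y0:y0 + size, z0:z0 + size]
        if size == 1 or np.all(block == block.flat[0]):
            return int(block.flat[0])            # leaf: one medium fills the cube
        h = size // 2
        return [build_octree(vol, x0 + dx, y0 + dy, z0 + dz, h)
                for dx in (0, h) for dy in (0, h) for dz in (0, h)]

    # illustrative phantom: uniform background (medium 0) with an embedded cube (medium 1)
    vol = np.zeros((8, 8, 8), dtype=int)
    vol[2:6, 2:6, 2:6] = 1
    tree = build_octree(vol)
    ```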

  • Multiple signal source localization from spatio-temporal magnetocardiogram

    Page(s): 1832 - 1836 vol.4

    Mosher et al. (1992) proposed a method to localize multiple dipoles from spatio-temporal biomagnetic data. The method is based on multiple signal classification (MUSIC), developed in the field of array signal processing. However, MUSIC fails to produce good solutions in situations where the time series from multiple dipoles are strongly correlated, the dipoles are closely spaced, or the signal-to-noise ratio of the time series is low. To improve on MUSIC, we propose new localization methods based on the subspace fitting framework developed by Viberg and Ottersten (1991). Simulation studies demonstrate that the new methods produce better solutions than MUSIC in the above-mentioned ill-conditioned situations. Furthermore, we apply these methods to real magnetocardiograms measured with a 64-channel SQUID magnetometer system.
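
    The MUSIC step being improved upon can be summarized compactly: take the signal subspace of the spatio-temporal data matrix and scan candidate source positions, scoring each by how much its gain vector leaks into the noise subspace. The forward model that produces the gain vectors is application specific and omitted here; the number of sources is assumed known. Subspace-fitting methods instead fit all source gain vectors jointly to the signal subspace, which is what gives the robustness to correlated or closely spaced sources claimed above.

    ```python
    import numpy as np

    def music_cost(B, gain_vectors, n_sources):
        """MUSIC scan: B is (channels x time samples), gain_vectors is
        (n_candidates x channels). A small cost means the candidate lies close
        to the signal subspace, i.e. it is a likely source location."""
        U, _, _ = np.linalg.svd(B, full_matrices=False)
        noise_space = U[:, n_sources:]            # orthogonal complement of the signal subspace
        cost = np.empty(len(gain_vectors))
        for i, g in enumerate(gain_vectors):
            g = g / np.linalg.norm(g)
            cost[i] = np.linalg.norm(noise_space.T @ g) ** 2
        return cost
    ```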

  • Unsupervised Bayesian segmentation with bootstrap sampling: application to eye fundus image coding

    Page(s): 1794 - 1796 vol.4

    The authors propose a coding scheme for retinal images. They first describe the basis and the algorithm of unsupervised Bayesian segmentation using the principle of bootstrap sampling. The second part deals with the integration of this quantization within a retinal image coding scheme that includes an orthogonal transform and variable-length coding.

  • Performance evaluation of a transmission reconstruction algorithm with a simultaneous transmission-emission SPECT system in the presence of data truncation

    Page(s): 1578 - 1581 vol.4

    A simultaneous transmission-emission SPECT system (STEP) was developed on a three-detector gamma camera (Picker Prism 3000) equipped with fan-beam collimators (65 cm focal length) and a transmission line source. With this system, the fan-beam geometry can cause the transmission projection data to be truncated. An iterative transmission reconstruction algorithm was formulated to determine the distribution of attenuation coefficients from the system of linear equations for only the measured projections. In this paper we evaluate this algorithm using phantom data with varying degrees of data truncation. The results showed that, with up to 30% truncation, differences in partial attenuation integrals in the non-truncated region were not statistically significant (p<0.05). A study was also performed to determine the minimal number of iterations necessary to obtain quantitatively accurate results; partial attenuation integrals were not significantly different (p<0.05) when 9 to 100 iterations were performed. We conclude that the described transmission reconstruction algorithm using nine iterations is quantitatively accurate and is able to correct for the truncation of the data.

  • Combination of Wiener filtering and singular value decomposition filtering for volume imaging PET

    Page(s): 1600 - 1603 vol.4

    Although the 3D multi-slice rebinning (MSRB) algorithm for PET is fast and practical and provides image quality close to that of the 3D reprojection algorithm, the MSRB image generally suffers from noise amplified by its singular value decomposition (SVD) filtering in the axial direction. The authors' aim in this study is to combine the use of a Wiener filter (WF) with the SVD to decrease the noise and improve the image quality. Since the SVD filtering can "deconvolve" the spatially variant response function while the WF can suppress the noise and reduce the blurring caused by physical processes not modeled by the axial SVD filter, the combination of these two techniques retains the advantages of both filters. The authors applied this approach to the volume-imaging HEAD PENN-PET brain scanner with an axial extent of 256 mm. The combined filter was evaluated in terms of FWHM, image contrast, signal-to-noise ratio, etc., with several phantoms, such as cold-sphere and 3D brain phantoms. Specifically, the authors studied both the SVD filter with an axial Wiener filter and the SVD filter with a 3D Wiener filter, and compared the filtered images to those from the 3D reprojection (3DRP) algorithm. The results indicate that the Wiener filter not only increases the signal-to-noise ratio but also improves the contrast: for the 3D brain phantom, the gray/white and ventricle/gray ratios were improved from 1.8 to 2.8 and from 0.47 to 0.25, respectively. The overall performance is close to that of the 3DRP algorithm.
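
    The Wiener stage on its own is simple to state: in the frequency domain the filter is W = MTF* / (|MTF|^2 + NSR), where NSR is a noise-to-signal power ratio. The 1-D sketch below uses an assumed Gaussian MTF and a constant NSR; the authors' axial and 3D filters are built from the scanner's actual response, which is not modeled here.

    ```python
    import numpy as np

    def wiener_filter_1d(profile, mtf, nsr):
        """Apply W = conj(MTF) / (|MTF|^2 + NSR) to a 1-D profile."""
        F = np.fft.fft(profile)
        W = np.conj(mtf) / (np.abs(mtf) ** 2 + nsr)
        return np.real(np.fft.ifft(F * W))

    # assumed Gaussian system MTF and a flat noise-to-signal ratio of 0.05
    n = 128
    freqs = np.fft.fftfreq(n)
    mtf = np.exp(-0.5 * (freqs / 0.15) ** 2)
    noisy_profile = np.sin(np.linspace(0, 4 * np.pi, n)) + np.random.default_rng(2).normal(0.0, 0.5, n)
    restored = wiener_filter_1d(noisy_profile, mtf, nsr=0.05)
    ```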

  • Three-dimensional correction of experimentally determined gamma camera response

    Page(s): 1720 - 1724 vol.4

    The response of a gamma camera depends on the source-to-camera distance: the projection image is blurred in two dimensions, with an increasing loss of resolution as the distance from the collimator face increases. Here, the authors present a method for experimentally estimating a parametric model of the point-source response of a gamma camera with a parallel-hole collimator. This model is used to incorporate the three-dimensional, spatially varying camera response into the projection and back-projection operations of a maximum-likelihood tomographic reconstruction algorithm that also compensates for three-dimensional uniform attenuation. Using phantom experiments, the authors demonstrate a substantial improvement in the quality and the quantitative accuracy of the single photon emission computed tomography images.
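
    The parametric response model can be illustrated with the common assumption of a Gaussian point-spread function whose width grows linearly with the source-to-collimator distance. The intercept and slope values below are placeholders; the paper estimates its model experimentally.

    ```python
    import numpy as np

    def distance_dependent_psf(distance_cm, sigma0_cm=0.35, slope_cm_per_cm=0.02,
                               half_width=10, pixel_cm=0.3):
        """2-D Gaussian PSF with sigma = sigma0 + slope * distance, sampled on
        the projection grid; usable inside a projector/backprojector pair."""
        sigma_pix = (sigma0_cm + slope_cm_per_cm * distance_cm) / pixel_cm
        x = np.arange(-half_width, half_width + 1)
        g = np.exp(-0.5 * (x / sigma_pix) ** 2)
        psf = np.outer(g, g)
        return psf / psf.sum()

    # kernels for planes 5 cm and 20 cm from the collimator face
    psf_near, psf_far = distance_dependent_psf(5.0), distance_dependent_psf(20.0)
    ```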

  • Design studies of a depth encoding large aperture PET camera

    Page(s): 1772 - 1776 vol.4

    The feasibility of a whole-body PET tomograph able to correct for the parallax error induced by the depth of interaction of γ-rays is assessed through simulation. The experimental energy, depth, and transverse position resolutions of BGO block detector candidates are the main inputs to a simulation that predicts the point-source resolution of the depth encoding large aperture camera (DELAC). The results indicate that a measured depth resolution of 7 mm (FWHM) is sufficient to correct a substantial part of the parallax error for a point source at the edge of the field of view. Results from a search for the block specifications and camera ring radius that would optimize the spatial resolution and its uniformity across the field of view are also presented.

  • Development of an emission-transmission CT system combining X-ray CT and SPECT

    Page(s): 1758 - 1761 vol.4

    We describe the design and initial evaluation of a prototype emission-transmission imaging system that combines a third-generation X-ray CT scanner and a single-headed SPECT scanner. The two commercial imaging devices are juxtaposed so that the CT table can move the patient directly into the SPECT scanner prior to the X-ray scan. The integrated system is being used to investigate the correlation of X-ray CT images with conventional SPECT images and to attempt correction of attenuation errors in the SPECT image. We have evaluated two potential sources of error in the acquisition and analysis of data from this system. First, the observed flux of scattered X-rays passing through the SPECT collimator into the gamma camera is 17,000 cps, well below the rated maximum of 180,000 cps. Second, attenuation correction of the SPECT data will be performed by generating an attenuation map from the CT image; previous experience suggests that directly calculated values can be generated within 3% of the actual attenuation values, but more complex approaches will be implemented if necessary. Preliminary results of applying CT-derived attenuation correction are presented for a myocardial perfusion phantom.

  • Hybrid deformable models for three-dimensional biomedical image segmentation

    Page(s): 1935 - 1939 vol.4

    The authors apply hybrid deformable models to the task of automatically partitioning a medical image into visually sensible and medically plausible regions, exploiting one of the fundamental strengths of deformable models: their ability to produce smooth, closed object boundaries. Deformable modeling techniques can be broadly classified into two categories, boundary-based and region-based, each with distinct advantages and disadvantages. Here, the authors describe a hybrid deformable modeling technique that combines the advantages of both approaches and avoids many of their disadvantages. This is accomplished by first minimizing a region-based functional to obtain initial edge strength estimates; smooth, closed object boundaries are then obtained by minimizing a boundary-based functional that is attracted to the initial edge locations. The authors also discuss the theoretical advantages of this hybrid approach over existing image segmentation methods, and show how the technique can be effectively implemented and used for the segmentation of three-dimensional biomedical images. In particular, the authors demonstrate its use in identifying body outlines and lung regions in SPECT images.

  • Basal ganglia phantom simulation: improvement of relative quantitation in SPECT [I-123] dynamic neuroreceptor imaging

    Page(s): 1730 - 1734 vol.4

    [I-123]-labeled radiopharmaceuticals are being used to study neuroreceptors in the monkey and human brain with a triple-headed SPECT system. To optimize quantitative accuracy in monkey dynamic neuroreceptor studies, combinations of collimators (high-resolution fan-beam (HRF) and ultrahigh-resolution fan-beam (UHRF)) with various filters, including Butterworth filtering, count-rate-dependent 2-D Wiener (2-DW) prefiltering, and 3-D Wiener (3-DW) postfiltering, were studied. SPECT images of a monkey-size cylindrical phantom containing two small vials representing regions of specific binding were acquired on a triple-headed camera equipped with both HRF and UHRF collimators. The vials were filled with a fixed concentration of [I-123], and the water in the cylinder around them was filled at three different concentrations to simulate the contrast between the basal ganglia (BG) and background (BKGD) for representative early, middle, and late studies. The acquisition parameters were designed to simulate dynamic neuroreceptor studies in the monkey. The projection data were reconstructed with Butterworth 3.16, 2-DW, and 3-DW filtering. The optimal threshold yielding the true volumes (5.36×2 ml) was found, and the concentration ratios (BG/BKGD) were estimated as a function of threshold. The results indicate that the UHRF collimator with 3-DW filtering improves image quantitation in SPECT neuroreceptor imaging.

  • NECR analysis of 3D brain PET scanner designs

    Page(s): 1657 - 1661 vol.4

    A dedicated 3D brain PET scanner has several advantages over a whole-body scanner for neurological studies, most notably increased sensitivity. However, brain scanners have higher scatter fractions, random count rates, and deadtime for the same activity concentration. We have used noise effective count-rate (NECR) analysis to compare brain scanners of 53, 60, and 66 cm diameter with the GE ADVANCE whole-body scanner (93 cm diameter). Monte Carlo simulations of a brain-sized phantom (16 cm diameter, 13 cm length) in the ADVANCE geometry were used to develop a model of NECR performance, which was reconciled with results from a decay-series measurement. The model was then used to predict the performance of the brain scanner designs. The brain scanners have noise effective sensitivities (the slope of the NECR curve at zero activity) as much as 40% higher than ADVANCE. However, their NECR advantage diminishes quickly as the activity concentration increases: the brain scanners' NECR equals that of ADVANCE at ~0.3 μCi/cc, and ADVANCE has superior NECR performance at higher activity levels. An imaging center concentrating only on very-low-activity imaging tasks would find the efficiency advantage of a smaller detector diameter valuable, while a center performing higher-activity studies, such as bolus water injections or 5 mCi FDG injections, might prefer the count-rate performance of a whole-body scanner.
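
    For reference, the noise-effective (noise-equivalent) count rate used in this kind of comparison is commonly written as NEC = T² / (T + S + kR), with trues T, scatters S, randoms R, and k = 2 for delayed-window randoms subtraction (k = 1 for a noiseless randoms estimate). The paper's exact convention is not stated here, and the count rates below are invented for illustration.

    ```python
    import numpy as np

    def necr(trues, scatters, randoms, k=2.0):
        """NEC = T^2 / (T + S + k*R), evaluated pointwise along an activity series."""
        trues, scatters, randoms = map(np.asarray, (trues, scatters, randoms))
        return trues ** 2 / (trues + scatters + k * randoms)

    # illustrative curve: trues saturate with deadtime while randoms grow roughly quadratically
    activity = np.linspace(0.01, 1.0, 50)            # arbitrary units
    trues = 1e5 * activity / (1.0 + 0.8 * activity)  # invented deadtime model
    scatters = 0.4 * trues
    randoms = 3e4 * activity ** 2
    curve = necr(trues, scatters, randoms)
    ```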

  • Combining functional MRI and EEG source imaging

    Page(s): 1547 - 1550 vol.4

    Even though the entire data set for multislice functional magnetic resonance imaging (fMRI) can be acquired in 50-100 ms using echo-planar imaging, the effective temporal resolution for studying brain function is limited to 1-2 s by the relatively slow temporal response of the underlying hemodynamics. The objective of the preliminary study reported here was to combine evoked potential measurements (commonly referred to as EEG) with fMRI of the same subject to improve temporal resolution. EEG and 1.5 T fMRI data were acquired from the subjects during checkerboard stimulation. The EEG data were recorded with a standard 10-20 electrode placement system, and sources were reconstructed using a 4-sphere model based on the anatomical MRI of the subjects. Simulated-annealing-based algorithms were developed to estimate single equivalent dipole and multiple dipole parameters for both single time-point and spatio-temporal EEG data. The results from both single and multiple dipole estimations indicate good correlation between the dipole locations and the fMRI activation.

  • First results from a prototype PET scanner using BaF2 scintillator and photosensitive wire chambers

    Page(s): 1885 - 1887 vol.4

    The authors have designed and built a small high-resolution PET scanner using BaF2 scintillation crystals and photosensitive wire chambers with TMAE as the photosensitive agent. The very first measurements performed on the detector are presented.

  • Comparison of the beveled-edge and reach-through APD structures for PET applications

    Page(s): 1864 - 1868 vol.4

    The electrical characteristics of the beveled-edge and reach-through APD structures are analyzed theoretically and experimentally to determine which is more suitable for incorporation into a positron emission tomography (PET) detector. The beveled-edge structure is found to have a lower excess noise factor and lower dark current, leading to a superior signal-to-noise ratio, while the reach-through structure has a lower operating voltage and a faster speed of response. Because the signal-to-noise ratio is fundamental to achieving the energy and timing resolutions required for PET detectors, we believe the beveled-edge structure is more appropriate than the reach-through structure for PET applications. This finding is verified experimentally by comparing the performance of several commercially available beveled-edge and reach-through APDs.
