Nuclear Science Symposium Conference Record, 2000 IEEE

Date: 15-20 Oct. 2000


Displaying Results 1 - 25 of 208
  • 2000 IEEE Nuclear Science Symposium Conference [front matter]

    Publication Year: 2000 , Page(s): 0_1 - 0_16
    PDF (913 KB)
    Freely Available from IEEE
  • Search algorithm of a maximum rectangular region for object description

    Publication Year: 2000 , Page(s): 20/56 - 20/59 vol.3
    PDF (352 KB)

    Monte Carlo methods are becoming important in evaluating data correction methods in nuclear medicine. In Monte Carlo simulation, time-consuming calculations are required to determine the path length of a photon in an object composed of several media with complicated boundaries, so object modeling is important for efficient photon transport. The authors previously proposed an algorithm called the maximum rectangular region (MRR) method; simulations showed that the calculation time with MRR was about 30% to 50% of that of the conventional algorithm. The disadvantage of this algorithm, however, is that determining the MRR for each voxel is itself very time-consuming. In this paper the authors propose a new method that uses a genetic algorithm to determine the MRR for each voxel. The results showed that the calculation time could be reduced to about 30% of that of the conventional algorithm. (An illustrative sketch follows this entry.)

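A minimal sketch of the jump step that a precomputed homogeneous-region map enables, with assumed array names and a simple axis-aligned geometry (not the authors' code):

```python
import numpy as np

def step_through_mrr(pos, direction, mrr_lo, mrr_hi):
    """Advance a photon straight to the exit face of the homogeneous
    axis-aligned region [mrr_lo, mrr_hi] that contains `pos`; one jump
    replaces the per-voxel boundary tests of conventional tracking."""
    with np.errstate(divide="ignore", invalid="ignore"):
        t_lo = (mrr_lo - pos) / direction
        t_hi = (mrr_hi - pos) / direction
    # Only the bounding planes ahead of the photon are exit candidates.
    t_exit = np.where(direction > 0, t_hi, t_lo)
    t = np.min(t_exit[np.isfinite(t_exit)])
    return pos + t * direction

# Photon at the centre of a 10 mm cubic region, travelling along +x:
print(step_through_mrr(np.array([5.0, 5.0, 5.0]),
                       np.array([1.0, 0.0, 0.0]),
                       np.zeros(3), np.full(3, 10.0)))  # [10. 5. 5.]
```
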
  • Assessment of performance for the dynamic SPECT (dSPECT) method

    Publication Year: 2000 , Page(s): 18/109 - 18/113 vol.3
    PDF (520 KB)

    The four-dimensional images reconstructed using the dSPECT method may be used to quantify temporal changes in regional radiotracer concentrations within a patient, which are believed to reflect the functional ability of the investigated organ. The accuracy of the diagnosis will certainly depend on the accuracy of the dynamic reconstruction, so before the method can be used clinically it is important to evaluate its performance and limitations. The accuracy of dSPECT reconstructions may be assessed using several different approaches; here the authors present a summary of their research in this area and discuss different validation techniques. Analysis of simulation data created using six different acquisition protocols with three camera configurations demonstrated that the errors in reconstructed dynamic activity distributions range from about 50% for single-head cameras to just a few percent for triple-head systems. The accuracy of kinetic parameter estimation depends strongly on the modelled situation, but in general the shapes of the reconstructed time-activity curves closely match the shapes of the true curves. Quantitative analysis showed good agreement for washout half-lives in the range 3-8 minutes, but larger errors were present for longer T1/2 and for multi-exponential time-activity functions. These findings were confirmed in phantom experiments, although the results were more difficult to quantify in this case. Numerical values of renal GFR measurements performed using the blood sampling method, dynamic planar imaging and dSPECT scans in normal volunteers are reported. (An illustrative sketch follows this entry.)

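The washout half-lives validated above come from mono-exponential fits to regional time-activity curves; a minimal sketch with synthetic data (the dSPECT reconstruction itself is not shown):

```python
import numpy as np
from scipy.optimize import curve_fit

def washout(t, a0, t_half):
    """Mono-exponential time-activity curve: A(t) = A0 * 2**(-t / T1/2)."""
    return a0 * np.exp(-np.log(2) * t / t_half)

t = np.linspace(0, 20, 24)                                     # minutes
tac = washout(t, 100.0, 5.0) + np.random.normal(0, 2, t.size)  # noisy TAC

(a0_fit, t_half_fit), _ = curve_fit(washout, t, tac, p0=(90.0, 4.0))
print(f"fitted T1/2 = {t_half_fit:.2f} min")  # close to the true 5 min
```
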
  • Segmentation of dynamic PET images using cluster analysis

    Publication Year: 2000 , Page(s): 18/126 - 18/130 vol.3
    Cited by:  Papers (1)
    PDF (496 KB)

    Quantitative PET studies can provide in-vivo measurements of dynamic physiological and biochemical processes in humans. A limitation of PET is its inability to provide precise anatomical localisation, due to its relatively poor spatial resolution compared with MR imaging. Manual placement of regions of interest (ROIs) is commonly used in clinical and research settings for the analysis of PET datasets; however, this approach is operator-dependent and time-consuming. Semi- or fully-automated ROI delineation (segmentation) methods offer advantages by reducing operator error and subjectivity, thereby improving reproducibility. In this work, the authors describe an approach to automatically segment dynamic PET images using cluster analysis; they validate the approach with a simulated phantom study and assess its performance in segmenting dynamic lung data. Their preliminary results suggest that cluster analysis can be used to automatically segment tissues in dynamic PET studies and has the potential to replace manual ROI delineation. (An illustrative sketch follows this entry.)

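The abstract does not name a specific clustering algorithm; as an illustration of the general idea, voxels can be grouped by the similarity of their time-activity curves, with each cluster serving as an automatically delineated region (k-means stands in here):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical dynamic PET data: one row per voxel, one column per frame.
tacs = rng.random((5000, 22))

labels = KMeans(n_clusters=4, n_init=10).fit_predict(tacs)
# Mean time-activity curve of each automatically segmented tissue class.
mean_tacs = np.array([tacs[labels == k].mean(axis=0) for k in range(4)])
```
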
  • A knowledge-based image smoothing technique for dynamic PET studies

    Publication Year: 2000 , Page(s): 18/114 - 18/117 vol.3
    PDF (344 KB)

    Many techniques have been proposed to reduce image noise in dynamic positron emission tomography (PET) imaging. However, these smoothing methods are usually based on the spatial domain and local statistical properties; smoothing algorithms specifically designed for dynamic image data have not previously been investigated in detail. The authors present a knowledge-based smoothing technique that aims to reduce noise and improve the quality of dynamic images. By taking advantage of domain-specific physiological kinetic knowledge, this technique can provide dynamic images with strong noise reduction while preserving edges and subtle details.

  • Tomography applied to actinides detection by photofission

    Publication Year: 2000 , Page(s): 27/21 - 27/26 vol.3
    Cited by:  Patents (1)
    PDF (520 KB)

    The development of nondestructive methods to inspect nuclear waste is important for nuclear waste management and nonproliferation purposes. Among methods using nuclear radiation as a probe, instrumental photon activation analysis (IPAA) seems a promising approach: it can assay actinide masses in large-volume nuclear waste packages. It consists of irradiating the actinides with high-energy photons to induce photofission reactions. Counting the delayed neutrons that follow the photofission reactions allows us to locate and assay the mass (activity) of the actinides by tomography. For this purpose we use a simulation tool named OPERA to obtain the information needed for the tomographic part, and the SAPHIR facility, a linear accelerator setup, for the experimental part. OPERA is able to track particles from the electrons accelerated by the linac through to the delayed neutron counting. High-energy photons of 15 MeV are produced with a tungsten bremsstrahlung target. The experimental setup consists of the waste package surrounded by seven detection units. First experimental results obtained on a real waste package are given. They show good agreement between experiment and simulation, good localisation of the actinides, and a detection limit of less than 1 g.

  • NEMA NU 2-2000 performance measurements on an ADAC MCD camera

    Publication Year: 2000 , Page(s): 16/33 - 16/38 vol.3
    PDF (432 KB)

    The clinically oriented NEMA NU 2-2000 performance measurements have been performed on an ADAC MCD dual-head camera. At 1 cm radius the transverse resolution FWHM (FWTM) was measured to be 5.2 (10.0) mm and the axial resolution 5.0 (10.6) mm. At 10 cm radius the transverse radial resolution was 4.7 (7.4) mm and the transverse tangential resolution 8.5 (20.9) mm; the axial resolution was 14.0 (41.1) mm. The peak true count rate was 4365 counts/s at a concentration of 4.3 kBq/ml. The peak noise equivalent count rate (NECR) was 1617 counts/s at 2.9 kBq/ml. The system scatter fraction was 48%. The maximum relative count rate error observed at the NECR peak was -42%. The sensitivity was 1014 counts/s/MBq at the centre and 867 counts/s/MBq at an offset of 10 cm. The contrast for the hot spheres ranged from 18.5 (11.6)% to 11.6 (7.9)% at a contrast ratio of 8:1 (4:1); for the cold spheres it ranged from 41% to 25%, with an overall background variability of 7%. The residual error in scatter and attenuation correction was approximately 50%. All measurements were performed at the maximum allowed crystal-to-crystal distance of 810 mm. (An illustrative sketch follows this entry.)

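For reference, the noise equivalent count rate quoted above is computed from the true (T), scattered (S) and random (R) coincidence rates; a sketch of the commonly quoted form (the NEMA standard also defines a variant with the randoms term doubled):

```python
def necr(trues, scatter, randoms):
    """Noise equivalent count rate: NECR = T**2 / (T + S + R),
    with all rates in counts/s."""
    return trues**2 / (trues + scatter + randoms)

print(necr(4000.0, 3000.0, 1000.0))  # 2000.0 counts/s (made-up rates)
```
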
  • Non-parametric and non-rigid registration method applied to myocardial gated SPECT

    Publication Year: 2000 , Page(s): 18/26 - 18/30 vol.3
    PDF (528 KB)

    We propose a non-parametric and non-rigid registration method that is totally unrestricted with regard to the allowed deformations. It requires a segmentation contour of the organ of interest in both the image to register and the template image. We assume that the template contour can be deduced from the contour in the image to register by a series of locally normal elementary deformations; this assumption allows us to compute unique deformation vectors. We extend these elementary deformation computations to the whole image domain using level sets (sets of embedded parallel contours with the organ contour as a reference). This series of elementary deformations provides a way to perform an iterative registration by indicating the successive motions from a point in the image to register to its final position in the registered image. Myocardial gated SPECT acquisitions are particularly susceptible to noise, since the average count number per projection is N times lower than in equivalent non-gated acquisitions, where N is the number of frames in the sequence. We have applied our method to perform motion compensation of the frames, i.e. to non-rigidly register N-1 frames with respect to the Nth one. The N registered frames were then summed, resulting in an image with noise characteristics similar to an equivalent non-gated image but without motion-induced blurring. Since our method is based on contour deformation, it guarantees that the registered contour exactly matches the template contour: in this sum-after-motion-compensation application, no point inside the organ in one frame is added to a background point of another. Results on a simulated 3D sequence are presented.

  • Deformable model of the heart with fiber structure

    Publication Year: 2000 , Page(s): 20/114 - 20/118 vol.3
    PDF (360 KB)

    A kinematic model of the heart with incompressibility constraints was implemented. It accounts for the effects of the heart's fiber structure, which plays a major role in defining the exact motion of the heart during the cardiac cycle. The volume of the heart was divided into small cubical elements, and in each element the fiber direction was specified; this allows implementation of nearly any fiber structure and any geometry. We performed preliminary testing of the model. The model was deformed from its initial state to a final configuration, assuming that the fibers shorten or elongate to a known new length in each element; this can simulate a beating heart if the elongations are known. The model was also deformed using imaging data as a priori information. Simple cylinder and ellipsoid geometries were used. We see the model as a tool to help in understanding the movement of the myocardium during the heart cycle and the impact of infarctions on that movement. We will use the model with imaging information from experimental gated SPECT and PET. Validation of the model will be done with tagged MRI imaging. (An illustrative sketch follows this entry.)

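The incompressibility constraint mentioned above has a standard local form: the determinant of each element's deformation gradient must equal one (volume preservation). A minimal check, as a sketch rather than the paper's solver:

```python
import numpy as np

def volume_ratio(F):
    """det(F) is the local volume ratio of a deformation with
    gradient F; incompressibility requires det(F) == 1."""
    return np.linalg.det(F)

# Fibers shortening 10% along x with compensating thickening in y and z
# preserve volume: 0.9 * (1 / sqrt(0.9))**2 == 1.
F = np.diag([0.9, 1 / np.sqrt(0.9), 1 / np.sqrt(0.9)])
print(volume_ratio(F))  # ~1.0
```
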
  • Implementation of a 3D positron emission tomography Monte-Carlo simulator

    Publication Year: 2000 , Page(s): 20/68 - 20/72 vol.3
    PDF (376 KB)

    Reports the implementation of a realistic Monte Carlo simulator dedicated to PET. The anthropomorphic voxelised Zubal phantom is used as the propagation volume. Relative activities can be set in each organ and in analytically defined sources, making it possible to simulate tumours and background activities. Monte Carlo methods are used to simulate the emission and annihilation of positrons, the propagation of the two resulting 511 keV photons, and the detection, coincidence and radioactive decay processes. Probabilities of photon interaction are computed in each voxel according to the density and chemical composition of the tissue and the photon energy. The geometry of the detection system is widely adjustable. (An illustrative sketch follows this entry.)

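At the core of such photon transport is sampling the free path from the exponential attenuation law; a minimal sketch for a homogeneous medium (voxelised transport re-samples at each media boundary, which is not shown):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_free_path_cm(mu_per_cm):
    """Draw a free path from p(s) = mu * exp(-mu * s): s = -ln(U) / mu."""
    return -np.log(1.0 - rng.random()) / mu_per_cm

# Water at 511 keV has mu ~ 0.096 /cm, i.e. a mean free path of ~10.4 cm.
paths = [sample_free_path_cm(0.096) for _ in range(100_000)]
print(np.mean(paths))  # close to 1 / 0.096
```
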
  • Testing and modeling Ethernet switches and networks for use in ATLAS high-level triggers

    Publication Year: 2000 , Page(s): 26/45 - 26/49 vol.3
    PDF (608 KB)

    The ATLAS second-level trigger will use a multilayered LAN to transfer 5 Gbyte/s of detector data from ~1500 buffers to a few hundred processors. A model of the network has been constructed to evaluate its performance. A key component of the network model is a model of an individual switch, reproducing the behavior measured in real devices; a small number of measurable parameters is used to model a variety of commercial Ethernet switches. Using parameters measured on real devices, the impact on overall network performance is modeled. In the ATLAS context, both 100 Mbit and Gigabit Ethernet links are required. A system is described which is capable of characterizing the behavior of commercial switches with the required number of nodes, under traffic conditions resembling those to be encountered in the ATLAS experiment. Fast Ethernet traffic is provided by a high-density, custom-built tester based on FPGAs programmed in Handel-C and VHDL, while Gigabit Ethernet traffic is generated using Alteon NICs with custom firmware. The system is currently being deployed with 32 100-Mbit ports and 16 Gigabit ports, and will be expanded to ~256 100-Mbit nodes and ~50 Gigabit Ethernet nodes. (An illustrative sketch follows this entry.)

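The abstract does not give the parameterised switch model itself; as a first-order illustration, a store-and-forward switch is often characterised by a fixed per-frame overhead plus the serialisation time on the output link (the parameter names here are assumptions, not the paper's calibrated model):

```python
def switch_latency_us(frame_bytes, link_mbps, overhead_us):
    """First-order store-and-forward latency: fixed switch overhead
    plus the time to clock the frame out on the output link."""
    serialisation_us = frame_bytes * 8 / link_mbps  # bits / (Mbit/s) -> us
    return overhead_us + serialisation_us

# A 1500-byte frame on Fast Ethernet with a 10 us switch overhead:
print(switch_latency_us(1500, 100, 10.0))  # 130.0 us
```
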
  • Beam dump for high current electron beam at JNC

    Publication Year: 2000 , Page(s): 27/27 - 27/31 vol.3
    PDF (552 KB)

    A high-current electron beam is required for transmuting fission products using gamma rays. Elemental technology for a linac that generates a high-current beam in an efficient and stable manner is being developed at the Japan Nuclear Cycle Development Institute (JNC). A beam dump for the high-current, low-energy electron beam (20 mA, 10 MeV) from this accelerator has been constructed and tested at JNC. A ring-and-disk (RD) structure was adopted to absorb the beam safely and to analyze the beam condition in real time. Thermal and stress analysis showed that a 200 kW electron beam could be safely stopped. The performance of the beam dump was evaluated using a beam of 7.0 MeV and an average current of 0.84 mA. The measured results showed that the electrons transported from the accelerator were completely absorbed. In addition, the beam dump was found to be capable of monitoring the beam profile directly from the temperature distributions of the rings. (An illustrative sketch follows this entry.)

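The 200 kW design figure follows directly from the quoted beam parameters; a one-line check:

```python
# Beam power = current x accelerating voltage.
print(0.020 * 10e6)     # 20 mA at 10 MeV -> 200000.0 W, the 200 kW design load
print(0.84e-3 * 7.0e6)  # the 0.84 mA, 7.0 MeV test beam -> 5880.0 W
```
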
  • Breast tumor imaging using a tiltable head SPECT camera

    Publication Year: 2000 , Page(s): 22/77 - 22/81 vol.3
    Cited by:  Papers (1)  |  Patents (1)
    PDF (544 KB)

    An axially Tiltable-Head SPECT (TH-SPECT) system with two heads was used to image fillable breast and torso phantoms containing multiple lesions at various tilt angles. Breast, liver, and myocardial activity were included in order to simulate the direct contamination and Compton scattering expected in clinical scans. High-count planar images were acquired for comparison with TH-SPECT, and the TH-SPECT data were reconstructed using an OS-EM algorithm that accounted for the tilted geometry. In order to characterize the axial blurring effects inherent in TH-SPECT reconstructions, two cylindrical-disk Defrise phantoms, one large and one mini-Defrise phantom placed inside the fillable breast phantom, were imaged at various tilt angles. In reconstructions of the combined fillable breast and torso phantoms containing two 1.15 ml lesions, one centered axially and one proximal to the anterior chest wall within the breast, the lesions were most clearly visible in the 30° TH-SPECT images, providing lesion SNR and contrast improvements of nearly three times over the high-count planar images.

  • Effect of blood glucose on myocardium FDG uptake

    Publication Year: 2000 , Page(s): 18/45 - 18/46 vol.3
    PDF (592 KB)

    PET with FDG has become the gold standard for assessing myocardial viability. However, myocardial FDG uptake is too low or non-existent in some patients, preventing adequate imaging. We studied the frequency of this phenomenon in 225 oncologic patients referred for PET (112 male, 113 female). All patients were asked to fast for at least 4 hours, and known diabetic subjects were not included. Blood glucose was measured in all patients prior to the FDG injection. Emission scans acquired on a GE Advance scanner (2-D mode) starting 50-60 minutes post-injection were corrected for attenuation. The images were qualitatively classified as showing adequate or very poor/non-existent heart visualization. In 68 patients (30.2%), a substantial number, the left myocardium was not well identified (26.8% of males and 33.6% of females, p<0.05). Blood glucose was 91.2±29.2 (SD) mg/dL in these 68 patients and 88.2±16.0 mg/dL in those with adequate heart visualization (p<0.05). We conclude that poor visualization or non-visualization of the heart with FDG is common, and that glycemic levels do not correlate with myocardial FDG uptake and do not predict heart visualization. We are using these results to develop new protocols to improve visualization of the heart.

  • An investigation of coded aperture imaging for small animal SPECT

    Publication Year: 2000 , Page(s): 21/81 - 21/85 vol.3
    PDF (472 KB)

    Coded apertures provide a substantial gain in detection efficiency compared with conventional collimation and are well suited to imaging small volumes. In this work the authors investigated several aspects of coded aperture design for a small animal SPECT system, including the aperture/detector configuration and the susceptibility to scatter. They simulated various source distributions and detection systems comprising 1, 2 or 4 stationary detectors placed around the object, each with a pinhole array or a Fresnel zone plate in front of the detector. Image volumes were reconstructed using an iterative successive over-relaxation algorithm with a penalised weighted least-squares cost function. Multiple-pinhole arrays performed better than Fresnel zone plates in terms of reconstructed mean squared error and signal-to-noise ratio. The authors' design goals of <2 mm spatial resolution (full width at half maximum) and >1% detection efficiency can be achieved with a four-detector system using arrays of 100 pinholes per detector, and the scatter fraction for a 4 cm diameter object is <5%. It is concluded that the coded aperture design shows great promise for small animal SPECT. (An illustrative sketch follows this entry.)

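Successive over-relaxation, as used for the reconstructions above, sweeps through the unknowns Gauss-Seidel style and over-corrects each by a factor omega; a generic sketch on a square system (the authors' penalised weighted least-squares system matrix is left abstract):

```python
import numpy as np

def sor(A, b, omega=1.5, iters=200):
    """Solve A x = b by successive over-relaxation (A square, nonzero
    diagonal).  For PWLS reconstruction, A and b would be built from
    the system matrix, the weights and the penalty term."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        for i in range(len(b)):
            sigma = A[i] @ x - A[i, i] * x[i]  # off-diagonal contribution
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(sor(A, np.array([1.0, 2.0])))  # matches np.linalg.solve(A, b)
```
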
  • A Monte Carlo study of high resolution PET with granulated dual layer detectors

    Publication Year: 2000 , Page(s): 20/11 - 20/15 vol.3
    PDF (456 KB)

    New arrays of avalanche photodiodes (APDs) allow the design of novel, highly granulated detector modules. Monte Carlo simulations were used to evaluate to what extent this feature can be used for high-resolution, high-sensitivity PET. Based on a fixed crystal front face of 2 mm² and a fixed number of crystals, the sensitivity and scatter fraction of three different geometries were determined: (a) a ring of 143 mm diameter; (b) a ring of only 71 mm diameter but double the axial extent (37 mm); (c) a ring of 71 mm diameter with two radial crystal layers. The sensitivity (a:b:c) was 0.3%:1.1%:1.5% for a line source in air. Studies using a simple mouse-like phantom showed the highest scatter fraction for (b) and comparable sensitivities for (b) and (c). The large diameter of (a) reduced the scatter fraction at the expense of large sensitivity losses. Line source simulations showed a resolution of about 1.6 mm for (c) at the center of the field of view (FOV). Within a region of 20 mm of the FOV, the resolution of (c) remained close to 2 mm. Geometry (c) is being implemented in the new tomograph MADPET.

  • Rebinning errors in coincidence imaging due to depth of interaction

    Publication Year: 2000 , Page(s): 20/85 - 20/88 vol.3
    PDF (272 KB)

    In positron emission tomography, uncertainties in, or lack of knowledge of, depths of interaction lead to errors in the positioning of coincidence events. This paper examines the dependence of these positioning errors on depth of interaction. The authors derive analytic expressions that relate rebinning errors to the uncertainties in the measurement of interaction positions, including depth of interaction. The results confirm the intuitive notion that rebinning errors are minimized when depth of interaction is measured at least as well as the transverse position. (An illustrative sketch follows this entry.)

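The derivations themselves are in the paper; the textbook first-order parallax relation conveys the effect: a depth-of-interaction error delta displaces the assumed interaction point transversely by about delta * sin(theta), where theta is the photon's angle of incidence on the crystal:

```python
import numpy as np

def parallax_error_mm(doi_error_mm, incidence_deg):
    """First-order transverse mispositioning caused by a
    depth-of-interaction error (textbook parallax relation,
    not the paper's full analytic expressions)."""
    return doi_error_mm * np.sin(np.radians(incidence_deg))

# A 10 mm depth uncertainty for a photon 20 degrees off normal:
print(parallax_error_mm(10.0, 20.0))  # ~3.4 mm
```
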
  • An anthropomorphic phantom integrated EGS4 Monte Carlo code and its application in Compton probe

    Publication Year: 2000 , Page(s): 20/119 - 20/122 vol.3
    Cited by:  Papers (1)
    PDF (312 KB)

    An EGS4 Monte Carlo code incorporating a digitized anthropomorphic phantom is presented. With this Monte Carlo program, new imaging techniques and devices can be investigated accurately under realistic clinical conditions without conducting experiments in living subjects, and image quality can be improved by analyzing the image formation procedure with detailed interaction physics. As an example, its application in predicting the performance of the authors' newly designed Compton probe for prostate imaging is addressed in detail. The authors' initial simulation results show that the Compton probe promises good counting efficiency and high spatial resolution. A further study of the effects of background activity on probe performance indicates that the bladder is possibly a major source of degradation, and that the probe design can be improved with appropriate shielding. The potential use of the Compton probe for prostate tumor detection is demonstrated by list-mode likelihood reconstruction of the anthropomorphic phantom using the presented EGS4-based Monte Carlo program. The program can handle 3 million voxels with a simulation speed of 1 million histories per 18 minutes.

  • Scintillator crystal optimization by Monte Carlo simulation for photodiode matrix detector

    Publication Year: 2000 , Page(s): 20/16 - 20/19 vol.3
    PDF (272 KB)

    The design of gamma-ray imaging probes based on silicon photodiodes and a CsI(Tl) monocrystal is delicate and complex: electronic and statistical noise deteriorate the energy and spatial resolution. A Monte Carlo simulation is used to set the probe's parameters so as to obtain the best compromise between spatial uniformity, energy uniformity, spatial linearity and energy collection. The output light distribution depends on the physical properties of the crystal edges, the crystal thickness and the refractive index of the coupling grease. A square 75×75 mm² CsI(Tl) crystal coupled to a 5×5 array of photodiodes (15×15 mm²) has been simulated. The energy and spatial characteristics of the probe were determined for crystal thicknesses varying from 2 mm to 20 mm and coupling grease refractive indices varying from 1.5 to 2.5. The influence of the surface finish was also studied. The best results were obtained with a crystal thickness of 8 mm, a grease refractive index of 1.9, and a crystal with polished edges and a diffusing entrance face. These parameters minimize the statistical fluctuations of the light distribution on the photodiode array, leading to the best compromise between spatial linearity and energy collection.

  • System design for a 1 mm3 resolution animal PET scanner: microPET II

    Publication Year: 2000
    Cited by:  Papers (2)
    PDF (31 KB)

    Summary form only given, as follows. The authors are currently developing microPET II, a second-generation dedicated PET scanner for small animal imaging. The design goal is to achieve ~1 mm³ volumetric resolution, a larger axial field of view, and higher system sensitivity than the authors' original prototype system. To minimize development time and cost, Monte Carlo simulation was used to determine appropriate dimensions for the detector crystals and for the system geometry. Simulations of the existing microPET system were compared to its measured performance to estimate the errors in the simulations. Measurements from a pair of prototype detectors (12×12 arrays of 1×1×10 mm LSO crystals) agreed well with the simulation and provided a validation of the system design. The chosen design for microPET II has 0.975×0.975×12.5 mm LSO crystals arranged in a ring of 7.2 cm radius, with a 4.83 cm axial field of view. These specifications provide a volumetric image resolution <1.05 mm³ within a 10 mm radius of the CFOV. The absolute sensitivity is estimated to be 3.6 times that of the current microPET. With 2.7 times the axial field of view, microPET II will have up to 9.7 times the total efficiency of the existing microPET system for whole-body studies in mice and rats. (An illustrative sketch follows this entry.)

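The headline efficiency gain is simply the product of the two quoted factors:

```python
# 3.6x absolute sensitivity x 2.7x axial field of view
print(3.6 * 2.7)  # 9.72 -> the "up to 9.7 times" whole-body figure
```
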
  • A correction algorithm for partial volume effects in 3D PET imaging: principle and validation

    Publication Year: 2000 , Page(s): 18/62 - 18/66 vol.3
    Cited by:  Papers (37)
    PDF (384 KB)

    Presents a method for correcting 3D PET data for partial volume effects. The method assumes that the observed signal within a region of interest is a weighted sum of the true activities of the structures in the FOV of the scanner. The weighting coefficients represent the recovery and cross-contamination factors between structures, and are estimated by 3D analytical simulation of a high-resolution emitting model, such as an MRI-segmented volume registered with the corresponding PET volume. Validation of the algorithm using well-controlled data sets demonstrated its ability to recover true time-activity curves in small structures with errors smaller than 5%. (An illustrative sketch follows this entry.)

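The stated correction is linear: stacking the ROI measurements gives observed = G @ true, where G holds the recovery (diagonal) and cross-contamination (off-diagonal) coefficients, so the true activities follow from one linear solve. A sketch with made-up coefficients:

```python
import numpy as np

# G[i, j]: fraction of region j's true activity observed in region i.
G = np.array([[0.80, 0.15],
              [0.10, 0.70]])
observed = np.array([50.0, 30.0])  # measured ROI means (made up)

true_activity = np.linalg.solve(G, observed)
print(true_activity)  # the partial-volume-corrected activities
```
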
  • Monte Carlo methods in portal imaging

    Publication Year: 2000 , Page(s): 24/3 - 24/7 vol.3
    PDF (432 KB)

    An overview of applications of Monte Carlo methods in portal imaging is presented.

  • Quantitative cardiac PET imaging: reproducibility in an integrated analysis environment

    Publication Year: 2000 , Page(s): 18/77 - 18/80 vol.3
    PDF (352 KB)

    The strength of PET imaging in nuclear cardiology relies significantly on the use of physiological contrast agents, such as water or glucose analogues in very small doses, to assess function, perfusion and metabolism. However, one is continually challenged with maintaining a balance between acquisition capabilities, physiological knowledge and applications in patient care. The more recent use of dynamic studies with high temporal resolution, or as a function of the heart phases, has extended the spectrum of functional, non-invasive cardiac imaging. However, this is paralleled by a drastic increase in data and a lack of sufficient means of data analysis. In order to cope with these demands of extracting physiological information from vast amounts of physical data, the analysis environment 'MunichHeart' was developed. Three quantitative analysis modules from this toolbox are investigated with respect to intra- and interobserver variability: the assessment of myocardial viability, the determination of absolute myocardial blood flow, and the measurement of left ventricular ejection fraction. The results demonstrate that reproducibility, stability and flexibility can be achieved both in a research-oriented environment and in routine applications. This is the necessary prerequisite to facilitate the transfer of new methods into clinical reality.

  • Use of conventional gamma cameras for small animal imaging

    Publication Year: 2000 , Page(s): 21/86 - 21/90 vol.3
    Cited by:  Papers (2)
    PDF (488 KB)

    Compared with human imaging, the small volume of the mouse or rat allows certain geometrical advantages to be exploited in nuclear imaging. This paper investigates the options available for small animal imaging under the constraint of using a modern NaI gamma camera. There are two groups of collimation schemes: (1) those employing magnification, namely pinhole, multiple pinhole and coded aperture; and (2) the collimator methods, including parallel hole, "sparse hole", and converging designs such as fan beam and cone beam collimation. A comparison of sensitivity between a 3 mm tungsten pinhole and a high-resolution parallel hole collimator found that a factor of 2 improvement in sensitivity can be achieved by using the parallel hole collimator, with little loss in spatial resolution. The results of this comparison can be used to compute the expected performance of a sparse hole collimator. For large, relatively inexpensive, moderate-spatial-resolution detectors such as NaI, high resolution is achieved most readily by using magnification. For next-generation solid-state detectors with improved intrinsic spatial resolution and (initially) high cost, the sparse hole design achieves good system resolution and keeps the overall size and cost of the device to a minimum. (An illustrative sketch follows this entry.)

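For context on the sensitivity comparison, the geometric efficiency of a pinhole is commonly estimated with the standard on-axis relation g = d² sin³(theta) / (16 h²); a sketch (textbook formula, not the paper's measured numbers):

```python
import numpy as np

def pinhole_efficiency(d_mm, h_mm, theta_deg=90.0):
    """Geometric efficiency of a pinhole of diameter d for a point
    source at distance h, at angle theta from the pinhole plane."""
    return d_mm**2 * np.sin(np.radians(theta_deg))**3 / (16.0 * h_mm**2)

# A 3 mm pinhole viewed on-axis from 40 mm:
print(pinhole_efficiency(3.0, 40.0))  # ~3.5e-4
```
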
  • Non-invasive extraction of physiological parameters in quantitative PET studies using simultaneous estimation and cluster analysis

    Publication Year: 2000 , Page(s): 18/141 - 18/145 vol.3
    Cited by:  Papers (1)
    PDF (408 KB)

    Quantitative PET studies usually require invasive blood sampling from a peripheral artery to obtain an input function for accurate modelling; however, blood sampling is impractical in clinical PET studies. The authors recently proposed a non-invasive modelling approach that can simultaneously estimate the parameters describing both the input and output functions using two or more regions of interest (ROIs). This approach is still limited by the manual delineation of ROIs, which is subjective and time-consuming. Here the authors present an extension of their method in which ROI delineation is performed automatically by cluster analysis, so that subjectivity is reduced while at the same time ensuring that the extracted time-activity curves have distinct kinetics. The aim was to investigate the feasibility of using the kinetics extracted by cluster analysis for the non-invasive quantification of physiological parameters. The estimates and fitted curves obtained by simultaneous estimation are in good agreement with those obtained by model fitting with the measured input function (the gold standard method). It is concluded that cluster analysis is able to identify distinct kinetics and hence can be used for the non-invasive quantification of physiological parameters.
