IEEE Transactions on Biomedical Engineering

Issue 3 • March 2010

  • Table of contents

    Page(s): C1
  • IEEE Transactions on Biomedical Engineering publication information

    Page(s): C2
  • Table of contents

    Page(s): 497 - 498
  • A Feasible Solution to the Beam-Angle-Optimization Problem in Radiotherapy Planning With a DNA-Based Genetic Algorithm

    Page(s): 499 - 508

    Intensity-modulated radiotherapy (IMRT) is becoming a powerful clinical technique for improving the therapeutic ratio in cancer treatment. It has been demonstrated that selection of suitable beam angles is quite valuable for most treatment plans, especially for complicated tumor cases and when a limited number of beams is used. However, beam-angle optimization (BAO) remains a challenging inverse problem, mainly because of the huge computation time. This paper introduces a DNA-based genetic algorithm (DNA-GA) to solve the BAO problem, aiming to improve optimization efficiency. A feasible mapping was constructed between the general DNA-GA algorithm and the specific engineering problem of BAO. Specifically, a triplet code was used to represent a beam angle, and the angles of the several beams in a plan composed a DNA individual. A bit-mutation strategy was designed to assign different mutation probabilities to different segments of the DNA individuals, and a dynamic probability for structure-mutation operations was introduced to further improve the evolutionary process. Results on simulated and clinical cases showed that DNA-GA is feasible and effective for the BAO problem in IMRT planning and, to some extent, obtains optimized results faster than a conventional GA.

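    The triplet encoding described above can be pictured with a short sketch (hypothetical, not the authors' implementation): each beam angle is quantized and written as a three-base codon over {A, C, G, T}, giving 4^3 = 64 discrete angles, a plan concatenates one codon per beam, and bit mutation is applied with a different probability per codon position.

    ```python
    import random

    BASES = "ACGT"                      # four DNA bases act as base-4 digits
    N_ANGLES = 4 ** 3                   # one triplet encodes 64 discrete beam angles
    STEP = 360.0 / N_ANGLES             # assumed angular resolution (~5.6 deg)

    def angle_to_triplet(angle_deg):
        """Quantize a beam angle and encode it as a three-base codon."""
        idx = int(angle_deg / STEP) % N_ANGLES
        digits = [(idx // 16) % 4, (idx // 4) % 4, idx % 4]
        return "".join(BASES[d] for d in digits)

    def triplet_to_angle(codon):
        idx = sum(BASES.index(b) * 4 ** (2 - i) for i, b in enumerate(codon))
        return idx * STEP

    def make_individual(angles):
        """A DNA individual is the concatenation of one codon per beam."""
        return "".join(angle_to_triplet(a) for a in angles)

    def mutate(individual, probs=(0.02, 0.05, 0.10)):
        """Segment-wise bit mutation: each codon position has its own probability."""
        out = []
        for i, base in enumerate(individual):
            if random.random() < probs[i % 3]:
                out.append(random.choice(BASES.replace(base, "")))
            else:
                out.append(base)
        return "".join(out)

    plan = make_individual([0, 72, 144, 216, 288])   # a hypothetical 5-beam plan
    print(mutate(plan), [round(triplet_to_angle(plan[i:i + 3]), 1)
                         for i in range(0, len(plan), 3)])
    ```
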
  • Blood Glucose Prediction Using Stochastic Modeling in Neonatal Intensive Care

    Page(s): 509 - 518

    Hyperglycemia is a common metabolic problem in premature, low-birth-weight infants. Blood glucose homeostasis in this group is often disturbed by the immaturity of endogenous regulatory systems and the stress of their condition in intensive care. A dynamic model capturing the fundamental dynamics of the glucose regulatory system provides a measure of insulin sensitivity (SI). Forecasting the most probable future SI can significantly enhance real-time glucose control by providing a clinically validated level of confidence in the outcome of an intervention and, thus, increased safety against hypoglycemia. A 2-D kernel model of SI is fitted to 3567 h of identified, time-varying SI from retrospective clinical data of 25 neonatal patients with birth gestational ages of 23 to 28.9 weeks. Conditional probability estimates are used to determine SI probability intervals. A lag-2 stochastic model and adjustments of the variance estimator are used to explore the bias-variance tradeoff in the hour-to-hour variation of SI. The model captured 62.6% and 93.4% of in-sample SI predictions within the (25th-75th) and (5th-95th) probability forecast intervals, respectively. This overconservative result is also present in the cross-validation cohorts and in the lag-2 model. Adjusting the variance estimator showed that a reduction to 10%-50% of the original value provided optimal coverage, with 54.7% and 90.9% in the (25th-75th) and (5th-95th) intervals. A stochastic model of SI provided conservative forecasts, which can add a layer of safety to real-time control. Adjusting the variance estimator provides a more accurate, cohort-specific stochastic model of SI dynamics in the neonate.

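    The forecasting step can be pictured with a toy sketch, assuming a synthetic hourly SI trace: a Gaussian kernel weights historical hour-to-hour transitions by their similarity to the current SI value, and weighted percentiles of the next-hour values yield the forecast interval. This only illustrates conditional-probability forecasting; it is not the paper's fitted 2-D kernel model.

    ```python
    import numpy as np

    def forecast_interval(si_history, si_now, bandwidth=0.1, q=(5, 25, 75, 95)):
        """Kernel-weighted percentiles of next-hour SI, conditioned on the current SI."""
        si_history = np.asarray(si_history, dtype=float)
        x, y = si_history[:-1], si_history[1:]            # hour-to-hour transitions
        w = np.exp(-0.5 * ((x - si_now) / bandwidth) ** 2)
        w /= w.sum()
        order = np.argsort(y)
        cdf = np.cumsum(w[order])
        return {p: float(np.interp(p / 100.0, cdf, y[order])) for p in q}

    rng = np.random.default_rng(0)
    si = np.cumsum(rng.normal(0, 0.02, 500)) + 0.5        # synthetic SI trace (arbitrary units)
    print(forecast_interval(si, si_now=si[-1]))
    ```
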
  • Modeling and Identification of the Electrohysterographic Volume Conductor by High-Density Electrodes

    Page(s): 519 - 527

    The surface electrohysterographic (EHG) signal represents the bioelectrical activity that triggers the mechanical contraction of the uterine muscle. Previous work demonstrated the relevance of EHG signal analysis for fetal and maternal monitoring as well as for the prognosis of preterm labor. However, before diagnostic and prognostic EHG techniques can be introduced into clinical practice, further insights are needed into the properties of the uterine electrical activation and its propagation through biological tissues. Mathematical modeling can provide an important contribution to studying these phenomena in humans. A five-parameter analytical model of the EHG volume conductor and the cellular action potential (AP) is proposed here and tested on EHG signals recorded by a grid of 64 high-density electrodes. The model parameters are identified by a least-squares optimization method that uses a subset of electrodes. The parameters representing fat and abdominal muscle thickness are also measured by echography. The mean correlation coefficient and the standard deviation of the difference between the echographic and EHG estimates were 0.94 and 1.9 mm, respectively, with no bias present. These results suggest that the model provides an accurate description of the EHG AP and the volume conductor, with promising perspectives for future applications.

  • Adaptive Mesh Refinement Techniques for 3-D Skin Electrode Modeling

    Page(s): 528 - 533

    In this paper, we develop a 3-D adaptive mesh refinement technique. The algorithm is constructed with an electrical impedance tomography forward problem and the finite-element method in mind, but is applicable to a much wider class of problems. We use the method to evaluate the distribution of currents injected into a model of a human body through skin contact electrodes. We demonstrate that the technique leads to a significantly improved solution, particularly near the electrodes. We discuss error estimation, efficiency, and quality of the refinement algorithm, as well as methods that preserve mesh attributes in the refinement process.

  • Longitudinal Falls-Risk Estimation Using Triaxial Accelerometry

    Page(s): 534 - 541

    Falls among the elderly are a major cause of morbidity and injury, particularly in the over-65 age group. Validated clinical tests and associated models, built upon assessment of functional ability, have been devised to estimate an individual's risk of falling in the near future, and those identified as at risk may be targeted for intervention. Migrating these clinical falls-risk models to a surrogate technique suitable for use in unsupervised environments could broaden the reach of falls-risk screening beyond the clinical arena. This study details an approach that characterizes the movements of 68 elderly subjects performing a directed routine of unsupervised physical tasks. The movement characterization is achieved with a triaxial accelerometer. A number of fall-related features extracted from the accelerometry signals, combined with a linear least-squares model, map to a clinically validated measure of falls risk with a correlation of 0.81 (p < 0.001).

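    The final mapping step lends itself to a minimal sketch, assuming a matrix of accelerometry-derived features per subject and a clinical falls-risk score (all placeholder data): ordinary least squares fits the linear model and the correlation coefficient quantifies agreement, as in the result quoted above.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(68, 5))                  # 68 subjects x 5 placeholder features
    true_w = np.array([0.8, -0.4, 0.3, 0.0, 0.5])
    risk = X @ true_w + rng.normal(0, 0.3, 68)    # stand-in for a clinical falls-risk score

    A = np.column_stack([X, np.ones(len(X))])     # add an intercept column
    w, *_ = np.linalg.lstsq(A, risk, rcond=None)  # linear least-squares model
    pred = A @ w
    r = np.corrcoef(pred, risk)[0, 1]             # correlation between model output and score
    print(f"correlation = {r:.2f}")
    ```
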
  • Multiclass Real-Time Intent Recognition of a Powered Lower Limb Prosthesis

    Page(s): 542 - 551
    Multimedia

    This paper describes a control architecture and intent-recognition approach for the real-time supervisory control of a powered lower limb prosthesis. The approach infers the user's intent to stand, sit, or walk by recognizing patterns in prosthesis sensor data in real time, without the need for instrumentation of the sound-side leg. Specifically, the intent recognizer utilizes time-based features extracted from frames of prosthesis signals, which are subsequently reduced to a lower dimensionality for computational efficiency. These data are initially used to train intent models, which classify the patterns as standing, sitting, or walking. The trained models are subsequently used to infer the user's intent in real time. In addition to describing the generalized control approach, this paper describes the implementation of this approach on a single unilateral transfemoral amputee subject and demonstrates its effectiveness experimentally. In the real-time supervisory control experiments, the intent recognizer identified all 90 activity-mode transitions, switching the underlying middle-level controllers without any delay perceivable by the user. The intent recognizer also identified six activity-mode transitions that were not intended by the user. Due to the intentionally overlapping functionality of the middle-level controllers, these incorrect classifications neither caused problems in functionality nor were perceived by the user.

  • Analysis and Modeling of Snore Source Flow With Its Preliminary Application to Synthetic Snore Generation

    Page(s): 552 - 560

    With the emerging use of snore properties for clinical purposes, there is a need to understand the characteristics of snore source flow (SF), the acoustic source in snore production. This paper analyzes and models both SF and its derivative (SFD), along with a preliminary application to the generation of synthetic snores. SFs and SFDs were extracted from natural snores via an iterative adaptive inverse filtering approach, and subsequently parameterized into various time- and amplitude-based parameters to quantify the oscillatory maneuvers of the snore excitation source (ES). The SF and SFD waveforms were also modeled using, respectively, the first and second derivatives of the Gaussian probability density function. Subjective and objective measures, including a paired-comparison score and the sum-of-squared error, were used to appraise the performance of the SFD model in producing natural-sounding snores. Results consistently show that: 1) the shapes of SF pulses differ among snores and can be associated with the dynamic biomechanical properties (e.g., compliance and elasticity) of the ES; 2) changes to the SF or SFD pulse shape can affect snore properties, both acoustically and perceptually; and 3) the proposed SFD model can generate close-to-natural-sounding snores. Further research in this area can potentially yield valuable benefits for snore-oriented applications.

  • Unsupervised Bayesian Decomposition of Multiunit EMG Recordings Using Tabu Search

    Page(s): 561 - 571

    Intramuscular electromyography (EMG) signals are usually decomposed with semiautomatic procedures that involve interaction with an expert operator. In this paper, a Bayesian statistical model and a maximum a posteriori (MAP) estimator are used to solve the problem of multiunit EMG decomposition in a fully automatic way. The MAP estimation exploits both the likelihood of the reconstructed EMG signal and physiological constraints, such as the discharge-pattern regularity and the refractory period of muscle fibers, as prior information integrated in a Bayesian framework. A Tabu search is proposed to efficiently tackle the NP-hard problem of optimization with respect to the motor-unit discharge patterns. The method is fully automatic and was tested on simulated and experimental EMG signals. Compared with the semiautomatic decomposition performed by an expert operator, the proposed method achieved an accuracy of 90.0% ± 3.8% when decomposing single-channel intramuscular EMG signals recorded from the abductor digiti minimi muscle at contraction forces of 5% and 10% of the maximal force. The method can also be applied to the automatic identification and classification of spikes from other neural recordings.

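    The sketch below shows a generic Tabu search loop of the kind referred to above, applied to a placeholder assignment problem (spikes assigned to motor units under a toy cost); the neighborhood and cost are illustrative and do not reproduce the paper's Bayesian MAP objective.

    ```python
    import random

    def tabu_search(initial, cost, neighbors, n_iter=200, tabu_len=20):
        """Move to the best non-tabu neighbor each iteration, keeping a short
        memory of recently visited solutions to escape local minima."""
        current = best = initial
        best_cost = cost(best)
        tabu = [initial]
        for _ in range(n_iter):
            candidates = [s for s in neighbors(current) if s not in tabu]
            if not candidates:
                break
            current = min(candidates, key=cost)
            tabu.append(current)
            if len(tabu) > tabu_len:
                tabu.pop(0)
            if cost(current) < best_cost:
                best, best_cost = current, cost(current)
        return best, best_cost

    # Toy example: assign each of 30 spikes to one of 3 units under a placeholder cost.
    cost = lambda s: sum((s[i] - s[i - 1]) ** 2 for i in range(1, len(s)))
    neighbors = lambda s: [s[:i] + (random.randrange(3),) + s[i + 1:] for i in range(len(s))]
    start = tuple(random.randrange(3) for _ in range(30))
    print(tabu_search(start, cost, neighbors))
    ```
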
  • On Normalized MSE Analysis of Speech Fundamental Frequency in the Cochlear Implant-Like Spectrally Reduced Speech

    Page(s): 572 - 577

    In this paper, we present a quantitative study of the speech fundamental frequency (F0) of cochlear implant-like spectrally reduced speech (SRS). The SRS was synthesized from the subband amplitude and frequency modulations (AM and FM) of original clean speech utterances selected from the TI-digits database. The SRS synthesis algorithm was derived from the frequency amplitude modulation encoding (FAME) strategy proposed by Nie et al. (2005). The normalized MSEs (NMSEs) between the F0 of the original clean speech and that of the SRS were analyzed. The NMSE analysis revealed greater F0 distortion in the AM-based SRS, which is the acoustic simulation of present-day cochlear implants, than in the FAME-based SRS. This evidence is consistent with reports that current cochlear implant users have difficulty with speaker recognition (Zeng et al., 2005). Further, the results showed that retaining the rapidly varying FM components reduces the F0 distortion in the FAME-based SRS at low spectral resolution.

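    For reference, a normalized MSE between a reference and a test F0 contour can be computed as below; the exact normalization used in the paper may differ.

    ```python
    import numpy as np

    def nmse(f0_ref, f0_test):
        """Normalized mean squared error between two F0 contours (in Hz)."""
        f0_ref, f0_test = np.asarray(f0_ref, float), np.asarray(f0_test, float)
        return np.sum((f0_ref - f0_test) ** 2) / np.sum(f0_ref ** 2)

    print(nmse([120, 122, 125, 123], [118, 121, 127, 124]))
    ```
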
  • Chemometric Approach for Improving VCSEL-Based Glucose Predictions

    Page(s): 578 - 585

    Optical methods are among the painless and promising techniques that can be used for blood glucose prediction in diabetes patients. The use of thermally tunable vertical-cavity surface-emitting lasers (VCSELs) as the light source to obtain absorption spectra, along with the multivariate technique of partial least squares for analysis and glucose estimation, has been demonstrated. With further improvements from data preprocessing and the use of two VCSELs, we have achieved clinically acceptable accuracy over the physiological range in buffered solutions. Results of previous experiments conducted with white light showed that increasing the number of wavelength intervals used in the analysis improves prediction accuracy. The average prediction error, using absorption spectra from one VCSEL in aqueous solution, is about 1.2 mM; this error is reduced to 0.8 mM using absorption spectra from two VCSELs. This result confirms that increasing the number of VCSELs improves prediction accuracy.

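    A minimal sketch of the chemometric step, assuming absorption spectra from two VCSELs stacked into one predictor matrix and synthetic placeholder data: scikit-learn's PLSRegression stands in for the partial least squares calibration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)
    n_samples, n_wavelengths = 60, 40
    glucose = rng.uniform(2, 20, n_samples)                      # mM, physiological-ish range
    spectrum1 = np.outer(glucose, rng.normal(size=n_wavelengths))
    spectrum2 = np.outer(glucose, rng.normal(size=n_wavelengths))
    X = np.hstack([spectrum1, spectrum2])
    X += rng.normal(0, 0.5, X.shape)                             # measurement noise

    pls = PLSRegression(n_components=5).fit(X, glucose)          # PLS calibration model
    pred = pls.predict(X).ravel()
    print("mean absolute error (mM):", round(float(np.abs(pred - glucose).mean()), 2))
    ```
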
  • Bayesian Approach to Patient-Tailored Vectorcardiography

    Page(s): 586 - 595

    For the assessment of specific cardiac pathologies, vectorcardiography is generally considered superior to electrocardiography. Existing vectorcardiography methods operate by calculating the vectorcardiogram (VCG) as a fixed linear combination of ECG signals. These methods, with the inverse Dower matrix method the current standard, are therefore not flexible with respect to different body compositions and geometries. Hence, they cannot be applied with accuracy to patients who do not conform to the fixed standard. Typical examples of such patients are obese patients and fetuses. For the latter category, when recording the fetal ECG from the maternal abdomen, the distance of the fetal heart from the electrodes is unknown; consequently, the signal attenuation and transformation per electrode are also unknown. In this paper, a Bayesian method is developed that estimates the VCG and, to some extent, also the signal attenuation in multichannel ECG recordings from either the adult 12-lead ECG or the maternal abdomen. This is done by determining the VCG and signal attenuation for which the joint probability over both variables is maximal given the observed ECG signals. The underlying joint probability distribution is determined by assuming the ECG signals to originate from scaled VCG projections and additive noise. With this method, a VCG tailored to each specific patient is determined. The method is compared to the inverse Dower matrix method by applying both methods to standard 12-lead ECG recordings and evaluating the performance in predicting ECG signals from the determined VCG. In addition, to model nonstandard patients, the 12-lead ECG signals are randomly scaled and, once more, the performance in predicting ECG signals from the VCG is compared between both methods. Finally, both methods are also compared on fetal ECG signals obtained from the maternal abdomen. For patients conforming to the standard, both methods perform similarly, with the developed method performing marginally better. For scaled ECG signals and fetal ECG signals, the developed method significantly outperforms the inverse Dower matrix method.

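    A stripped-down sketch of the central idea of jointly estimating the VCG and per-electrode scaling: with a known (here random placeholder) projection matrix from the 3-D VCG to the leads, alternating least squares updates the VCG and a diagonal gain per lead. The paper's Bayesian method adds priors and a noise model; this is only the skeleton, and the estimates are recovered up to a global scale.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    T, n_leads = 400, 8
    P = rng.normal(size=(n_leads, 3))             # placeholder lead projection matrix (VCG -> ECG)
    vcg_true = np.cumsum(rng.normal(size=(3, T)), axis=1)
    gains_true = rng.uniform(0.5, 1.5, n_leads)   # unknown per-electrode attenuation
    Y = np.diag(gains_true) @ P @ vcg_true + 0.05 * rng.normal(size=(n_leads, T))

    gains = np.ones(n_leads)
    for _ in range(20):                           # alternate VCG and gain estimates
        A = np.diag(gains) @ P
        vcg = np.linalg.lstsq(A, Y, rcond=None)[0]              # LS update of the 3-D VCG
        M = P @ vcg
        gains = np.sum(Y * M, axis=1) / np.sum(M * M, axis=1)   # per-lead scalar LS update
    print(np.round(gains / gains_true, 2))        # roughly constant ratio across leads
    ```
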
  • Across-Frequency Delays Based on the Cochlear Traveling Wave: Enhanced Speech Presentation for Cochlear Implants

    Page(s): 596 - 606

    Cochlear implants stimulate the auditory nerve with the outputs of a bank of narrow-band filters. We propose that cochlear implant users are better able to perceive speech when these frequency bands are desynchronized, as occurs in the normal cochlea. The first part of this study was a computational investigation of the effect of across-frequency delays on the stimulation patterns generated by the advanced combination encoder (ACE) sound-processing strategy. By offsetting frequency bands from each other, fewer stimuli were discarded from voiced speech by maxima selection; background noise, however, was not affected in this way. The second part of this study was an assessment of speech perception with across-frequency delays in cochlear implant users with the ACE strategy. In the perception of sentences in noise, three subjects improved with delays, four showed no change, and one was worse. For words in quiet, four subjects had improved word recognition and four showed no change. A significant group improvement (P < 0.05) was seen for speech in quiet. These results are encouraging for cochlear implant sound processing because across-frequency delays can be incorporated easily and efficiently into existing sound-processing strategies.

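    The delay manipulation can be sketched as follows, assuming stand-in filterbank envelopes and made-up delay values (longer for apical, low-frequency bands, as in the cochlear traveling wave), followed by an n-of-m maxima selection of the kind used in ACE-like strategies. The actual ACE processing and delay model are more involved.

    ```python
    import numpy as np

    fs = 1000                                      # envelope sampling rate (Hz), placeholder
    n_bands, n_samples = 22, 2000
    rng = np.random.default_rng(4)
    envelopes = np.abs(rng.normal(size=(n_bands, n_samples)))   # stand-in filterbank envelopes

    delays_ms = np.linspace(8.0, 0.5, n_bands)     # assumed values, band 0 = lowest frequency
    delays_smp = np.round(delays_ms * fs / 1000).astype(int)

    delayed = np.zeros_like(envelopes)
    for b in range(n_bands):
        d = delays_smp[b]
        delayed[b, d:] = envelopes[b, : n_samples - d]          # shift each band by its delay

    n_maxima = 8                                   # keep 8 of 22 bands per analysis frame
    selected = np.argsort(delayed, axis=0)[-n_maxima:, :]
    print(selected.shape)                          # (8, n_samples): bands kept per frame
    ```
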
  • Improving ECG Beats Delineation With an Evolutionary Optimization Process

    Page(s): 607 - 615

    As in other complex signal processing tasks, ECG beat delineation algorithms usually consist of a set of processing modules, each characterized by a certain number of parameters (filter cutoff frequencies, threshold levels, time windows, etc.). It is well recognized that the adjustment of these parameters is a complex task that is traditionally performed empirically and manually, based on the experience of the designer. In this paper, we propose a new automated and quantitative method to optimize the parameters of such complex signal processing algorithms. To solve this multiobjective optimization problem, an evolutionary algorithm (EA) is proposed. This method for parameter optimization is applied to a wavelet-transform-based ECG delineator that has previously shown good performance. An evaluation of the final delineator, using the optimal parameters, was performed on the QT database from PhysioNet, and the results are compared with previous algorithms reported in the literature. The optimized parameters provide a more accurate delineation, with a global improvement of 7.7% over all the criteria evaluated with respect to the best results found in the literature, which demonstrates the interest of the approach.

  • A Spatiotemporal Framework for MEG/EEG Evoked Response Amplitude and Latency Variability Estimation

    Page(s): 616 - 625

    This paper presents a spatiotemporal framework for estimating single-trial response latencies and amplitudes from evoked-response magnetoencephalographic/electroencephalographic data. Spatial and temporal bases are employed to capture the aspects of the evoked response that are consistent across trials. Trial amplitudes are assumed independent but drawn from the same underlying normal distribution with unknown mean and variance. The trial latency is assumed to be deterministic but unknown. We assume that the noise is spatially correlated with an unknown covariance matrix. We introduce a generalized expectation-maximization algorithm called Trial Variability in Amplitude and Latency (TriViAL) that computes the maximum-likelihood (ML) estimates of the amplitudes, latencies, basis coefficients, and noise covariance matrix. The proposed approach also performs ML source localization by scanning the TriViAL algorithm over spatial bases corresponding to different locations on the cortical surface; source locations are identified as the locations with large likelihood values. The effectiveness of the TriViAL algorithm is demonstrated using simulated data and human evoked-response experiments. The localization performance is validated using tactile stimulation of the finger. The efficacy of the algorithm in estimating latency variability is shown using the known dependence of the M100 auditory response latency on stimulus tone frequency. We also demonstrate that estimation of the response amplitude is improved when latency is included in the signal model.

  • Automatic Detection of Swallowing Events by Acoustical Means for Applications of Monitoring of Ingestive Behavior

    Page(s): 626 - 633

    Our understanding of the etiology of obesity and overweight is incomplete due to the lack of objective and accurate methods for monitoring of ingestive behavior (MIB) in the free-living population. Our research has shown that the frequency of swallowing may serve as a predictor for detecting food intake, differentiating liquids and solids, and estimating ingested mass. This paper proposes and compares two methods of acoustical swallowing detection from sounds contaminated by motion artifacts, speech, and external noise. Methods based on the mel-scale Fourier spectrum, wavelet packets, and support vector machines are studied, considering the effects of epoch size, level of decomposition, and lagging on classification accuracy. The methodology was tested on a large dataset (64.5 h with a total of 9966 swallows) collected from 20 human subjects with various degrees of adiposity. The average weighted epoch-recognition accuracy for intravisit individual models was 96.8%, which resulted in 84.7% average weighted accuracy in the detection of swallowing events. These results suggest high efficiency of the proposed methodology in separating swallowing sounds from artifacts that originate from respiration, intrinsic speech, head movements, food ingestion, and ambient noise. The recognition accuracy was not related to body mass index, suggesting that the methodology is suitable for obese individuals.

  • An Online Self-Tunable Method to Denoise CGM Sensor Data

    Page(s): 634 - 641

    Continuous glucose monitoring (CGM) devices can be very useful in diabetes management. Unfortunately, their use in online applications, e.g., for hypo-/hyperglycemia alert generation, is made difficult by random measurement noise. Remarkably, the SNR of CGM data varies with the sensor and with the individual. As a consequence, approaches in which filter parameters are not allowed to adapt to the current SNR are likely to be suboptimal. In this paper, we present a new online methodology to reduce noise in CGM signals by a Kalman filter (KF) whose unknown parameters are adjusted for a given individual by a stochastically based smoothing criterion exploiting data from a burn-in interval. The performance of the new KF approach is quantitatively assessed on Monte Carlo simulations and 24 real CGM datasets. Our results are compared with those obtained by a moving-average (MA) filtering approach with fixed parameters, of the kind likely in use in all commercial CGM devices. Results show that the new KF approach performs much better than MA. For instance, on real data, for comparable signal denoising, the delay introduced by the KF is about 35% less than that of the MA filter.

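    A minimal scalar Kalman filter of the kind described can be sketched as follows, assuming a random-walk model of glucose and hand-picked (not individually tuned) process and measurement variances; the paper's contribution is precisely the online, per-individual tuning of these parameters, which is not reproduced here.

    ```python
    import numpy as np

    def kalman_denoise(y, q=0.05, r=4.0):
        """Scalar Kalman filter with a random-walk state model.
        q: process variance, r: measurement variance (assumed, not tuned)."""
        x, p = y[0], r                 # initial state and variance
        out = np.empty_like(y, dtype=float)
        for k, meas in enumerate(y):
            p = p + q                  # predict: state unchanged, uncertainty grows
            g = p / (p + r)            # Kalman gain
            x = x + g * (meas - x)     # update with the new CGM sample
            p = (1 - g) * p
            out[k] = x
        return out

    rng = np.random.default_rng(5)
    t = np.arange(0, 24 * 60, 5)                        # 24 h of 5-min CGM samples
    true = 120 + 40 * np.sin(2 * np.pi * t / (6 * 60))  # synthetic glucose profile (mg/dL)
    cgm = true + rng.normal(0, 8, t.size)               # noisy sensor readings
    print(np.round(kalman_denoise(cgm)[:5], 1))
    ```
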
  • Computerized Image-Based Detection and Grading of Lymphocytic Infiltration in HER2+ Breast Cancer Histopathology

    Page(s): 642 - 653

    The identification of phenotypic changes in breast cancer (BC) histopathology that reflect corresponding molecular changes is of significant clinical importance in predicting disease outcome. One such example is the presence of lymphocytic infiltration (LI) in histopathology, which has been correlated with nodal metastasis and distant recurrence in HER2+ BC patients. In this paper, we present a computer-aided diagnosis (CADx) scheme to automatically detect and grade the extent of LI in digitized HER2+ BC histopathology. Lymphocytes are first automatically detected by a combination of region-growing and Markov random field algorithms. Using the centers of individual detected lymphocytes as vertices, three graphs (Voronoi diagram, Delaunay triangulation, and minimum spanning tree) are constructed, and a total of 50 image-derived features describing the arrangement of the lymphocytes are extracted from each sample. A nonlinear dimensionality reduction scheme, graph embedding (GE), is then used to project the high-dimensional feature vector into a reduced 3-D embedding space. A support vector machine classifier is used to discriminate samples with high and low LI in the reduced-dimensional embedding space. A total of 41 HER2+ hematoxylin-and-eosin-stained images obtained from 12 patients were considered in this study. Over more than 100 three-fold cross-validation trials, the architectural feature set successfully distinguished samples of high and low LI levels with a classification accuracy greater than 90%. The popular unsupervised Varma-Zisserman texton-based classification scheme was used for comparison and yielded a classification accuracy of only 60%. Additionally, the projection of the 50 image-derived features for all 41 tissue samples into a reduced-dimensional space via GE allowed for the visualization of a smooth manifold that revealed a continuum between low, intermediate, and high levels of LI. Since the extent of LI in BC biopsy specimens is known to be a prognostic indicator, our CADx scheme will potentially help clinicians determine disease outcome and allow them to make better therapy recommendations for patients with HER2+ BC.

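    The graph-feature step can be sketched with SciPy, assuming an array of detected lymphocyte centers: the Delaunay triangulation and a minimum spanning tree give two of the three graphs, from which simple edge-length statistics are derived. The features here are illustrative, not the paper's 50-feature set.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree

    rng = np.random.default_rng(6)
    centers = rng.uniform(0, 500, size=(120, 2))          # placeholder lymphocyte centroids (pixels)

    # Delaunay triangulation: collect unique edges and their lengths.
    tri = Delaunay(centers)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    d_len = np.array([np.linalg.norm(centers[a] - centers[b]) for a, b in edges])

    # Minimum spanning tree on the complete pairwise-distance graph.
    mst = minimum_spanning_tree(squareform(pdist(centers)))
    m_len = mst.data                                      # nonzero MST edge weights

    features = [d_len.mean(), d_len.std(), m_len.mean(), m_len.std()]
    print(np.round(features, 1))
    ```
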
  • Real-time Chirp-Coded Imaging With a Programmable Ultrasound Biomicroscope

    Page(s): 654 - 664
    Multimedia

    Ultrasound biomicroscopy (UBM) of mice can provide a testing ground for new imaging strategies. The UBM system presented in this paper facilitates the development of imaging and measurement methods with programmable design, arbitrary waveform coding, broad bandwidth (2-80 MHz), digital filtering, programmable processing, RF data acquisition, multithread/multicore real-time display, and rapid mechanical scanning (~170 frames/s). To demonstrate the capabilities of the UBM system, chirp sequences (1.28, 2.56, and 5.12 µs durations) with matched-filter analysis are implemented in real time. Chirp and conventional impulse imaging (31 and 46 MHz center frequencies) of a wire phantom at fast sectorial scanning (0.7° ms-1, 20 frames/s one-way image rate) are compared. Axial and lateral resolutions at the focus with chirps approach impulse-imaging resolutions. Chirps yield a 10-15 dB gain in SNR and a 2-3 mm gain in imaging depth. Real-time impulse and chirp-coded imaging (at 10-5 frames/s) are demonstrated in the mouse, in vivo. The system's open structure facilitates the testing and implementation of new sequences.

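    Chirp coding with matched-filter compression, as used above, can be sketched with SciPy on synthetic data; the sampling rate, bandwidth, and echo parameters below are assumptions, not the system's actual settings.

    ```python
    import numpy as np
    from scipy.signal import chirp, fftconvolve

    fs = 400e6                                     # sampling rate (Hz), placeholder
    dur = 2.56e-6                                  # one of the chirp durations quoted above
    t = np.arange(0, dur, 1 / fs)
    tx = chirp(t, f0=20e6, t1=dur, f1=42e6)        # assumed band around a ~31 MHz center
    tx *= np.hanning(t.size)                       # mild apodization

    rng = np.random.default_rng(7)
    rx = np.zeros(4096)                            # simulated receive line
    delay = 1500
    rx[delay:delay + tx.size] += 0.2 * tx          # attenuated, delayed echo
    rx += 0.05 * rng.normal(size=rx.size)          # receiver noise

    matched = tx[::-1]                             # matched filter = time-reversed transmit pulse
    compressed = fftconvolve(rx, matched, mode="same")
    print("compressed peak near sample", int(np.argmax(np.abs(compressed))))
    ```
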
  • Color Graphs for Automated Cancer Diagnosis and Grading

    Page(s): 665 - 674

    This paper reports a new structural method to mathematically represent and quantify a tissue for the purpose of automated and objective cancer diagnosis and grading. Unlike previous structural methods, which quantify a tissue by considering the spatial distribution of its cell nuclei, the proposed method relies on the distributions of multiple tissue components for the representation. To this end, it constructs a graph on multiple tissue components and colors its edges depending on the component types of their endpoints. Subsequently, it extracts a new set of structural features from these color graphs and uses these features in the classification of tissues. Working with images of colon tissues, our experiments demonstrate that the color-graph approach leads to 82.65% test accuracy and significantly outperforms its counterparts.

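    The color-graph construction can be pictured with a small sketch, assuming component centroids with type labels (nucleus, lumen, stroma are placeholders): nodes within a distance threshold are connected, each edge is "colored" by the unordered pair of its endpoint types, and the normalized edge-color histogram serves as a simple structural feature vector.

    ```python
    import numpy as np
    from itertools import combinations, combinations_with_replacement

    rng = np.random.default_rng(8)
    types = ["nucleus", "lumen", "stroma"]               # placeholder tissue components
    pts = rng.uniform(0, 100, size=(80, 2))
    labels = rng.choice(types, size=80)

    radius = 15.0                                        # connect components closer than this
    colors = {tuple(sorted(c)): 0 for c in combinations_with_replacement(types, 2)}
    for i, j in combinations(range(len(pts)), 2):
        if np.linalg.norm(pts[i] - pts[j]) < radius:
            colors[tuple(sorted((labels[i], labels[j])))] += 1

    total = sum(colors.values()) or 1
    features = {c: round(n / total, 3) for c, n in colors.items()}   # edge-color histogram
    print(features)
    ```
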
  • Automated Detection and Segmentation of Large Lesions in CT Colonography

    Page(s): 675 - 684

    Computerized tomographic colonography is a minimally invasive technique for the detection of colorectal polyps and carcinoma. Computer-aided diagnosis (CAD) schemes are designed to help radiologists locate colorectal lesions in an efficient and accurate manner. Large lesions are often initially detected as multiple small objects, and as a result such lesions may be missed or misclassified by CAD systems. We propose a novel method for automated detection and segmentation of all large lesions, i.e., large polyps as well as carcinoma. Our detection algorithm is incorporated in a classical CAD system. Candidate detection comprises preselection based on a local measure of protrusion and clustering based on geodesic distance. The generated clusters are further segmented and analyzed. The segmentation algorithm is a thresholding operation in which the threshold is adaptively selected, and the segmentation provides a size measurement that is used to compute the likelihood that a cluster is a large lesion. The large-lesion detection algorithm was evaluated on data from 35 patients having 41 large lesions (19 of which were malignant) confirmed by optical colonoscopy. At five false positives (FPs) per scan, the classical system achieved a sensitivity of 78%, while the system augmented with the large-lesion detector achieved 83% sensitivity. For malignant lesions, the performance at five FPs per scan increased from 79% to 95%. The good results on malignant lesions demonstrate that the proposed algorithm may provide relevant additional information for the clinical decision process.

  • Detection of Quality Visualization of Appendiceal Orifices Using Local Edge Cross-Section Profile Features and Near Pause Detection

    Page(s): 685 - 695

    Colonoscopy is an endoscopic technique that allows a physician to inspect the inside of the human colon. The appearance of the appendiceal orifice during colonoscopy indicates a complete traversal of the colon, which is an important quality indicator of the colon examination. In this paper, we present two new algorithms. The first algorithm determines whether an image shows a clearly seen appendiceal orifice. This algorithm uses our new local features based on geometric shape, illumination difference, and intensity changes along the normal direction (cross section) of an edge. The second algorithm determines whether a video is an appendix video, i.e., a video showing at least 3 s of appendiceal orifice inspection; such a video indicates good visualization of the appendiceal orifice. This algorithm utilizes frame intensity histograms to detect a near camera pause during the appendiceal orifice inspection. We tested our algorithms on 23 videos captured from two types of endoscopy procedures. The average sensitivity and specificity for the detection of appendiceal orifice images with the often-seen crescent appendiceal orifice shape are 96.86% and 90.47%, respectively. The average accuracy for the detection of appendix videos is 91.30%.

  • Building and Tracking Root Shapes

    Page(s): 696 - 707

    An algorithm aiming at robust and simultaneous registration of a sequence of 3-D shapes was recently presented by Jacq et al. [IEEE Trans. Biomed. Eng., vol. 55, no. 5, 2008]. This algorithm builds an implicit representation of the shapes' common root shape (RS). Particular emphasis was put on the median consensus shape, which is a specific type of RS. Unlike that previous study, which mainly focused on the algorithm's foundations while dealing with very specific application examples, this paper attempts to show the versatility of the RS concept through a set of three problems spanning a wider scope of application. The first problem concerns the design of prosthetic cortical plates for the hip joint. It shows how an explicit reconstruction of the RS, together with its consensus map, can provide an intermediary anatomical support from which pragmatic choices can be made, thereby achieving a tradeoff between morphological, surgical, and production considerations. The second problem addresses in vivo real-time shoulder biomechanics through a miniature 3-D video camera. This new protocol implicitly operates through RS tracking of the content of virtual spotlights. It is shown that the current medical-oriented protocol, while operating within expert offices using low-cost equipment, can challenge high-end professional equipment despite some limitations of the 3-D video cameras currently available. The last problem deals with respiratory motion, an auxiliary measurement required by some medical imaging systems, which can be handled as a basic application case of the former new protocol.


Aims & Scope

IEEE Transactions on Biomedical Engineering contains basic and applied papers dealing with biomedical engineering. Papers range from engineering development in methods and techniques with biomedical applications to experimental and clinical investigations with engineering contributions.


Meet Our Editors

Editor-in-Chief
Bin He
Department of Biomedical Engineering