
2008 Cairo International Biomedical Engineering Conference (CIBEC 2008)

18-20 Dec. 2008


Displaying Results 1 - 25 of 89
  • 2008 Cairo International Biomedical Engineering Conference

    Page(s): 1
  • [Copyright notice]

    Page(s): 2
  • Microcalcifications Enhancement in Digital Mammograms using Fractal Modeling

    Page(s): 1 - 5

    Mammography - breast X-ray imaging - is considered the most effective, low-cost, and reliable method for the early detection of breast cancer. Clustered microcalcifications are an important early sign of breast cancer. In this paper we introduce a computer-aided diagnosis (CAD) system, intended as an aid to radiologists, that can detect microcalcifications faster than a traditional screening program and without the drawbacks attributable to human factors. The feature extraction technique is based on fractal modeling of a locally processed region of interest (ROI). Classification between normal and microcalcification regions is performed using the voting K-nearest neighbor classifier and the support vector machine classifier. The two classification techniques were compared within the system to reach a better classification decision.
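The voting K-nearest neighbor step described above reduces to a majority vote among the closest training feature vectors. A minimal stdlib-Python sketch, using hypothetical two-dimensional fractal-model features (the paper's actual feature set is not given here):

```python
from math import dist
from collections import Counter

def knn_vote(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training samples (Euclidean distance)."""
    neighbours = sorted(train, key=lambda s: dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy fractal-style features per ROI (illustrative values only).
train = [((0.1, 0.2), "normal"), ((0.2, 0.1), "normal"),
         ((0.9, 0.8), "microcalcification"), ((0.8, 0.9), "microcalcification")]
print(knn_vote(train, (0.85, 0.85)))  # → microcalcification
```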
  • Digital Color Doppler Signal Processing

    Page(s): 1 - 5

    In a color Doppler ultrasound imaging system, digital signal processing is mainly based on the Hilbert filter and clutter rejection filters. The Hilbert filter removes the negative-frequency component of the real Doppler signal to produce the analytic signal. Clutter rejection filters remove unwanted low-frequency tissue signals so that low-velocity blood flow can be measured. In this paper, two methods for designing a Hilbert filter are implemented: one based on a shifted sinc function and the other window-based. In addition, clutter filters are designed using different FIR and IIR design methods. The magnitude response of each designed filter is plotted to select the best design with the minimum order that rejects clutter signals; the assessment is based on the magnitude response specifications. The Hilbert filter results show that the first method outperforms the second. The clutter rejection results show that IIR filters offer significantly better performance than FIR filters of the same order; FIR filters require a higher order to achieve a comparably narrow transition band. Among the IIR designs, the Chebyshev type II and elliptic filters give the best clutter rejection.
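The window-based Hilbert filter design mentioned above truncates the ideal Hilbert transformer's impulse response and applies a window. A rough stdlib-Python sketch (Hamming window and even order are assumptions; the paper's exact specifications are not stated here):

```python
import math

def hilbert_fir(order):
    """Type-III FIR Hilbert transformer: ideal impulse response
    h[k] = 2/(pi*k) for odd k (0 for even k), Hamming-windowed."""
    taps = []
    for n in range(order + 1):
        k = n - order // 2
        h = 0.0 if k % 2 == 0 else 2.0 / (math.pi * k)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / order)  # Hamming window
        taps.append(h * w)
    return taps

taps = hilbert_fir(30)
# Type-III symmetry check: taps are antisymmetric about the centre tap.
print(all(abs(taps[n] + taps[30 - n]) < 1e-12 for n in range(31)))  # prints True
```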
  • Cross Correlation based Inter-Transaction Association Rule Mining Technique

    Page(s): 1 - 4

    Several algorithms have been proposed to solve the problem of mining frequent inter-transaction itemsets. However, the low efficiency of support calculation for inter-transaction itemsets is still a challenging problem that limits the performance of mining algorithms. This paper presents an inter-transaction association rule mining algorithm that uses an effective technique for support calculation, based on cross correlation and bitwise operations. The experimental results show a performance improvement of up to several orders of magnitude over the First Intra Then Inter (FITI) algorithm.
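The support-via-bitwise-operations idea can be illustrated with transaction bitmaps: AND the per-item bitmaps, then count the set bits. A small stdlib-Python sketch (illustrative only; it shows plain intra-transaction support, not the paper's inter-transaction extension or its cross-correlation step):

```python
def bitmap(item, transactions):
    """Integer bitmap with bit t set iff `item` occurs in transaction t."""
    bits = 0
    for t, tx in enumerate(transactions):
        if item in tx:
            bits |= 1 << t
    return bits

def support(itemset, transactions):
    """Support count of an itemset via bitwise AND plus popcount."""
    bits = (1 << len(transactions)) - 1  # all transactions
    for item in itemset:
        bits &= bitmap(item, transactions)
    return bin(bits).count("1")

txs = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}]
print(support({"a", "b"}, txs))  # → 2
```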
  • A Statistical Model Combining Shape and Spherical Harmonics for Face Reconstruction and Recognition

    Page(s): 1 - 4

    We describe a face reconstruction framework based on a statistical model that combines shape (2D and height maps), appearance (albedo), and spherical harmonics projection (SHP) information. The framework takes a 2D frontal face image under arbitrary illumination as input and outputs the estimated 3D shape and appearance. Face identification is performed using the shape and albedo coefficients from the fitting process, as well as the actual shape/texture reconstructions. Results indicate good face reconstructions and a perfect recognition rate for the frontal images of the Extended Yale Database B.
  • A Multimodal Hand Vein, Hand Geometry, and Fingerprint Prototype Design for High Security Biometrics

    Page(s): 1 - 6

    Prior research has shown that unimodal biometric systems suffer from several drawbacks, such as noisy data, intra-class variations, restricted degrees of freedom, non-universality, spoof attacks, and unacceptable error rates. For a biometric system to be more secure and more accurate, more than one form of biometric is required; hence the need arises for multimodal biometrics combining different modalities. We describe the design and development of a whole-hand biometrics prototype system that acquires left and right (L/R) index and ring fingerprints (FP), L/R near-infrared (NIR) dorsal hand vein (HV) patterns, and L/R NIR dorsal hand geometry (HG) shape. A large whole-hand database of 500-1000 subjects is planned for data collection. The acquired sample images were found to be of good quality for feature and pattern extraction across all modalities. The designed prototype can be considered for both authentication and identification purposes. Advantages of this system over the few existing multimodal systems are that it is very hard to spoof at the sensor level, and that the NIR HV and NIR HG thermal images are good signals for liveness detection.
  • Retinal Identification

    Page(s): 1 - 4

    The need to identify people is as old as humankind. Biometric devices automate the personal recognition process. Each of us is unique: we have distinctive physical characteristics, such as fingerprints, blood vessel patterns, and hand shape. Biometric devices measure and record these characteristics for automated comparison and verification. The aim of this study is to accurately identify people using their retinal images. A new algorithm is proposed for the extraction of retinal characteristic features based on image analysis and image statistics. These features are extracted from the plane of the fundus image captured by a fundus camera and can be used for either verification or identification. The algorithm was tested with sixty fundus images, and a success rate of 94.5% was reached.
  • Constructing Suffix Array During Decompression

    Page(s): 1 - 4

    The suffix array is an indexing data structure used in a wide range of applications in bioinformatics. Biological DNA sequences can be downloaded from public servers in the form of compressed files, typically produced by the popular lossless compression program gzip [1]. The straightforward method of constructing the suffix array for such data involves decompressing the sequence file, storing it on disk, and then calling a suffix array construction program. This scenario, albeit feasible, requires disk access and throws away valuable information in the compressed file. In this paper, we present an algorithm that constructs the suffix array during decompression, requiring no disk access and making use of the decompression information to build the suffix array.
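For reference, the "straightforward" route above — once the text is decompressed — amounts to sorting suffix start positions; the paper's contribution is doing the construction during decompression, which this naive comparison-sort sketch does not capture:

```python
def suffix_array(text):
    """Start positions of all suffixes of `text` in lexicographic
    order (naive O(n^2 log n) construction, for illustration)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

# Suffixes of "banana" sorted: a, ana, anana, banana, na, nana.
print(suffix_array("banana"))  # → [5, 3, 1, 0, 4, 2]
```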
  • Gene Regulatory Network Modeling using Bayesian Networks and Cross Correlation

    Page(s): 1 - 4

    A gene regulatory network is a set of genes that interact indirectly with one another, thereby controlling the rates at which genes are expressed as mRNA. DNA microarrays can measure the expression levels of thousands of genes. Because of the noise in microarray data, we use Bayesian networks (BNs), which are probabilistic by nature, to model causal relations between genes. One difficulty with this technique is that learning the BN structure is an NP-hard problem, as the number of possible structures is superexponential in the number of genes. In this paper, genes are therefore clustered based on gene ontology, and a BN is then applied to model the relationships between co-clustered genes. Furthermore, since time-delay information exists in a real regulatory network while a BN is a static model, we propose a novel method that uses cross-correlation between co-clustered genes to incorporate time information into the BN. The method is applied to reconstruct the regulatory network of 84 yeast genes from a Saccharomyces cerevisiae cell-cycle dataset. Comparing the simulation results with the KEGG pathway map shows that using cross-correlation increases accuracy from 66% to 72%. The sensitivity of the model is improved as well, with the number of inferred links increasing from 70 to 101.
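The cross-correlation step for recovering a regulator-to-target time delay can be sketched as follows (stdlib Python; the paper's normalization and thresholding choices are not specified here, so raw correlation is used):

```python
def xcorr_lag(x, y, max_lag):
    """Lag (in time points) at which series y best matches a shifted x,
    found by maximising the raw cross-correlation."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(x[t] * y[t + lag]
                  for t in range(len(x))
                  if 0 <= t + lag < len(y))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# Toy expression profiles: regulator g1 leads target g2 by two time points.
g1 = [0, 0, 1, 3, 1, 0, 0, 0]
g2 = [0, 0, 0, 0, 1, 3, 1, 0]
print(xcorr_lag(g1, g2, 3))  # → 2
```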
  • Comparative Analysis of Three Efficient Approaches for Retrieving Protein 3D Structures

    Page(s): 1 - 4

    In this paper, a comparative analysis of our three 3D structure-based approaches for the efficient retrieval of protein molecules is presented. All approaches rely on the 3D structure of the proteins. In the first approach, the Spherical Trace Transform is applied to protein 3D structures to produce geometry-based descriptors; additionally, some biological properties of the protein are incorporated, forming a better integrated descriptor. In the second approach, a modification of the ray-based descriptor is applied to the backbone of the protein molecule. In the third approach, a wavelet transform is applied to the distance matrix of the C-alpha atoms that form the protein backbone. The SCOP database was used to evaluate retrieval accuracy. We provide experimental results for the retrieval accuracy of the three approaches, which show that the ray-based approach gives the best accuracy (97.5%) while being simpler and faster than the other two.
  • Improving Accuracy of Non-Invasive Glucose Monitoring Through Non-local Data Denoising

    Page(s): 1 - 4

    Correlation and clinical interpretation with respect to the patient's true glucose value are imperative for optimum therapy and disease management. The accuracy of optical glucometers is hampered by many factors, such as the concentration range, the sampling environment, the tongue-to-spectrometer interface, and changes in the wavelength, polarization, or intensity of light, to name a few. Regression techniques are used in such devices to build a patient glucose model. This work extends our previous work on multivariate calibration for glucose level prediction from noninvasive human tongue spectra. Here, we present our results for noise reduction and data conditioning during the glucose spectrum isolation phase. We embed our 'Indicator Function (IF)' scheme into two popular techniques known as Outlier Sample Removal (OSR) and Descriptor Selection (DS). The methodology is tested on the 'OCATNE20' dataset obtained from a public-domain website, and results are compared at both the OSR and DS stages for a wide range of blood serum samples. Our results show that identifying and removing outlier samples at an early stage significantly improves the prediction of unknown samples, typically in the range of 7.95% to 9.84%.
  • VisCHAINER: Visualizing Genome Comparison

    Page(s): 1 - 4

    Visualization of genome comparison data is valuable for identifying genomic structural variations and determining evolutionary events. Although there are many software tools, of varying degrees of sophistication, for displaying such comparisons, there is no tool for displaying dot plots of multiple genome comparisons. The dot plot mode of visualization is more appropriate and convenient than the traditional linear mode, particularly for detecting large-scale genome deletions, duplications, and rearrangements. In this paper, we present VisCHAINER, which addresses this limitation by displaying dot plots of multiple genome comparisons in addition to the traditional linear mode. VisCHAINER is a stand-alone interactive visualization tool that effectively handles large amounts of genome comparison data.
  • Enhanced PIELG: A Protein Interaction Extraction System using a Link Grammar Parser from biomedical abstracts

    Page(s): 1 - 4

    Due to the ever-growing number of publications about protein-protein interactions, information extraction from text is increasingly recognized as one of the crucial technologies in bioinformatics. This paper investigates the effect of adding a new module - the Complex Sentence Processor (CSP) - to PIELG, a Protein Interaction Extraction system that uses a Link Grammar parser on biomedical abstracts. PIELG uses the linkage produced by the Link Grammar Parser to drive a case-based analysis of the contents of various syntactic roles, as well as of their linguistically significant and meaningful combinations. The system uses phrasal-prepositional verb patterns to overcome preposition combination problems. Recall and precision are enhanced to 49.33% and 65.16%, respectively. Experimental evaluations against two other state-of-the-art extraction systems indicate that the enhanced PIELG system achieves better performance, and the results are remarkably promising.
  • avaBLAST: a fast way of doing all versus all BLAST

    Page(s): 1 - 4

    All-versus-all BLAST sequence comparison is now a standard procedure in the comparative analysis of large numbers of genomes. Several approaches have been developed to speed up general BLAST searches, but these focus on searching a limited number of sequences against a large database and thus do not address the computational issues of an all-versus-all BLAST. Furthermore, optimal speed-ups could not be obtained with existing approaches because of additional overheads, such as the re-calculation of the original BLAST E-values and unnecessary copying of query or database fragments, which causes messaging overload when communication libraries are used. We have developed a program called avaBLAST that significantly reduces running times for large-scale all-versus-all BLAST searches. In contrast to earlier approaches, avaBLAST provides a significant speed-up by dividing the large query set and searching small chunks of queries against the complete database. avaBLAST avoids the additional overheads, as it requires neither re-calculation of BLAST E-values nor any communication libraries. In an evaluation comparing multiple datasets from 32 fungal genomes against each other using 32 processors on the NW-GRID cluster, avaBLAST achieves speed-ups of up to 150 times over mpiBLAST.
  • Fine Tuning the Enhanced Suffix Array

    Page(s): 1 - 4

    The enhanced suffix array is an indexing data structure used for a wide range of applications in bioinformatics. It is basically the suffix array enhanced with extra tables that provide additional information to improve performance in theory and in practice. In this paper, we present a number of improvements to the enhanced suffix array: 1) we show how to find a pattern of length m in O(m) time, i.e., independent of the alphabet size; 2) we present an improved representation of the bucket table; and 3) we improve the access time for the LCP (longest common prefix) table when one byte per entry is used to represent it. The basic idea behind these improvements is the extensive use of minimal perfect hashing, by which n static items can be stored in linear space while retaining O(1) access time.
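Pattern lookup over a plain suffix array is a pair of binary searches, O(m log n) per query; the paper's hashing-based improvements target the log-n factor and the alphabet dependence. A baseline sketch in stdlib Python:

```python
def occurrences(text, sa, pattern):
    """All start positions of `pattern` in `text`, found by two
    binary searches over the suffix array `sa`."""
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:  # lower bound: first suffix with prefix >= pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, len(sa)
    while lo < hi:  # upper bound: first suffix with prefix > pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])

text = "banana"
sa = sorted(range(len(text)), key=lambda i: text[i:])
print(occurrences(text, sa, "ana"))  # → [1, 3]
```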
  • BioSNI: A Semantic Network Integration Approach for Biological Data

    Page(s): 1 - 4

    Recently, there has been worldwide interest in using life-sciences data to help fight hunger in the coming years. Integrating these data can be useful in agricultural bioinformatics and other disciplines. However, efficient integration techniques must be developed for biological data, which has its own challenging characteristics: huge volume, heterogeneous and distributed sources, and frequent updates. In the current work, a semantic network for biological data integration is proposed, utilizing both the ontologies provided by OBO and the atomic data provided by various biological databases to form an integrated data layer that can be queried using XQuery. Human and yeast proteins from UniProt release 14 are used as examples, integrated with other protein-related data such as protein-protein interactions, protein domains, protein functions, protein subcellular locations, and related chemical reactions.
  • Hybrid Imaging Method for Early Breast Cancer Detection

    Page(s): 1 - 4

    A theoretical model is presented to investigate the possibility of using a hybrid imaging method for early breast cancer detection. The method uses combined microwave and acoustic excitations to exploit contrasts in both the dielectric and elastic properties of malignant and normal breast tissues. The calculated results presented in this paper show that the scattering contrast between malignant and healthy tissues increases significantly when the proposed hybrid method is used, which means an increase in the detection capability of the imaging system.
  • On-the-Fly Detection of Changes in Mucosal Tissue Architecture During Endoscopy - A Flag as to Where and When to Take Biopsies

    Page(s): 1 - 4

    In this paper we present a segmentation technique that raises a flag on-the-fly when a transition occurs between different mucosal architectures while traveling along the surface of the tissue, marking the region for careful subsequent examination. The technique has the potential to enhance the endoscopist's ability to locate and identify abnormal mucosal architectures and to help decide when and where to take biopsies, steps that should improve the diagnostic yield. The segmentation scheme is based on detecting an abrupt change in the parameters extracted from the stochastic decomposition method (SDM) that models the scattered signal. The homogeneous regions separated by the detected edges can then be evaluated for further mucosal-type classification. The tests were performed on two types of animal tissue data, collected from rat colon and rabbit colon in vitro. The results demonstrate the effectiveness of the proposed scheme in detecting transitions between different mucosal structures on-the-fly, with a sensitivity reaching 40 μm, which corresponds to the sliding segmentation-window step size used in our experiments.
  • Study of Nerve Fiber Tracking Methodologies using Diffusion Tensor Magnetic Resonance Imaging

    Page(s): 1 - 5

    Neural connectivity studies are extremely important for interpreting functional magnetic resonance imaging (fMRI) data and for in vivo brain studies. By assuming that the largest principal axis of the diffusion tensor aligns with the predominant fiber orientation in an MRI voxel, we can obtain 2D or 3D vector fields that represent the fiber orientation at each voxel. An algorithm was developed for tracking brain white matter fibers using diffusion tensor magnetic resonance imaging (DT-MRI), currently the only approach for non-invasively studying the architecture of white matter tracts.
  • Reconstructive Ultrasound Elastography to Determine the Shear Modulus of Prostate Cancer Tissue

    Page(s): 1 - 4

    In the field of medical diagnosis, there is a strong need to determine the mechanical properties of biological tissue, which are of histological and pathological relevance. In order to obtain quantitative mechanical properties of tissue noninvasively, we propose a new imaging modality that can additionally be used for the careful assessment of tumors in different soft tissues. This novel modality, named reconstructive ultrasound elastography, is an inverse approach that estimates the spatial distribution of the relative shear modulus of tissue from the measured axial deformation. First, in solving the mechanical forward problem, the biological tissue was modeled as a linear isotropic incompressible elastic medium under a 2-D plane-strain assumption. To develop the inverse elastography reconstruction procedure, finite element simulations were performed for a number of biological tissue object models. The finite element results were confirmed by ultrasonic experiments on a set of tissue-like phantoms with known acoustical and mechanical properties. Finally, using numerical solution models and solving the inverse problem with two different methods, we deduce the relative shear modulus of the sample. The method used is iterative and is based on recasting the inverse elasticity problem as a non-linear optimization problem.
  • Detection of Face and Facial Features in digital Images and Video Frames

    Page(s): 1 - 4

    In recent decades, automatic detection and tracking of the face and facial features such as the eyes and mouth, in images and video sequences, has become an active research area in machine vision applications such as Human-Computer Interaction (HCI). In this paper, a new algorithm for the detection of the face and facial features is proposed that can localize the eyes and mouth very accurately in images. The method uses a combination of the luminance, color, and edge properties of the image. It is compared with the method introduced by Rein-Lien Hsu, which uses color and luminance information, and the new algorithm is shown to be more robust and accurate in locating the eyes and mouth in facial images with up to 30 degrees of lateral rotation. Both methods were implemented and tested on a database containing 103 different face images; the proposed method increases accuracy by 4 percent, reaching 91.26%.
  • Objective Analysis Of Ultrasound Images By Use Of A Computational Observer

    Page(s): 1 - 5

    We present a computer-based computational observer method for the analysis and evaluation of digitized ultrasound images of the contrast-detail phantom developed earlier. This evaluation method evolved from image evaluation studies demonstrating that human observer performance is not sufficiently consistent or accurate for the objective evaluation of images or imaging systems. The computational observer measures the detection threshold for imaged targets of known contrast and computes contrast-detail data from this information. We find that our new method meets the criteria of being objective, accurate, reproducible, transportable, and relevant to human observer image evaluations.
  • Performance Evaluation of Cardiac MRI Image Denoising Techniques

    Page(s): 1 - 4

    Black-blood cardiac magnetic resonance imaging (MRI) plays an important role in diagnosing a number of heart diseases, but the technique inherently suffers from a low contrast-to-noise ratio between the myocardium and the blood. In this work, we examined the performance of different classification techniques that can be used for denoising. The three techniques successfully removed the noise, with differing performance. Numerical simulations were performed to quantitatively evaluate the performance of each technique.
  • Cardiac MRI Steam Images Denoising using Bayes Classifier

    Page(s): 1 - 4

    Imaging of heart anatomy and function using magnetic resonance imaging (MRI) is an important diagnostic tool for heart diseases. Several techniques have been developed to increase the contrast-to-noise ratio (CNR) between the myocardium and the background. Recently, a technique that acquires cine cardiac images with black-blood contrast has been proposed. Although the technique produces cine sequences of high contrast, it suffers from elevated noise, which limits the CNR. In this paper, we study the performance and efficiency of applying a Bayes classifier to remove background noise. Real MRI data are used to test and validate the proposed method; in addition, a quantitative comparison is made between the proposed method and other thresholding-based classification techniques.