22nd IEEE International Symposium on Computer-Based Medical Systems (CBMS 2009)

Date: 2-5 Aug. 2009

Displaying Results 1 - 25 of 93
  • [Title page]

    Publication Year: 2009, Page(s): 1
  • [Copyright notice]

    Publication Year: 2009, Page(s): 1
  • A middleware agnostic infrastructure for neuro-imaging analysis

    Publication Year: 2009, Page(s): 1 - 4

    Large-scale neuroscience research projects are necessary in order to make significant progress in the study of degenerative brain diseases. At present, the effectiveness of such efforts is restricted by the absence of specifically tailored computing infrastructures. The neuGRID project aims to address this through the provision of a high-level, service-oriented infrastructure that enables complex neuroscience research. One of the principal aims of this work is to develop portable services that can be re-used in a larger set of related medical applications to access distributed computing resources. These services will provide high-level functionality supporting workflow authoring and planning, provenance storage and retrieval, querying against heterogeneous data sources, and security and data anonymization, amongst others. This paper introduces the neuGRID service architecture and outlines the design of two specific services, namely the Pipeline Service and the Glueing Service. A proof-of-concept implementation has been developed to evaluate the neuGRID design approach.

  • Federating health information systems to enable population level research

    Publication Year: 2009, Page(s): 1 - 4

    Epidemiology requires large-scale, high-resolution, representative population data sets; data extracted from electronic health record systems meet these criteria. However, within the UK there is no single electronic health record, and the record of a patient's healthcare is fragmented over multiple systems and multiple organizations. In the SHORE project we have developed a proof-of-concept system that addresses these problems by retaining control of patient data at a local level, where it can be effectively interpreted and governed, and by overlaying privacy-preserving record linkage on these data sources to provide a unified view of the health and care of the population.
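
    The abstract does not detail SHORE's linkage mechanism; a common building block for privacy-preserving record linkage is a keyed hash over normalized identifiers, so that cooperating sites can match records without exchanging the raw identifiers. A minimal sketch, in which the field names and shared secret are illustrative assumptions:

    ```python
    import hashlib
    import hmac

    def linkage_key(patient_id: str, dob: str, secret: bytes) -> str:
        """Derive a pseudonymous linkage key from normalized identifiers.
        Sites sharing `secret` produce identical keys for the same patient,
        so records can be joined without revealing raw identifiers.
        (Illustrative only; SHORE's actual scheme is not described here.)"""
        message = f"{patient_id.strip().upper()}|{dob.strip()}".encode("utf-8")
        return hmac.new(secret, message, hashlib.sha256).hexdigest()

    # Two sites derive the same key for the same patient despite formatting noise.
    site_secret = b"shared-secret-distributed-out-of-band"
    print(linkage_key(" ab123 ", "1970-01-01", site_secret) ==
          linkage_key("AB123", "1970-01-01", site_secret))  # True
    ```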

  • Vision-based, real-time retinal image quality assessment

    Publication Year: 2009, Page(s): 1 - 6
    Cited by: Papers (5)

    Real-time medical image quality is a critical requirement in a number of healthcare environments, including ophthalmology, where studies suffer loss of data due to unusable (ungradeable) retinal images. Several published reports indicate that 10% to 15% of images are rejected from studies due to image quality. With the transition of retinal photography to less extensively trained individuals in clinics, image quality will suffer unless there is a means to assess the quality of an image in real time and give the photographer recommendations for correcting technical errors in the acquisition of the photograph. The purpose of this research was to develop and test a methodology for evaluating a digital image from a fundus camera in real time and giving the operator feedback on the quality of the image. By providing real-time feedback to the photographer, corrective actions can be taken and loss of data or inconvenience to the patient eliminated. The methodology was tested against image quality as perceived by the ophthalmologist. We successfully applied our methodology to over 2,000 images from four different cameras acquired under dilated and undilated imaging conditions, and showed that the technique was equally effective on uncompressed and compressed (JPEG) images. We achieved 100 percent sensitivity and 96 percent specificity in identifying "rejected" images.
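
    The abstract does not name the specific image measures behind the accept/reject decision; one widely used real-time sharpness proxy is the variance of the Laplacian, sketched below. The threshold is an illustrative assumption that would need per-camera calibration:

    ```python
    import numpy as np
    from scipy import ndimage

    def sharpness_score(gray: np.ndarray) -> float:
        """Variance of the Laplacian; low values suggest a blurred, ungradeable image."""
        return float(ndimage.laplace(gray.astype(np.float64)).var())

    def quality_feedback(gray: np.ndarray, threshold: float = 50.0) -> str:
        # Threshold is illustrative, not from the paper.
        if sharpness_score(gray) < threshold:
            return "Reject: image appears out of focus - please re-acquire"
        return "Accept"

    rng = np.random.default_rng(0)
    print(quality_feedback(rng.integers(0, 255, (256, 256)).astype(np.float64)))  # Accept
    print(quality_feedback(np.full((256, 256), 128.0)))  # Reject: out of focus
    ```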

  • Enabling effective curation of cancer biomarker research data

    Publication Year: 2009, Page(s): 1 - 4
    Cited by: Papers (2)

    The dramatic increase in data in the area of cancer research has elevated the importance of effectively managing the quality and consistency of research results from multiple providers. The U.S. National Cancer Institute's Early Detection Research Network (EDRN) is a prime example of a virtual organization, sponsoring distributed, collaborative work at dozens of institutions around the country. As part of a comprehensive informatics infrastructure, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth Medical School, has developed a web application for the curation of cancer biomarker research results. In this paper, we describe and evaluate the application in the context of the EDRN content management process, and detail our experience using the tool in an operational environment to capture and annotate biomarker research data generated by the EDRN.

  • An ontology-based framework to support nonintrusive storage and analysis of radiological diagnosis data

    Publication Year: 2009, Page(s): 1 - 6

    The purpose of the present work was to create a computational framework for radiological registry and diagnosis which, by means of a nonintrusive approach, allows users freedom of annotation while storing the related information in a structured, standard format. To achieve this goal, the work started with an investigation of the lexical and semantic domain of radiological texts, followed by the design and implementation of an ontology, called RadOn, and the modeling of a database based on it. The next step was the development of a set of integrated software components, called E-Rad, based on text-mining techniques, which identifies and extracts words, terms and expressions from free texts, adjusts them to the ontology structure and stores them. Finally, a query engine with a user interface, working on the E-Rad structure, was implemented. The outcome is a computational environment that, by means of the implemented ontology and related tools, makes the data semantics explicit and allows context information to be used in an optimal and standard way.

  • Extending CRISP-DM to incorporate temporal data mining of multidimensional medical data streams: A neonatal intensive care unit case study

    Publication Year: 2009, Page(s): 1 - 5
    Cited by: Papers (7)

    Using a neonatal intensive care unit (NICU) case study, this work investigates the current Cross Industry Standard Process for Data Mining (CRISP-DM) approach for modeling intelligent data analysis (IDA)-based systems that perform temporal data mining (TDM). The case study highlights the need for an extended CRISP-DM approach when modeling clinical systems that apply data mining (DM) and temporal abstraction (TA). As the number of such integrated TA/DM systems continues to grow, this limitation becomes significant and motivated our proposal of an extended CRISP-DM methodology to support TDM, known as CRISP-TDM. This approach supports clinical investigations on multi-dimensional time series data. This paper has three key objectives: 1) present a summary of the extended CRISP-TDM methodology; 2) demonstrate the applicability of the proposed model to the NICU data, focusing on the challenges associated with multi-dimensional time series data; and 3) describe the proposed IDA architecture for applying integrated TDM.
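
    To give a flavor of the temporal abstraction step that CRISP-TDM is designed to accommodate, the sketch below turns a numeric stream (say, heart-rate samples) into labeled intervals; the thresholds and state names are illustrative, not taken from the paper:

    ```python
    def temporal_abstraction(samples, low, high):
        """Map numeric samples to LOW/NORMAL/HIGH states and merge consecutive
        runs into (start_index, end_index, state) intervals."""
        states = ["LOW" if v < low else "HIGH" if v > high else "NORMAL" for v in samples]
        intervals, start = [], 0
        for i in range(1, len(states) + 1):
            if i == len(states) or states[i] != states[start]:
                intervals.append((start, i - 1, states[start]))
                start = i
        return intervals

    # Illustrative neonatal heart-rate stream (beats per minute).
    print(temporal_abstraction([120, 125, 190, 195, 188, 130], low=100, high=180))
    # [(0, 1, 'NORMAL'), (2, 4, 'HIGH'), (5, 5, 'NORMAL')]
    ```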

  • Enriching concept descriptions in an amphibian ontology with vocabulary extracted from WordNet

    Publication Year: 2009, Page(s): 1 - 6

    An important task of ontology learning is to enrich the vocabulary of domain ontologies using different sources of information. WordNet, an online lexical database covering many domains, has been widely used as a source from which to mine new vocabulary for ontology enrichment. However, since each word submitted to WordNet may have several different meanings (senses), existing approaches still face the problem of semantic disambiguation in order to select the correct sense for the new vocabulary to be added. In this paper, we present a similarity computation method that allows us to efficiently select the correct WordNet sense for a concept-word in a given ontology. Once the correct sense is identified, we can then enrich the concept's vocabulary using nearby words in WordNet. Experimental results using an amphibian ontology show that the similarity computation method reaches good average accuracy and that our approach is able to enrich the vocabulary of each concept with words mined from WordNet synonyms and hypernyms.
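
    A minimal sketch of the sense-selection idea using NLTK's WordNet interface; the Wu-Palmer similarity used here is an assumption, as the paper's own similarity computation may differ:

    ```python
    # Requires: pip install nltk, then nltk.download("wordnet")
    from nltk.corpus import wordnet as wn

    def best_sense(concept_word, context_words):
        """Pick the WordNet noun sense of `concept_word` closest to the ontology context."""
        best, best_score = None, -1.0
        for sense in wn.synsets(concept_word, pos=wn.NOUN):
            score = 0.0
            for ctx in context_words:
                sims = [sense.wup_similarity(c) or 0.0 for c in wn.synsets(ctx, pos=wn.NOUN)]
                score += max(sims, default=0.0)
            if score > best_score:
                best, best_score = sense, score
        return best

    # Disambiguate "frog" against neighboring concepts from an amphibian ontology,
    # then use the chosen sense's synonyms as enrichment candidates.
    sense = best_sense("frog", ["amphibian", "toad", "tadpole"])
    print(sense, sense.lemma_names())
    ```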

  • A medical image retrieval framework in correlation enhanced visual concept feature space

    Publication Year: 2009, Page(s): 1 - 4
    Cited by: Papers (5)

    This paper presents a medical image retrieval framework that uses visual concepts in a feature space employing statistical models built using a probabilistic multi-class support vector machine (SVM). The images are represented using concepts that comprise color and texture patches from local image regions in a multi-dimensional feature space. A major limitation of concept feature representation is that the structural relationships and spatial ordering between concepts are ignored. We present a feature representation scheme, the visual concept structure descriptor (VCSD), that overcomes this challenge and captures both the concept frequency, similar to a color histogram, and the local spatial relationships of the concepts. A probabilistic framework makes the descriptor robust against classification and quantization errors. Evaluation of the proposed image retrieval framework on a biomedical image dataset with different imaging modalities validates its benefits.
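
    An illustrative numpy sketch of the two ingredients the descriptor combines, concept frequency and local spatial relationships; the actual VCSD adds the probabilistic handling of classification and quantization error, which is not reproduced here:

    ```python
    import numpy as np

    # Assume each image patch has already been assigned a concept id by the SVM.
    concept_map = np.array([[0, 1, 1],
                            [2, 0, 1],
                            [2, 2, 0]])
    num_concepts = 3

    # Concept frequency, analogous to a color histogram.
    frequency = np.bincount(concept_map.ravel(), minlength=num_concepts)

    # Co-occurrence of horizontally adjacent concepts captures local spatial ordering.
    cooccurrence = np.zeros((num_concepts, num_concepts), dtype=int)
    for a, b in zip(concept_map[:, :-1].ravel(), concept_map[:, 1:].ravel()):
        cooccurrence[a, b] += 1

    print(frequency)      # [3 3 3]
    print(cooccurrence)
    ```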

  • Shared genomics: A platform for emerging interpretation of genetic epidemiology

    Publication Year: 2009, Page(s): 1 - 6

    The study of the genetics of diseases has been revolutionised by the advent of genome-wide genotyping technologies. Increasingly, genome-wide association studies are being used to identify positions within the human genome that have a link with a disease condition. These new data sets require the use of distributed resources, both for the statistical analysis and for the interpretation of the analysis results. Aiding the latter will be crucial for the statistical analysis process to be successful. In this paper we report our experiences in developing a user-friendly High Performance Computing (HPC) statistical genetics analysis platform for use by clinical researchers. Specifically, we report work on supporting the interpretation process through the automatic annotation of the statistical analysis results with relevant biological information. Retrieval of the biological annotation is performed by high-volume invocation of multiple Web services orchestrated via pre-existing scientific workflows. We also report work on developing tools to aid the capture and replay of the processes performed by a user when exploring analysis results.

  • Extraction of coexpression relationship among genes from biomedical text using dynamic conditional random fields

    Publication Year: 2009, Page(s): 1 - 4

    Text mining tools and algorithms are being successfully used for information extraction, especially on large corpora such as biomedical publications. These tools not only aid in information extraction but also in forming new theories and relationships between various fields of biomedical research; extraction of gene-gene or gene-disease relationships is one such application. In this paper, we introduce a method to detect coexpressed genes from text, using the grammatical dependencies among the words within sentences and Dynamic Conditional Random Fields (DCRFs). Determining the coexpression relationship between and among genes can help in identifying important concepts such as the functionality of the genes involved and their pathogenic mechanism, and in deciphering protein-protein interactions. This work extracts relevant sentences by labeling the genes involved as well as the word representing the relationship, from full-length papers collected from PubMed. The results obtained were compared with those of a Support Vector Machine (SVM) and Nearest Neighbor with generalization (NNge), and were found to outperform both.

  • SSDCVA: Support system to the diagnostic of cerebral vascular accident for physiotherapy students

    Publication Year: 2009, Page(s): 1 - 6

    The aim of this paper is to provide an intelligent, case-based pedagogical agent to aid in the diagnosis and treatment of patients with neurological disorders. The study proposes an architecture which facilitates the decision-making process of students in the healthcare field, in order to advise adequate treatment for patients suffering a cerebral vascular accident (CVA).

  • A neural network approach to multi-biomarker panel development based on LC/MS/MS proteomics profiles: A case study in breast cancer

    Publication Year: 2009, Page(s): 1 - 6
    Cited by: Papers (1)

    The liquid chromatography tandem mass spectrometry (LC/MS/MS) based plasma proteomics profiling technique is a promising technology platform for studying candidate protein biomarkers for complex human diseases such as cancer. Factors such as inherent variability, protein detectability limitations, and peptide discovery biases among LC/MS/MS platforms have made the classification and prediction of proteomics profiles challenging. In this paper, we develop a proteomics data analysis method to identify multi-protein biomarker panels for breast cancer diagnosis based on artificial neural networks. Using this method, we first applied standard analysis of variance (ANOVA) to derive a list of single candidate biomarkers that changed significantly between plasma proteomics profiles of breast cancer cases and controls. Next, we constructed a feed-forward neural network (FFNN) for each combination of single marker proteins and trained it with plasma proteomics results derived from 40 women with breast cancer and 40 control women. We evaluated the best five-marker and ten-marker panels on a testing data set from a similar cohort of 80 plasma proteomics profiles, half from women with breast cancer and half from controls, using both statistical methods (receiver operating characteristic curve comparisons) and validation against the biological literature. We found that a five-marker panel using a two-variable FFNN output achieved the best prediction performance on the testing data set, with 82.5% sensitivity and 82.5% specificity. Our computational method can serve as a general model for multi-biomarker panel discovery in other diseases.
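
    A minimal sketch of the two-stage recipe, ANOVA filtering followed by a small feed-forward network, using scikit-learn on synthetic data standing in for the 40+40 proteomics profiles; the panel size and network shape are illustrative:

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Synthetic stand-in: 80 training / 80 testing profiles, 200 candidate proteins.
    X_train, X_test = rng.normal(size=(80, 200)), rng.normal(size=(80, 200))
    y_train, y_test = np.repeat([0, 1], 40), np.repeat([0, 1], 40)
    X_train[y_train == 1, :5] += 1.0  # plant signal in five proteins
    X_test[y_test == 1, :5] += 1.0

    # Stage 1: ANOVA F-test selects the five-marker panel.
    panel = SelectKBest(f_classif, k=5).fit(X_train, y_train)
    # Stage 2: small feed-forward network trained on the selected panel.
    ffnn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    ffnn.fit(panel.transform(X_train), y_train)
    probs = ffnn.predict_proba(panel.transform(X_test))[:, 1]
    print("AUC:", roc_auc_score(y_test, probs))
    ```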

  • Scenario-oriented information extraction from electronic health records

    Publication Year: 2009, Page(s): 1 - 5

    Providing a comprehensive set of relevant information at the point of care is crucial for making correct clinical decisions in a timely manner. Retrieval of scenario-specific information from an extensive electronic health record (EHR) is a tedious, time-consuming and error-prone task. In this paper, we propose a model and a technique for extracting clinical information relevant to the most probable diagnostic hypotheses in a clinical scenario. In the proposed technique, we first model the relationships between diseases, symptoms, signs and other clinical information as a graph and apply concept lattice analysis to extract all possible diagnostic hypotheses related to a specific scenario. Next, we identify relevant information regarding the extracted hypotheses and search for matching evidence in the patient's EHR. Finally, we rank the extracted information according to its relevance to the hypotheses. We have assessed the usefulness of our approach in a clinical setting by modeling a challenging clinical problem as a case study.
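
    A drastically simplified stand-in for the hypothesis-generation step, using plain set overlap rather than the paper's concept lattice analysis; the toy knowledge base is an illustrative assumption:

    ```python
    # Toy disease -> expected-findings knowledge base (illustrative, not from the paper).
    knowledge = {
        "pneumonia":          {"fever", "cough", "dyspnea"},
        "pulmonary embolism": {"dyspnea", "chest pain", "tachycardia"},
        "heart failure":      {"dyspnea", "edema", "orthopnea"},
    }

    def rank_hypotheses(findings):
        """Rank diagnoses by the fraction of their expected findings present in the EHR."""
        scores = [(len(findings & expected) / len(expected), disease)
                  for disease, expected in knowledge.items()]
        return sorted(scores, reverse=True)

    print(rank_hypotheses({"dyspnea", "fever", "cough"}))
    # pneumonia ranks first, followed by the partially matching hypotheses
    ```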

  • Predicting risk of complications following a drug eluting stent procedure: A SVM approach for imbalanced data

    Publication Year: 2009, Page(s): 1 - 7

    Drug Eluting Stents (DES) have distinct advantages over other Percutaneous Coronary Intervention procedures, but have recently been associated with the development of serious complications after the procedure. There is a growing need to understand the risk of these complications, which has led to the development of simple statistical models. In this work, we have developed a predictive model based on Support Vector Machines, using a real-world dataset of clinical variables from patients being treated at a cardiac care facility, to predict the risk of complications at 12 months following a DES procedure. A significant challenge in this work, common to most clinical machine learning datasets, was imbalanced data, and our results showed the effectiveness of the Synthetic Minority Over-sampling Technique (SMOTE) in addressing this issue. The developed predictive model provided an accuracy of 94% with a 0.97 AUC (area under the ROC curve), indicating high potential for use as decision support in the management of patients following a DES procedure in real-world cardiac care facilities.
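
    The class-imbalance handling can be sketched with imbalanced-learn's SMOTE feeding a scikit-learn SVM; the synthetic data and parameters below are illustrative stand-ins for the study's clinical variables:

    ```python
    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Synthetic imbalanced stand-in: 950 uncomplicated vs. 50 complication cases.
    X = np.vstack([rng.normal(0.0, 1.0, (950, 10)), rng.normal(0.8, 1.0, (50, 10))])
    y = np.array([0] * 950 + [1] * 50)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Oversample only the training split, then fit the SVM.
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
    clf = SVC(probability=True, random_state=0).fit(X_bal, y_bal)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    ```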

  • ADONIS: Automated diagnosis system based on sound and precise logical descriptions

    Publication Year: 2009, Page(s): 1 - 8
    Cited by: Papers (4)

    Automated medical diagnosis systems based on knowledge-oriented descriptions have gained momentum with the emergence of Semantic Descriptions. However, soundness and efficiency of the underlying logics in these descriptions are critical to harness the potential of these systems. In this paper, we provide a well-structured ontology for automated diagnosis and a three-fold formalization based on Predicate Logic, Description Logic and Rules.

  • Service oriented approach for multi backend retrieval in medical systems

    Publication Year: 2009, Page(s): 1 - 4

    This paper describes a software architecture approach that leads to a simplified solution for information retrieval across multiple backend systems. It is based on the core ideas of service-oriented and pattern-oriented software architectures, supports the creation of simply structured, changeable retrieval systems with a unified user interface, and focuses on the special needs of typical healthcare information system landscapes. The architectural approach has already been tested within a university hospital portal system which allows unified, personalized access to patient-dependent medical data for external parties such as related hospitals or resident doctors.

  • Assimilating information and offering a medical opinion in remote and co-located meetings

    Publication Year: 2009, Page(s): 1 - 6

    Discussion of patient data among hospital staff plays an increasingly important role in inter-specialist communication. The effectiveness of a discussion depends, among other factors, on how well its participants perceive, assimilate and interpret the information exchanged during it. This paper reports a field study conducted to assess information assimilation among medical observer participants during PCDs in a hospital. Medically trained observer participants completed a questionnaire at multi-disciplinary medical team meetings (MDTMs) in teleconference and co-located settings. Results show that participants are more likely to offer opinions in teleconference, while their expectations of the long-term effects of treatment are more realistic in co-located PCDs than in teleconference PCDs. Surprisingly, the presentation of clinical findings, radiology and pathology is perceived to be clearer in teleconference, and respondents believe that they follow the discussion, know the patient management plan and understand the basis for decisions better in teleconference than in co-located PCDs. While a higher educational value is attributed to teleconference PCDs, evidence suggests a trend toward more errors in teleconference, less critical evaluation, and no expression of disagreement with patient management decisions made in teleconference.

  • HL7 healthcare information management using aspect-oriented programming

    Publication Year: 2009, Page(s): 1 - 4

    Given the heterogeneity of healthcare software systems, data from each system is often incompatible, inhibiting interoperability. To enable the sharing and exchange of healthcare information, interoperability standards must be adhered to. Health Level Seven (HL7) is the international standards organisation that promotes and enforces the standardisation of electronic healthcare information to facilitate its exchange and management. Incorporating HL7 functionality into existing applications normally requires significant modification and intrusive extensions. Using aspect-oriented programming (AOP), we can introduce HL7 functionality into existing applications without the need for refactoring or modification. HL7 data formatting affects multiple parts of an application and hence is a "crosscutting concern"; such concerns, which entwine with base functionality, introduce complexity and reduce modularity. A second benefit of AOP is its advanced modularisation capabilities, which are capable of modularising crosscutting concerns. We illustrate the benefits of using AOP with HL7 by example and measure the effects of the approach on healthcare applications.
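
    The paper's implementation presumably targets a mainstream AOP framework such as AspectJ; as a language-neutral illustration, a Python decorator can weave HL7-style formatting around existing application logic without modifying it. The segment layout below is drastically simplified and purely illustrative:

    ```python
    import functools

    def hl7_formatting_aspect(func):
        """Cross-cutting concern: render a returned patient dict as a
        simplified HL7 v2-style PID segment, without touching `func` itself."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            record = func(*args, **kwargs)
            return "|".join(["PID", "1", record.get("id", ""), record.get("name", "")])
        return wrapper

    @hl7_formatting_aspect  # woven in; the base function stays unmodified
    def lookup_patient(patient_id):
        # Existing application logic, unaware of HL7.
        return {"id": patient_id, "name": "DOE^JOHN"}

    print(lookup_patient("12345"))  # PID|1|12345|DOE^JOHN
    ```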

  • GridSnake: A Grid-based implementation of the Snake segmentation algorithm

    Publication Year: 2009, Page(s): 1 - 6

    Medical imaging is becoming a key technique for visualizing the internal structure of the body. Magnetic resonance imaging (MRI) is currently used to take different spatial images of organs such as the heart; the output of such an analysis is a set of images representing different views of the body or of an organ. Many algorithms exist to pre-process and analyze medical images, such as the well-known Snake segmentation algorithm. An issue in medical imaging is the large size of images, which requires large, efficient data stores and high computational power for processing. For these reasons, the Grid is increasingly used as an ideal environment for medical image processing. This paper presents a first experience in porting the Snake algorithm to a Globus-based Grid.
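
    The serial building block being ported, an active-contour (Snake) fit, is available off the shelf; below is a minimal scikit-image sketch on a synthetic bright disk. The Globus-based distribution layer, which is the paper's contribution, is not shown:

    ```python
    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    # Synthetic image: a bright disk standing in for an organ cross-section.
    img = np.zeros((200, 200))
    rr, cc = np.mgrid[0:200, 0:200]
    img[(rr - 100) ** 2 + (cc - 100) ** 2 < 40 ** 2] = 1.0

    # Initial snake: a circle surrounding the disk, as (row, col) coordinates.
    theta = np.linspace(0, 2 * np.pi, 100)
    init = np.column_stack([100 + 70 * np.sin(theta), 100 + 70 * np.cos(theta)])

    snake = active_contour(gaussian(img, sigma=3), init,
                           alpha=0.015, beta=10, gamma=0.001)
    print(snake.shape)  # (100, 2): the contour pulled onto the disk boundary
    ```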

  • Segmentation, reconstruction, and analysis of blood thrombi in 2-photon microscopy images

    Publication Year: 2009, Page(s): 1 - 8

    In this paper, we study the problem of segmenting, reconstructing, and analyzing the structure and growth of thrombi (clots) in vivo in blood vessels based on 2-photon microscopic image data. First, we develop an algorithm for segmenting clots in 3-D microscopic images which incorporates a density-based clustering algorithm and other methods for dealing with imaging artifacts. Next, we apply the union-of-balls (or alpha-shape) algorithm to reconstruct the boundary of clots in 3-D. Finally, we perform experimental analysis on the reconstructed clots and obtain quantitative data on thrombus growth and structure. The experiments are conducted on laser-induced injuries in vessels of two types of mice (the wild type and a type with low levels of coagulation factor VII). By analyzing and comparing the developing clot structures based on their reconstruction from image data, we obtain results of biomedical significance. Our quantitative analysis of the clot composition leads to a better understanding of thrombus development, which is also valuable for the modeling and verification of computational simulations of thrombogenesis.
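
    The density-based clustering stage maps naturally onto scikit-learn's DBSCAN over foreground voxel coordinates; the eps/min_samples values and synthetic points are illustrative, and the union-of-balls (alpha-shape) reconstruction step is not reproduced:

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)
    # Synthetic 3-D stand-in: two dense voxel blobs ("clots") plus sparse noise.
    blob1 = rng.normal([20, 20, 20], 1.5, size=(300, 3))
    blob2 = rng.normal([60, 40, 30], 1.5, size=(300, 3))
    noise = rng.uniform(0, 80, size=(60, 3))
    voxels = np.vstack([blob1, blob2, noise])

    labels = DBSCAN(eps=2.5, min_samples=10).fit_predict(voxels)
    n_clots = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"clusters found: {n_clots}, noise voxels: {(labels == -1).sum()}")
    ```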

  • The TRIACS analytical workflows platform for distributed clinical decision support

    Publication Year: 2009, Page(s): 1 - 8

    In this paper we discuss a flexible, distributed, workflow-based approach that enables researchers to study biomedical data when creating decision support pipelines. Specifically, we describe the TRIACS platform, which has first been applied to supporting evidence-based decisions on optimum diabetic retinopathy screening intervals. In the prioritization mechanism, pseudonymised case data are stratified for screening need by computation of outcome risk or by clustering of 'at-risk' cases with past cases of actual preventable outcomes. Workflows present a novel approach to this problem by providing an appropriate level of granularity for breaking the domain problem into a collection of reusable service-oriented components that can be applied in different ways. TRIACS is intended to make the creation of new application logic quicker and easier than bespoke development methods. Through the TRIACS workflow interface, modular code is portable and available to solve analogous domain problems, including application to trial studies for mining and analysing clinical data.

  • Privacy compliance in european healthgrid domains: An ontology-based approach

    Publication Year: 2009, Page(s): 1 - 8
    Cited by: Papers (1)

    The integration of different European medical systems by means of grid technologies will continue to be challenging if technology does not intervene to enhance interoperability between national regulatory frameworks on data protection. Achieving compliance in European healthgrid domains is crucial but difficult because of the diversity and complexity of Member State legislation across Europe. Lack of automation and inconsistency of processes across healthcare organizations increase the complexity of the compliance task; in the absence of automation, it entails human intervention. In this paper we present an approach to automating privacy requirements for the sharing of patient data between Member States across Europe in a healthgrid domain, and to ensuring their enforcement both internally and within the external domains to which the data might travel. This approach is based on the semantic modelling of privacy obligations of a legal, ethical or cultural nature. Our model reflects both similarities and conflicts, if any, between the different Member States. This allows us to reason about the safeguards a data controller should demand from an organization belonging to another Member State before disclosing medical data to it. The system will also generate the relevant set of policies to be enforced at the process level of the grid to ensure privacy compliance before allowing access to the data.

  • On the integration of protein contact map predictions

    Publication Year: 2009, Page(s): 1 - 5

    Protein structure prediction is a key topic in computational structural proteomics. The hypothesis that proteins' biological functions are implied by their three-dimensional structure makes protein tertiary structure prediction a relevant problem to be solved. Predicting the tertiary structure of a protein from its residue sequence is called the protein folding problem. Recently, novel approaches to the solution of this problem have been found, and many of them use contact maps as a guide during the prediction process. Contact maps are two-dimensional objects which represent some of the structural information of a protein. Many approaches and bioinformatics tools for contact map prediction have been presented in recent years, with different performances for different protein families. In this work we present a novel approach based on the integration of contact map predictions, in order to improve the quality of the predicted contact map with a consensus-based algorithm.
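
    The consensus step can be illustrated as a simple majority vote over binary contact maps from several predictors; the paper's consensus algorithm may instead weight predictors by family-specific performance, and the random maps below are placeholders:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L = 50  # number of residues

    # Placeholder binary contact maps from five hypothetical predictors.
    predictions = [rng.random((L, L)) > 0.8 for _ in range(5)]

    votes = np.sum(predictions, axis=0)  # how many predictors agree per residue pair
    consensus = votes >= 3               # keep contacts predicted by a majority
    print("contacts kept by consensus:", int(consensus.sum()))
    ```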
