Assessment of the (multi-)similarity among a set of protein structures is achieved through an ensemble of protein structure comparison methods/algorithms. This leads to the generation of a multitude of data that varies in both type and size. After standardization and normalization, these data are further used in consensus development, providing a domain-independent and highly reliable assessment of (dis)similarities. This paper briefly describes some of the techniques used for the estimation of missing/invalid values resulting from the multi-comparison of very large-scale datasets in a distributed/grid environment. This is followed by an empirical study of the storage capacity and query-processing time required to cope with the results of such comparisons. In particular, we investigate and compare the storage/query overhead of two commonly used database technologies, the Hierarchical Data Format (HDF5) and a Relational Database Management System (RDBMS) (Oracle/SQL), in the context of our application deployed on the National Grid Service (NGS), UK. As the technologies explored in this investigation are quite generic across the science and engineering domains, our findings should also benefit other scientific applications with data and functionality of a similar magnitude.
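The abstract does not specify which estimator is used for missing/invalid comparison scores. As an illustration only, the following is a minimal sketch of one simple option: imputing a missing entry of an all-against-all similarity matrix from the mean of the valid values in its row and column. The function name `impute_missing` and the use of `None` to mark failed comparisons are assumptions for this sketch, not the paper's method.

```python
# Hypothetical sketch of missing-value estimation in an all-against-all
# similarity matrix. A missing score sim(i, j), marked with None, is
# replaced by the mean of the valid entries in row i and column j.

def impute_missing(matrix):
    """Return a copy of `matrix` with each None entry replaced by the
    mean of the valid values in the same row and column."""
    n = len(matrix)
    result = [row[:] for row in matrix]
    for i in range(n):
        for j in range(n):
            if matrix[i][j] is None:
                vals = [matrix[i][k] for k in range(n) if matrix[i][k] is not None]
                vals += [matrix[k][j] for k in range(n) if matrix[k][j] is not None]
                result[i][j] = sum(vals) / len(vals) if vals else 0.0
    return result

# Example: a 3x3 similarity matrix where one pairwise comparison failed.
scores = [
    [1.0, 0.8, None],
    [0.8, 1.0, 0.5],
    [0.6, 0.5, 1.0],
]
filled = impute_missing(scores)
```

In practice, more robust estimators (or re-running the failed comparison on the grid) may be preferable; this sketch only shows where such an estimator slots into the pipeline, between raw multi-comparison output and the standardization/normalization step.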