This paper proposes a methodology for decentralized metadata management in which server-side data and metadata are distributed across the network nodes. The methodology allows large numbers of worker machines to read and write data in parallel when running data-intensive applications on the platform, providing random access, flexible access granularity, and support for highly concurrent read/write workloads. By applying distributed hash table (DHT) technology, bulk metadata is partitioned into a tree-structured segmentation tree. The processing of data and metadata reads/writes, as well as of newly appended data, is described. Test results show that the methodology achieves high aggregate bandwidth and high average read/write bandwidth under different degrees of parallel read/write access.
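To illustrate the DHT-based partitioning idea the abstract refers to, the sketch below maps metadata keys (here, file paths) to nodes with a consistent-hash ring. This is a minimal illustrative sketch, not the paper's actual implementation; the class and node names are hypothetical, and the paper's segmentation-tree structure is not reproduced here.

```python
import hashlib
from bisect import bisect_right

class MetadataDHT:
    """Map metadata keys (e.g. file paths) to nodes via a consistent-hash ring."""

    def __init__(self, nodes, vnodes=3):
        # Each physical node gets several virtual positions on the ring
        # so keys spread more evenly across nodes.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, path):
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect_right(self.keys, self._hash(path)) % len(self.ring)
        return self.ring[idx][1]

# Hypothetical node names for illustration.
dht = MetadataDHT(["node-a", "node-b", "node-c"])
owner = dht.node_for("/data/experiment/run42.log")
print(owner)
```

Because each key deterministically maps to one node, any client can locate metadata without consulting a central server, which is the property that lets many machines access metadata in parallel.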