
Hardware Supported Persistent Object Address Translation



Abstract:

Emerging non-volatile main memory technologies create a new opportunity for writing programs with a large, byte-addressable persistent storage that can be accessed through regular memory instructions. These memory-as-storage technologies pose significant challenges to current programming models. In particular, some emerging persistent programming frameworks, such as the NVM Library (NVML), implement relocatable persistent objects that can be mapped anywhere in the virtual address space. To make this work, persistent objects are referenced by object identifiers (ObjectIDs) rather than pointers, and an ObjectID must be translated to a virtual address before the object can be read or written. Frequent translation from ObjectID to address incurs significant overhead. We propose treating ObjectIDs as a new persistent memory address space and provide hardware support for efficiently translating ObjectIDs to virtual addresses. With our design, a program can use load and store instructions to access persistent data directly by ObjectID, and these new instructions also reduce programming complexity. We describe and evaluate several possible microarchitectural designs. We evaluate our design on the Sniper simulator, modeling both in-order and out-of-order processors, with six micro-benchmarks and the TPC-C application. The results show that our design gives significant speedup over a baseline system that uses software translation. For the Pipelined implementation, our design achieves an average speedup of 1.96× and 1.58× on an in-order and an out-of-order processor, respectively, over the baseline on micro-benchmarks that place persistent data randomly into persistent pools. For the same in-order and out-of-order microarchitectures, we measure a speedup of 1.17× and 1.12×, respectively, on the TPC-C application when B+Trees are placed in different pools and rewritten to use our new hardware.
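
To illustrate the overhead the abstract describes, the sketch below shows the software-translation baseline using NVML's libpmemobj API: a persistent object is referenced by an ObjectID (a PMEMoid), and every access first translates it to a virtual address with pmemobj_direct(). The pool path and the record layout are illustrative assumptions, not details from the paper; the proposed hardware would instead let load and store instructions use the ObjectID directly, removing the per-access software translation.

/* Minimal sketch of the software-translation baseline (NVML/libpmemobj).
 * The pool path and record layout are hypothetical, chosen for illustration. */
#include <libpmemobj.h>
#include <stdio.h>

struct record {          /* hypothetical persistent object */
    long key;
    long value;
};

int main(void)
{
    /* Open an existing persistent memory pool (path is an assumption). */
    PMEMobjpool *pop = pmemobj_open("/mnt/pmem/pool", NULL);
    if (pop == NULL)
        return 1;

    /* Allocate a persistent object; it is identified by an ObjectID
     * (PMEMoid), not a raw pointer, so the reference stays valid even if
     * the pool is mapped at a different virtual address later. */
    PMEMoid oid;
    if (pmemobj_zalloc(pop, &oid, sizeof(struct record), 0)) {
        pmemobj_close(pop);
        return 1;
    }

    /* Each access requires translating the ObjectID to a virtual address
     * first -- the recurring software overhead that the paper's hardware
     * translation is meant to eliminate. */
    struct record *r = pmemobj_direct(oid);
    r->key = 42;
    r->value = 7;
    pmemobj_persist(pop, r, sizeof(*r));

    printf("key=%ld value=%ld\n", r->key, r->value);

    pmemobj_close(pop);
    return 0;
}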
Date of Conference: 14-17 October 2017
Date Added to IEEE Xplore: 11 April 2019
Conference Location: Boston, MA, USA
