Exploring code cache eviction granularities in dynamic optimization systems

Authors: K. Hazelwood (Div. of Eng. & Appl. Sci., Harvard Univ., Cambridge, MA, USA); J. E. Smith

Dynamic optimization systems store optimized or translated code in a software-managed code cache in order to maximize reuse of transformed code. Code caches store superblocks that are not fixed in size, contain links to other superblocks, and carry a high replacement overhead. These additional constraints reduce the effectiveness of conventional hardware-based cache management policies. In this paper, we explore code cache management policies that evict large blocks of code from the code cache, thus avoiding the bookkeeping overhead of managing single cache blocks. Through a combined simulation and analytical study of cache management overheads, we show that employing a medium-grained FIFO eviction policy results in an effective balance of cache management complexity and cache miss rates. Under high cache pressure the choice of medium granularity translates into a significant reduction in overall execution time versus both coarse and fine granularities.
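The medium-grained policy described above can be illustrated with a small sketch: the code cache is split into fixed-size regions, variable-size superblocks are allocated sequentially into the open region, and when the cache fills, the oldest region is evicted as a whole in FIFO order. The class name, region count, and sizes below are illustrative assumptions, not details from the paper.

```python
from collections import deque

class FifoRegionCache:
    """Hypothetical medium-grained FIFO code cache: eviction operates on
    whole regions, not individual superblocks, so one bookkeeping action
    covers many blocks."""

    def __init__(self, num_regions=4, region_size=4096):
        self.num_regions = num_regions
        self.region_size = region_size
        self.regions = deque()   # sealed regions, oldest at the left
        self.current = []        # superblock ids in the open region
        self.used = 0            # bytes used in the open region

    def insert(self, superblock_id, size):
        """Place a variable-size superblock; returns any evicted ids."""
        evicted = []
        if size > self.region_size:
            raise ValueError("superblock larger than a region")
        if self.used + size > self.region_size:
            # Seal the open region and start a new one.
            self.regions.append(self.current)
            self.current, self.used = [], 0
            if len(self.regions) == self.num_regions:
                # FIFO: evict the entire oldest region at once.
                evicted = self.regions.popleft()
        self.current.append(superblock_id)
        self.used += size
        return evicted  # caller must unlink branches into evicted blocks
```

Because superblocks carry links to one another, a real system would also have to patch or invalidate any branches targeting the evicted region; evicting at region granularity keeps that unlinking cost proportional to one eviction event rather than one per superblock.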

Published in:

International Symposium on Code Generation and Optimization (CGO 2004)

Date of Conference:

20-24 March 2004