Caching values in the load store queue

Authors:

D. Nicolaescu, A. Veidenbaum, A. Nicolau (Dept. of Comput. Sci., California Univ., Irvine, CA, USA)

Abstract:

The latency of an L1 data cache continues to grow with increasing clock frequency, cache size, and associativity. This increased latency is an important source of performance loss in high-performance processors. The paper proposes caching data using the load store queue (LSQ) hardware and data paths. With very little additional hardware, this inexpensive cache improves performance and reduces energy consumption. The modified load store queue "caches" all previously accessed data values, going beyond existing store-to-load forwarding techniques. Both load and store data are placed in the LSQ and are retained there after the corresponding memory access instruction has committed. It is shown that a 128-entry modified LSQ design allows an average of 51% of all loads in the SPECint2000 benchmarks to get their data from the LSQ. Up to 7% performance improvement is achieved on SPECint2000 with a 1-cycle LSQ access latency and a 3-cycle L1 cache latency. The average speedup is over 4%.
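The mechanism the abstract describes can be sketched in software as a small fully associative value cache: entries are retained after commit, and a load first searches the LSQ by address before falling back to the L1 cache. This is a behavioral sketch only, not the paper's hardware design; the class and method names, the use of a FIFO-style eviction policy, and the `l1_read` callback are all illustrative assumptions.

```python
from collections import OrderedDict

class CachingLSQ:
    """Behavioral model of an LSQ whose entries are kept after commit
    and reused as a small value cache (a sketch, not the paper's RTL)."""

    def __init__(self, capacity=128):
        self.capacity = capacity
        self.entries = OrderedDict()  # address -> value, oldest first

    def _insert(self, addr, value):
        # Refresh an existing entry or append a new one; evict the
        # oldest entry when the queue exceeds its capacity.
        if addr in self.entries:
            self.entries.move_to_end(addr)
        self.entries[addr] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)

    def store(self, addr, value):
        # Store data is placed in the LSQ and retained after commit.
        self._insert(addr, value)

    def load(self, addr, l1_read):
        # A load first searches the LSQ; a hit avoids the L1 access.
        if addr in self.entries:
            return self.entries[addr], True   # value, LSQ hit
        value = l1_read(addr)                 # miss: go to the L1 cache
        self._insert(addr, value)             # load data is also cached
        return value, False
```

For example, with `memory = {0x100: 7}` and `lsq = CachingLSQ(capacity=2)`, the first `lsq.load(0x100, memory.get)` misses and reads the L1 (here a plain dict), while a second load of the same address hits in the LSQ, which is the source of the latency and energy savings the paper measures.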

Published in:

Proceedings of the IEEE Computer Society's 12th Annual International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunications Systems (MASCOTS 2004)

Date of Conference:

4-8 Oct. 2004