Caching and predicting branch sequences for improved fetch effectiveness

3 Author(s)
Onder, S. (Dept. of Comput. Sci., Michigan Technol. Univ., Houghton, MI, USA); Jun Xu; Gupta, R.

A sequence of branch instructions in the dynamic instruction stream forms a branch sequence if at most one non-branch instruction separates each consecutive pair of branches in the sequence. We propose a branch prediction scheme in which branch sequence history is explicitly maintained to identify frequently encountered branch sequences at runtime; when the first branch in a sequence is encountered, the outcomes of all of the branches in the sequence are predicted. We have designed an implementation of a branch sequence predictor that provides overall misprediction rates comparable to those of the gshare single-branch predictor. Using this branch sequence predictor, we have devised a novel instruction fetch mechanism. By saving the instructions following the first branch of a branch sequence in a sequence table, the proposed mechanism eliminates fetches of the nonconsecutive instruction cache lines containing these instructions, thereby avoiding the delays associated with fetching them. Experiments comparing the proposed fetch mechanism with a simple fetch mechanism based on a single branch prediction for the SPEC95 benchmarks demonstrate that the total number of I-cache lines fetched during execution decreases by as much as 15%, the number of useful instructions per fetched cache line increases by as much as 18%, and the overall IPC achieved on a superscalar processor increases by as much as 17% for some benchmarks.
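The sequence-detection rule stated in the abstract (consecutive branches separated by at most one non-branch instruction) and the idea of predicting all of a sequence's outcomes when its first branch is encountered can be sketched in software. The sketch below is an illustrative model only, not the paper's actual hardware design; the class, method names, and trace format are invented for illustration, and the real predictor would be a fixed-size table indexed by branch address, not an unbounded dictionary.

```python
# Illustrative software model of a branch-sequence table (hypothetical names).
# A sequence is learned from a dynamic instruction trace; later, looking up
# the first branch's PC yields the predicted outcomes of the whole sequence.

class BranchSequencePredictor:
    def __init__(self, max_gap=1):
        self.max_gap = max_gap  # at most one non-branch between branches
        self.table = {}         # first-branch PC -> list of taken/not-taken outcomes

    def record(self, trace):
        """trace: list of (pc, is_branch, taken) tuples in dynamic order."""
        i, n = 0, len(trace)
        while i < n:
            pc, is_branch, taken = trace[i]
            if not is_branch:
                i += 1
                continue
            # Grow the sequence while branches stay within max_gap of each other.
            outcomes = [taken]
            gap = 0
            j = i + 1
            while j < n:
                _, b, t = trace[j]
                if b:
                    outcomes.append(t)
                    gap = 0
                else:
                    gap += 1
                    if gap > self.max_gap:
                        break  # too many non-branches: sequence ends here
                j += 1
            if len(outcomes) > 1:      # store only genuine multi-branch sequences
                self.table[pc] = outcomes
            i = j if j > i else i + 1  # continue past the recorded sequence

    def predict(self, first_pc):
        """On fetching the first branch, predict every outcome in the sequence."""
        return self.table.get(first_pc)

# Demo trace: branch at 100 (taken), non-branch, branch at 108 (not taken),
# branch at 112 (taken), then two non-branches ending the sequence.
p = BranchSequencePredictor()
trace = [(100, True, True), (104, False, None), (108, True, False),
         (112, True, True), (116, False, None), (120, False, None)]
p.record(trace)
print(p.predict(100))  # all three outcomes predicted at once
```

In the paper's fetch mechanism the sequence table additionally caches the instructions following the first branch, so the nonconsecutive I-cache lines holding them need not be fetched; that storage aspect is omitted from this outcome-only sketch.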

Published in:

Proceedings of the 1999 International Conference on Parallel Architectures and Compilation Techniques (PACT 1999)

Date of Conference:

1999
