
An Empirical Architecture-Centric Approach to Microarchitectural Design Space Exploration

Authors: C. Dubach, T. M. Jones, and M. F. P. O'Boyle (School of Informatics, University of Edinburgh, Edinburgh, UK)

The microarchitectural design space of a new processor is too large for an architect to evaluate in its entirety. Even with the use of statistical simulation, evaluating a single configuration can take an excessive amount of time because a set of benchmarks must be run with realistic workloads. This paper proposes a novel machine-learning model that can quickly and accurately predict the performance and energy consumption of any new program on any microarchitectural configuration. This architecture-centric approach uses prior knowledge from offline training and applies it across benchmarks, allowing our model to predict the performance of any new program across the entire microarchitecture configuration space with just 32 further simulations. First, we analyze our design space and show how different microarchitectural parameters affect the cycles, energy, energy-delay (ED), and energy-delay-squared (EDD) of the architectural configurations. We show the accuracy of our predictor on SPEC CPU 2000 and how it can be used to predict programs from a different benchmark suite. We then compare our approach to a state-of-the-art program-specific predictor and show that we significantly reduce prediction error: the average error when predicting performance drops from 24 percent to just 7 percent, and the correlation coefficient rises from 0.55 to 0.95. Finally, we evaluate the cost of offline learning and show that we can still achieve a high correlation coefficient when training on just five benchmarks.
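The core idea of an architecture-centric predictor can be illustrated with a toy sketch: prior benchmarks' responses across the full configuration space serve as basis signatures, and the new program's 32 sampled simulations are used to learn a combination of those signatures that is then extrapolated to every other configuration. All of the data, dimensions, and the least-squares combiner below are hypothetical placeholders, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior knowledge: cycle counts of 5 previously simulated
# benchmarks over a 2000-point configuration space (rows = configurations).
n_configs, n_prior = 2000, 5
prior = rng.lognormal(mean=10, sigma=0.3, size=(n_configs, n_prior))

# A "new" program whose response (unknown to the predictor) happens to be
# a noisy mixture of the prior benchmarks' responses.
true_w = np.array([0.40, 0.10, 0.30, 0.05, 0.15])
new_prog = prior @ true_w + rng.normal(0.0, 50.0, n_configs)

# Architecture-centric step: simulate the new program on only 32 sampled
# configurations, then fit weights that combine the prior benchmarks'
# responses to match those 32 observations.
sample = rng.choice(n_configs, size=32, replace=False)
w, *_ = np.linalg.lstsq(prior[sample], new_prog[sample], rcond=None)

# Predict the new program's behavior across the entire space.
pred = prior @ w
corr = np.corrcoef(pred, new_prog)[0, 1]
print(f"correlation across full space: {corr:.3f}")
```

The same predicted cycle counts, paired with predicted energy, would yield the ED and EDD metrics the abstract mentions (energy × delay and energy × delay²).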

Published in:

IEEE Transactions on Computers (Volume 60, Issue 10)