
Performance Evaluation of Cache Memory Organizations in Embedded Systems


3 Author(s):
Soryani, M. (Dept. of Comput. Eng., Iran Univ. of Sci. & Technol., Tehran); Sharifi, M.; Rezvani, M.H.

The tremendous rise in microprocessor technology has produced high-speed processors and dramatically widened the processor-memory speed gap. At the same time, real-time embedded systems often face hard deadlines for completing their instructions. Consequently, the design of the cache memory hierarchy is a critical issue in embedded systems. This paper describes a simulation-based performance evaluation of typical cache design issues in embedded systems, such as split data and instruction caches versus a unified cache, cache size, associativity, and replacement policy. The evaluation uses the SimpleScalar simulation tools (Alpha version). We select benchmarks for this study based on previous research on clustering the SPEC CPU2000 benchmark suite. The contribution of this work is identifying important parameters for cache design in general-purpose embedded systems. Our results show that pseudo-LRU replacement techniques such as MRU can approximate LRU with much lower complexity across a wide variety of cache sizes and degrees of associativity.
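The MRU-based pseudo-LRU idea mentioned above can be illustrated with a small sketch (a hypothetical model, not the authors' simulator): each way in a set keeps a single MRU bit instead of a full LRU ordering. A hit sets the bit; when all bits would become set, the others are cleared; the victim on a miss is any way whose bit is still zero. This approximates LRU with one bit per way rather than log-factorial state.

```python
class PseudoLRUSet:
    """One set of an N-way cache using MRU-bit pseudo-LRU replacement.

    Illustrative sketch only: names and structure are assumptions for
    this example, not taken from the paper's simulator.
    """

    def __init__(self, ways):
        self.tags = [None] * ways   # cached tags; None = invalid way
        self.mru = [False] * ways   # one MRU bit per way

    def _touch(self, way):
        # Mark the way most-recently-used; if every bit would be set,
        # clear the others so a victim candidate always remains.
        self.mru[way] = True
        if all(self.mru):
            self.mru = [False] * len(self.mru)
            self.mru[way] = True

    def access(self, tag):
        """Return True on a hit, False on a miss (filling a victim way)."""
        if tag in self.tags:
            self._touch(self.tags.index(tag))
            return True
        # Victim: first way whose MRU bit is clear.
        victim = self.mru.index(False)
        self.tags[victim] = tag
        self._touch(victim)
        return False
```

For example, in a 2-way set the sequence A, B, A, C evicts B at the access to C, matching what true LRU would do; with more ways the two policies can diverge, which is the accuracy/complexity trade-off the paper evaluates.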

Published in:

Fourth International Conference on Information Technology (ITNG '07)

Date of Conference:

2-4 April 2007