Full Fault Resilience and Relaxed Synchronization Requirements at the Cache-Memory Interface

2 Author(s):

Chengmo Yang (Dept. of Electr. & Comput. Eng., Univ. of Delaware, Newark, DE, USA); A. Orailoglu

Abstract:

While multicore platforms promise significant speedups for many current applications, they also suffer from increased reliability problems as a result of continued device scaling. The projected rise in fault rates, together with the diverse behavior of fault manifestation, argues for highly efficient solutions for full fault resilience. Traditional duplication and checkpointing strategies typically impose sizable overhead, either in checkpointing execution results or in constantly synchronizing two threads for value checking. To reduce such overhead while still delivering full fault resilience, we propose an integrated fault detection and checkpointing framework in which the comparison and checkpointing process is performed at the cache-memory interface. By sharing a single cache between two duplicated threads, execution results can be verified directly in the cache before being written back, thus strictly protecting memory against execution faults. Meanwhile, since unconfirmed data are allowed to be written into the cache, one thread can run well ahead of the other, relaxing the straitjacket of the strict execution synchronization model. If a cache block is frequently updated, further synchronization relaxation can be achieved by extending the cache design to duplicate the block and skip comparison of the intermediate values.
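
To make the verify-before-writeback idea concrete, the following is a minimal C sketch of the mechanism the abstract describes: the leading thread's store lands in a shared cache block marked unverified, the trailing thread's duplicated store is compared against it, and only a confirmed block becomes eligible for writeback to memory. This is not the authors' hardware design; every type and function name here (cache_block_t, store_leading, store_trailing, can_write_back) is a hypothetical illustration.

/* Sketch of comparison and checkpointing at the cache-memory interface.
 * Names and structure are illustrative assumptions, not the paper's RTL. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t addr;
    uint64_t data;       /* value written by the leading thread       */
    bool     unverified; /* set until the trailing thread confirms it */
} cache_block_t;

/* Leading thread: the store is accepted into the cache but marked
 * unverified, so it cannot yet be written back to memory. This is what
 * lets one thread run well ahead of the other. */
static void store_leading(cache_block_t *blk, uint64_t addr, uint64_t data)
{
    blk->addr = addr;
    blk->data = data;
    blk->unverified = true;
}

/* Trailing thread: the duplicated store is compared in the cache.
 * A match clears the unverified bit; a mismatch signals a fault so the
 * threads can roll back to the last confirmed checkpoint. */
static bool store_trailing(cache_block_t *blk, uint64_t addr, uint64_t data)
{
    if (blk->addr == addr && blk->data == data) {
        blk->unverified = false;   /* verified: memory may be updated */
        return true;
    }
    return false;                  /* fault detected before writeback */
}

/* Writeback is blocked while the block is unverified, which keeps
 * memory strictly protected against execution faults. */
static bool can_write_back(const cache_block_t *blk)
{
    return !blk->unverified;
}

int main(void)
{
    cache_block_t blk = {0};

    store_leading(&blk, 0x1000, 42);                  /* leading thread runs ahead  */
    printf("writeback allowed? %d\n", can_write_back(&blk));   /* prints 0 */

    if (store_trailing(&blk, 0x1000, 42))             /* trailing thread confirms   */
        printf("writeback allowed? %d\n", can_write_back(&blk)); /* prints 1 */
    else
        printf("mismatch: roll back to checkpoint\n");

    return 0;
}

The further relaxation mentioned in the abstract (duplicating a frequently updated block and skipping comparison of intermediate values) would sit on top of this scheme; it is omitted here to keep the sketch small.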

Published in:

IEEE Transactions on Very Large Scale Integration (VLSI) Systems (Volume: 19, Issue: 11)

Date of Publication:

Nov. 2011
