A Reinforcement-Learning Approach to Failure-Detection Scheduling

Author: Fancong Zeng (BEA Systems, Inc.)

A failure-detection scheduler for an online production system must balance performance against reliability. If failure-detection processes run too frequently, valuable system resources are spent checking and rechecking for failures. If they run too rarely, a failure can remain undetected for a long time. In both cases, system performability suffers. We present a model-based learning approach that estimates the failure rate and then performs an optimization to find the tradeoff that maximizes system performability. We show that our approach is not only theoretically sound but also practically effective, and we demonstrate its use in an implemented automated deadlock-detection system for Java.
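The core tradeoff the abstract describes can be illustrated with a minimal sketch. The model, function names, and cost structure below are illustrative assumptions, not the paper's actual formulation: failures are assumed to arrive as a Poisson process, each detection check has a fixed cost, and an undetected failure accrues downtime cost until the next check. Under those assumptions the failure rate can be estimated from observed failures, and the check interval that minimizes expected cost per unit time has a closed form.

```python
import math

def estimate_failure_rate(failure_count, observation_window):
    """Maximum-likelihood estimate of a Poisson failure rate:
    failures observed per unit of time."""
    return failure_count / observation_window

def optimal_check_interval(failure_rate, check_cost, downtime_cost_rate):
    """Choose the check interval T minimizing expected cost per unit time:

        cost(T) = check_cost / T  +  failure_rate * (T / 2) * downtime_cost_rate

    (a failure waits T/2 on average before the next check detects it).
    Setting d cost/dT = 0 gives T* = sqrt(2 * check_cost /
    (failure_rate * downtime_cost_rate))."""
    return math.sqrt(2.0 * check_cost / (failure_rate * downtime_cost_rate))

# Example: 2 failures seen over 100 time units, check cost 1.0,
# downtime cost 50.0 per unit of undetected-failure time.
rate = estimate_failure_rate(2, 100.0)          # 0.02 failures/unit time
interval = optimal_check_interval(rate, 1.0, 50.0)
```

The sketch captures the abstract's point: as the estimated failure rate or downtime cost rises, the optimal interval shrinks (check more often); as checking gets more expensive, it grows.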

Published in:

Seventh International Conference on Quality Software (QSIC 2007)

Date of Conference:

11-12 Oct. 2007