A failure-detection scheduler for an online production system must strike a balance between performance and reliability. If failure-detection processes run too frequently, valuable system resources are spent checking and rechecking for failures; if they run too rarely, a failure can remain undetected for a long time. In both cases, system performability suffers. We present a model-based learning approach that estimates the failure rate and then performs an optimization to find the tradeoff that maximizes system performability. We show that our approach is not only theoretically sound but also practically effective, and we demonstrate its use in an implemented automated deadlock-detection system for Java.
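To make the tradeoff concrete, the sketch below illustrates one simple instance of the idea, not the paper's actual model: failures are assumed to arrive as a Poisson process, the failure rate is estimated by maximum likelihood from observed failures, and the check interval is chosen to minimize a cost that sums per-check overhead against expected undetected-failure time. The cost constants and function names are illustrative assumptions.

```python
import math

def estimate_failure_rate(num_failures, observation_time):
    """MLE of a Poisson failure rate: failures observed per unit of time."""
    return num_failures / observation_time

def optimal_check_interval(rate, check_cost, delay_cost):
    """Choose interval T minimizing  check_cost/T + rate*delay_cost*T/2.

    The first term is checking overhead per unit time; the second is the
    expected cost of a failure going undetected for T/2 on average.
    Setting the derivative to zero gives T* = sqrt(2*check_cost/(rate*delay_cost)).
    """
    return math.sqrt(2.0 * check_cost / (rate * delay_cost))

# Example: 2 failures seen over 100 time units, illustrative cost constants.
rate = estimate_failure_rate(2, 100.0)          # 0.02 failures per time unit
interval = optimal_check_interval(rate, check_cost=1.0, delay_cost=50.0)
```

As the estimated failure rate rises, the optimal interval shrinks (check more often); as per-check overhead rises, it grows, which is exactly the performance-versus-reliability balance described above.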