We consider a Markovian model of a distributed firm real-time system with a homogeneous job arrival stream and multiple heterogeneous server clusters, each with its own queue and server pool. Upon a job's arrival, a decision is made either to reject it, at a cost of R, or to accept it and route it to some cluster, where it awaits processing in first-come first-served order. Jobs carry firm deadlines, on either the start or the completion of service, and renege at a cost of 1 if their deadline is missed. Since finding an average-cost optimal admission control and routing policy is intractable, we consider a static policy (optimal Bernoulli splitting (BS)) and four dynamic policies based on numeric indices attached to individual queues as functions of their current congestion: individually optimal (IO), policy improvement (PI) upon the optimal BS, restless bandit (RB), and a novel hybrid PI-RB policy. Index-computing algorithms with linear complexity are presented. A numerical study on two-cluster instances is reported, in which the policies are benchmarked against the optimal cost performance as model parameters are varied one at a time. The study reveals that the PI-RB index policy is consistently near optimal.
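To illustrate the general shape of such index-based admission and routing rules, the following is a minimal sketch. It does not implement the paper's IO, PI, RB, or PI-RB indices; the `toy_index` function is a hypothetical placeholder (queue length over service rate) standing in for whichever congestion-dependent index a given policy computes. The decision logic, however, follows the structure described above: compare the best cluster index against the rejection cost R, and either reject or route.

```python
# Hypothetical sketch of a generic index-based admission/routing rule.
# NOT the paper's specific indices: toy_index is an assumed stand-in
# that merely grows with congestion and shrinks with service capacity.

R = 5.0  # rejection cost (model parameter from the abstract)

def toy_index(queue_length, service_rate):
    # Placeholder index attached to a queue as a function of congestion.
    return queue_length / service_rate

def admit_and_route(queue_lengths, service_rates, reject_cost=R):
    """Return 'reject' or the index of the cluster chosen for the job."""
    indices = [toy_index(n, mu)
               for n, mu in zip(queue_lengths, service_rates)]
    best = min(range(len(indices)), key=lambda k: indices[k])
    # Reject when even the least-congested cluster looks costlier
    # than paying the rejection cost R outright.
    if indices[best] > reject_cost:
        return "reject"
    return best

# Two heterogeneous clusters: indices are 3/1.0 = 3.0 and 10/2.0 = 5.0,
# so the job is routed to cluster 0.
print(admit_and_route([3, 10], [1.0, 2.0]))   # -> 0
# With both clusters congested (indices 10.0 and 6.0, both above R),
# the job is rejected.
print(admit_and_route([10, 12], [1.0, 2.0]))  # -> reject
```

A real implementation of the PI or RB policies would replace `toy_index` with the precomputed index tables produced by the paper's linear-complexity algorithms; the routing comparison itself stays the same.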