We propose a new load balancing algorithm for distributed systems that assumes central coordination, with load distributed through explicit communication between the resources. The novelty of the algorithm lies in the goal it pursues. Designed for networks that process the requests of a web service, our algorithm aims to satisfy two guarantees for each client: an average response time and a number of requests processed per time interval. A master processor dispatches each client request to the worker processor estimated to finish its current workload first. When choosing the next request to process, a worker processor computes a priority for each pending request. Both the finish-time estimation and the priority assignment are based on the two constraining parameters, average response time and requests per time interval, specified in the license of the client issuing the request. We analyze the correctness of our algorithm with respect to satisfying the above-mentioned goal in different circumstances. We discuss the performance of the proposed algorithm and present the results of a simulation.
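The dispatch-and-prioritize scheme described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the license table, the cost field, and the deadline rule (arrival time plus the licensed average response time) are all assumptions introduced for the example.

```python
import heapq
import itertools

# Hypothetical per-client license parameters: target average response time
# (seconds) and permitted requests per time interval. Values are illustrative.
LICENSES = {
    "client_a": {"avg_response": 1.0, "requests_per_interval": 10},
    "client_b": {"avg_response": 5.0, "requests_per_interval": 2},
}

class Worker:
    def __init__(self, name):
        self.name = name
        self.queue = []            # priority queue of pending requests
        self.finish_time = 0.0     # estimated time to drain current workload
        self._tie = itertools.count()

    def enqueue(self, request, now):
        # Assumed priority rule: earlier deadline = higher priority, where
        # deadline = arrival time + the client's licensed average response time.
        lic = LICENSES[request["client"]]
        deadline = now + lic["avg_response"]
        heapq.heappush(self.queue, (deadline, next(self._tie), request))
        self.finish_time += request["cost"]

    def next_request(self):
        # The worker serves the pending request with the earliest deadline.
        return heapq.heappop(self.queue)[2] if self.queue else None

def dispatch(workers, request, now):
    # The master sends the request to the worker estimated to finish
    # its associated workload first.
    target = min(workers, key=lambda w: w.finish_time)
    target.enqueue(request, now)
    return target
```

For example, if a request from `client_b` (loose response-time target) and then one from `client_a` (tight target) arrive at the same worker, `next_request` returns the `client_a` request first, while `dispatch` steers new requests toward the least-loaded worker.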