Dynamically optimal policies for stochastic scheduling subject to preemptive-repeat machine breakdowns

3 Author(s)
Xiaoqiang Cai ; Dept. of Syst. Eng. & Eng. Manage., Chinese Univ. of Hong Kong, Shatin, Hong Kong, China ; Xianyi Wu ; Xian Zhou

Abstract: We consider the problem of finding a dynamically optimal policy to process n jobs on a single machine subject to stochastic breakdowns. We study the preemptive-repeat breakdown model: if the machine breaks down while processing a job, the work done on that job before the breakdown is lost and the job must be started over. Our study is built on a general setting that allows: 1) the uptimes and downtimes of the machine to follow general probability distributions, not necessarily independent of each other; 2) the breakdown process to depend on the job being processed; and 3) the processing times of the jobs to be random variables with arbitrary distributions. We consider two possible cases for the processing time of a job interrupted by a breakdown: a) it is resampled according to its probability distribution, or b) it is the same random variable as before the breakdown. We introduce the concept of occupying time and derive its Laplace and integral transforms. For the problem with resampled processing times, we establish a general optimality equation for the optimal dynamic policy under a unified objective measure. From this optimality equation we deduce the optimal dynamic policies for several problems with well-known criteria, including weighted discounted reward, weighted flowtime, truncated cost, number of tardy jobs under stochastic order, and maximum holding cost. For the problem with the same random processing time, we develop the optimal dynamic policy via the theory of bandit processes. A set of Gittins indices is derived that gives the optimal dynamic policies under the criteria of weighted discounted reward and weighted flowtime.

Note to Practitioners: In practice, a machine is commonly subject to breakdowns that may severely interrupt the job it is processing. In such situations, there may be limited information on the breakdown patterns of the machine and the processing requirements of the jobs, and a great challenge for the decision-maker is how to use the available information to make the right decision. Stochastic scheduling with stochastic machine breakdowns aims to determine optimal policies in these situations. In this paper, we study the problem within the preemptive-repeat breakdown framework, which addresses the practical situations where a job must be restarted if a machine breakdown occurs while it is being processed. Such problems arise in many industrial applications, including refining metal in a refinery, running a program on a computer, and performing a reliability test on a facility. Generally, if a job must be processed continuously, with no interruption, until it is fully completed, then the preemptive-repeat breakdown formulation should be used. Our research focuses on optimal dynamic policies, which use real-time information to dynamically adjust and improve a decision. We consider two types of models, depending on whether the processing time of a job interrupted by a breakdown must be resampled. For the problem with resampled processing times, we establish a general optimality equation under a unified objective measure, and deduce the optimal dynamic policies under a number of well-known criteria. For the problem without resampled processing times, we develop the optimal dynamic policies under the criteria of weighted discounted reward and weighted flowtime.
Broadly speaking, our findings can be applied in any situation where it is desirable to derive the best dynamic decisions for problems with stochastic machine breakdowns and preemptive-repeat jobs.
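To make the preemptive-repeat semantics concrete, here is a minimal Python sketch (not code from the paper; the function name occupying_time and the sampler arguments are illustrative assumptions) that simulates the occupying time of a single job, covering both case (a), resampled processing times, and case (b), the same realization retained. For the special case of a fixed processing time t, exponential uptimes with rate lam, and mean downtime E[D], the simulated mean can be checked against the classical closed form (exp(lam*t) - 1) * (1/lam + E[D]).

```python
import math
import random

def occupying_time(proc_time_sampler, uptime_sampler, downtime_sampler,
                   resample=True):
    """Simulate one job's occupying time under preemptive-repeat breakdowns.

    If the machine fails before the job's processing requirement is met,
    all work is lost: a downtime elapses and the job starts over. With
    resample=True the processing time is redrawn after each breakdown
    (case a); with resample=False the same realization is kept (case b).
    """
    total = 0.0
    t = proc_time_sampler()              # processing requirement for this attempt
    while True:
        u = uptime_sampler()             # time until the next machine breakdown
        if u >= t:                       # job finishes before the machine fails
            return total + t
        total += u + downtime_sampler()  # lost work plus repair time
        if resample:
            t = proc_time_sampler()      # case a: redraw the processing time
        # case b: keep the same realization t

# Sanity check against the closed form for fixed t, exponential uptimes.
lam, t, mean_down = 0.5, 2.0, 1.0
sims = [occupying_time(lambda: t,
                       lambda: random.expovariate(lam),
                       lambda: mean_down)
        for _ in range(200_000)]
print(sum(sims) / len(sims))                            # simulated mean
print((math.exp(lam * t) - 1) * (1 / lam + mean_down))  # approx. 5.15
```

With a fixed processing time the two cases coincide; the distinction between resampled and retained processing times only matters when proc_time_sampler is genuinely random, which is exactly the setting where the paper's two models diverge.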

Published in:

IEEE Transactions on Automation Science and Engineering (Volume: 2, Issue: 2)

Date of Publication:

April 2005
