Abstract
This paper describes a family of discrete-review policies for scheduling open multiclass queueing networks. Each policy in the family is derived from what we call a dynamic reward function: such a function associates with each queue length vector q and each job class k a positive value r_k(q), which is treated as a reward rate for time devoted to processing class-k jobs. Under such a policy, system status is reviewed at discrete points in time, and at each review point the controller formulates a processing plan for the next review period based on the queue length vector observed. Assuming that each station has a traffic intensity parameter less than one, every policy in the family is shown to be stable; the proof combines elementary large deviations theory with an analysis of an associated fluid control problem. These results are extended to systems with class-dependent setup times, as well as to systems with alternate routing and admission control capabilities.
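To make the review mechanism concrete, the following is a minimal sketch of a discrete-review scheduling loop, run here on a deterministic fluid-like approximation of the network dynamics. The particular reward function, the greedy construction of the processing plan, the review period length T, and all network parameters (K, station, mu, lam, P) are illustrative assumptions for this sketch only; the paper derives its policies from a general dynamic reward function and an associated fluid control problem, not from this code.

```python
import numpy as np

K = 3                                  # number of job classes
station = np.array([0, 1, 0])          # station serving each class
mu = np.array([3.0, 2.5, 4.0])         # service rate of each class
lam = np.array([1.0, 1.0, 0.0])        # exogenous arrival rates
P = np.array([[0, 1, 0],               # routing matrix: class 0 -> 1,
              [0, 0, 1],               # class 1 -> 2, class 2 exits
              [0, 0, 0]], dtype=float)

def reward(q):
    # Hypothetical dynamic reward function r_k(q): the reward rate for
    # serving class k grows with its queue length (one choice among many).
    return 1.0 + q

def plan(q, T):
    # Greedy processing plan: at each station, allocate the review
    # period T to its classes in decreasing order of reward rate,
    # capping each allocation by the work that queue actually holds.
    alloc = np.zeros(K)
    r = reward(q)
    for s in set(station.tolist()):
        classes = [k for k in range(K) if station[k] == s]
        budget = T
        for k in sorted(classes, key=lambda k: -r[k]):
            t = min(budget, q[k] / mu[k])  # time needed to drain queue k
            alloc[k] = t
            budget -= t
    return alloc

# Discrete-review loop: observe q, form a plan, apply it for one period.
q = np.array([5.0, 3.0, 4.0])
T = 1.0
for review in range(20):
    alloc = plan(q, T)
    served = mu * alloc                          # jobs completed per class
    q = np.maximum(q + lam * T - served + P.T @ served, 0.0)
print("queue lengths after 20 review periods:", q)
```

Under the traffic-intensity condition stated above, the queue lengths in such a loop should remain bounded over time; the sketch replaces the paper's plan (obtained from the fluid control problem) with a simple greedy rule purely for readability.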