Abstract
This paper considers the solution of Markov decision problems whose parameters can be obtained only via approximating schemes, or for which it is computationally preferable to approximate the parameters rather than to employ exact algorithms for their computation. Various models in which this situation occurs are presented. Furthermore, it is shown that a modified value-iteration method may be employed, both for the discounted version and for the undiscounted version of the model, in order to solve the optimality equation and to find optimal policies. In both cases, the convergence rate is determined. As a side result, we characterize the asymptotic behavior of backward products of a geometrically convergent sequence of Markov matrices.
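Illustrative only, not the authors' modified value-iteration scheme: the following minimal Python sketch runs discounted value iteration in which each iteration uses the currently available approximations of the transition matrices and rewards, under the assumption that these approximations converge geometrically to the true parameters. All function names and the toy data below are hypothetical.

import numpy as np

def approximate_value_iteration(P_seq, r_seq, gamma, n_iter):
    # P_seq(n): (A, S, S) approximate transition matrices available at iteration n
    # r_seq(n): (A, S)    approximate one-step rewards available at iteration n
    # gamma:    discount factor in (0, 1)
    S = r_seq(0).shape[1]
    v = np.zeros(S)
    for n in range(n_iter):
        P, r = P_seq(n), r_seq(n)        # use the current parameter approximations
        q = r + gamma * P @ v            # Bellman backup under the approximate model
        v = q.max(axis=0)
    return v, q.argmax(axis=0)           # value estimate and a greedy policy

# Toy usage: the approximations converge geometrically to fixed true parameters.
rng = np.random.default_rng(0)
A, S = 2, 3
P_true = rng.random((A, S, S)); P_true /= P_true.sum(axis=2, keepdims=True)
P_alt  = rng.random((A, S, S)); P_alt  /= P_alt.sum(axis=2, keepdims=True)
r_true = rng.random((A, S))
P_seq = lambda n: (1 - 0.5**n) * P_true + 0.5**n * P_alt   # geometrically convergent Markov matrices
r_seq = lambda n: r_true + 0.5**n * 0.1
v, policy = approximate_value_iteration(P_seq, r_seq, gamma=0.9, n_iter=50)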
Full Citation
Journal of Optimization Theory and Applications, vol. 34 (June 01, 1981): 207–241.