Dynamic Programming with State-Dependent Discounting

By John Stachurski and Junnan Zhang

http://d.repec.org/n?u=RePEc:arx:papers:1908.08800&r=dge

This paper extends the core results of discrete time infinite horizon dynamic programming theory to the case of state-dependent discounting. The traditional constant-discount condition requires that the discount factor of the controller is strictly less than one. Here we replace the constant factor with a discount factor process and require, in essence, that the process is strictly less than one on average in the long run. We prove that, under this condition, the standard optimality results can be recovered, including Bellman’s principle of optimality, convergence of value function iteration and convergence of policy function iteration. We also show that the condition cannot be weakened in many standard settings. The dynamic programming framework considered in the paper is general enough to contain features such as recursive preferences. Several applications are discussed.
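For intuition, here is a minimal numerical sketch (not taken from the paper) of value function iteration when the discount factor is driven by an exogenous two-state Markov chain. All primitives below (the transition matrix Q, the state-dependent factors beta, and the reward array r) are illustrative assumptions; the spectral-radius check is one sufficient way to express "discounting on average" in a finite-state setting.

```python
import numpy as np

# Exogenous Markov state z in {0, 1} with transition matrix Q (assumed).
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Discount factor depends on the exogenous state; it may exceed one in
# some states as long as discounting holds on average in the long run.
beta = np.array([1.02, 0.90])

# Sufficient condition in this finite-state illustration:
# spectral radius of diag(beta) @ Q strictly less than one.
L = np.diag(beta) @ Q
assert np.max(np.abs(np.linalg.eigvals(L))) < 1, "no discounting on average"

# Toy controlled problem: endogenous state x in {0,...,n-1}, the action a
# chooses next period's x directly, reward r[x, a, z] (randomly generated).
n = 5
r = np.random.default_rng(0).uniform(size=(n, n, 2))

def bellman_update(v):
    """One step of the Bellman operator; v has shape (n, 2)."""
    ev = v @ Q.T                                   # E[v(a, z') | z], shape (n, 2)
    q = r + beta[None, None, :] * ev[None, :, :]   # action values q[x, a, z]
    return q.max(axis=1)                           # maximize over actions

v = np.zeros((n, 2))
for _ in range(1000):
    v_new = bellman_update(v)
    if np.max(np.abs(v_new - v)) < 1e-8:
        break
    v = v_new
```

Note that beta exceeds one in state 0, yet the iteration still converges because the discount-weighted transition operator is eventually contracting, which is the flavor of condition the paper formalizes.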

I suspect many assumed dynamic programming would still work with variable discounting, so it is nice that this paper firmly establishes that it does when the discount factor is state-dependent. One may now apply this without guilt.
