By Richard Evans and Kerk Phillips
This paper presents an adjustment to commonly used approximation methods for dynamic stochastic general equilibrium (DSGE) models. Policy functions approximated around the steady state will be inaccurate away from the steady state. In some cases, this does not lead to substantial inaccuracies. In other cases, however, the model may not have a well-defined steady state, or the nature of the steady state may be at odds with the model's off-steady-state dynamics. We show how to simulate a DSGE model by approximating about the current state. Our method introduces an approximation error, but it minimizes the error associated with a finite-order Taylor-series expansion of the model's characterizing equations. The method is easily implemented using available simulation software and has the advantage of mimicking highly nonlinear behavior. We illustrate this with a variety of simple models. We compare our technique with other simulation techniques and show that the approximation errors are comparable for stable, well-defined models. We also illustrate how this method can solve and simulate models that are not tractable with standard approximation methods.
(Log-)linearization clearly does not apply in some situations, and this paper details a relatively simple method for those who want to stick with their standard solution tools. If your research question involves large shocks, or your model has an undefined steady state, this paper is likely for you.
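The core idea can be made concrete with a toy sketch. This is not the authors' algorithm, which re-approximates the model's full characterizing equations each period; here the known closed-form policy of a Brock–Mirman model (log utility, full depreciation) stands in for the true policy, and we compare a first-order expansion fixed at the steady state with one re-taken at the current state every period. All parameter values are illustrative.

```python
# Toy sketch (assumptions labeled): the exact Brock-Mirman policy
# k' = alpha*beta*exp(z)*k^alpha stands in for a true policy that, in
# practice, would only be available through a local approximation.
import numpy as np

alpha, beta, rho, sigma = 0.35, 0.98, 0.9, 0.02  # illustrative values

def policy(k, z):
    """Exact policy of the Brock-Mirman model."""
    return alpha * beta * np.exp(z) * k ** alpha

kbar = (alpha * beta) ** (1.0 / (1.0 - alpha))  # steady-state capital (z = 0)

def dk(k, z):
    return alpha * policy(k, z) / k   # d policy / d k

def dz(k, z):
    return policy(k, z)               # d policy / d z

rng = np.random.default_rng(0)
T = 200
z = np.zeros(T + 1)
k_true = np.full(T + 1, 0.2 * kbar)   # start far below the steady state
k_ss = k_true.copy()                  # expansion fixed at the steady state
k_cur = k_true.copy()                 # expansion re-taken at the current state

for t in range(T):
    z[t + 1] = rho * z[t] + sigma * rng.standard_normal()
    k_true[t + 1] = policy(k_true[t], z[t + 1])
    # (a) first-order expansion taken once, at (kbar, 0):
    k_ss[t + 1] = (kbar + dk(kbar, 0.0) * (k_ss[t] - kbar)
                   + dz(kbar, 0.0) * z[t + 1])
    # (b) first-order expansion re-taken at the current state (k_cur[t], z[t]);
    #     the k-term vanishes at the expansion point, so only the shock
    #     innovation is approximated:
    k_cur[t + 1] = policy(k_cur[t], z[t]) + dz(k_cur[t], z[t]) * (z[t + 1] - z[t])

err_ss = np.max(np.abs(k_ss - k_true))
err_cur = np.max(np.abs(k_cur - k_true))
print(f"max |error|, steady-state expansion:  {err_ss:.5f}")
print(f"max |error|, current-state expansion: {err_cur:.5f}")
```

Because the current-state expansion is centered where the economy actually is, its only error comes from the curvature in the shock innovation, whereas the steady-state expansion also carries curvature error in the state, which is large when the simulation starts far from the steady state.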