December 3, 2013
By Juan Carlos Parra-Alvarez
This paper evaluates the accuracy of a set of techniques that approximate the solution of continuous-time DSGE models. Using the neoclassical growth model I compare linear-quadratic, perturbation and projection methods. All techniques are applied to the HJB equation and the optimality conditions that define the general equilibrium of the economy. Two cases are studied depending on whether a closed form solution is available. I also analyze how different degrees of non-linearities affect the approximated solution. The results encourage the use of perturbations for reasonable values of the structural parameters of the model and suggest the use of projection methods when a high degree of accuracy is required.
Continuous-time DSGE models are not very popular but do have some interesting applications. While there is an extensive literature comparing solution methods for discrete-time models in terms of computing performance and precision, that literature is much scarcer for the continuous-time kind. This paper is a good start.
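For readers less familiar with the continuous-time formulation, the object being approximated is the value function in a Hamilton-Jacobi-Bellman (HJB) equation. A minimal sketch for a stochastic neoclassical growth model, in my own notation rather than necessarily the paper's exact specification, is

\rho V(k,z) = \max_c \{ u(c) + V_k(k,z)[e^z f(k) - \delta k - c] - \eta z V_z(k,z) + \tfrac{\sigma^2}{2} V_{zz}(k,z) \}, \qquad u'(c) = V_k(k,z),

where \rho is the discount rate, \delta the depreciation rate, and z an Ornstein-Uhlenbeck productivity process with mean reversion \eta and volatility \sigma. Perturbation expands V around the deterministic steady state, while projection approximates V with a flexible basis (say, Chebyshev polynomials) and imposes the HJB equation at collocation nodes.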
November 26, 2013
The season for calls for papers is starting, so here is a first lot of them. If you know of more, send me a link and I will post them here, on the NEP-DGE mailing list, and over at the QM&RBC Agenda.
Society for Economic Dynamics Annual Meeting, Toronto, 26-28 June 2014. That will be the 25th meeting.
Society for Computational Economics Annual Meeting, Oslo, 22-24 June 2014.
Causes and Consequences of Policy Uncertainty, Princeton, 10-11 April 2014.
Conference Theory and Methods in Macroeconomics (T2M), Lausanne (Switzerland), 13-14 February 2014. Deadline very soon.
November 24, 2013
By Till Gross
I analyze international tax competition in a framework of dynamic optimal taxation for strategically competing governments. The global capital stock is determined endogenously as in a neo-classical growth model. With perfect commitment and a complete tax system (where all factors of production can be taxed), governments set their capital taxes so that the net return is equal to the social marginal product of capital. Capital accumulation thus follows the modified golden rule. This is independent of relative country size, capital taxes in other countries, and the degree of capital mobility. In contrast, with an exogenous capital stock returns on capital are pure rents and a government’s ability to capture them is limited through capital flight, triggering a race to the bottom. With an endogenous capital stock, capital is an intermediate good and taxes on it are not used to raise revenues, but to implement the optimal capital stock. Even in a non-cooperative game it is thus not individually rational for governments to engage in tax competition. I provide a general proof that if the modified golden rule holds in a closed economy, then it also does in an open economy.
This paper highlights an important point that can explain much of the divide in the public debate about capital income taxation. Those who want higher taxes consider capital to be largely exogenous and thus rent. Those who want lower taxes see capital as mostly endogenous and thus subject to distortions. What this paper does not address, though, is how progressive taxation can deal with equity and redistribution. In the context of tax competition, this is important as it adds a new layer of trade-offs: raising taxes reduces inequality, but is also more likely to reduce tax revenue when tax competition is possible. This paper is a good start for thinking about this, one step at a time.
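For reference, the modified golden rule the abstract invokes pins down the long-run capital stock by equating the net marginal product of capital to the rate of time preference. In standard textbook notation (not taken from the paper), without growth,

f'(k^*) - \delta = \rho, \qquad \text{or equivalently in discrete time } \beta[1 + f'(k^*) - \delta] = 1,

which is why, in the paper's argument, the capital tax is not a revenue instrument but a way of implementing the optimal capital stock k^*.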
November 21, 2013
By Jim Dolmas
In this paper, I combine disappointment aversion, as employed by Routledge and Zin and Campanale, Castro and Clementi, with rare disasters in the spirit of Rietz, Barro, Gourio, Gabaix and others. I find that, when the model’s representative agent is endowed with an empirically plausible degree of disappointment aversion, a rare disaster model can produce moments of asset returns that match the data reasonably well, using disaster probabilities and disaster sizes much smaller than have been employed previously in the literature. This is good news. Quantifying the disaster risk faced by any one country is inherently difficult with limited time series data. And, it is open to debate whether the disaster risk relevant to, say, US investors is well-approximated by the sizable risks found by Barro and co-authors in cross-country data. On the other hand, we have evidence that individuals tend to over-weight bad or disappointing outcomes, relative to the outcomes’ weights under expected utility. Recognizing aversion to disappointment means that disaster risks need not be nearly as large as suggested by the cross-country evidence for a rare disaster model to produce average equity premia and risk-free rates that match the data. I illustrate the interaction between disaster risk and disappointment aversion both analytically and in the context of a simple Rietz-like model of asset-pricing with rare disasters. I then analyze a richer model, in the spirit of Barro, with a distribution of disaster sizes, Epstein-Zin preferences, and partial default (in the event of a disaster) on the economy’s ‘risk-free’ asset. For small elasticities of intertemporal substitution, the model is able to match almost exactly the means and standard deviations of the equity return and risk-free rate, for disaster risks one-half or one-fourth the estimated sizes from Barro. For larger elasticities of intertemporal substitution, the model’s fit is less satisfactory, though it fails in a direction not often viewed as problematic—it under-predicts the volatility of the risk-free rate. Even so, apart from that failing, the results are broadly similar to those obtained by Gourio but with disaster risks one-half or one-fourth as large.
The study of disaster risk and asset prices is at risk of becoming a fad. It is thus nice to see a paper showing that one does not need as much risk as recently assumed in the literature to obtain good results. Indeed, much like the general public, researchers may see too much disaster risk right after a disaster has happened.
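To see the basic rare-disaster mechanism at work (without the paper's disappointment-aversion twist), here is a back-of-the-envelope sketch of a Rietz/Barro-style calculation with i.i.d. consumption growth and CRRA utility; the parameter values are purely illustrative and not taken from the paper:

```python
import numpy as np

# Illustrative parameters, not from the paper
beta, gamma = 0.97, 4.0      # discount factor, risk aversion
g, sigma = 0.02, 0.02        # mean and s.d. of log consumption growth in normal times
p, b = 0.017, 0.35           # disaster probability and disaster size (consumption drop)

# Gauss-Hermite nodes for the lognormal normal-times growth, plus a disaster state
nodes, weights = np.polynomial.hermite_e.hermegauss(7)
growth = np.append(np.exp(g + sigma * nodes), np.exp(g) * (1 - b))
prob = np.append(weights / weights.sum() * (1 - p), p)

m = beta * growth ** (-gamma)          # stochastic discount factor with i.i.d. growth
Rf = 1.0 / np.sum(prob * m)            # gross risk-free rate

A = np.sum(prob * m * growth)          # E[m * growth]
pd = A / (1 - A)                       # constant price-dividend ratio of the consumption claim
Re = np.sum(prob * growth) * (1 + pd) / pd   # expected gross equity return

print(f"risk-free rate: {Rf - 1:.2%}, equity premium: {Re - Rf:.2%}")
```

Even in this crude version, raising p or b pushes the risk-free rate down and the premium up, which is the margin the paper trades off against disappointment aversion.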
November 19, 2013
By Martin Kliem and Harald Uhlig
This paper presents a novel Bayesian method for estimating dynamic stochastic general equilibrium (DSGE) models subject to a constrained posterior distribution of the implied Sharpe ratio. We apply our methodology to a DSGE model with habit formation in consumption and leisure, using an estimate of the Sharpe ratio to construct the constraint. We show that the constrained estimation produces a quantitative model with both reasonable asset-pricing as well as business-cycle implications.
To continue with the theme of models addressing both business cycles and asset pricing: the point of this paper is quite simple. If an estimated model cannot satisfy both business cycle and asset pricing facts, one can try to force it that way with Bayesian estimation. And as long as the model can work in theory, it should have a good shot at working with estimated parameters. This also means that we do not yet have a model that, at least in this respect, fits the data naturally enough that it does not need to be guided by tight priors.
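One simple way to think about such a constrained posterior (my shorthand reading, not necessarily the authors' exact implementation) is to restrict the posterior to parameter draws whose model-implied Sharpe ratio lies in an admissible interval:

p(\theta \mid Y) \propto \mathcal{L}(Y \mid \theta)\, \pi(\theta)\, \mathbf{1}\{ \mathrm{SR}(\theta) \in [\underline{s}, \bar{s}] \},

so that the sampler discards, or heavily downweights, draws whose implied Sharpe ratio is at odds with the asset-pricing evidence.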
November 4, 2013
By Laura Veldkamp and Anna Orlik
For decades, macroeconomists have searched for shocks that are plausible drivers of business cycles. A recent advance in this quest has been to explore uncertainty shocks. Researchers use a variety of forecast and volatility data to justify heteroskedastic shocks in a model, which can then generate realistic cyclical fluctuations. But the relevant measure of uncertainty in most models is the conditional variance of a forecast. When agents form such forecasts with state, parameter and model uncertainty, neither forecast dispersion nor innovation volatilities are good proxies for conditional forecast variance. We use observable data to select and estimate a forecasting model and then ask the model to inform us about what uncertainty shocks look like and why they arise.
There is a cottage industry trying to find ways to embed variable uncertainty into business cycle models. This paper differs in that it refines the measurement of uncertainty shocks by getting closer to how market participants form their expectations, in particular with model uncertainty, and then get surprised. This refinement is not innocuous: it allows agents to be uncertain about endogenous variables, not only exogenous ones like total factor productivity. The method can be extended, for example, to any rule used in business forecasting.
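A toy example of why conditional forecast variance and innovation volatility come apart once parameter uncertainty enters (a sketch of the general point, not the authors' forecasting model):

```python
import numpy as np

# Toy AR(1): y' = rho * y + eps, eps ~ N(0, sigma^2), with rho unknown to the agent
rng = np.random.default_rng(0)
rho_true, sigma, T = 0.9, 1.0, 50
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + sigma * rng.standard_normal()

# Agent's posterior over rho (flat prior, known sigma): mean rho_hat, variance v_rho
X, Y = y[:-1], y[1:]
rho_hat = X @ Y / (X @ X)
v_rho = sigma**2 / (X @ X)

# One-step-ahead conditional forecast variance at the current state y_T:
# innovation variance plus a parameter-uncertainty term that scales with y_T^2
y_T = y[-1]
forecast_var = sigma**2 + y_T**2 * v_rho

print(f"innovation variance: {sigma**2:.3f}")
print(f"conditional forecast variance: {forecast_var:.3f}")
```

The parameter-uncertainty term moves with the current state, so measured uncertainty can fluctuate even though the innovation volatility is constant, which is the distinction the abstract stresses.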
October 30, 2013
By José Carrasco-Gallego and Margarita Rubio
In this paper, we analyse the implications of macroprudential and monetary policies for business cycles, welfare, and financial stability. We consider a dynamic stochastic general equilibrium (DSGE) model with housing and collateral constraints. A macroprudential rule on the loan-to-value ratio (LTV), which responds to output and house price deviations, interacts with a traditional Taylor rule for monetary policy. From a positive perspective, introducing a macroprudential tool mitigates the effects of booms in the economy by restricting credit. However, monetary and macroprudential policies may enter in conflict when shocks come from the supply-side of the economy. From a normative point of view, results show that the introduction of this macroprudential measure is welfare improving. Then, we calculate the combination of policy parameters that maximizes welfare and find that the optimal LTV rule should respond relatively more aggressively to house prices than to output deviations. Finally, we study the efficiency of the policy mix. We propose a tool that includes not only the variability of output and inflation but also the variability of borrowing, to capture the effects of policies on financial stability: a three-dimensional policy frontier (3DPF). We find that both policies acting together unambiguously improve the stability of the system.
As much as capital requirements for banks should vary over the business cycle, this paper argues that the loan-to-value ratio for commercial loans should also be a policy variable. That would make good sense if the ratio were otherwise constant, as it admittedly often is in models. But lenders do adjust it according to circumstances, so the question should be whether there is still room for a policy-maker to intervene and adjust it in its own way. Due to moral hazard in the banking sector, a strong case for intervention can be made for capital requirements. I am not sure where the market inefficiency would be that would call for intervention on the loan-to-value ratio.
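In stylized form (my notation, a sketch of the kind of rules described rather than the paper's exact specification), the policy mix pairs a Taylor rule for the interest rate with an LTV cap that leans against output and house-price deviations:

R_t = R_{t-1}^{\rho_R}\big[R_{ss}\,\pi_t^{\phi_\pi}(Y_t/Y_{ss})^{\phi_y}\big]^{1-\rho_R}, \qquad LTV_t = LTV_{ss}\,(Y_t/Y_{ss})^{-\phi_y^{LTV}}(q_t/q_{ss})^{-\phi_q^{LTV}},

where q_t is the house price and the negative exponents mean the cap tightens in booms. The welfare exercise is then about choosing the \phi coefficients, with the finding that the response to house prices should be the more aggressive one.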
October 21, 2013
By Andrew Y. Chen
A unified framework for understanding asset prices and aggregate fluctuations is critical for understanding both issues. I show that a real business cycle model with external habit preferences and capital adjustment costs provides one such framework. The estimated model matches the first two moments of the equity premium and risk-free rate, return and dividend predictability regressions, and the second moments of output, consumption, and investment. The model also endogenizes a key mechanism of consumption-based asset pricing models. In order to address the Shiller volatility puzzle, external habit, long-run risk, and disaster models require the assumption that the volatility of marginal utility is countercyclical. In the model, this countercyclical volatility arises endogenously. Production makes precautionary savings effects show up in consumption. These effects lead to countercyclical consumption volatility and countercyclical volatility of marginal utility. External habit amplifies this channel and makes it quantitatively significant.
Another paper that comes to the conclusion that habit persistence is an essential part of any model that wants to reflect both business cycles and asset prices. Modelers may want to make that a standard feature.
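For reference, "external habit" means that utility depends on consumption relative to a habit stock driven by aggregate rather than own consumption, for example in the common difference form (standard notation, not taken from the paper)

u(C_t) = \frac{(C_t - h X_t)^{1-\gamma}}{1-\gamma},

where X_t tracks past aggregate consumption. Marginal utility then spikes when consumption falls toward the habit level, which is what makes its volatility countercyclical and amplifies the precautionary channel the abstract describes.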
October 12, 2013
By Ömer Tuğrul Açıkgöz
Aiyagari (1995) showed that long-run optimal fiscal policy features a positive tax rate on capital income in Bewley-type economies with heterogeneous agents and incomplete markets. However, determining the magnitude of the optimal capital income tax rate was considered to be prohibitively difficult due to the need to compute the optimal tax rates along the transition path. This paper shows that, in this class of models, long-run optimal fiscal policy and the corresponding allocation can be studied independently of the initial conditions and the transition path. Numerical methods based on this finding are used on a model calibrated to the U.S. economy. I find that the observed average capital income tax rate in the U.S. is too high, the average labor income tax rate and the debt-to-GDP ratio are too low, compared to the long-run optimal levels. The implications of these findings for the existing literature on the optimal quantity of debt and constrained efficiency are also discussed.
The results of this paper will upset people across the political spectrum. First, the public debt-to-GDP ratio should be much higher than it currently is in the US. This is because public debt allows households to overcome their borrowing constraints, so the best thing is for the government to borrow up to the natural borrowing limit (which is not the debt ceiling). Second, labor income taxes should be higher, because this finances the debt and reduces the volatility of household income. Third, capital income taxes should be lower, as this favors the accumulation of precautionary savings. The paper also highlights that policies that are welfare-maximizing in the long run can lead to significantly dominated outcomes in the short run. This also shows that, as so often in the optimal tax literature, optimal policy is difficult to find and results can easily be reversed by changing some aspect of the model. The future will tell whether this analysis is robust.
October 8, 2013
By Babak Mahmoudi
This paper investigates the long-run effects of open-market operations on the distributions of assets and prices in the economy. It offers a theoretical framework to incorporate multiple asset holdings in a tractable heterogeneous-agent model, in which the central bank implements policies by changing the supply of nominal bonds and money. This model features competitive search, which produces distributions of money and bond holdings as well as price dispersion among submarkets. At a high enough bond supply, the equilibrium shows segmentation in the asset market; only households with good income shocks participate in the bond market. When deciding whether to participate in the asset market, households compare liquidity services provided by money with returns on bonds. Segmentation in the asset market is generated endogenously without assuming any rigidities or frictions in the asset market. In an equilibrium with a segmented asset market, open-market operations affect households’ participation decisions and, therefore, have real effects on the distribution of assets and prices in the economy. Numerical exercises show that the central bank can improve welfare by purchasing bonds and supplying money when the asset market is segmented.
Pretty neat paper, as it endogenizes the market segmentation that is typically hard-coded in models. In addition, it looks at how policy influences this limited participation, and how that matters for the influence of monetary policy on outcomes. Indeed, monetary policy acts first on those agents who are participating in markets, and if their number and composition change as a consequence of policy, this may amplify or dilute the policy. Here, with appropriate policy, its impact is amplified.