November 26, 2013
The call-for-papers season is starting, so here is a first batch. If you know of more, send me a link and I will post them here, in the NEP-DGE mailing list, and over at the QM&RBC Agenda.
Society for Economic Dynamics Annual Meeting, Toronto, 26-28 June 2014. That will be the 25th meeting.
Society for Computational Economics Annual Meeting, Oslo, 22-24 June 2014.
Causes and Consequences of Policy Uncertainty, Princeton, 10-11 April 2014.
Theory and Methods in Macroeconomics (T2M) Conference, Lausanne (Switzerland), 13-14 February 2014. The deadline is very soon.
November 24, 2013
By Till Gross
I analyze international tax competition in a framework of dynamic optimal taxation for strategically competing governments. The global capital stock is determined endogenously as in a neo-classical growth model. With perfect commitment and a complete tax system (where all factors of production can be taxed), governments set their capital taxes so that the net return is equal to the social marginal product of capital. Capital accumulation thus follows the modified golden rule. This is independent of relative country size, capital taxes in other countries, and the degree of capital mobility. In contrast, with an exogenous capital stock returns on capital are pure rents and a government’s ability to capture them is limited through capital flight, triggering a race to the bottom. With an endogenous capital stock, capital is an intermediate good and taxes on it are not used to raise revenues, but to implement the optimal capital stock. Even in a non-cooperative game it is thus not individually rational for governments to engage in tax competition. I provide a general proof that if the modified golden rule holds in a closed economy, then it also does in an open economy.
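For readers who want the mechanics, the modified golden rule invoked in the abstract takes a simple form in a standard discrete-time Ramsey growth model (a generic sketch in textbook notation, not the paper's own):

```latex
% Modified golden rule in a standard Ramsey model (no growth, for simplicity):
% discount factor \beta, depreciation rate \delta, per-capita production f(k).
% The steady-state Euler equation 1 = \beta\,(1 + f'(k^{*}) - \delta) gives
\[
  f'(k^{*}) - \delta \;=\; \frac{1}{\beta} - 1 ,
\]
% so the net return on capital is pinned down by preferences alone, which is
% why it can be independent of other countries' taxes and of capital mobility.
```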
This paper highlights an important point that can explain much of the divide in the public debate about capital income taxation. Those who want higher taxes consider capital to be largely exogenous and its returns thus rents. Those who want lower taxes see capital as mostly endogenous and thus subject to distortions. What this paper does not address, though, is how progressive taxation can deal with equity and redistribution. In the context of tax competition, this is important as it adds a new layer of trade-offs: raising taxes reduces inequality, but is also more likely to reduce tax revenue when tax competition is possible. This paper is a good starting point for thinking about this, one step at a time.
November 21, 2013
By Jim Dolmas
In this paper, I combine disappointment aversion, as employed by Routledge and Zin and Campanale, Castro and Clementi, with rare disasters in the spirit of Rietz, Barro, Gourio, Gabaix and others. I find that, when the model’s representative agent is endowed with an empirically plausible degree of disappointment aversion, a rare disaster model can produce moments of asset returns that match the data reasonably well, using disaster probabilities and disaster sizes much smaller than have been employed previously in the literature. This is good news. Quantifying the disaster risk faced by any one country is inherently difficult with limited time series data. And, it is open to debate whether the disaster risk relevant to, say, US investors is well-approximated by the sizable risks found by Barro and co-authors in cross-country data. On the other hand, we have evidence that individuals tend to over-weight bad or disappointing outcomes, relative to the outcomes’ weights under expected utility. Recognizing aversion to disappointment means that disaster risks need not be nearly as large as suggested by the cross-country evidence for a rare disaster model to produce average equity premia and risk-free rates that match the data. I illustrate the interaction between disaster risk and disappointment aversion both analytically and in the context of a simple Rietz-like model of asset-pricing with rare disasters. I then analyze a richer model, in the spirit of Barro, with a distribution of disaster sizes, Epstein-Zin preferences, and partial default (in the event of a disaster) on the economy’s ‘risk-free’ asset. For small elasticities of intertemporal substitution, the model is able to match almost exactly the means and standard deviations of the equity return and risk-free rate, for disaster risks one-half or one-fourth the estimated sizes from Barro. 
For larger elasticities of intertemporal substitution, the model’s fit is less satisfactory, though it fails in a direction not often viewed as problematic: it under-predicts the volatility of the risk-free rate. Even so, apart from that failing, the results are broadly similar to those obtained by Gourio but with disaster risks one-half or one-fourth as large.
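To see why disappointment aversion amplifies small disaster risks, here is one common formulation of Gul-style preferences (a generic sketch, not the paper's exact specification or calibration):

```latex
% Gul (1991)-type disappointment aversion: the certainty equivalent \mu of
% random consumption c solves an implicit equation in which outcomes below
% \mu receive extra weight \theta \ge 0 (expected utility is \theta = 0):
\[
  u(\mu) \;=\; \mathbb{E}[u(c)]
         \;-\; \theta\,\mathbb{E}\!\left[\bigl(u(\mu) - u(c)\bigr)
               \mathbf{1}\{c < \mu\}\right].
\]
% Over-weighting disappointing outcomes raises the effective price of
% downside risk, so much smaller disaster probabilities and sizes suffice
% to generate a sizable equity premium.
```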
The study of disaster risk and asset prices is at risk of becoming a fad. It is thus nice to see a paper showing that one does not need as much risk as recently assumed in the literature to obtain good results. Indeed, much like the general public, researchers may perceive too much disaster risk right after a disaster has happened.
November 19, 2013
By Martin Kliem and Harald Uhlig
This paper presents a novel Bayesian method for estimating dynamic stochastic general equilibrium (DSGE) models subject to a constrained posterior distribution of the implied Sharpe ratio. We apply our methodology to a DSGE model with habit formation in consumption and leisure, using an estimate of the Sharpe ratio to construct the constraint. We show that the constrained estimation produces a quantitative model with both reasonable asset-pricing as well as business-cycle implications.
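For reference, the object the estimation constrains is the standard Sharpe ratio, which is tied to the model's stochastic discount factor (a generic sketch in textbook notation, not the paper's own):

```latex
% Unconditional Sharpe ratio of the excess return R - R^f:
\[
  \mathrm{SR} \;=\; \frac{\mathbb{E}\left[R - R^{f}\right]}
                         {\sigma\!\left(R - R^{f}\right)},
\]
% The Hansen-Jagannathan bound, \mathrm{SR} \le \sigma(M)/\mathbb{E}[M]
% for the stochastic discount factor M, shows why restricting the posterior
% of the implied Sharpe ratio disciplines the model's preference parameters.
```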
To continue with the theme of models addressing both business cycles and asset pricing: the point of this paper is quite simple. If an estimated model cannot satisfy both business cycle and asset pricing moments, one can try to force it that way with Bayesian estimation. And as long as the model can work in theory, it should have a good shot at working with estimated parameters. This also means that we do not yet have a model that, at least in this respect, fits the data naturally enough that it does not need to be guided by tight priors.
November 4, 2013
By Laura Veldkamp and Anna Orlik
For decades, macroeconomists have searched for shocks that are plausible drivers of business cycles. A recent advance in this quest has been to explore uncertainty shocks. Researchers use a variety of forecast and volatility data to justify heteroskedastic shocks in a model, which can then generate realistic cyclical fluctuations. But the relevant measure of uncertainty in most models is the conditional variance of a forecast. When agents form such forecasts with state, parameter and model uncertainty, neither forecast dispersion nor innovation volatilities are good proxies for conditional forecast variance. We use observable data to select and estimate a forecasting model and then ask the model to inform us about what uncertainty shocks look like and why they arise.
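The abstract's point that innovation volatility understates uncertainty can be seen from the law of total variance: with model uncertainty, forecast variance has a within-model and a between-model component (a generic decomposition, not the paper's exact formula):

```latex
% Law of total variance, conditioning on the unknown model m:
\[
  \mathrm{Var}_t\!\left(y_{t+1}\right)
    \;=\; \mathbb{E}_m\!\left[\mathrm{Var}_t\!\left(y_{t+1}\mid m\right)\right]
    \;+\; \mathrm{Var}_m\!\left(\mathbb{E}_t\!\left[y_{t+1}\mid m\right]\right),
\]
% so the innovation volatility of any one model (the first term) misses the
% uncertainty that comes from not knowing which model is right.
```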
There is a cottage industry trying to find ways to embed variable uncertainty into business cycle models. This paper differs in that it refines the measurement of uncertainty shocks by getting closer to how market participants form their expectations, in particular under model uncertainty, and are then surprised. This refinement is not innocuous: it allows agents to be uncertain about endogenous variables, not only exogenous ones like total factor productivity. The method can be extended to any forecasting rule used in business forecasting, for example.