September 15, 2014
By Kaiji Chen and Yi Wen
This paper provides a theory to explain the paradoxical features of the great housing boom in China: persistently faster-than-GDP housing price growth, exceptionally high capital returns, and excessive vacancy rates. The expectation that high capital returns, driven mainly by resource reallocation, are not sustainable in the long run can induce highly productive entrepreneurs to speculate in housing during the economic transition. This creates a self-fulfilling, growing housing bubble, which can generate severe resource misallocation. A calibrated version of the theory accounts quantitatively for both the growth dynamics of house prices and other salient features of the recent Chinese experience.
My trip to China last year left me with a big puzzle: why are housing prices skyrocketing when there is an incredible amount of construction going on and entire satellite cities appear empty? This paper shows that this can indeed happen, and it does not bode well for the future.
September 10, 2014
By Paul Beaudry and Franck Portier
There is a widespread belief that changes in expectations may be an important independent driver of economic fluctuations. The news view of business cycles offers a formalization of this perspective. In this paper we discuss mechanisms by which changes in agents’ information, due to the arrival of news, can cause business cycle fluctuations driven by expectational change, and we review the empirical evidence aimed at evaluating its relevance. In particular, we highlight how the literature on news and business cycles offers a coherent way of thinking about aggregate fluctuations, while at the same time we emphasize the many challenges that must be addressed before a proper assessment of its role in business cycles can be established.
If you are interested in how news and expectations in general matter for the business cycle, this is a must read. Beaudry and Portier have been very influential in getting this literature moving with modern methods. Much like the topic, the paper is also forward-looking in the sense that it opens all sorts of avenues that merit exploration.
September 8, 2014
By Stefano Gnocchi, Daniela Hauser and Evi Pappa
We build an otherwise-standard business cycle model with housework, calibrated consistently with data on time use, in order to discipline consumption-hours complementarity and relate its strength to the size of fiscal multipliers. We show that if substitutability between home and market goods is calibrated on the empirically relevant range, consumption-hours complementarity is large and the model generates fiscal multipliers that agree with the evidence. Hence, our analysis supports the relevance of consumption-hours complementarity for fiscal multipliers. However, we also find that explicitly modeling the home sector is more appealing than restricting to the consumption-leisure margin and/or to the preferences proposed by Greenwood, Hercowitz and Huffman (1988). A housework model can imply substantial complementarity, without low wealth effects contradicting the microeconomic evidence.
Home production has become fashionable again, and it is important: it is difficult to understand labor supply without considering its outside option. This paper is a nice example of why it matters.
September 2, 2014
By Juan Carlos Hatchondo, Leonardo Martinez and Cesar Sosa-Padilla
In this study, we measure the effects of debt dilution on sovereign default risk and consider debt covenants that could mitigate these effects. First, we calibrate a baseline model of defaultable debt (in which debt can be diluted) with endogenous debt duration, using data from Spain. Second, we present a model in which sovereign bonds contain a covenant that eliminates debt dilution. We quantify the effects of dilution by comparing the simulations of the model with and without this covenant. We find that dilution accounts for 79 percent of the default risk in the baseline economy. Without dilution, the optimal duration of sovereign debt increases by almost two years. Consumption volatility also increases, but eliminating dilution still produces substantial welfare gains. Introducing debt covenants that could be easier to implement in practice has similar effects. A covenant that penalizes the government for bond prices below a threshold is more effective in reducing the default frequency. A covenant that penalizes the government for debt levels above a threshold is more effective in reducing consumption volatility. These covenants could be useful for enforcing fiscal rules.
I wonder how politicians would react to the idea of commitment. Still, the idea that you want to force yourself not to abuse the privileges that sovereign debt gives you has a lot of merit, especially when short-sighted politicians have no interest in building credibility.
August 29, 2014
By Jean-Baptiste Michau
This paper investigates the provision of insurance to workers against search-induced wage fluctuations. I rely on numerical simulations of a model of on-the-job search and precautionary savings. The model is calibrated to low-skilled workers in the U.S. The extent of insurance is determined by the degree of progressivity of a non-linear transfer schedule. The fundamental trade-off is that a more generous provision of insurance reduces incentives to search for better paying jobs, which is detrimental to the production efficiency of the economy. I show that progressivity raises the search intensity of unemployed workers, which reduces the equilibrium rate of unemployment, but lowers the search intensity of employed job seekers, which results in a lower output level. I also solve numerically for the optimal non-linear transfer schedule. The optimal policy is to provide almost no insurance up to a monthly income level of $1450, so as to preserve incentives to move up the wage ladder, and full insurance above $1650. This policy halves the standard deviation of labor incomes, increases output by 2.4% and generates a consumption-equivalent welfare gain of 1.3%. Forbidding private savings does not fundamentally change the shape of the optimal transfer function, but tilts the optimal policy towards more insurance at the expense of production efficiency.
There is no doubt this paper will generate controversy, but it makes sense. Suppose that workers do not like fluctuations in their wages as they move from job to job. Clearly, they would like to obtain insurance against such fluctuations. But if they get it, a moral hazard problem arises whereby they would not search hard enough for a new job if their current one has low pay before insurance. Such an economy would have a poor allocation of resources, as output could be higher with better job matches. The solution appears to be that low-wage jobs should not be insured at all, to preserve incentives for search.
People will object that this provides no insurance to the most vulnerable, so we need to define vulnerable. The common definition would be low-skilled workers who can only obtain low-wage jobs, but that is not what this paper is about. People who lost out in the life lottery, because they were born with fewer skills or in an environment less conducive to accumulating skills, should obtain a different type of insurance, likely through social welfare. This paper is about how young workers typically bounce from job to job until they find the right match. You want to provide them some insurance while preserving the right incentives to search, and sometimes this means not providing insurance.
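The shape of the optimal schedule described in the abstract can be sketched as a piecewise net-income function: a marginal retention rate of one below $1450 (no insurance), a transition band, and flat net income above $1650 (full insurance). This is a minimal illustration, not the paper's actual schedule; the 50% phase-out slope in the transition band is an assumption made here for the sketch.

```python
def net_income(wage, lo=1450.0, hi=1650.0, slope=0.5):
    """Stylized monthly net income under the transfer schedule described
    in the abstract: no insurance below `lo`, full insurance above `hi`.
    The `slope` of the transition band is a hypothetical choice."""
    if wage <= lo:
        return wage                    # no insurance: full search incentives
    if wage >= hi:
        return lo + slope * (hi - lo)  # full insurance: net income is flat
    return lo + slope * (wage - lo)    # transition: partial insurance

# Net income rises one-for-one at the bottom, then flattens out:
# net_income(1000) == 1000, net_income(1500) == 1475, net_income(2000) == 1550
```

The key feature is that the marginal retention rate is highest exactly where search incentives matter most, at the bottom of the wage ladder.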
August 26, 2014
Heterogeneity and Government Revenues: Higher Taxes at the Top?
By Nezih Guner, Martin Lopez-Daneri and Gustavo Ventura
We evaluate the effectiveness of a more progressive tax scheme in raising government revenues. We develop a life-cycle economy with heterogeneity and endogenous labor supply. Households face a progressive income tax schedule, mimicking the Federal Income tax, and flat-rate taxes that capture payroll, state and local taxes and the corporate income tax. We parameterize this model to reproduce aggregate and cross-sectional observations for the U.S. economy, including the shares of labor income for top earners. We find that a tilt of the Federal income tax schedule towards high earners leads to small increases in revenues, which are maximized at an effective marginal tax rate of about 36.9% for the richest 5% of households, in contrast to a 21.7% marginal rate in the benchmark economy. Maximized revenue from Federal income taxes is only 8.4% higher than it is in the benchmark economy, while revenues from all sources increase only by about 1.6%. The room for higher revenues from more progressive taxes is even lower when average taxes are higher to start with. We conclude that these policy recommendations are misguided if the aim is exclusively to raise government revenue.
Taxing top earners: a human capital perspective
By Alejandro Badel and Mark Huggett
We assess the consequences of substantially increasing the marginal tax rate on U.S. top earners using a human capital model. The top of the model Laffer curve occurs at a 53 percent top tax rate. Tax revenues and the tax rate at the top of the Laffer curve are smaller compared to an otherwise similar model that ignores the possibility of skill change in response to a tax reform. We also show that if one applies the methods used by Diamond and Saez (2011) to provide quantitative guidance for setting the tax rate on top earners to model data, then the resulting tax rate exceeds the tax rate at the top of the model Laffer curve.
By coincidence, two papers on a very similar topic were listed in the same NEP-DGE report, and they reach rather different results: revenue-maximizing top tax rates of 37% versus 53%. How come? The models are in fact quite different. While the first experiments with tilting the whole tax schedule, the second increases only the tax rate of top earners, which can justify part of the difference. The second also includes a human capital accumulation effect, which should actually lead to a lower top tax rate. But since it considers the top 1% of earners while the first paper takes the top 5%, the results are not really comparable. Still, both papers demonstrate that revenue-maximizing top rates are lower than what simpler models show, and that the additional complexity matters.
This all reminds us that quantitative results can sometimes be sensitive to 1) what you are measuring, and 2) which effects you include in the model. Hence the importance of either comparing results with the previous literature, as long as the models are nested, or providing simplified models within the paper to gauge the impact of the additional features.
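The role of behavioral responses in these results can be seen in a textbook Laffer curve. If taxable income scales with the net-of-tax rate with elasticity ε, revenue τ(1−τ)^ε peaks at τ* = 1/(1+ε), so a larger elasticity (say, because human capital investment also responds to taxes) pushes the revenue-maximizing rate down. The elasticity values below are illustrative, not taken from either paper.

```python
def revenue(tau, eps):
    """Stylized Laffer curve: taxable income scales with the
    net-of-tax rate (1 - tau) raised to the elasticity eps,
    so revenue is tau * (1 - tau)**eps."""
    return tau * (1.0 - tau) ** eps

def laffer_peak(eps):
    """Revenue-maximizing rate: setting the derivative of
    tau * (1 - tau)**eps to zero gives tau* = 1 / (1 + eps)."""
    return 1.0 / (1.0 + eps)

# A higher elasticity lowers the revenue-maximizing top rate:
# laffer_peak(0.9) ~ 0.53 and laffer_peak(1.7) ~ 0.37 (illustrative values)
```

The point of both papers is that richer models effectively raise this elasticity relative to simpler ones, which is why their revenue-maximizing rates come out lower.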
August 13, 2014
By Yasuo Hirose
Benhabib, Schmitt-Grohé, and Uribe (2001) argue for the existence of a deflation steady state when the zero lower bound on the nominal interest rate is considered in a Taylor-type monetary policy rule. This paper estimates a medium-scale DSGE model with a deflation steady state for the Japanese economy during the period from 1999 to 2013, when the Bank of Japan conducted a zero interest rate policy and the inflation rate was almost always negative. Although the model exhibits equilibrium indeterminacy around the deflation steady state, a set of specific equilibria is selected by Bayesian methods. According to the estimated model, shocks to households’ preferences, investment adjustment costs, and external demand do not necessarily have an inflationary effect, in contrast to a standard model with a targeted-inflation steady state. An economy in the deflation equilibrium could experience unexpected volatility because of sunspot fluctuations, but it turns out that the effect of sunspot shocks on Japan’s business cycles is marginal and that macroeconomic stability during the period was a result of good luck.
DSGE models had to break new ground to handle the zero lower bound on interest rates because of the inherent non-linearities, even more so when the models are estimated. Here the problem is deeper still: Japan has had a long period of deflation, and the canonical model predicts indeterminacy around the deflation steady state. This paper shows that you can still estimate such a model, thanks to good old Bayes.
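The Benhabib, Schmitt-Grohé and Uribe logic behind the two steady states can be checked in a few lines. In steady state the Fisher relation gives R = π/β, while a Taylor rule truncated at the zero lower bound gives R = max(1, (π_T/β)(π/π_T)^φ). With φ > 1 these two curves cross twice: at the targeted inflation rate π_T and at a deflation rate π = β. The parameter values below are assumptions for illustration, not estimates from the paper.

```python
BETA = 0.995   # quarterly discount factor (assumed)
PI_T = 1.005   # gross target inflation rate (assumed)
PHI  = 1.5     # Taylor-rule coefficient on inflation (assumed)

def taylor_rate(pi):
    """Gross nominal rate from a Taylor rule truncated at the ZLB."""
    return max(1.0, (PI_T / BETA) * (pi / PI_T) ** PHI)

def fisher_rate(pi):
    """Gross nominal rate implied by the steady-state Fisher relation."""
    return pi / BETA

# Two steady states: the targeted one (pi = PI_T) and the deflation
# one (pi = BETA), where the zero lower bound binds and R = 1.
for pi in (PI_T, BETA):
    assert abs(taylor_rate(pi) - fisher_rate(pi)) < 1e-12
```

At π = β the rule's implied rate falls below one, the ZLB binds, and R = 1 = π/β, which is exactly the deflation steady state the paper estimates around.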