Flipping in the Housing Market

February 9, 2017

Charles Ka Yui Leung and Chung-Yi Tse


We add arbitraging middlemen — investors who attempt to profit from buying low and selling high — to a canonical housing market search model. Flipping tends to take place in sluggish and tight, but not in moderate, markets. One consequence is the possibility of multiple equilibria. In one equilibrium, most, if not all, transactions are intermediated, resulting in rapid turnover, a high vacancy rate, and high housing prices. In another equilibrium, few houses are bought and sold by middlemen. Turnover is slow, few houses are vacant, and prices are moderate. Moreover, flippers can enter and exit en masse in response to the smallest interest rate shock. The housing market can then be intrinsically unstable even when all flippers are akin to the arbitraging middlemen in classical finance theory. In speeding up turnover, the flipping that takes place in a sluggish and illiquid market tends to be socially beneficial. The flipping that takes place in a tight and liquid market can be wasteful, as the efficiency gain from any faster turnover is unlikely to be large enough to offset the loss from more houses being left vacant in the hands of flippers. Based on our calibrated model, which matches several stylized facts of the U.S. housing market, we show that the housing price response to interest rate changes is very non-linear, suggesting caution for policy attempts to “stabilize” the housing market through monetary policy.
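The knife-edge entry and exit of flippers can be sketched with a back-of-the-envelope free-entry condition (illustrative numbers of our own, not the paper's calibration): a flipper stays in only while the expected capital gain per month of holding covers the interest cost of the capital tied up, so a small move in the rate around the break-even point switches entry on or off all at once.

```python
# Toy free-entry condition (illustrative, not the paper's model): a flipper
# enters while the monthly capital gain exceeds the monthly carrying cost
# of the capital tied up in the house.
def flippers_enter(r, p_buy=100.0, p_sell=110.0, months_held=6):
    monthly_gain = (p_sell - p_buy) / months_held
    carrying_cost = r / 12 * p_buy   # interest on the purchase price
    return monthly_gain > carrying_cost

print(flippers_enter(0.10))  # True: at a 10% annual rate, flipping pays
print(flippers_enter(0.25))  # False: at 25%, flippers exit en masse
```

With these numbers the break-even rate is 20%, and entry flips discontinuously as the rate crosses it — the "smallest interest rate shock" logic of the abstract.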

Interesting. The next question is then: can flippers trigger a bubble?

Learning Efficiency Shocks, Knowledge Capital and the Business Cycle: A Bayesian Evaluation

January 31, 2017

By Alok Johri and Muhebullah Karimzada


We incorporate shocks to the efficiency with which firms learn from production activity and accumulate knowledge into an otherwise standard real DSGE model with imperfect competition. Using real aggregate data and Bayesian inference techniques, we find that learning efficiency shocks are an important source of observed variation in the growth rate of aggregate output, investment, consumption and especially hours worked in post-war US data. The estimated shock processes suggest that much less exogenous variation in preferences and total factor productivity is needed by our model to account for the joint dynamics of consumption and hours. This occurs because learning efficiency shocks induce shifts in labour demand uncorrelated with current TFP, a role usually played by preference shocks. At the same time, knowledge capital acts like an endogenous source of productivity variation in the model. Measures of model fit prefer the specification with learning efficiency shocks.
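The mechanism can be conveyed with a stylized law of motion (our own notation and parameters, not the paper's estimated model): knowledge capital accumulates out of production activity, scaled by a persistent learning-efficiency shock, so productivity varies endogenously even absent TFP shocks.

```python
# Stylized sketch (our notation, not the paper's): knowledge capital h
# accumulates out of production y, scaled by a persistent log-AR(1)
# learning-efficiency shock e.
import numpy as np

rng = np.random.default_rng(1)
T, delta, phi = 100, 0.05, 0.1       # horizon, depreciation, learning rate
h = np.ones(T)                       # knowledge capital
log_e = np.zeros(T)                  # log learning-efficiency shock
for t in range(T - 1):
    log_e[t + 1] = 0.9 * log_e[t] + 0.02 * rng.standard_normal()
    y = h[t] ** 0.3                  # output rises with knowledge
    h[t + 1] = (1 - delta) * h[t] + np.exp(log_e[t + 1]) * phi * y

print(f"knowledge capital after {T} periods: {h[-1]:.2f}")
```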

Conceptually, I much prefer learning efficiency shocks to preference shocks, which have become a catch-all for anything that cannot be measured and which are as close to a black box as can be. Learning efficiency shocks have a much better defined theoretical story that can be tested.

Retirement Behavior in the U.S. and Europe

January 27, 2017

By Jochem de Bresser, Raquel Fonseca and Pierre-Carl Michaud


We develop a retirement model featuring various labor market exit routes: unemployment, disability, private and public pensions. The model allows for saving and uncertainty along several dimensions, including health and mortality. Individuals’ preferences are estimated on data from the U.S. and Europe using institutional variation across countries. We analyze the roles of preferences and institutions in explaining international heterogeneity in retirement behavior. Preliminary estimates suggest that a single set of preferences for individuals from the U.S., the Netherlands and Spain does not fit the data well. Were Europeans to have the same preferences as Americans, they would save less than they actually do. Furthermore, the Dutch and Spanish would work more hours than is observed in the data.

Interesting that the one-size-fits-all approach to preferences does not apply, at least in this case. I wonder how those differences in preferences can be explained. If this does not stem from an estimation issue (model misspecification, for example), then what drives them? Tradition/history? Demographics? Anything else?

Firing Costs, Misallocation, and Aggregate Productivity

January 24, 2017

By José-María Da-Rocha, Marina Mendes Tavares and Diego Restuccia


We assess the quantitative impact of firing costs on aggregate total factor productivity (TFP) in a dynamic general-equilibrium framework where the distribution of establishment-level productivity is not invariant to the policy. Firing costs not only generate static factor misallocation, but also a worsening of the productivity distribution, contributing to large aggregate TFP losses. Firing costs equivalent to five years' wages imply a drop in TFP of more than 20 percent. Factor misallocation accounts for 20 percent of the productivity loss, a relatively small drop in TFP, whereas the remaining 80 percent arises from the endogenous change in the productivity distribution.
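The static misallocation channel is easy to see in a toy two-establishment example (our own illustrative parameters, not the paper's model): treating the firing cost as a wedge on labor at the more productive plant tilts the allocation toward the less productive one and lowers measured TFP — and the static loss alone is indeed modest, consistent with the 20/80 decomposition above.

```python
# Toy illustration (not the paper's model): a labor wedge tau on the
# high-productivity plant distorts the allocation and lowers measured TFP.
alpha = 0.85          # span-of-control / decreasing returns parameter
z = [1.0, 2.0]        # establishment productivities
L = 1.0               # total labor supply

def allocate(tau):
    # Marginal products equalized up to the wedge:
    # alpha*z1*l1^(alpha-1) = (1-tau)*alpha*z2*l2^(alpha-1)
    ratio = (z[0] / ((1 - tau) * z[1])) ** (1 / (1 - alpha))  # l1/l2
    l2 = L / (1 + ratio)
    return L - l2, l2

def tfp(tau):
    l1, l2 = allocate(tau)
    return z[0] * l1 ** alpha + z[1] * l2 ** alpha  # output = TFP since L = 1

loss = 1 - tfp(0.3) / tfp(0.0)
print(f"TFP loss from a 30% wedge: {loss:.1%}")
```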

Firing costs are a disaster. Sad!

Monetary Policy, Trend Inflation and Unemployment Volatility

January 19, 2017

By Sergio Lago Alves


The literature has long agreed that the canonical DMP model with search and matching frictions in the labor market can deliver large volatilities in labor market quantities, consistent with US data during the Great Moderation period (1985-2005), only if there is at least some wage stickiness. I show that the canonical model can deliver nontrivial volatility in unemployment without wage stickiness. By keeping average US inflation at a small but positive rate, monetary policy may account for the large empirical standard deviations of labor market variables. Solving the Shimer (2005) puzzle, the role of long-run inflation holds even for an economy with flexible wages, as long as it has staggered price setting and search and matching frictions in the labor market.

Another nail in the coffin of the Shimer puzzle, this time with a straightforward model and without stretching the calibration too much.

International Business Cycle and Financial Intermediation

January 17, 2017

By Tamas Csabafi, Max Gillman and Ruthira Naraidoo


The world-wide financial crisis of 2007 to 2009 caused bankruptcies and bank failures in the US and many other regions, such as Europe. Recent empirical evidence suggests that this simultaneous drop in output was strongest in countries with greater financial ties to the US economy through important cross-border deposits and lending. This paper develops a two-country framework to allow for banking structures within an international real business cycle model. The banking structure across countries is modelled using the production approach to financial intermediation. We allow both countries' banks to take deposits both locally and internationally. We analyze the transmission mechanism of both goods and banking sector productivity shocks. We show that goods total factor productivity (TFP) and bank TFP have different effects on the finance premium. Most countries have shown a procyclical equity premium over their histories, but with evidence that it turned countercyclical, especially during the Great Recession. The model has the ability to explain the countercyclical movements of credit spreads during major recessions and financial crises when goods TFP also affects banking productivity. We model this as a cross-correlation of shocks to replicate the recent events during the crisis period. Importantly, the model can also explain business cycle facts and the countercyclical behaviour of the trade balance.
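The cross-correlation device can be sketched in a few lines (our own parameterization, purely illustrative): the same innovation that moves goods TFP also spills into bank TFP, so the two processes comove and a bad goods shock simultaneously depresses banking productivity, widening the finance premium in a downturn.

```python
# Sketch of cross-correlated shocks (our own numbers, not the paper's):
# bank TFP receives its own innovation plus a spillover from the goods
# TFP innovation, so the two AR(1) processes are positively correlated.
import numpy as np

rng = np.random.default_rng(2)
rho_goods, rho_bank, spill = 0.95, 0.90, 0.5
a, b = 0.0, 0.0                      # log goods TFP, log bank TFP
a_path, b_path = [], []
for _ in range(500):
    eps = rng.standard_normal()      # goods TFP innovation
    a = rho_goods * a + 0.01 * eps
    b = rho_bank * b + 0.01 * (spill * eps + rng.standard_normal())
    a_path.append(a)
    b_path.append(b)

corr = np.corrcoef(a_path, b_path)[0, 1]
print(f"corr(goods TFP, bank TFP) = {corr:.2f}")
```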

Opening up an economy brings obvious efficiency gains and allows for smoothing out domestic shocks. But it also exposes the domestic economy to foreign shocks. The impact of foreign TFP shocks has long been discussed in the literature. This paper brings in the impact of foreign financial shocks, something that was an obvious driver in the dissemination of the financial crisis from the US to some other countries. The story is likely not as simple as this paper makes it, but it is a nice contribution that attempts to take the banking sector seriously in the international business cycle literature.

Testing part of a DSGE model by Indirect Inference

December 21, 2016

By Patrick Minford, Michael Wickens and Yongdeng Xu


We propose a new type of test. Its aim is to test subsets of the structural equations of a DSGE model. The test draws on the statistical inference for limited information models and the use of indirect inference to test DSGE models. Using Monte Carlo experiments on two subsets of equations of the Smets-Wouters model, we show that the test has accurate size and good power in small samples. In a test of the Smets-Wouters model on US Great Moderation data, we reject the specification of the wage-price but not the expenditure sector, pointing to the first as the source of overall model rejection.
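The indirect-inference logic can be sketched with a deliberately minimal stand-in (not the authors' Wald test on the Smets-Wouters model): estimate an auxiliary model on the data, simulate the structural model under the null many times, and ask whether the data's auxiliary statistic is plausible under the simulated distribution.

```python
# Minimal indirect-inference sketch (illustrative only): the "structural
# model" is an AR(1) with rho = 0.5 under the null, the auxiliary model is
# an OLS AR(1) slope, and a Monte Carlo p-value measures the discrepancy.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(rho, n, rng):
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y

def auxiliary_stat(y):
    # OLS slope of y_t on y_{t-1}
    return np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

n, reps = 200, 500
data = simulate_ar1(0.9, n, rng)     # "observed" data, true rho = 0.9
null_rho = 0.5                       # structural model under test
sims = np.array([auxiliary_stat(simulate_ar1(null_rho, n, rng))
                 for _ in range(reps)])
stat = auxiliary_stat(data)
# two-sided Monte Carlo p-value around the null
p = np.mean(np.abs(sims - null_rho) >= abs(stat - null_rho))
print(f"auxiliary slope on data: {stat:.2f}, p-value under the null: {p:.3f}")
```

Here the data's auxiliary slope sits far outside the simulated distribution, so the null model is rejected; the paper's contribution is to apply this logic equation subset by equation subset rather than to the model as a whole.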

It is very easy for DSGE models to be rejected. That should be expected, as they are simplifications of the real world and they are disciplined in ways that do not lend themselves to data fitting at any cost. But as every research question deserves its own model, one that highlights what is needed to answer that question and leaves aside what appears to be marginal, it is of great use to see whether a particular model features the right ingredients. This paper shows a method that can help us here, testing only the part of the model we really care about, without the neglected parts dragging the whole model down.