Category Archives: Macroeconomic Theory

What Are Real Business Cycles?

The real business cycle model is often described as the core of modern business cycle research. What this means is that other business cycle models have the RBC model as a special case (i.e., strip away all of the frictions from your model and it's an RBC model). The idea that the RBC model is the core of modern business cycle research is somewhat tautological since the RBC model is just a neoclassical model without any frictions. Thus, if we start with a model with frictions and take those frictions away, we have a frictionless model.

The purpose of the original RBC models was not necessarily to argue that these models represented an accurate portrayal of the business cycle, but rather to see how much of the business cycle could be explained without the appeal to frictions. The basic idea is that there could be shocks to tastes and/or technology and that these changes could cause fluctuations in economic activity. Furthermore, since the RBC model was a frictionless model, any such fluctuations would be efficient. This conclusion was important. We typically think of recessions as being inefficient and costly. If this is true, countercyclical policy could be welfare-increasing. However, if the world can be adequately explained by the RBC model, then economic fluctuations represent efficient responses to unexpected changes in tastes and technology. There is no role for countercyclical policy.

There were two critical responses to RBC models. The first criticism was that the model was too simple. The crux of this argument is that if one estimated changes in total factor productivity (TFP; technology in the RBC model) using something like the Solow residual and plugged this into the model, one might be misled into thinking the model had greater predictive power than it did in reality. The basic idea is that the Solow residual is, as the name implies, a residual. Thus, this measure of TFP only captures fluctuations in output that are not explained by changes in labor and capital. Since there are a lot of things other than labor, capital, and technology that might affect output, the Solow residual might not be a good measure of TFP and might attribute a greater percentage of fluctuations to TFP than is true of the actual data-generating process.
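To see why the residual is a catch-all, here is a minimal sketch of the Solow residual calculation. The Cobb-Douglas functional form, the capital share of 0.33, and the numbers are illustrative assumptions, not estimates from the RBC literature; the point is only that any change in output not matched by a change in inputs lands in the residual by construction.

```python
import math

def solow_residual(Y, K, L, alpha=0.33):
    """Back out TFP as the output not explained by capital and labor,
    assuming a Cobb-Douglas production function Y = A * K^alpha * L^(1-alpha)."""
    return Y / (K ** alpha * L ** (1 - alpha))

# Illustrative (made-up) numbers: output rises 3% while inputs are flat,
# so the entire change is attributed to measured "TFP".
A0 = solow_residual(Y=100.0, K=300.0, L=150.0)
A1 = solow_residual(Y=103.0, K=300.0, L=150.0)
tfp_growth = math.log(A1) - math.log(A0)  # roughly 3%, by construction
```

Whatever actually moved output, whether technology, utilization, or measurement error, shows up here as a "technology shock."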

The second critical response was largely to ridicule and make fun of the model. For example, Franco Modigliani once quipped that RBC-type models were akin to assuming that business cycles were mass outbreaks of laziness. Others would criticize the theory by stating that recessions must be periods of time when society collectively forgets how to use technology. And recently, Paul Romer has suggested that technology shocks be relabeled as phlogiston shocks.

These latter criticisms are certainly witty and no doubt a source of laughter in seminar rooms. Unfortunately, they obscure the more important criticisms and, worse, represent a misunderstanding of what the RBC model is about. As a result, I would like to provide an interpretation of the RBC model and then discuss more substantive criticisms.

The idea behind the real business cycle model is that fluctuations in aggregate productivity are the cause of economic fluctuations. If all firms are identical, then any decline in aggregate productivity must be a decline in the productivity of all the individual firms. But why would firms become less productive? To me, this seems to be the wrong way to interpret the model. My preferred interpretation is as follows. Suppose that you have a bunch of different firms producing different goods and these firms have different levels of productivity. In this case, an aggregate productivity shock is simply the reallocation from high productivity firms to low productivity firms or vice versa. As long as we think of all markets as being competitive, then the RBC model is just a reduced form version of what I’ve just described. In other words, the RBC model essentially suggests that fluctuations in the economy are driven by the reallocation of inputs between firms with different levels of productivity, but since markets are efficient we don’t need to get into the weeds of this reallocation in the model and can simply focus our attention on a representative firm and aggregate productivity.
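The reallocation interpretation above can be made concrete with a toy aggregation. The two-firm setup, the productivity levels, and the input shares below are made up purely for illustration; the only point is that a shift of inputs toward the less productive firm registers as a fall in aggregate productivity even though no firm's own productivity changed.

```python
def aggregate_tfp(productivities, input_shares):
    """A deliberately simple aggregation: economy-wide productivity as the
    input-share-weighted average of firm-level productivities."""
    assert abs(sum(input_shares) - 1.0) < 1e-9
    return sum(a * s for a, s in zip(productivities, input_shares))

firm_tfp = [2.0, 1.0]  # one high- and one low-productivity firm

# Inputs concentrated in the high-productivity firm...
before = aggregate_tfp(firm_tfp, [0.8, 0.2])
# ...then reallocated toward the low-productivity firm.
after = aggregate_tfp(firm_tfp, [0.4, 0.6])
# "after" is lower than "before": the reallocation looks like a negative
# aggregate productivity shock, with no change in either firm's technology.
```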

I think that my interpretation is important for a couple of reasons. First, it suggests that while “forgetting how to use technology” might get chuckles in the seminar room, it is not particularly useful for thinking about productivity shocks. Second, and more importantly, this interpretation allows for further analysis. For example, how often do we see such reallocation between high productivity firms and low productivity firms? How well do such reallocations line up with business cycles in the data? What are the sources of reallocation? For example, if the reallocation is due to changes in demographics and/or preferences, then these reallocations could be interpreted as efficient responses to structural changes in the economy. However, if these reallocations are caused by changes in relative prices due to, say, monetary policy, then the welfare and policy implications are much different.

Thus, to me, rather than denigrate RBC theory, what we should do is try to disaggregate productivity, determine what causes reallocation, and try to assess whether this is an efficient reallocation or should really be considered misallocation. The good news is that economists are already doing this (here and here, for example). Unfortunately, you hear more sneering and name-calling in popular discussions than you do about this interesting and important work.

Finally, I should note that I think one of the reasons that the real business cycle model has been such a point of controversy is that it implies that recessions are efficient responses to fluctuations in productivity and counter-cyclical policy is unnecessary. This notion violates the prior beliefs of a great number of economists. As a result, I think that many of these economists are therefore willing to dismiss RBC out of hand. Nonetheless, while I myself am not inclined to think that recessions are simply efficient responses to taste and technology changes, I do think that this starting point is useful as a thought exercise. Using an RBC model as a starting point to thinking about recessions forces one to think about the potential sources of inefficiencies, how to test the magnitude of such effects, and the appropriate policy response. The better we are able to disaggregate fluctuations in productivity, the more we should be able to learn about fluctuations in aggregate productivity and the more we might be able to learn about the driving forces of recessions.

The Fed, Populism, and Related Topics

Jon Hilsenrath has quite the article in The Wall Street Journal, the title of which is “Years of Fed Missteps Fueled Disillusion With the Economy and Washington”. The article criticizes Fed policy, suggests these policy failures are at least partially responsible for the rise in populism in the United States, and presents a rather incoherent view of monetary policy. As one should be able to tell, the article is wide-ranging, so I want to do something different than I do in a typical blog post. I am going to go through the article point-by-point and deconstruct the narrative.

Let’s start with the lede:

Once-revered central bank failed to foresee the crisis and has struggled in its aftermath, fostering the rise of populism and distrust of institutions

There is a lot tied up in this lede. First, has the Federal Reserve ever been a revered institution? According to Hilsenrath’s own survey evidence, in 2003 only 53% of the population rated the Fed as “Good” or “Excellent”. In the midst of the Great Moderation, I would hardly call this revered.

Second, I’ve really grown tired of this argument that economists or policymakers or the Fed “failed to foresee the crisis.” The implicit assumption is that if the crisis had been foreseen, steps could have been taken to prevent it or make it less severe. But if we accept this assumption, then we would only ever observe crises that weren’t foreseen; crises that were foreseen would have been prevented and would never show up in the data.

Third, to attribute the rise in populism to Federal Reserve policy presumes that the populism is tied to economic factors that the Fed can influence. Sure, if the Fed could have used policy to make real GDP higher today than it had been in the past that might have eased economic concerns. But productivity slowdowns and labor market disruptions caused by trade shocks are not things that the Federal Reserve can correct. To the extent to which these factors are what is driving populism, the Fed only has limited ability to ease such concerns.

But that’s enough about the lede…

So the basis of the article is that Fed policy has been a failure. This policy failure undermined the standing of the institution, created a wave of populism, and caused the Fed to re-think its policies. I’d like to discuss each of these points individually using passages from the article.

Let’s begin by discussing the declining public opinion of the Fed. Hilsenrath shows in his article that the public’s assessment of the Federal Reserve has declined significantly since 2003. He also shows that people have a great deal less confidence in Janet Yellen than they had in Alan Greenspan. What does this tell us? Perhaps the public had an over-inflated view of the Fed to begin with. It is certainly reasonable to think that the public had an over-inflated view of Alan Greenspan. It seems to me that there is a simple negative correlation between the public’s opinion of the Fed and a moving average of real GDP growth. It is unclear whether there are implications beyond this simple correlation.

Regarding the rise in populism, everyone has their grand theory of Donald Trump and (to a lesser extent) Bernie Sanders. Here’s Hilsenrath:

For anyone seeking to explain one of the most unpredictable political seasons in modern history, with the rise of Donald Trump and Bernie Sanders, a prime suspect is public dismay in institutions guiding the economy and government. The Fed in particular is a case study in how the conventional wisdom of the late 1990s on a wide range of economic issues, including trade, technology and central banking, has since slowly unraveled.

Do Trump and Sanders supporters have lower opinions of the Fed than the population as a whole? Who knows? We are not told in the article. Also, has the conventional wisdom been upended? Whose conventional wisdom? Economists’? The public’s?

So the populism and the reduced standing of the Fed appear to be correlated with things that are themselves only potentially correlated with Fed policy. Hardly the smoking gun suggested by the lede. So what about the re-thinking that is going on at the Fed?

First, officials missed signs that a more complex financial system had become vulnerable to financial bubbles, and bubbles had become a growing threat in a low-interest-rate world.

Secondly, they were blinded to a long-running slowdown in the growth of worker productivity, or output per hour of labor, which has limited how fast the economy could grow since 2004.

Thirdly, inflation hasn’t responded to the ups and downs of the job market in the way the Fed expected.

These are interesting. Let’s take them point-by-point:

1. Could the Fed have prevented the housing bust and the subsequent financial crisis? It is unclear. But even if the Fed completely missed this, could not policy have responded once these effects became apparent?

2. What does this even mean? If there is a productivity slowdown that explains lower growth, then shouldn’t the Federal Reserve get a pass on the low growth of real GDP over the past several years? Shouldn’t we blame low productivity growth?

3. Who believes in the Phillips Curve as a useful guide for policy?

My criticism of Hilsenrath’s article should not be read as a defense of the Fed’s monetary policy. For example, critics might think I’m being a bit hypocritical since I have argued in my own academic work that the maintenance of stable nominal GDP growth likely contributed to the Great Moderation. The collapse of nominal GDP during the most recent recession would therefore seem to indicate a policy failure on the part of the Fed. However, notice how different that argument is from the arguments made by Hilsenrath. The list provided by Hilsenrath suggests that the problems with Fed policy are (1) the Fed isn’t psychic, (2) the Fed didn’t understand that slow growth is not due to their policy, and (3) that the Phillips Curve is dead. Only this third component should factor into a re-think. But for most macroeconomists that re-think began taking place as early as Milton Friedman’s 1968 AEA Presidential Address, if not earlier. More recently, during an informal discussion at a conference, I observed Robert Lucas tell Noah Smith rather directly that “the Phillips Curve is dead” (to no objection), so the Phillips Curve hardly represents conventional wisdom.

In fact, Hilsenrath’s logic regarding productivity is odd. He writes:

Fed officials, failing to see the persistence of this change [in productivity], have repeatedly overestimated how fast the economy would grow. The Fed has projected faster growth than the economy delivered in 13 of the past 15 years and is on track to do so again this year.

Private economists, too, have been baffled by these developments. But Fed miscalculations have consequences, contributing to start-and-stop policies since the crisis. Officials ended bond-buying programs, thinking the economy was picking up, then restarted them when it didn’t and inflation drifted lower.

There are 3 points that Hilsenrath is making here:

1. Productivity caused growth to slow.

2. The slowdown in productivity caused the Fed to over-forecast real GDP growth.

3. This has resulted in a stop-go policy that has hindered growth.

I’m trying to make sense of how these things fit together. Most economists think of productivity as being completely independent of monetary policy. So if low productivity growth is causing low GDP growth, then this is something that policy cannot correct. However, point 3 suggests that low GDP growth is explained by tight monetary policy. This is somewhat of a contradiction. For example, if the Fed over-forecast GDP growth, then the implication seems to be that if they’d forecast growth perfectly, they would have had more expansionary policy, which could have increased growth. But if growth was low due to low productivity, then a more expansionary monetary policy would have had only a temporary effect on real GDP growth. In fact, during the 1970s, the Federal Reserve consistently over-forecast real GDP. However, in contrast to recent policy, the Fed saw these over-forecasts as a failure of its policies rather than a productivity slowdown and tried to expand monetary policy further. What Athanasios Orphanides’s work has shown is that the big difference between policy in the 1970s and the Volcker-Greenspan era was that policy in the 1970s put much more weight on the output gap. Since the Fed was over-forecasting GDP, it thought it was observing negative output gaps and subsequently conducted expansionary policy. The result was stagflation.

So is Hilsenrath saying he’d prefer that policy be more like the 1970s? One cannot simultaneously argue that growth is low because of low productivity and tight monetary policy. (Even if it is some combination of both, then monetary policy is of second-order importance and that violates Hilsenrath’s thesis.)

In some sense, what is most remarkable is how far the pendulum has swung in 7 years. Back in 2009, very few people argued that tight monetary policy was to blame for the financial crisis or the recession; heck, Scott Sumner started a blog primarily because he didn’t see anyone making the case that tight monetary policy was to blame. Now, in 2016, the Wall Street Journal is publishing stories that blame the Federal Reserve for all of society’s ills. There is a case to be made that monetary policy played a role in causing the recession and/or in explaining the slow recovery. Unfortunately, this article in the WSJ isn’t it.

The New Keynesian Failure

In a previous post, I defended neo-Fisherism. A couple of days ago I wrote a post in which I discussed the importance of monetary semantics. I would like to tie together two of my posts so that I can present a more comprehensive view of my own thinking regarding monetary policy and the New Keynesian model.

My post on neo-Fisherism was intended to provide support for John Cochrane who has argued that the neo-Fisher result is part of the New Keynesian model. Underlying this entire issue, however, is what determines the price level and inflation. In traditional macroeconomics, the quantity theory was always lurking in the background (if not the foreground). Under the quantity theory, the money supply determined the price level. Inflation was always and everywhere a monetary phenomenon.

The New Keynesian model dispenses with money altogether. The initial impulse for doing so was the work of Michael Woodford, who wrote a paper discussing how monetary policy would be conducted in a world without money. The paper (to my knowledge) was not initially an attempt to remove money completely from analysis, but rather to figure out a role for monetary policy once technology had developed to a point in which the monetary base was arbitrarily small. However, it seems that once people realized that it was possible to exclude money completely, this literature sort of took that ball and ran with it. The case for doing so was further bolstered by the fact that money already seemed to lack any empirical relevance.

Of course, there are a few fundamental problems with this literature. First, my own research shows that the empirical analysis that claims money is unimportant is actually the result of the fact that the Federal Reserve publishes monetary aggregates that are not consistent with index number theory, aggregation theory, or economic theory. When one uses Divisia monetary aggregates, the empirical evidence is consistent with standard monetary predictions. This is not unique to my paper. My colleague, Mike Belongia, found similar results when he re-examined empirical evidence using Divisia aggregates.

Second, while Woodford emphasizes in Interest and Prices that a central bank’s interest rate target could be determined by a channel system, in the United States the rate is still determined through open market operations (although now that the Fed is paying interest on reserves, it could conceivably use a channel system). This distinction might not seem to be important, but as I alluded to in my previous post, the federal funds rate is an intermediate target. How the central bank influences the intermediate target is important for the conduct of policy. If the model presumes that the mechanism is different from reality, this is potentially important.

Third, Ed Nelson has argued that the quantity theory is actually lurking in the background of the New Keynesian model and that New Keynesians don’t seem to realize it.

With all that being said, let’s circle back to neo-Fisherism. Suppose that a central bank announced that they were going to target a short term nominal interest rate of zero for seven years. How would they accomplish this?

A good quantity theorist would suggest that there are two ways that they would try to accomplish this. The first way would be to continue to use open market purchases to prevent the interest rate from ever rising. However, open market purchases would be inflationary. Since higher inflation expectations puts upward pressure on nominal interest rates, this sort of policy is unsustainable.

The second way to accomplish the goal of the zero interest rate is to set money growth such that the sum of expected inflation and the real interest rate is equal to zero. In other words, the only sustainable way to commit to an interest rate of zero over the long term is deflation (or low inflation if the real interest rate is negative).
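The arithmetic behind this "sustainable peg" argument is just the (approximate) Fisher equation, under which the nominal rate is the real rate plus expected inflation. The 2% real rate below is an assumed number for illustration:

```python
def fisher_nominal_rate(real_rate, expected_inflation):
    # Fisher equation (approximate form): i = r + expected inflation
    return real_rate + expected_inflation

# To peg the nominal rate i at zero with a real rate of, say, 2%,
# expected inflation must settle at -2%, i.e., deflation.
required_inflation = -0.02
assert fisher_nominal_rate(0.02, required_inflation) == 0.0
```

Any money growth rate that generated expected inflation above that level would push the nominal rate back above zero, which is why the quantity theorist sees deflation (or low inflation) as the only sustainable route to a long-run zero rate.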

The New Keynesians, however, think that the quantity theory is dead and that we can think about policy without money. And in the New Keynesian model, one can supposedly peg the short term nominal interest rate at zero for a short period of time. Not only is this possible, but it also should lead to an increase in inflation and economic activity. Interestingly, however, as my post on neo-Fisherism demonstrated, this isn’t what happens in their model. According to their model, setting the nominal interest rate at zero leads to a reduction in the rate of inflation. This is so because (1) the nominal interest rate satisfies the Fisher equation, and (2) people have rational expectations. (Michael Woodford has essentially admitted this, but now wants to relax the assumption of rational expectations.)

So why am I bringing all of this up again and why should we care?

Well, it seems that Federal Reserve Bank of St. Louis President Jim Bullard recently gave a talk in which he discussed two competing hypotheses. The first is that lower interest rates should cause higher inflation (the conventional view of New Keynesians and others). The second is that lower interest rates should result in lower inflation. As you can see if you look through his slides, he seems to suggest that the neo-Fisher view is correct since we have a lower interest rate and we have lower inflation.

In my view, however, he has drawn the wrong lesson because he has ignored a third hypothesis. The starting point of his analysis seems to be that the New Keynesian model is the useful framework for analysis and that, given this, the question is which argument about interest rates is correct: the modified Woodford argument or the neo-Fisherite one.

However, a third hypothesis is that the New Keynesian model is not the correct model to use for analysis. In the quantity theory view, inflation declines when money growth declines. Thus, if you see lower interest rates, the only way that they are sustainable for long periods of time is if money growth (and therefore inflation) declines as well. Below is a graph of Divisia M4 growth from 2004 to the present. Note that the growth rate seems to have permanently declined.

Also, note the following scatterplot between a 1-month lag in money growth and inflation. If you were to fit a line, you would find that the relationship is positive and statistically significant.
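A regression of the sort behind that scatterplot can be sketched as follows. The series below is simulated stand-in data, not the actual Divisia M4 and inflation series, so only the sign of the slope, not its magnitude, carries any meaning here:

```python
# Simulated stand-in data (NOT the actual Divisia M4 series):
# money growth and next-month inflation, both in percent.
money_growth = [2.0, 4.0, 6.0, 8.0, 10.0]
inflation_next = [1.0, 1.8, 3.1, 3.9, 5.2]

# Ordinary least squares by hand: slope = cov(x, y) / var(x).
n = len(money_growth)
mx = sum(money_growth) / n
my = sum(inflation_next) / n
beta = sum((x - mx) * (y - my)
           for x, y in zip(money_growth, inflation_next)) \
       / sum((x - mx) ** 2 for x in money_growth)
alpha = my - beta * mx
# A positive slope (beta > 0) is the quantity-theoretic prediction:
# faster money growth is followed by higher inflation.
```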

So perhaps money isn’t so useless after all.

To get back to my point from a previous post, it seems that discussions of policy need to take seriously the following. First, the central bank needs to specify its target variable (i.e. a specific numerical value for a variable, such as inflation or nominal GDP). Second, the central bank needs to describe how it is going to adjust its instrument (the monetary base) to hit its target. Third, the central bank needs to specify the transmission mechanism through which this will work. In other words, what intermediate variables will tell the central bank whether or not it is likely to hit its target.

As it currently stands, the short term nominal interest rate is the Federal Reserve’s preferred intermediate variable. Nonetheless, the federal funds rate has been close to zero for six and a half years (!) and yet inflation has not behaved in the way that policy would predict. At what point do we begin to question using this as an intermediate variable?

The idea that low nominal interest rates are associated with low inflation and high nominal interest rates are associated with high inflation is the Fisher equation. Milton Friedman argued this long ago. The New Keynesian model assumes that the Fisher identity holds, but it has no mechanism to explain why. It’s just true in equilibrium and therefore has to happen. Thus, when the nominal interest rate rises and individuals have rational expectations, they just expect more inflation and it happens. Pardon me if I don’t think that sounds like the world we live in. New Keynesians also don’t seem to think that this sounds like the world we live in, but this is their model!

To me, the biggest problem with the New Keynesian model is the lack of any mechanism. Without understanding the mechanisms through which policy works, how can one begin to offer policy advice and determine the likelihood of success? At the very least one should take steps to ensure that the policy mechanisms they think exist are actually in the model.

But the sheer dominance of the New Keynesian model in policy circles also leads to false dichotomies. Jim Bullard is basically asking: does the world look the way the conventional New Keynesian account says, or the way the neo-Fisherite reading of the same model says? Maybe the answer is that it doesn’t look like either alternative.

Understanding John Taylor

There has been a great deal of debate regarding Taylor rules recently. The U.S. House of Representatives recently proposed a bill that would require the Federal Reserve to articulate its policy in the form of a rule, such as the Taylor Rule. This bill created some debate about whether or not the Federal Reserve should adopt the Taylor Rule. In reality, the bill did not require the Federal Reserve to adopt the Taylor Rule, but rather used the Taylor Rule as an example.

In addition, John Taylor has recently been advocating the Taylor Rule as a guide to policy and attributing the recent financial crisis/recession to deviations from the Taylor Rule. While it should not surprise anyone that Taylor has been advocating a rule of his own design which bears his name, he has faced criticism regarding his recent advocacy of the rule and his views on the financial crisis.

Those who know me know that I am no advocate of Taylor Rules or the Taylor Rule interpretation of monetary policy (see here, here, and here). Nonetheless, a number of people have simply dismissed Taylor’s arguments because they think that he is either (a) deliberately misleading the public for ideological reasons, or (b) mistaken about the literature on monetary policy. Neither of these views is charitable to Taylor since they imply that he is either being deliberately obtuse or does not understand the very literature that he is citing. I myself am similarly puzzled by some of Taylor’s comments. Nonetheless, it seems to me that an attempt to better understand Taylor’s position can not only help us to understand Taylor himself, but it might also clarify some of the underlying issues regarding monetary policy. In other words, rather than simply accept the easy (uncharitable) view of Taylor, let’s see if there is something to learn from Taylor’s position. (I am not going to link the dismissive views of Taylor. However, I will address some of the substantive criticism raised by Tony Yates later in the post.)

Let’s begin with Taylor’s position. This is a lengthy quote from Taylor’s blog, but I think that this a very explicit outline of Taylor’s ideas regarding monetary policy history:

Let me begin with a mini history of monetary policy in the United States during the past 50 years. When I first started doing monetary economics in the late 1960s and 1970s, monetary policy was highly discretionary and interventionist. It went from boom to bust and back again, repeatedly falling behind the curve, and then over-reacting. The Fed had lofty goals but no consistent strategy. If you measure macroeconomic performance as I do by both price stability and output stability, the results were terrible. Unemployment and inflation both rose.

Then in the early 1980s policy changed. It became more focused, more systematic, more rules-based, and it stayed that way through the 1990s and into the start of this century. Using the same performance measures, the results were excellent. Inflation and unemployment both came down. We got the Great Moderation, or the NICE period (non-inflationary consistently expansionary) as Mervyn King put it. Researchers like John Judd and Glenn Rudebusch at the San Francisco Fed and Richard Clarida, Mark Gertler and Jordi Gali showed that this improved performance was closely associated with more rules-based policy, which they defined as systematic changes in the instrument of policy — the federal funds rate — in response to developments in the economy.


But then there was a setback. The Fed decided to hold the interest rate very low during 2003-2005, thereby deviating from the rules-based policy that worked well during the Great Moderation. You do not need policy rules to see the change: With the inflation rate around 2%, the federal funds rate was only 1% in 2003, compared with 5.5% in 1997 when the inflation rate was also about 2%. The results were not good. In my view this policy change brought on a search for yield, excesses in the housing market, and, along with a regulatory process which broke rules for safety and soundness, was a key factor in the financial crisis and the Great Recession.


This deviation from rules-based monetary policy went beyond the United States, as first pointed out by researchers at the OECD, and is now obvious to any observer. Central banks followed each other down through extra low interest rates in 2003-2005 and more recently through quantitative easing. QE in the US was followed by QE in Japan and by QE in the Eurozone with exchange rates moving as expected in each case. Researchers at the BIS showed the deviation went beyond OECD and called it the Global Great Deviation. Rich Clarida commented that “QE begets QE!” Complaints about spillover and pleas for coordination grew. NICE ended in both senses of the word. World monetary policy now seems to have moved into a strategy-free zone.

This short history demonstrates that shifts toward and away from steady predictable monetary policy have made a great deal of difference for the performance of the economy, just as basic macroeconomic theory tells us. This history has now been corroborated by David Papell and his colleagues using modern statistical methods. Allan Meltzer found nearly the same thing in his more detailed monetary history of the Fed.

My reading of this suggests that there are two important points that we can learn about Taylor’s view. First, Taylor’s view of the Great Moderation is actually quite different from the New Keynesian consensus, even though he seems to think that they are quite similar. The typical New Keynesian story about the Great Moderation is that prior to 1979, the Federal Reserve failed to follow the Taylor principle (i.e. raise the nominal interest rate more than one-for-one with an increase in inflation, or in other words, raise the real interest rate when inflation rises). In contrast, Taylor’s view seems to be that the Federal Reserve became more rule-based. However, a Taylor rule with different parameters than Taylor’s original rule can still be consistent with rule-based policy. So what Taylor seems to mean is that if we look at the federal funds rate before and after 1979, it seems to be consistent with his proposed Taylor Rule in the latter period, but there are significant deviations from that rule in the former period.

This brings me to the second point. Taylor’s view about the importance of the Taylor Rule is one based on empirical observation. What this means is that his view is quite different from those working in the New Keynesian wing of the optimal monetary policy literature. To see how Taylor’s view is different from the New Keynesian literature, we need to consider two things that Taylor published in 1993.

The first source that we need to consult is Taylor’s book, Macroeconomic Policy in a World Economy. In that book Taylor presents a rational expectations model and in the latter chapters uses the model to compare monetary policy rules that look at inflation, real output, and nominal income. He finds that the preferred monetary policy rule in the countries that he considers is akin to what we would now call a Taylor Rule. In other words, the policy that reduces the variance of output and inflation is a rule that responds to both inflation and the output gap.

However, the canonical Taylor Rule, the one that John Taylor now advocates, does not actually appear in the book (the results presented in the book suggest different coefficients on inflation and output). The canonical Taylor Rule, in which the coefficient on inflation is equal to 1.5 and the coefficient on the output gap is equal to 0.5, appears in Taylor’s paper “Discretion versus policy rules in practice”:

[Excerpt from Taylor (1993), “Discretion versus policy rules in practice,” stating the proposed rule]

Thus, as we can see in the excerpt from Taylor’s paper, the reason that he finds this particular policy rule desirable is that it seems to describe monetary policy during a time in which policymakers seemed to be doing well.
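For concreteness, the conventional rule can be written as i = r* + π + 0.5(π − π*) + 0.5y, where Taylor set both the equilibrium real rate r* and the inflation target π* to 2 percent, so the total response to inflation is 1.5. A minimal sketch in Python (the function name and argument defaults are mine, not Taylor’s):

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0,
                phi_pi=0.5, phi_y=0.5):
    """Conventional Taylor (1993) rule: i = r* + pi + phi_pi*(pi - pi*) + phi_y*y.

    The implied total coefficient on inflation is 1 + phi_pi = 1.5.
    All rates are in percent.
    """
    return r_star + inflation + phi_pi * (inflation - pi_star) + phi_y * output_gap

# At 2% inflation and a zero output gap, the rule prescribes a 4% funds rate.
print(taylor_rule(2.0, 0.0))  # 4.0
```

Note that a one-point rise in inflation raises the prescribed rate by 1.5 points, which is the Taylor principle in action.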

However, Taylor is also quick to point out that the Federal Reserve needn’t adopt this rule, but rather that the rule should be one of the indicators that the Federal Reserve looks at when conducting policy:

[Excerpt from Taylor (1993) noting that the rule is intended as a guide for policy, not a mechanical formula]

Indeed, Taylor’s views on monetary policy do not seem to have changed much from his 1993 paper. He still advocates using the Taylor Rule as a guide to monetary policy rather than as a formula required for monetary policy.

However, what is most important is the following distinction between Taylor’s 1993 book and Taylor’s 1993 paper. In his book, Taylor shows using evidence from simulations that a feedback rule for monetary policy in which the central bank responds to inflation and the output gap (rather than inflation itself or nominal income) is the preferable policy among the three alternatives he considers. In contrast, in his 1993 paper, we begin to see that Taylor views the version of the rule in which the coefficient on inflation is 1.5 and the coefficient on the output gap is 0.5 as a useful benchmark for policy because it seems to describe policy well between the period 1987 – 1992 — a period that Taylor would classify as good policy. In other words, Taylor’s advocacy of the conventional 1.5/0.5 Taylor Rule seems to be informed by the empirical observation that when policy is good, it also tends to coincide with this rule.

This is also evident in Taylor’s 1999 paper entitled, “A Historical Analysis of Monetary Policy Rules.” In this paper, Taylor does two things. First, he estimates reaction functions for the Federal Reserve to determine the effect of inflation and the output gap on the federal funds rate. In doing so, he shows that the Greenspan era seems to have produced a policy consistent with the conventional 1.5/0.5 version of the Taylor Rule whereas for the pre-1979 period, this was not the case. Again, this provides Taylor with some evidence that when Federal Reserve policy is approximately consistent with the conventional Taylor Rule, the corresponding macroeconomic outcomes seem to be better.

This is best illustrated by the second thing that Taylor does in the paper. In the last section of the paper, Taylor plots the path of the federal funds rate if monetary policy had followed a Taylor rule and the actual federal funds rate for the same two eras described above. What the plots of the data show is that during the 1970s, when inflation was high and when nobody would really consider macroeconomic outcomes desirable, the Federal Reserve systematically set the federal funds rate below where they would have set it had they been following the Taylor Rule. In contrast, when Taylor plots the federal funds rate implied by the conventional Taylor Rule and the actual federal funds rate for the Greenspan era (in which inflation was low and the variance of the output gap was low), he finds that policy is very consistent with the Taylor Rule.

He argues on the basis of this empirical observation that the deviations from the Taylor Rule in the earlier period represent “policy mistakes”:

…if one defines policy mistakes as deviations from such a good policy rule, then such mistakes have been associated with either high and prolonged inflation or drawn-out periods of low capacity utilization, much as simple monetary theory would predict. (Taylor, 1999: 340).

Thus, when we think about John Taylor’s position, we should recognize that Taylor’s position on monetary policy and the Taylor Rule is driven much more by empirical evidence than it is by model simulations. He sees periods of good policy as largely consistent with the conventional Taylor Rule and periods of bad policy as inconsistent with the conventional Taylor Rule. This reinforces his view that the Taylor Rule is a good indicator about the stance of monetary policy.

Taylor’s advocacy of the Taylor Rule as a guide for monetary policy is very different from the related New Keynesian literature on optimal monetary policy. That literature, beginning with Rotemberg and Woodford (1999) — incidentally writing in the same volume as Taylor’s 1999 paper, which was edited by Taylor — derives welfare criteria using the utility function of the representative agent in the New Keynesian model. In the context of these models, it is straightforward to show that the optimal monetary policy is one that minimizes the weighted sum of the variance of inflation and the variance of the output gap.
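In this literature, the policy problem is typically written as minimizing a loss function of the form

\textrm{Loss} = \textrm{Var}(\pi_t) + \lambda \textrm{Var}(x_t)

where \pi_t is inflation, x_t is the output gap, and \lambda is the relative weight, which is derived from the utility function of the representative agent rather than assumed.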

I bring this up because this literature reached different conclusions regarding the coefficients in the Taylor Rule. For example, as Tony Yates explains:

…if you take a modern macro model and work out what is the optimal Taylor Rule – tune the coefficients so that they maximise social welfare, properly defined in model terms, you will get very large coefficients on the term in inflation. Perhaps an order of magnitude greater than JT’s. This same result is manifest in ‘pure’ optimal policies, where we don’t try to calculate the best Taylor Rule, but we calculate the best interest rate scheme in general. In such a model, interest rates are ludicrously volatile. This lead to the common practice of including terms in interest rate volatility in the criterion function that we used to judge policy. Doing that dials down interest rate volatility. Or, in the exercise where we try to find the best Taylor Rule, it dials down the inflation coefficient to something reasonable. This pointed to a huge disconnect between what the models were suggesting should happen, and what central banks were actually doing to tame inflation [and what John Taylor was saying they should do]. JT points out that most agree that the response to inflation should be greater than one for one. But should it be less than 20? Without an entirely arbitary term penalising interest rate volatility, it’s possible to get that answer.

I suspect that if one brought up this point to Taylor, he would suggest that these fine-tuned coefficients are unreasonable. As evidence in favor of his position, he would cite the empirical observations discussed above. Thus, there is a disconnect between what the Taylor Rule literature has to say about Taylor Rules and what John Taylor has to say about Taylor Rules. I suspect the difference is that the literature is primarily based on considering optimal monetary policy in terms of a theoretical model whereas John Taylor’s advocacy of the Taylor Rule is based on his own empirical observations.

Nonetheless, as Tony pointed out to me in conversation, if that is indeed the position that Taylor would take, then quotes like this from Taylor’s recent WSJ op-ed are misleading, “The summary is accurate except for the suggestion that I put the rule forth simply as a description of past policy when in fact the rule emerged from years of research on optimal monetary policy.” I think that what Taylor is really saying is that Taylor Rules, defined generally as rules in which the central bank adjusts the interest rate to changes in inflation and the output gap, are consistent with optimal policy rather than arguing that his exact Taylor Rule is the optimal policy in these models. Nonetheless, I agree with Tony that this statement is misleading regardless of what Taylor meant when he wrote it.

But suppose that we give Taylor the benefit of the doubt and assume that this statement was unintentionally misleading. There is still the bit about the financial crisis to discuss, and it is on this subject that there are questions that need to be asked of Taylor.

In Taylor’s book Getting Off Track, he argues that deviations from the Taylor Rule caused the financial crisis. To demonstrate this, he first shows that from 2003 – 2006, the federal funds rate was approximately 2 percentage points below the rate implied by the conventional Taylor Rule. He then provides empirical evidence regarding the effects of the deviations from the Taylor Rule on housing starts. He constructs a counterfactual to suggest that if the Federal Reserve had followed the Taylor Rule, then housing starts would have been between 200,000 – 400,000 units lower each year between 2003 and 2006 than what we actually observed. He also shows that the deviations from the Taylor Rule in Europe can explain changes in housing investment for a sample that includes Germany, Austria, Italy, the Netherlands, Belgium, Finland, France, Spain, Greece, and Ireland.

Taylor therefore argues that by keeping interest rates too low for too long, the Federal Reserve (and the ECB by following suit with low interest rates) created the housing boom that ultimately went bust and led to a financial crisis.

In a separate post, Tony Yates responds to this hypothesis by making the following points:

2. John’s rule was shown to deliver pretty good results in variations on a narrow class of DSGE models. The crisis has cast much doubt on whether this class is wide enough to embrace the truth. In particular, it typically left out the financial sector. Modifications of the rule such that central bank rates respond to spreads can be shown to deliver good results in prototype financial-inclusive DSGE models. But these models are just a beginning, and certainly not the last word, on how to describe the financial sector. In models in which the Taylor Rule was shown to be good, smallish deviations from it don’t cause financial crises, therefore, because almost none of these models articulate anything that causes a financial crisis. How can you put a financial crisis in real life down to departures from a rule whose benefits were derived in a model that had no finance? There is a story to be told. But it requires much alteration of the original model. Perhaps nominal illusion; misapprehension of risk, learning, and runs. And who knows what the best monetary policy would be in that model.

3. In the models in which the TR is shown to be good, the effects of monetary policy are small and relatively short-lived. To most in the macro profession, the financial crisis looks like a real phenomenon, building up over 2-2.5 decades, accompanying relative nominal stability. Such phenomena don’t have monetary causes, at least not seen through the spectacles of models in which the TR does well. Conversely, if monetary policy is deduced to have two decade long impulses, then we must revise our view about the efficacy of the Taylor Rule.

Thus, we are back to the literature on optimal monetary policy. Again, I suspect that if one raised these points to John Taylor, he might argue that (i) his empirical evidence on the financial crisis trumps the optimal policy literature (which admittedly has issues — like the lack of a financial sector in many of these models), (ii) his empirical analysis suggests that a Taylor Rule might be optimal in a properly modified model, or (iii) regardless of whether the conventional Taylor Rule is optimal, the deviation from this type of policy is harmful, as evidenced by the empirical evidence.

Nonetheless, this brings me to my own questions about/criticisms of Taylor’s approach:

1. Suppose that Taylor believes that point (i) is true. If this is the case, then citing the optimal monetary policy literature as supportive of the Taylor Rule in the WSJ is not simply innocently misleading the readers, it is deliberately misleading the readers by choosing to only cite this literature when it fits with his view. One should not selectively cite literature when it is favorable to one’s view and then not cite the same literature when it is no longer favorable.

2. As Tony Yates points out, point (ii) is impossible to answer.

3. Regarding point (iii), the question is whether or not empirical evidence is sufficient to establish the Taylor Rule as a desirable policy. For example, as the work of Athanasios Orphanides demonstrates, conclusions about whether the Federal Reserve followed the Taylor principle (i.e. having a coefficient on inflation greater than 1) in the pre- and post-Volcker eras depend on the data that one uses in the analysis. When one uses the data that the Federal Reserve had in real time, the problems associated with policy have more to do with the responsiveness of the Fed to the output gap than with the rate of inflation. In other words, the Federal Reserve does not typically do a good job forecasting the output gap in real time. This is a critical flaw in the Taylor Rule because it implies that even if the Taylor Rule is optimal, the central bank might not be able to set policy consistent with the rule.

In other words, if the deviations from the Taylor Rule have such a large effect on economic outcomes and it is very difficult for the central bank to maintain a policy consistent with the Taylor Rule, then perhaps this isn’t a desirable policy after all.

4. One has to stake out a position on models versus data. Taylor’s initial advocacy of this type of rule seems to be driven by the model simulations that he has done. However, his more recent advocacy seems to be driven by the empirical evidence in his 1993 and 1999 papers and his book, Getting Off Track. But the empirical evidence should be consistent with the model simulations, and it is not clear that this is true. In other words, one should not make statements about the empirical importance of a rule when the outcome from deviating from that rule is not even a feature of the model that was used to do the simulations.

5. In addition, the Taylor Rule lacks the intuition of, say, a money growth rule. With a money growth rule, the analysis is simply based on quantity theoretic arguments. If one targets a particular rate of growth in the monetary aggregate (assuming that velocity is stable), we have a good idea about what nominal income growth (or inflation) will be. In addition, the quantity theory is well known (if not always well understood) and can be shown to be consistent with a large number of models (even models with flexible prices). This sort of rule for policy is intuitive. If you know that in the long run money growth causes inflation then the way to prevent inflation is to limit money growth.
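The quantity-theoretic argument can be stated compactly. Starting from the equation of exchange,

MV = Py

taking growth rates gives

\pi = \mu + g_V - g_y

where \mu is money growth, g_V is the growth of velocity, and g_y is real output growth. With velocity stable (g_V \approx 0), a money growth target pins down long-run inflation directly.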

It is not so clear what the intuition is behind the Taylor Rule. It says that we need to tighten policy when inflation rises and/or when real GDP is above potential. That part is fairly intuitive. But what are the “correct” parameters? And why is Taylor’s preferred parameterization such a good rule? Is it solely based on his empirical work because the optimal monetary policy literature suggests alternatives?

6. Why did things change between the 1970s and the early 2000s? In his 1999 paper, Taylor argues that the Federal Reserve kept interest rates too low for too long and we ended up with stagflation. In his book Getting Off Track, he implies that when the Federal Reserve kept interest rates too low for too long we ended up with a housing boom and bust. But why wasn’t there inflation/stagflation the second time? Why was there such a different response to interest rates being too low in the early 2000s as opposed to the 1970s? These are questions that empirics alone cannot answer.

In any event, I hope that this post brings some clarity to the debate.

Interest Rates and Investment

The conventional way of discussing monetary policy is by referencing the interest rate target of the central bank. This is also the way that monetary policy is communicated in the basic New Keynesian model. The idea is that the transmission of monetary policy is primarily through the interest rate. I would like to argue in this post that this is a problematic way of thinking about monetary policy and that the transmission mechanism of policy is unclear.

In the New Keynesian model, the real interest rate affects the time path of consumption through the consumption Euler equation. In particular, when the real interest rate falls, the household would want to save less and therefore would want to consume more. This increases real economic activity in the current period. If we add capital to the model, a lower interest rate encourages a greater investment in capital. Thus, if monetary policy can affect the real interest rate in the short run, then the interest rate target of the central bank can be used as a stabilization tool.
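In its standard log-linearized form, this consumption Euler equation reads

x_t = E_t x_{t+1} - {{1}\over{\sigma}}(i_t - E_t \pi_{t+1} - r_t^n)

where x_t is the output gap, i_t is the nominal interest rate, \pi_{t+1} is inflation, r_t^n is the natural rate of interest, and \sigma is the inverse of the intertemporal elasticity of substitution. A cut in i_t that lowers the real rate relative to r_t^n raises current demand.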

This investment mechanism, however, is questionable. It ignores how investment is actually done in the real world. We can illustrate this lesson with a simple example.

Suppose that there is a firm. The firm produces a product and is deciding whether to build a new factory to increase its production. Let V(t) denote the value of the factory at time t. The initial value of the project is V(0) = V_0. Now suppose that the value to the firm of building the factory is growing over time:

{{\dot{V}}\over{V}} = a

It follows that the value of the factory at some arbitrary date in the future, say time T, is

e^{aT} V_0

Now suppose that the cost to build the factory is some fixed cost, F. The firm’s objective is to choose the optimal point in time to build the factory so as to maximize the expected discounted net value of the project:

\max\limits_{T} e^{-rT} [e^{aT}V_0 - F]

where r > a is the real interest rate. The first-order condition for this problem is

a e^{aT} V_0 = r(e^{aT}V_0 - F)

which implies that

T^* = \max\bigg[{{1}\over{a}} \ln\bigg({{rF}\over{(r-a)V_0}}\bigg),0\bigg]

Assuming that T^* > 0 (i.e. the optimal time to invest is not immediately), it is straightforward to see that when the real interest rate declines, it is beneficial to put off the investment further into the future.
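A quick numerical check of this comparative static, using the fixed cost F and growth rate a defined above (the parameter values below are illustrative assumptions, not a calibration):

```python
from math import log

def t_star(r, a, V0, F):
    """Optimal investment date: T* = max[(1/a) * ln(r*F / ((r - a)*V0)), 0]."""
    assert r > a > 0
    return max(log(r * F / ((r - a) * V0)) / a, 0.0)

# Assumed numbers: value grows at a = 2%, initial value V0 = 1, fixed cost F = 1.2.
high_r = t_star(0.05, 0.02, 1.0, 1.2)  # real rate of 5%
low_r = t_star(0.03, 0.02, 1.0, 1.2)   # real rate of 3%

# The lower real rate implies a LATER optimal investment date.
print(high_r, low_r)
assert low_r > high_r
```

The same logic holds for any parameters with r > a > 0: lowering r raises rF/((r − a)V_0) and therefore raises T*.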

We can understand the intuition behind this result as follows. In a standard model with capital, the marginal product of capital (net of some adjustment cost) is equal to the real interest rate. Thus, when the real interest rate falls, the firm wants to increase its investment in capital, but because it is costly to adjust that capital, it takes time for the capital stock to reach the firm’s desired level. In contrast, the framework presented above suggests that investment is an option and the firm has to decide when to exercise that option. In that case, a lower the real interest rate means that the future is more important (all else equal). But if the future is more important, then that increases the opportunity cost of exercising the option today. So the firm would want to wait to exercise the option.

So which way is best to think about interest rates and investment? The empirical evidence on the issue (albeit somewhat dated) seems to suggest that price variables, like the real interest rate, are not particularly useful in explaining investment (at least compared to other variables). So is this really the mechanism that should be emphasized in the conduct of monetary policy?

[I should note that this insight is (at least I thought) well known. This example is precisely the example provided by Dixit and Pindyck (1994). Countless other examples can be found in Stokey (2008).]

Interest on Reserves and the Federal Funds Rate

The payment of interest on reserves is supposed to put a floor beneath the federal funds rate. Since banks can lend to one another overnight at the federal funds rate, they have a choice. The bank can either lend excess reserves to another bank at the federal funds rate or they can hold the reserves at the Federal Reserve and collect the interest the Fed pays on reserves. In theory, this means that the federal funds rate should never go below the interest rate on reserves. The reason is simple. No bank should have the incentive to lend at a lower rate than they would receive by not lending.

However, the effective federal funds rate has been consistently below the interest rate on reserves. How can this be so? Marvin Goodfriend explains:

The interest on reserves floor for the federal funds rate failed, and continues to fail to this day, because non-depository institutions (such as government-sponsored enterprises (GSEs) Fannie Mae and Freddie Mac, and Federal Home Loan Banks (FHLBs)) are authorized to hold overnight balances at the Fed, but are not eligible to receive interest on those balances. Hence, the GSEs and FHLSs [sp] have an incentive to try to earn interest on their overnight balances at the Fed by lending them to depositories eligible to receive interest on their reserve balances. The federal funds rate is thereby driven below interest on reserves to the point that depositories are willing to borrow from the GSEs and the FHLBs, deposit the proceeds at the Fed, and earn the spread between interest on reserves and the federal funds rate.


What Does It Mean for the Natural Rate of Interest to Be Negative?

Talk of the zero lower bound has permeated the debate about monetary policy in recent years. In particular, there is one consistent story across a variety of different thinkers involving the difference between the natural rate of interest and the market rate of interest. Specifically, the argument holds that if the market rate of interest is higher than the natural rate of interest, then monetary policy is too tight. With regards to the current state of the world, this is potentially problematic if the market rate of interest is zero but needs to be lower.

I find this way of thinking about monetary policy to be quite odd for several reasons. First, conceivably when one talks about the natural rate of interest, the reference is to a real interest rate. New Keynesians, for example, clearly see the natural rate of interest as a real rate of interest (at least in their models). Second, the market rate of interest is a nominal rate. Thus, it is odd to say that the market rate of interest is above the natural rate of interest when one is nominal and one is real. I suppose that what they mean is that given the nominal interest rate and given the expectations of inflation, the implied real market rate is too high. But this seems to be an odd way to describe what is going on.

Regardless of this confusion, what advocates of this approach appear to be saying is this: when the market rate of interest is at the zero lower bound and the natural rate of interest is negative, unless inflation expectations rise, there is no way to equate the real market rate of interest with the natural rate.

But this brings me to the most important question that I have about this entire argument: Why is the natural rate of interest negative?

It is easy to imagine a real market interest rate being negative. If inflation expectations are positive and policymakers drive a nominal interest rate low enough, then the implied real interest rate is negative. It is NOT, however, easy to imagine the natural rate of interest being negative.

To simplify matters, let’s consider a world with zero inflation. The central bank uses an interest rate rule to set monetary policy. The nominal market rate is therefore equal to the real market interest rate. Thus, assuming that the central bank is pursuing a policy to maintain zero inflation, it is effectively setting the real rate of interest, and the optimal policy is to set the interest rate equal to the natural interest rate. Also, since everyone knows the central bank will never create inflation, this makes the zero lower bound impenetrable (i.e. you cannot even use inflation expectations to lower the real rate when the nominal rate hits zero). I have therefore created a world in which a central bank is incapable of setting the market rate of interest equal to the natural rate of interest if the natural rate is negative. My question is: why in the world would we ever reach this point?

So let’s consider the determination of the natural rate of interest. I will define the natural rate of interest as the real rate of interest that would result with perfect markets, perfect information, and perfectly flexible prices (the New Keynesian would be proud, I think). To determine the equilibrium real interest rate, we need to understand saving behavior and we need to understand investment behavior. The equilibrium interest rate is then determined by the market in our perfect benchmark world. So let’s set up a really simple model of saving and investment.

Time is continuous and infinite. A representative household receives an endowment of income, y, and can either consume the income or save it. If they save it, they earn a real interest rate, r. The household generates utility via consumption. The household utility function is given as

\int_0^{\infty} e^{-\rho t} u[c(t)] dt

where \rho is the rate of time preference and c is consumption. The household’s asset holdings evolve according to:

\dot{a} = y - c + ra

where a are the asset holdings of the individual. In a steady state equilibrium, it is straightforward to show that

r = \rho

The real interest rate is equal to the rate of time preference.
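For reference, this follows from the current-value Hamiltonian for the household’s problem,

H = u(c) + \lambda (y - c + ra)

The first-order conditions are u'(c) = \lambda and \dot{\lambda} = (\rho - r)\lambda. In a steady state, \dot{\lambda} = 0, which requires r = \rho.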

Now let’s consider the firm. Firms face an investment decision. Let’s suppose for simplicity that the firm produces bacon. We can then think of the firm as facing a duration problem. They purchase a pig at birth and they raise the pig. The firm then has to decide how long to wait until they slaughter the pig to make the bacon. Suppose that the duration of investment is given as \theta. The production of bacon is given by the production function:

b = f(\theta)

where f' > 0, f'' < 0, and b is the quantity of bacon produced. The purchase of the pig requires some initial outlay of investment, i, which is assumed to be exogenously fixed in real terms and which compounds at the real interest rate until the pig is slaughtered. The value of the pig over the duration of the investment is given as

p = \int_{-\theta}^0 e^{-rt} i dt

Integration of this expression yields

{{p}\over{i}} = {{1}\over{r}}(e^{r\theta} - 1)

Let’s normalize the amount of investment done to 1. Thus, we can write the firm’s profit equation as

\textrm{Profit} = f(\theta) - e^{r\theta}

The firm’s profit-maximizing decision is therefore given as

f'(\theta) = re^{r\theta}

Given that the firm makes zero economic profits, so that f(\theta) = e^{r\theta}, dividing the profit-maximizing condition by this zero-profit condition yields

r = {{f'(\theta)}\over{f(\theta)}}

So let’s summarize what we have. We have an inverse supply of saving curve that is given as

r = \rho

Thus, the saving curve is a horizontal line at the rate of time preference.

The inverse investment demand curve is given as

r = {{f'(\theta)}\over{f(\theta)}}

The intersection of these two curves determine the equilibrium real interest rate and the equilibrium duration of investment. Since the supply curve is horizontal, the real interest rate is always equal to the rate of time preference. So this brings me back to my question: How can we explain why the natural rate of interest would be negative?
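To make the two curves concrete, here is a numerical sketch. The production function f(\theta) = \sqrt{\theta} is my illustrative assumption, not part of the original model; the point is that any f with a positive marginal product delivers a strictly positive rate on the demand side.

```python
from math import sqrt

def demand_rate(theta):
    """Inverse investment demand curve r = f'(theta)/f(theta) for f(theta) = sqrt(theta).

    Here f'(theta) = 0.5/sqrt(theta), so r = 1/(2*theta) > 0 for all theta > 0.
    """
    return (0.5 / sqrt(theta)) / sqrt(theta)

rho = 0.05  # rate of time preference; the inverse supply curve is flat at r = rho

# Equilibrium: set 1/(2*theta) = rho and solve for the duration of investment.
theta_star = 1.0 / (2.0 * rho)
print(theta_star)  # 10.0

assert abs(demand_rate(theta_star) - rho) < 1e-12
# The demand-side rate is positive at every duration, so no positive rate of
# time preference can produce a negative natural rate in this setup.
assert all(demand_rate(t) > 0 for t in [0.1, 1.0, 10.0, 100.0])
```

A higher \rho (more impatience) shortens the equilibrium duration of investment, and a negative r would require either \rho < 0 or a negative marginal product of duration, which is the point developed below.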

You might look at the equilibrium conditions and think “sure, the natural rate of interest can be negative, we just have to assume that the rate of time preference is negative.” While this might be true mathematically, it would seem to imply that people value the future more than the present. Does anybody believe that to be true? Are we really to believe that the zero lower bound is a problem because the general public’s preferences changed such that they suddenly value the future more than the present?

But suppose you are willing to believe this. Suppose you think it is perfectly reasonable to assume that people woke up sometime during the recession and their rate of time preference was negative. There are two sides to the market. So what would happen to the duration of investment if the real interest rate was negative? From our inverse investment demand curve, we see that the real interest rate is equal to the ratio of the marginal product of duration to total production. We have made the standard assumption that the marginal product is positive, so this would seem to rule out any equilibrium in which the real interest rate was negative. But suppose that at a sufficiently long duration, the marginal product is negative. We could always write down a production function with this characteristic, but how generalizable would this production function be? And why would a firm choose this longer duration when it could have chosen a shorter duration and had the same level of production?

Thus, the only way that one can believe that the natural rate of interest is negative is if they believe that individuals suddenly value the future more than the present and that in a perfect, frictionless world firms would prefer to undertake dynamically inefficient investment projects. And not only that, advocates of this viewpoint also think that the problem with policy is that we cannot use our policy tools to get us to a point consistent with these conditions!

Finally, you might argue that I have simply cherry-picked a model that fits my conclusion. But the model I have presented here is just Hirshleifer’s attempt to model the theories of Bohm-Bawerk and Wicksell, the economists that came up with the idea of the natural rate of interest. So this would seem to be a good starting point for analysis.

P.S. If you are interested in evaluating monetary policy within a framework like this, you should check out one of my working papers, written with Alex Salter.