
The New Keynesian Failure

In a previous post, I defended neo-Fisherism. A couple of days ago, I wrote a post discussing the importance of monetary semantics. I would like to tie these two posts together to present a more comprehensive view of my thinking about monetary policy and the New Keynesian model.

My post on neo-Fisherism was intended to provide support for John Cochrane who has argued that the neo-Fisher result is part of the New Keynesian model. Underlying this entire issue, however, is what determines the price level and inflation. In traditional macroeconomics, the quantity theory was always lurking in the background (if not the foreground). Under the quantity theory, the money supply determined the price level. Inflation was always and everywhere a monetary phenomenon.

The New Keynesian model dispenses with money altogether. The initial impulse for doing so was the work of Michael Woodford, who wrote a paper discussing how monetary policy would be conducted in a world without money. The paper (to my knowledge) was not initially an attempt to remove money completely from the analysis, but rather to figure out a role for monetary policy once technology had developed to the point at which the monetary base was arbitrarily small. However, it seems that once people realized that it was possible to exclude money completely, the literature took that ball and ran with it. The case for doing so was further bolstered by the fact that money already seemed to lack any empirical relevance.

Of course, there are a few fundamental problems with this literature. First, my own research shows that the empirical finding that money is unimportant is an artifact of the Federal Reserve publishing monetary aggregates that are not consistent with index number theory, aggregation theory, or economic theory. When one uses Divisia monetary aggregates, the empirical evidence is consistent with standard monetary predictions. This is not unique to my paper. My colleague, Mike Belongia, found similar results when he re-examined the empirical evidence using Divisia aggregates.

Second, while Woodford emphasizes in Interest and Prices that a central bank’s interest rate target could be determined by a channel system, in the United States the rate is still determined through open market operations (although now that the Fed is paying interest on reserves, it could conceivably use a channel system). This distinction might not seem important, but as I alluded to in my previous post, the federal funds rate is an intermediate target. How the central bank influences the intermediate target is important for the conduct of policy. If the model presumes a mechanism that differs from the one actually in use, this is potentially important.

Third, Ed Nelson has argued that the quantity theory is actually lurking in the background of the New Keynesian model and that New Keynesians don’t seem to realize it.

With all that being said, let’s circle back to neo-Fisherism. Suppose that a central bank announced that it was going to target a short-term nominal interest rate of zero for seven years. How would it accomplish this?

A good quantity theorist would suggest that there are two ways it might try to accomplish this. The first would be to continue to use open market purchases to prevent the interest rate from ever rising. However, open market purchases are inflationary. Since higher inflation expectations put upward pressure on nominal interest rates, this sort of policy is unsustainable.

The second way to accomplish the goal of the zero interest rate is to set money growth such that the sum of expected inflation and the real interest rate is equal to zero. In other words, the only sustainable way to commit to an interest rate of zero over the long term is deflation (or low inflation if the real interest rate is negative).
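To make the arithmetic explicit, here is a minimal sketch using the textbook Fisher equation and the growth-rate form of the quantity equation (velocity growth assumed away for simplicity; the notation is mine, not anything from the posts above):

```latex
% Fisher equation: nominal rate equals real rate plus expected inflation
i = r + \pi^e
% A sustainable peg of i = 0 therefore requires
\pi^e = -r
% Quantity theory in growth rates (stable velocity assumed):
\pi = \mu - g_y
% so the peg requires money growth of
\mu = g_y - r
```

With a positive real rate, that rate of money growth delivers exactly the deflation described above.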

The New Keynesians, however, think that the quantity theory is dead and that we can think about policy without money. And in the New Keynesian model, one can supposedly peg the short-term nominal interest rate at zero for an extended period of time. Not only is this possible, but it should also lead to an increase in inflation and economic activity. Interestingly, however, as my post on neo-Fisherism demonstrated, this isn’t what happens in their model. According to their model, setting the nominal interest rate at zero leads to a reduction in the rate of inflation. This is so because (1) the nominal interest rate satisfies the Fisher equation, and (2) people have rational expectations. (Michael Woodford has essentially admitted this, but now wants to relax the assumption of rational expectations.)
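To see the mechanics, consider a stripped-down sketch (again my notation, not taken from any particular paper):

```latex
% The Fisher equation holds period by period in the NK model:
i_t = r_t + E_t[\pi_{t+1}]
% Under a credible peg i_t = 0 with rational expectations, and with the
% real rate pinned down by real factors (say r_t = r > 0 in steady state),
% the only expectation consistent with equilibrium is
E_t[\pi_{t+1}] = -r < 0
```

Under rational expectations, nothing breaks this logic: the peg itself forces expected inflation down.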

So why am I bringing all of this up again and why should we care?

Well, it seems that Federal Reserve Bank of St. Louis President Jim Bullard recently gave a talk in which he discussed two competing hypotheses. The first is that lower interest rates should cause higher inflation (the conventional view of New Keynesians and others). The second is that lower interest rates should result in lower inflation. As you can see if you look through his slides, he seems to suggest that the neo-Fisher view is correct since we have a lower interest rate and we have lower inflation.

In my view, however, he has drawn the wrong lesson because he has ignored a third hypothesis. The starting point of his analysis seems to be that the New Keynesian model is the correct framework for analysis and that, given this, the only question is which argument about interest rates is correct: the modified Woodford argument or the neo-Fisherite one.

However, a third hypothesis is that the New Keynesian model is not the correct model to use for analysis. In the quantity theory view, inflation declines when money growth declines. Thus, if you see lower interest rates, the only way that they are sustainable for long periods of time is if money growth (and therefore inflation) declines as well. Below is a graph of Divisia M4 growth from 2004 to the present. Note that the growth rate seems to have permanently declined.

Also, note the following scatterplot of inflation against a one-month lag of money growth. If you were to fit a line, you would find that the relationship is positive and statistically significant.
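For readers who want to run something like this themselves, here is a minimal sketch. The file name and column names are hypothetical placeholders; Divisia M4 is published by the Center for Financial Stability, and any monthly inflation series would do.

```python
# Sketch: regress monthly inflation on one-month-lagged Divisia M4 growth.
# Assumes a CSV with monthly observations and (hypothetical) columns
# 'divisia_m4_growth' and 'inflation'.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("money_and_inflation.csv", parse_dates=["date"], index_col="date")

# Lag money growth one month so growth in t-1 is paired with inflation in t.
df["m4_growth_lag1"] = df["divisia_m4_growth"].shift(1)
df = df.dropna()

X = sm.add_constant(df["m4_growth_lag1"])  # intercept + lagged money growth
model = sm.OLS(df["inflation"], X).fit()

print(model.summary())  # check the sign and significance of the slope
```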

So perhaps money isn’t so useless after all.

To get back to my point from a previous post, it seems that discussions of policy need to take the following seriously. First, the central bank needs to specify its target variable (i.e., a specific numerical value for a variable, such as inflation or nominal GDP). Second, the central bank needs to describe how it is going to adjust its instrument (the monetary base) to hit its target. Third, the central bank needs to specify the transmission mechanism through which this will work; in other words, which intermediate variables will tell the central bank whether or not it is likely to hit its target.

As it currently stands, the short term nominal interest rate is the Federal Reserve’s preferred intermediate variable. Nonetheless, the federal funds rate has been close to zero for six and a half years (!) and yet inflation has not behaved in the way that policy would predict. At what point do we begin to question using this as an intermediate variable?

The idea that low nominal interest rates are associated with low inflation and high nominal interest rates are associated with high inflation is the Fisher equation. Milton Friedman argued this long ago. The New Keynesian model assumes that the Fisher identity holds, but it has no mechanism to explain why. It’s just true in equilibrium and therefore has to happen. Thus, when the nominal interest rate rises and individuals have rational expectations, they just expect more inflation and it happens. Pardon me if I don’t think that sounds like the world we live in. New Keynesians also don’t seem to think that this sounds like the world we live in, but this is their model!

To me, the biggest problem with the New Keynesian model is the lack of any mechanism. Without understanding the mechanisms through which policy works, how can one begin to offer policy advice and determine the likelihood of success? At the very least, one should take steps to ensure that the policy mechanisms one thinks exist are actually in the model.

But the sheer dominance of the New Keynesian model in policy circles also leads to false dichotomies. Jim Bullard is basically asking: does the world look the way the conventional New Keynesians say, or the way the neo-Fisherites say? Maybe the answer is that it doesn’t look like either alternative.

On Monetary Semantics

My colleague, Mike Belongia, was kind enough to pass along a book entitled, “Targets and Indicators of Monetary Policy.” The book was published in 1969 and features contributions from Karl Brunner, Allan Meltzer, Anna Schwartz, James Tobin, and others. The book itself was the product of a conference held at UCLA in 1966. There are two overarching themes to the book. The first theme, which is captured implicitly by some papers and discussed explicitly by others, is the need for clarification in monetary policy discussions regarding indicator variables and target variables. The second theme is that, given these common definitions, economic theory can be used to guide policymakers regarding which variables should be used as indicators and targets. While I’m not going to summarize all of the contributions, there is one paper that I wanted to discuss because of its continued relevance today: Tobin’s contribution, entitled “Monetary Semantics.”

Contemporary discussions of monetary policy often begin with a misguided notion. For example, I often hear something to the effect of “the Federal Reserve has one instrument and that is the federal funds rate.” This is incorrect. The federal funds rate is not and never has been an instrument of the Federal Reserve. One might think that this is merely semantics, but this gets to broader issues about the role of monetary policy.

This point is discussed at length in Tobin’s paper. It is useful here to quote Tobin at length:

No subject is engulfed in more confusion and controversy than the measurement of monetary policy. Is it tight? Is it easy? Is it tighter than it was last month, or last year, or ten years ago? Or is it easier? Such questions receive a bewildering variety of answers from Federal Reserve officials, private bankers, financial journalists, politicians, and academic economists…The problem is not only descriptive but normative; that is, we all want an indicator of ease or tightness not just to describe what is happening, but to appraise current policy against some criterion of desirable or optimal policy.

[…]

I begin with some observations about policy making that apply not just to monetary policy, indeed not just to public policy, but quite generally. From the policy maker’s standpoint, there are three kinds of variables on which he obtains statistical or other observations: instruments, targets, and intermediate variables. Instruments are variables he controls completely himself. Targets are variables he is trying to control, that is, to cause to reach certain numerical values, or to minimize fluctuations. Intermediate variables lie in-between. Neither are they under perfect control nor are their values ends in themselves.

This quote is important in and of itself for clarifying language. However, there is a broader importance that can perhaps best be illustrated by a discussion of recent monetary policy.

In 2012, I wrote a very short paper (unpublished, but it can be found here) about one of the main problems with monetary policy in the United States. I argued in that paper that the main problem was that the Federal Reserve lacked an explicit target for monetary policy. Without an explicit target, it was impossible to determine whether monetary policy was too loose, too tight, or just right. (By the time the paper was written, the Fed had announced a 2% target for inflation.) In the paper, I pointed out that folks like Scott Sumner were saying that monetary policy was too tight because nominal GDP had fallen below trend, while people like John Taylor were arguing that monetary policy was too loose because the real federal funds rate was below the level consistent with the Taylor Rule. In case that wasn’t enough, people like Federal Reserve Bank of St. Louis President Jim Bullard claimed that monetary policy was actually just about right since inflation was near its recently announced 2% target. What was more remarkable was that if one looked at the data, all of these people were correct based on their own criteria for evaluating monetary policy. This is quite disheartening considering that these three ways of evaluating policy had been remarkably consistent with one another in the past.

I only circulated the paper among a small group of people, and much of the response I received was something to the effect of “the Fed has a mandate to produce low inflation and full employment; it’s reasonable to think that’s how it should be evaluated.” That sort of response seems reasonable at first glance, but it ignores the main point I was trying to make. Perhaps I made the case poorly, since I did not manage to convince anyone of my broader point. So I will try to clarify my position here.

All of us know the mandate of the Federal Reserve. That mandate consists of two goals (actually three if you include keeping interest rates “moderate” – no comment on that goal): stable prices and maximum employment. However, knowing the mandate doesn’t actually provide any guidance for policy. What does it actually mean to have stable prices and maximum employment? These are goals, not targets. This is like when a politician says, “I’m for improving our schools.” That’s great. I’m for million dollar salaries for economics professors with the last name Hendrickson. Without a plan, these goals are meaningless.

There is nothing wrong with the Federal Reserve having broadly defined goals, but along with these broadly defined goals there needs to be an explicit target. Also, the central bank needs a plan to achieve the target. Conceivably, this plan would outline how the Federal Reserve intends to use its instrument to achieve its target, with a description of the intermediate variables it would use as guidance to ensure that its policy is successful.

The Federal Reserve has two goals, which conceivably also means that it has two targets (more on that later). So what are the Fed’s targets? According to a press release from the Federal Reserve:

The inflation rate over the longer run is primarily determined by monetary policy, and hence the Committee has the ability to specify a longer-run goal for inflation. The Committee judges that inflation at the rate of 2 percent, as measured by the annual change in the price index for personal consumption expenditures, is most consistent over the longer run with the Federal Reserve’s statutory mandate. Communicating this inflation goal clearly to the public helps keep longer-term inflation expectations firmly anchored, thereby fostering price stability and moderate long-term interest rates and enhancing the Committee’s ability to promote maximum employment in the face of significant economic disturbances.

The maximum level of employment is largely determined by nonmonetary factors that affect the structure and dynamics of the labor market. These factors may change over time and may not be directly measurable. Consequently, it would not be appropriate to specify a fixed goal for employment; rather, the Committee’s policy decisions must be informed by assessments of the maximum level of employment, recognizing that such assessments are necessarily uncertain and subject to revision.

So the Federal Reserve’s targets are 2% inflation and whatever the FOMC thinks the maximum level of employment is. This hardly clarifies the Federal Reserve’s targets. In addition, the Fed provides no guidance as to how they intend to achieve these targets.

The fact that the Federal Reserve has two goals (or one target and one goal) for policy is also problematic because the Fed only has one instrument, the monetary base (the federal funds rate is an intermediate variable).* So how can policy adjust one variable to achieve two targets? Well, it would be possible to do such a thing if the two targets had some explicit relationship. However, at times policy might have to act when these targets are not behaving in a complementary fashion with respect to the dual mandate. The Fed admits as much in the very same press release:

These objectives are generally complementary. However, under circumstances in which the Committee judges that the objectives are not complementary, it follows a balanced approach in promoting them, taking into account the magnitude of the deviations and the potentially different time horizons over which employment and inflation are projected to return to levels judged consistent with its mandate.

I will leave it to the reader to determine whether this clarifies or obfuscates the stance of the FOMC.

Despite the widespread knowledge of the dual mandate and despite the fact that the Federal Reserve has been a bit more forthcoming about an explicit target associated with its mandate, those evaluating Fed policy are stuck relying on other indicators of the stance of policy. In other words, since the Federal Reserve still does not have an explicit target that we can look at to evaluate policy, economists have sought other ways to do it.

John Taylor has chosen to think about policy in terms of the Taylor Rule. He views the Fed as adjusting the monetary base to set the federal funds rate consistent with the Taylor Rule, which has been shown to produce low variability of inflation and output around their targets. Empirical evidence shows that when the Federal Reserve has conducted policy broadly consistent with its mandate, the behavior of the federal funds rate looks as it would under the Taylor Rule. As a result, the Taylor Rule becomes a guide for policy in the absence of explicit targets. But even this guidance is only guidance with respect to an intermediate variable.
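For reference, Taylor’s original 1993 formulation, with his illustrative coefficients of 0.5 and his assumed 2 percent equilibrium real rate and 2 percent inflation target:

```latex
% i = federal funds rate, \pi = inflation over the previous four quarters,
% y = percent deviation of real GDP from trend
i = \pi + 0.5\,(\pi - 2) + 0.5\,y + 2
```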

Scott Sumner has chosen to think about policy in terms of nominal GDP. This follows from a quantity-theoretic view of the world. If the central bank promotes stable nominal GDP growth, then inflation expectations will be stable and the price mechanism will function efficiently. In addition, the central bank will respond only to the types of shocks that it can correct. Stable nominal GDP therefore implies low inflation and stable employment. My own research suggests that the Federal Reserve conducted monetary policy as if it were stabilizing nominal GDP growth during the Great Moderation. But even using nominal GDP as a guide is limited in the sense that it is not an official target of the Federal Reserve, so a deviation of nominal GDP from trend (even if it is suboptimal) might be consistent with the Federal Reserve’s official targets, since those targets are essentially unknown.
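The quantity-theoretic logic here is just the textbook equation of exchange (my summary, not Sumner’s notation):

```latex
% Equation of exchange, in levels and then in growth rates:
MV = PY
\Delta m + \Delta v = \Delta p + \Delta y
% Stabilizing nominal GDP growth (\Delta p + \Delta y) amounts to setting
% money growth to offset velocity shocks, leaving nominal demand stable.
```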

Nonetheless, the development of these different approaches (and others) was the necessary outgrowth of the desire to understand and evaluate monetary policy. Such approaches are only necessary when the central bank has broad goals without explicit targets and without an explicit description of how it is going to achieve those targets.

We therefore end up back at Tobin’s original questions. How do we know when policy is too loose? Or too tight?

During the Great Inflation and the Great Moderation, both the Taylor Rule and stable growth in nominal GDP provided a good way to evaluate policy. During the Great Inflation, both evaluation methods suggest that policy was too loose (although this is less clear for the Taylor Rule with real-time data). During the Great Moderation, both evaluation methods suggest that policy was conducted well. What is particularly problematic, however, is that in the most recent period, since 2007, the Taylor Rule and nominal GDP have given opposite conclusions about the stance of monetary policy. This has further clouded the discussion surrounding policy because advocates of each approach can point to historical evidence as supportive of their approach.

With an explicit target, evaluating the stance of policy would be simple. If the FOMC adopted a 2% inflation target (and nothing else), then whenever inflation was above 2% (give or take some measurement error), policy would be deemed too loose. Whenever inflation was below 2%, policy would be deemed too tight. Since neither the federal funds rate prescribed by the Taylor Rule nor nominal GDP is an official target of the Fed, it is not immediately obvious how to judge the stance of policy based solely on these criteria. (And what is optimal is another issue.)

If we want a better understanding of monetary policy, we need to emphasize the proper monetary semantics. First, we as economists need to use consistent language regarding instruments, targets, and intermediate variables. No more referring to the federal funds rate as the instrument of policy. Second, the Federal Reserve’s mandate needs to be modified to have only one target for monetary policy. If the mandate itself does not specify it, the Federal Reserve needs to provide a specific numerical goal for this target variable and then describe how it is going to use its instrument to achieve this goal. Taking monetary semantics seriously is about more than language; it is about creating clear guidelines and accountability at the FOMC.

* One could argue that the Federal Reserve now has two instruments: the interest rate on reserves and the monetary base. While the Fed does have direct control over both, it is also important to remember that these variables must be compatible in equilibrium.