The New Keynesian Failure

In a previous post, I defended neo-Fisherism. A couple of days ago I wrote a post in which I discussed the importance of monetary semantics. I would like to tie these two posts together to present a more comprehensive view of my own thinking about monetary policy and the New Keynesian model.

My post on neo-Fisherism was intended to provide support for John Cochrane who has argued that the neo-Fisher result is part of the New Keynesian model. Underlying this entire issue, however, is what determines the price level and inflation. In traditional macroeconomics, the quantity theory was always lurking in the background (if not the foreground). Under the quantity theory, the money supply determined the price level. Inflation was always and everywhere a monetary phenomenon.

The New Keynesian model dispenses with money altogether. The initial impulse for doing so was the work of Michael Woodford, who wrote a paper discussing how monetary policy would be conducted in a world without money. The paper (to my knowledge) was not initially an attempt to remove money completely from analysis, but rather to figure out a role for monetary policy once technology had developed to a point in which the monetary base was arbitrarily small. However, it seems that once people realized that it was possible to exclude money completely, this literature sort of took that ball and ran with it. The case for doing so was further bolstered by the fact that money already seemed to lack any empirical relevance.

Of course, there are a few fundamental problems with this literature. First, my own research shows that the empirical analysis that claims money is unimportant is actually the result of the fact that the Federal Reserve publishes monetary aggregates that are not consistent with index number theory, aggregation theory, or economic theory. When one uses Divisia monetary aggregates, the empirical evidence is consistent with standard monetary predictions. This is not unique to my paper. My colleague, Mike Belongia, found similar results when he re-examined empirical evidence using Divisia aggregates.

Second, while Woodford emphasizes in Interest and Prices that a central bank’s interest rate target could be determined by a channel system, in the United States the rate is still determined through open market operations (although now that the Fed is paying interest on reserves, it could conceivably use a channel system). This distinction might not seem to be important, but as I alluded to in my previous post, the federal funds rate is an intermediate target. How the central bank influences the intermediate target is important for the conduct of policy. If the model presumes that the mechanism is different from reality, this is potentially important.

Third, Ed Nelson has argued that the quantity theory is actually lurking in the background of the New Keynesian model and that New Keynesians don’t seem to realize it.

With all that being said, let’s circle back to neo-Fisherism. Suppose that a central bank announced that they were going to target a short term nominal interest rate of zero for seven years. How would they accomplish this?

A good quantity theorist would suggest that there are two ways the central bank might try to accomplish this. The first would be to continue to use open market purchases to prevent the interest rate from ever rising. However, open market purchases are inflationary. Since higher inflation expectations put upward pressure on nominal interest rates, this sort of policy is unsustainable.

The second way to accomplish the goal of the zero interest rate is to set money growth such that the sum of expected inflation and the real interest rate is equal to zero. In other words, the only sustainable way to commit to an interest rate of zero over the long term is deflation (or low inflation if the real interest rate is negative).
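The arithmetic here is just the Fisher equation. With nominal rate $i$, real rate $r$, and expected inflation $\pi^e$:

```latex
% Fisher equation
i = r + \pi^e
% A peg of i = 0 sustained over the long run therefore requires
\pi^e = -r
% deflation when r > 0, mild inflation when r < 0
```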

The New Keynesians, however, think that the quantity theory is dead and that we can think about policy without money. And in the New Keynesian model, one can supposedly peg the short term nominal interest rate at zero for a short period of time. Not only is this possible, but it also should lead to an increase in inflation and economic activity. Interestingly, however, as my post on neo-Fisherism demonstrated, this isn’t what happens in their model. According to their model, setting the nominal interest rate at zero leads to a reduction in the rate of inflation. This is so because (1) the nominal interest rate satisfies the Fisher equation, and (2) people have rational expectations. (Michael Woodford has essentially admitted this, but now wants to relax the assumption of rational expectations.)

So why am I bringing all of this up again and why should we care?

Well, it seems that Federal Reserve Bank of St. Louis President Jim Bullard recently gave a talk in which he discussed two competing hypotheses. The first is that lower interest rates should cause higher inflation (the conventional view of New Keynesians and others). The second is that lower interest rates should result in lower inflation. As you can see if you look through his slides, he seems to suggest that the neo-Fisher view is correct since we have a lower interest rate and we have lower inflation.

In my view, however, he has drawn the wrong lesson because he has ignored a third hypothesis. The starting point of his analysis seems to be that the New Keynesian model is the useful framework for analysis and that, given this, the only question is which argument about interest rates is correct: the modified Woodford argument or the neo-Fisherite one.

However, a third hypothesis is that the New Keynesian model is not the correct model to use for analysis. In the quantity theory view, inflation declines when money growth declines. Thus, if you see lower interest rates, the only way that they are sustainable for long periods of time is if money growth (and therefore inflation) declines as well. Below is a graph of Divisia M4 growth from 2004 to the present. Note that the growth rate seems to have permanently declined.
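To spell out the quantity-theoretic logic, write the equation of exchange in growth rates (lowercase letters denote growth rates):

```latex
MV = PY
\quad\Longrightarrow\quad
m + v = \pi + y
% With stable velocity growth v and trend real growth y, a permanent
% decline in money growth m implies a permanent decline in inflation \pi.
```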

Also, note the following scatterplot between a 1-month lag in money growth and inflation. If you were to fit a line, you would find that the relationship is positive and statistically significant.
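I am not reproducing the underlying Divisia M4 and inflation series here, but the line-fitting exercise can be sketched with synthetic data (the series and coefficients below are illustrative, not the actual estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly series standing in for Divisia M4 growth (percent).
T = 120
money_growth = 5 + rng.normal(0, 1, T)

# Suppose inflation this month responds to last month's money growth
# with a true slope of 0.5 (an assumption for illustration only).
inflation = 1 + 0.5 * money_growth[:-1] + rng.normal(0, 0.5, T - 1)

# Regress inflation on the 1-month lag of money growth.
lagged_money = money_growth[:-1]
slope, intercept = np.polyfit(lagged_money, inflation, 1)

print(f"estimated slope: {slope:.2f}")  # positive, near 0.5 by construction
```

With real data the slope and significance are empirical questions; the point of the exercise is simply that a positive fitted line is evidence that lagged money growth carries information about inflation.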

So perhaps money isn’t so useless after all.

To get back to my point from a previous post, it seems that discussions of policy need to take seriously the following. First, the central bank needs to specify its target variable (i.e. a specific numerical value for a variable, such as inflation or nominal GDP). Second, the central bank needs to describe how it is going to adjust its instrument (the monetary base) to hit its target. Third, the central bank needs to specify the transmission mechanism through which this will work. In other words, what intermediate variables will tell the central bank whether or not it is likely to hit its target.

As it currently stands, the short term nominal interest rate is the Federal Reserve’s preferred intermediate variable. Nonetheless, the federal funds rate has been close to zero for six and a half years (!) and yet inflation has not behaved in the way that policy would predict. At what point do we begin to question using this as an intermediate variable?

The idea that low nominal interest rates are associated with low inflation and high nominal interest rates are associated with high inflation is the Fisher equation. Milton Friedman argued this long ago. The New Keynesian model assumes that the Fisher identity holds, but it has no mechanism to explain why. It’s just true in equilibrium and therefore has to happen. Thus, when the nominal interest rate rises and individuals have rational expectations, they just expect more inflation and it happens. Pardon me if I don’t think that sounds like the world we live in. New Keynesians also don’t seem to think that this sounds like the world we live in, but this is their model!

To me, the biggest problem with the New Keynesian model is the lack of any mechanism. Without understanding the mechanisms through which policy works, how can one begin to offer policy advice and determine the likelihood of success? At the very least one should take steps to ensure that the policy mechanisms they think exist are actually in the model.

But the sheer dominance of the New Keynesian model in policy circles also leads to false dichotomies. Jim Bullard is basically asking: does the world look like the standard New Keynesian model says, or does it look like the neo-Fisherites say? Maybe the answer is that it doesn’t look like either alternative.

On Monetary Semantics

My colleague, Mike Belongia, was kind enough to pass along a book entitled, “Targets and Indicators of Monetary Policy.” The book was published in 1969 and features contributions from Karl Brunner, Allan Meltzer, Anna Schwartz, James Tobin, and others. The book itself was a product of a conference held at UCLA in 1966. There are two overarching themes to the book. The first theme, which is captured implicitly by some papers and is discussed explicitly by others, is the need for clarification in monetary policy discussions regarding indicator variables and target variables. The second theme is that, given these common definitions, economic theory can be used to guide policymakers regarding what variables should be used as indicators and targets. While I’m not going to summarize all of the contributions, there is one paper that I wanted to discuss because of its continued relevance today and that is Tobin’s contribution entitled, “Monetary Semantics.”

Contemporary discussions of monetary policy often begin with a misguided notion. For example, I often hear something to the effect of “the Federal Reserve has one instrument and that is the federal funds rate.” This is incorrect. The federal funds rate is not and never has been an instrument of the Federal Reserve. One might think that this is merely semantics, but this gets to broader issues about the role of monetary policy.

This point is discussed at length in Tobin’s paper. It is useful here to quote Tobin at length:

No subject is engulfed in more confusion and controversy than the measurement of monetary policy. Is it tight? Is it easy? Is it tighter than it was last month, or last year, or ten years ago? Or is it easier? Such questions receive a bewildering variety of answers from Federal Reserve officials, private bankers, financial journalists, politicians, and academic economists…The problem is not only descriptive but normative; that is, we all want an indicator of ease or tightness not just to describe what is happening, but to appraise current policy against some criterion of desirable or optimal policy.

[…]

I begin with some observations about policy making that apply not just to monetary policy, indeed not just to public policy, but quite generally. From the policy maker’s standpoint, there are three kinds of variables on which he obtains statistical or other observations: instruments, targets, and intermediate variables. Instruments are variables he controls completely himself. Targets are variables he is trying to control, that is, to cause to reach certain numerical values, or to minimize fluctuations. Intermediate variables lie in-between. Neither are they under perfect control nor are their values ends in themselves.

This quote is important in and of itself for clarifying language. However, there is a broader importance that can perhaps best be illustrated by a discussion of recent monetary policy.

In 2012, I wrote a very short paper (unpublished, but can be found here) about one of the main problems with monetary policy in the United States. I argued in that paper that the main problem was that the Federal Reserve lacked an explicit target for monetary policy. Without an explicit target, it was impossible to determine whether monetary policy was too loose, too tight, or just right. (By the time the paper was written, the Fed had announced a 2% target for inflation.) In the paper, I pointed out that folks like Scott Sumner were saying that monetary policy was too tight because nominal GDP had fallen below trend while people like John Taylor were arguing that monetary policy was too loose because the real federal funds rate was below the level consistent with the Taylor Rule. In case that was not enough, people like Federal Reserve Bank of St. Louis President Jim Bullard claimed that monetary policy was actually just about right since inflation was near its recently announced 2% target. What was more remarkable is that if one looked at the data, all of these people were correct based on their criteria for evaluating monetary policy. This is actually quite disheartening considering the fact that these three ways of evaluating policy had been remarkably consistent in their evaluations of monetary policy in the past.

I only circulated the paper among a small group of people and much of the response that I received was something to the effect of “the Fed has a mandate to produce low inflation and full employment; it’s reasonable to think that’s how they should be evaluated.” That sort of response seems reasonable at first glance, but it ignores the main point I was trying to make. Perhaps I made the case poorly, since I did not manage to convince anyone of my broader point. So I will try to clarify my position here.

All of us know the mandate of the Federal Reserve. That mandate consists of two goals (actually three if you include keeping interest rates “moderate” – no comment on that goal): stable prices and maximum employment. However, knowing the mandate doesn’t actually provide any guidance for policy. What does it actually mean to have stable prices and maximum employment? These are goals, not targets. This is like when a politician says, “I’m for improving our schools.” That’s great. I’m for million dollar salaries for economics professors with the last name Hendrickson. Without a plan, these goals are meaningless.

There is nothing wrong with the Federal Reserve having broadly defined goals, but along with these broadly defined goals needs to be an explicit target. Also, the central bank needs a plan to achieve the target. Conceivably, this plan would outline how the Federal Reserve planned to use its instrument to achieve its target, with a description of intermediate variables that it would use to provide guidance to ensure that their policy is successful.

The Federal Reserve has two goals, which conceivably also means that they have two targets (more on that later). So what are the Fed’s targets? According to a press release from the Federal Reserve:

The inflation rate over the longer run is primarily determined by monetary policy, and hence the Committee has the ability to specify a longer-run goal for inflation. The Committee judges that inflation at the rate of 2 percent, as measured by the annual change in the price index for personal consumption expenditures, is most consistent over the longer run with the Federal Reserve’s statutory mandate. Communicating this inflation goal clearly to the public helps keep longer-term inflation expectations firmly anchored, thereby fostering price stability and moderate long-term interest rates and enhancing the Committee’s ability to promote maximum employment in the face of significant economic disturbances.

The maximum level of employment is largely determined by nonmonetary factors that affect the structure and dynamics of the labor market. These factors may change over time and may not be directly measurable. Consequently, it would not be appropriate to specify a fixed goal for employment; rather, the Committee’s policy decisions must be informed by assessments of the maximum level of employment, recognizing that such assessments are necessarily uncertain and subject to revision.

So the Federal Reserve’s targets are 2% inflation and whatever the FOMC thinks the maximum level of employment is. This hardly clarifies the Federal Reserve’s targets. In addition, the Fed provides no guidance as to how they intend to achieve these targets.

The fact that the Federal Reserve has two goals (or one target and one goal) for policy is also problematic because the Fed only has one instrument, the monetary base (the federal funds rate is an intermediate variable).* So how can policy adjust one variable to achieve two targets? Well, it would be possible to do such a thing if the two targets had some explicit relationship. However, at times policy might have to act when these targets are not behaving in a complementary fashion with respect to the dual mandate. The Fed admits as much in the very same press release:
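The one-instrument problem can be stated schematically. Suppose, purely for illustration, that inflation and employment each respond to the monetary base $b$:

```latex
\pi = f(b), \qquad u = g(b)
% Hitting \pi^* pins down the instrument at b = f^{-1}(\pi^*), which
% leaves employment at g(f^{-1}(\pi^*)). Both targets are attainable
% only when they are mutually consistent:
u^* = g\!\left( f^{-1}(\pi^*) \right)
```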

These objectives are generally complementary. However, under circumstances in which the Committee judges that the objectives are not complementary, it follows a balanced approach in promoting them, taking into account the magnitude of the deviations and the potentially different time horizons over which employment and inflation are projected to return to levels judged consistent with its mandate.

I will leave it to the reader to determine whether this clarifies or obfuscates the stance of the FOMC.

Despite the widespread knowledge of the dual mandate and despite the fact that the Federal Reserve has been a bit more forthcoming about an explicit target associated with its mandate, those evaluating Fed policy are stuck relying on other indicators of the stance of policy. In other words, since the Federal Reserve still does not have an explicit target that we can look at to evaluate policy, economists have sought other ways to do it.

John Taylor has chosen to think about policy in terms of the Taylor Rule. He views the Fed as adjusting the monetary base to set the federal funds rate consistent with the Taylor Rule, which has been shown to produce low variability in inflation and output around targets. Empirical evidence exists that shows that when the Federal Reserve has conducted policy broadly consistent with its mandate, the behavior of the federal funds rate looks as it would under the Taylor Rule. As a result, the Taylor Rule becomes a guide for policy in the absence of explicit targets. But even this guidance is only guidance with respect to an intermediate variable.
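For reference, Taylor’s original 1993 rule sets the federal funds rate $i$ as a function of inflation $\pi$ (over the previous four quarters) and the output gap $y$, with an assumed 2 percent equilibrium real rate and a 2 percent inflation target:

```latex
i = \pi + 0.5\,y + 0.5\,(\pi - 2) + 2
```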

Scott Sumner has chosen to think about policy in terms of nominal GDP. This follows from a quantity theoretic view of the world. If the central bank promotes stable nominal GDP growth, then inflation expectations will be stable and the price mechanism will function efficiently. In addition, the central bank will respond only to the types of shocks that it can correct. Stable nominal GDP therefore implies low inflation and stable employment. My own research suggests that the Federal Reserve conducted monetary policy as if it were stabilizing nominal GDP growth during the Great Moderation. But even using nominal GDP as a guide is limited in the sense that it is not an official target of the Federal Reserve, so a deviation of nominal GDP from trend (even if it is suboptimal) might be consistent with the Federal Reserve’s official targets, since the latter are essentially unknown.

Nonetheless, the development of these different approaches (and others) was the necessary outgrowth of the desire to understand and evaluate monetary policy. Such an approach is only necessary when the central bank has broad goals without explicit targets and without an explicit description of how they are going to achieve those targets.

We therefore end up back at Tobin’s original questions. How do we know when policy is too loose? Or too tight?

During the Great Inflation and the Great Moderation, both the Taylor Rule and stable growth in nominal GDP provided a good way to evaluate policy. During the Great Inflation, both evaluation methods suggest that policy was too loose (although this is less clear for the Taylor Rule with real-time data). During the Great Moderation, both evaluation methods suggest that policy was conducted well. What is particularly problematic, however, is that over the most recent period, since 2007, the Taylor Rule and nominal GDP have given opposite conclusions about the stance of monetary policy. This has further clouded the discussion surrounding policy because advocates of each approach can point to historical evidence as supportive of their approach.

With an explicit target, evaluating the stance of policy would be simple. If the FOMC adopted a 2% inflation target (and nothing else), then whenever inflation was above 2% (give or take some measurement error), policy would be deemed too loose. Whenever inflation was below 2%, policy would be deemed too tight. Since neither the federal funds rate prescribed by the Taylor Rule nor nominal GDP is an official target of the Fed, it is not immediately obvious how to judge the stance of policy based solely on these criteria. (And what is optimal is another issue.)

If we want to have a better understanding of monetary policy, we need to emphasize the proper monetary semantics. First, we as economists need to use consistent language regarding instruments, targets, and intermediate variables. No more referring to the federal funds rate as the instrument of policy. Second, the Federal Reserve’s mandate needs to be modified to include only one target for monetary policy. The Federal Reserve then needs to provide a specific numerical goal for this target variable and describe how it is going to use its instrument to achieve that goal. Taking monetary semantics seriously is about more than language; it is about creating clear guidelines and accountability at the FOMC.

* One could argue that the Federal Reserve now has two instruments, the interest rate on reserves and the monetary base. While they do have direct control over these things, it is also important to remember that these variables must be compatible in equilibrium.

Some Thoughts on Cryptocurrencies and the Block Chain

Much of the discussion about cryptocurrencies has naturally centered around Bitcoin. Also, this discussion has been particularly focused on the role of Bitcoin as an alternative currency. However, I think that the most important aspect of Bitcoin (and cryptocurrencies more generally) is not necessarily the alternative currency arrangement, but the block chain. It seems to me that the future viability of cryptocurrencies themselves is not as an alternative to existing currencies, but as assets that are redeemable in a particular currency with payments settled much more efficiently using block chain technology.

For those who know little about cryptocurrencies, the block chain can be understood as follows. A block chain is a distributed data store: information is stored on multiple nodes of a computer network. In the case of a cryptocurrency, such as Bitcoin, the block chain is used as a ledger of all transactions. Since every node has access to the block chain, there is no need for any centralized record-keeper or database. Transactions that are carried out using Bitcoin have to be verified by the nodes. A successful transaction is then added to the block chain. Individuals using the system must therefore have balances of the cryptocurrency recorded on the transaction ledger in order to transfer these balances to someone else. In addition, once the nodes verify the transferred balance, the transaction is time-stamped. This prevents scenarios in which people try to double spend a given balance.
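A toy version of the ledger just described can be sketched in a few lines of Python. This is only a cartoon of the idea (real Bitcoin adds proof-of-work, Merkle trees, and peer-to-peer verification), but it shows how time-stamped blocks chained together by hashes make tampering with past transactions detectable:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's full contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    """Create a time-stamped block that commits to the chain before it."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }

def chain_is_valid(chain):
    """Each block must reference the hash of the block before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

# A tiny chain: a genesis block plus one verified transaction.
genesis = make_block([], prev_hash="0" * 64)
payment = make_block([{"from": "alice", "to": "bob", "amount": 5}],
                     prev_hash=block_hash(genesis))
chain = [genesis, payment]
print(chain_is_valid(chain))  # True

# Rewriting history breaks the link to every later block.
genesis["transactions"].append({"from": "bob", "to": "alice", "amount": 99})
print(chain_is_valid(chain))  # False
```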

This technology is what creates value for Bitcoin. One explanation for why money exists is that people cannot commit to future actions. This lack of commitment makes credit infeasible. Money is an alternative for carrying out exchange because money is a record-keeping device. The block chain associated with Bitcoin is quite literally a record-keeping device. It has value because it provides a record of transactions. In addition, this simplifies the settlement process and therefore reduces the cost of transfers and settlement.

The benefit of using Bitcoin is thus the value of the record-keeping system, or the block chain. However, in order to be able to benefit from the use of the block chain, you need to have Bitcoins. This is problematic since there are a number of reasons that you might not want Bitcoins. For example, maybe you are perfectly happy with dollars or perhaps you’ve noticed that there are not a whole lot of places willing to accept Bitcoins just yet. Also, you might have noticed that the exchange rate between Bitcoins and dollars is quite volatile.

So if you are unwilling to trade your dollars for Bitcoins, then you don’t have access to the block chain and cannot take advantage of the more efficient settlement. This, it seems to me, is a critical flaw with Bitcoin.

Nonetheless, the technology embodied in Bitcoin is available to all and can therefore be adapted in other ways. Thus, the critical flaw in Bitcoin is not a critical flaw for cryptocurrencies more generally. The value of these cryptocurrencies is in the block chain, and the true value of the block chain is in figuring out how to use this technology to make transactions and banking better and more efficient. There are two particular alternatives that I think are on the right track: NuBits and Ripple.

Think back to pre-central banking days. Prior to central banks, there were individual banks that each issued their own notes. Each bank agreed to redeem its bank notes for a particular commodity, often gold or silver. Bank notes were priced in terms of the asset. In other words, one dollar would be defined as a particular quantity of gold or silver. This therefore implied that the price of the commodity was fixed in terms of the dollar. In order to maintain this exchange rate, the bank had to make sure not to issue too many bank notes. If the bank issued too many notes, they would see a wave of redemptions, which would reduce their reserves of the commodity. In order to prevent losses to reserves, the banks would therefore have an incentive to reduce the notes in circulation. The peg to the commodity therefore provided an anchor for the value of the bank notes and represented a natural mechanism to prevent over-issuance. Thus, fluctuations in the value of the bank notes tended to result from changes in the relative value of gold. (The U.S. experience was actually much different. Due to the existence of a unit banking system, notes often didn’t trade at par. Again, let’s ignore that for now.)

The NuBits system works a lot like these old banks did (without the lending – we’ll have to get to that in a different post). The NuBits system consists of those who own NuShares and those who own NuBits. Those who own NuShares are like equity owners in the system, whereas those who own NuBits are like holders of bank notes. The NuBits are redeemable in terms of U.S. dollars. In particular, one dollar is equal to one NuBit. If I own a NuBit, I can redeem that NuBit for one dollar. So how does the NuBits system manage to do this when Bitcoin clearly experiences a volatile exchange rate? It does so by putting trust in the equity owners. Owners of NuShares have an incentive to maintain the stability of this exchange rate. If nobody is willing to use NuBits, then there is little value in ownership of the protocol and the shares will have little, if any, value. Thus, the NuBits system provides an incentive for NuShares holders to maintain the stability of the exchange rate and gives these shareholders the ability to do so. For example, if the demand for NuBits falls, this will be seen in a wave of redemptions. This is a signal that there are too many NuBits in circulation. In order to maintain the exchange rate, NuShares holders have an incentive to reduce the quantity of NuBits in circulation. They can do this by parking some of the NuBits (i.e., preventing people from using NuBits in transactions). This is not done forcibly, but rather by offering interest to those who are willing to forgo engaging in transactions. Similarly, if there is an increase in demand, then new NuBits can be created.
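The parking-and-issuance logic can be sketched as a simple supply rule. To be clear, this is only the logic of the incentive, not the actual Nu protocol; the function name and interface are mine:

```python
def adjust_supply(circulating, demand):
    """Stylized peg maintenance: shareholders shrink or grow the
    circulating stock of pegged bits to match demand at $1 per bit.

    Returns (new_circulating, parked, newly_issued).
    """
    if demand < circulating:
        # Redemption wave: pay interest to "park" the excess,
        # voluntarily pulling bits out of circulation.
        parked = circulating - demand
        return demand, parked, 0
    else:
        # Rising demand: issue new bits at the $1 peg.
        issued = demand - circulating
        return demand, 0, issued

# Demand falls from 1000 to 900: 100 bits get parked.
print(adjust_supply(1000, 900))   # (900, 100, 0)
# Demand then rises to 1200: 300 new bits are issued.
print(adjust_supply(900, 1200))   # (1200, 0, 300)
```

The design choice worth noticing is that supply adjusts by voluntary incentive (interest on parked bits) rather than by forced redemption, just as note-issuing banks contracted circulation in response to reserve losses.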

But while NuBits has solved the volatility problem in a unique and interesting way, it still suffers from the same problem as Bitcoin. In order to benefit from the technology, you need to hold NuBits, and there is perhaps even less of an incentive to hold NuBits since it is much harder to use them for everyday transactions. Until cryptocurrencies like NuBits can be used in regular everyday transactions, there is little incentive to hold them. NuBits thus gets part of the way to where the technology needs to go, but still suffers from a problem similar to Bitcoin’s.

This brings me to Ripple. Ripple is a much different system. With Ripple, one can set up an account using dollars, euros, Bitcoins, or even Ripple’s own cryptocurrency. One can then transfer funds using block chain technology, but the transfers do not have to take place using Ripple’s cryptocurrency or Bitcoin. In other words, I can transfer dollars or euros just like I would transfer cryptocurrencies in other systems. I can do this by setting up an account and transferring the funds to another person with an account through an update to the public ledger that is distributed across the nodes of the system. This streamlines the payment process without the need to adopt a particular cryptocurrency. One can even use dollars to pay someone in euros. The transaction is carried out by finding traders on the system who are willing to trade dollars for euros and then transferring the euros to the desired party. This service seems to be immediately more valuable than any other service in this space.
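The dollars-to-euros example can be sketched as a single atomic ledger update. This is just the idea, not Ripple’s actual pathfinding or trust-line mechanics; the names, rates, and interface below are hypothetical:

```python
def cross_currency_transfer(ledger, offers, sender, recipient, usd_amount):
    """Stylized cross-currency payment: the sender pays dollars, a
    matched trader supplies euros at a quoted rate, and the recipient
    receives euros. `offers` is a list of (trader, eur_per_usd,
    eur_available) quotes. Returns the euro amount delivered."""
    for trader, rate, eur_available in offers:
        eur_amount = usd_amount * rate
        if eur_available >= eur_amount:
            # Settle all three legs as one atomic ledger update.
            ledger[(sender, "USD")] -= usd_amount
            ledger[(trader, "USD")] += usd_amount
            ledger[(trader, "EUR")] -= eur_amount
            ledger[(recipient, "EUR")] += eur_amount
            return eur_amount
    raise ValueError("no trader willing to make the market")

ledger = {("alice", "USD"): 100.0, ("carol", "USD"): 0.0,
          ("carol", "EUR"): 500.0, ("bob", "EUR"): 0.0}
offers = [("carol", 0.9, 500.0)]  # carol sells euros at 0.9 EUR per USD

# Alice pays Bob in euros: 50 USD converts to 45 EUR via carol.
received = cross_currency_transfer(ledger, offers, "alice", "bob", 50.0)
print(received)                 # 45.0
print(ledger[("bob", "EUR")])   # 45.0
```

The key property is that the currency trade and the payment settle together on one shared ledger, so neither party needs to hold the system’s native cryptocurrency.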

So where do I see this going?

Suppose that you are Citibank or JP Morgan Chase. You could actually combine the types of services that are offered by NuBits and Ripple. You have the deposit infrastructure and are already offering online payment systems for bill paying and peer-to-peer exchanges. The major banks have two possible approaches. First, they could offer JP Morgan Bits (or whatever you want to call them) and have them redeemable 1-for-1 with the dollar. They could then partner with retailers (both online and brick-and-mortar) to offer a service in which JP Morgan deposit holders could carry around something akin to a debit card, or even an app on their phone, that allowed them to transact by transferring the JP Morgan Bits from the individual to the firm, charging a very small fee for the transfer. They could partner with firms for online bill paying as well. Alternatively, they could skip the issuance of their own “bank bits” and simply use their block chain to transfer dollar balances and other existing currencies. Whether or not the banks decide to have their own cryptocurrency for settling payments would be determined by whether there are advantages to developing brand loyalty and/or whether retailers saw this as a way to generate greater competition for cheaper payments while maintaining the stability of purchasing power with the “bank bits.”

The basic point here is that banks could see a profit opportunity by eliminating the middleman and transferring funds between customers using the block chain. The payments would be faster and cheaper. In addition, it would provide retailers with much better protection from fraud.

Citibank is apparently already exploring the possibilities, developing a block chain with a cryptocurrency called “Citicoin.”

Regardless of what ultimately happens, it is an interesting time to be a monetary economist.

Bitcoin Papers

My paper with Thomas Hogan and Will Luther, “The Political Economy of Bitcoin,” is now forthcoming in Economic Inquiry. A working paper version can still be found here.

Also, on the same topic, Aaron Yelowitz was kind enough to send me his recent paper that uses Google Trends data to identify the characteristics of Bitcoin users. A link to that paper can be found here.

Understanding John Taylor

There has been a great deal of debate regarding Taylor rules recently. The U.S. House of Representatives recently proposed a bill that would require the Federal Reserve to articulate its policy in the form of a rule, such as the Taylor Rule. The bill created some debate about whether the Federal Reserve should adopt the Taylor Rule. In reality, the bill did not require the Federal Reserve to adopt the Taylor Rule; it merely used the Taylor Rule as an example.

In addition, John Taylor has recently been advocating the Taylor Rule as a guide to policy and attributing the recent financial crisis/recession to deviations from the rule. While it should not surprise anyone that Taylor is advocating a rule of his own design that bears his name, he has faced criticism regarding both his recent advocacy of the rule and his views on the financial crisis.

Those who know me know that I am no advocate of Taylor Rules or the Taylor Rule interpretation of monetary policy (see here, here, and here). Nonetheless, a number of people have simply dismissed Taylor’s arguments because they think that he is either (a) deliberately misleading the public for ideological reasons, or (b) mistaken about the literature on monetary policy. Neither of these views is charitable to Taylor, since both imply that he is either being deliberately obtuse or does not understand the very literature that he is citing. I myself am puzzled by some of Taylor’s comments. Nonetheless, it seems to me that an attempt to better understand Taylor’s position can not only help us to understand Taylor himself, but might also clarify some of the underlying issues regarding monetary policy. In other words, rather than simply accept the easy (uncharitable) view of Taylor, let’s see if there is something to learn from his position. (I am not going to link to the dismissive views of Taylor. However, I will address some of the substantive criticism raised by Tony Yates later in the post.)

Let’s begin with Taylor’s position. This is a lengthy quote from Taylor’s blog, but I think that this a very explicit outline of Taylor’s ideas regarding monetary policy history:

Let me begin with a mini history of monetary policy in the United States during the past 50 years. When I first started doing monetary economics in the late 1960s and 1970s, monetary policy was highly discretionary and interventionist. It went from boom to bust and back again, repeatedly falling behind the curve, and then over-reacting. The Fed had lofty goals but no consistent strategy. If you measure macroeconomic performance as I do by both price stability and output stability, the results were terrible. Unemployment and inflation both rose.

Then in the early 1980s policy changed. It became more focused, more systematic, more rules-based, and it stayed that way through the 1990s and into the start of this century. Using the same performance measures, the results were excellent. Inflation and unemployment both came down. We got the Great Moderation, or the NICE period (non-inflationary consistently expansionary) as Mervyn King put it. Researchers like John Judd and Glenn Rudebusch at the San Francisco Fed and Richard Clarida, Mark Gertler and Jordi Gali showed that this improved performance was closely associated with more rules-based policy, which they defined as systematic changes in the instrument of policy — the federal funds rate — in response to developments in the economy.

[…]

But then there was a setback. The Fed decided to hold the interest rate very low during 2003-2005, thereby deviating from the rules-based policy that worked well during the Great Moderation. You do not need policy rules to see the change: With the inflation rate around 2%, the federal funds rate was only 1% in 2003, compared with 5.5% in 1997 when the inflation rate was also about 2%. The results were not good. In my view this policy change brought on a search for yield, excesses in the housing market, and, along with a regulatory process which broke rules for safety and soundness, was a key factor in the financial crisis and the Great Recession.

[…]

This deviation from rules-based monetary policy went beyond the United States, as first pointed out by researchers at the OECD, and is now obvious to any observer. Central banks followed each other down through extra low interest rates in 2003-2005 and more recently through quantitative easing. QE in the US was followed by QE in Japan and by QE in the Eurozone with exchange rates moving as expected in each case. Researchers at the BIS showed the deviation went beyond OECD and called it the Global Great Deviation. Rich Clarida commented that “QE begets QE!” Complaints about spillover and pleas for coordination grew. NICE ended in both senses of the word. World monetary policy now seems to have moved into a strategy-free zone.

This short history demonstrates that shifts toward and away from steady predictable monetary policy have made a great deal of difference for the performance of the economy, just as basic macroeconomic theory tells us. This history has now been corroborated by David Papell and his colleagues using modern statistical methods. Allan Meltzer found nearly the same thing in his more detailed monetary history of the Fed.

My reading of this suggests that there are two important points that we can learn about Taylor’s view. First, Taylor’s view of the Great Moderation is actually quite different from the New Keynesian consensus — even though he seems to think that they are quite similar. The typical New Keynesian story about the Great Moderation is that prior to 1979, the Federal Reserve failed to follow the Taylor principle (i.e. raise the nominal interest rate more than one-for-one with an increase in inflation — in other words, raise the real interest rate when inflation rises). In contrast, Taylor’s view seems to be that the Federal Reserve became more rules-based. However, a Taylor rule with different parameters than Taylor’s original rule can still be consistent with rules-based policy. So what Taylor seems to mean is that if we look at the federal funds rate before and after 1979, it is consistent with his proposed Taylor Rule in the latter period, but there are significant deviations from that rule in the former period.

This brings me to the second point. Taylor’s view about the importance of the Taylor Rule is one based on empirical observation. What this means is that his view is quite different from those working in the New Keynesian wing of the optimal monetary policy literature. To see how Taylor’s view is different from the New Keynesian literature, we need to consider two things that Taylor published in 1993.

The first source that we need to consult is Taylor’s book, Macroeconomic Policy in a World Economy. In that book Taylor presents a rational expectations model and in the latter chapters uses the model to compare monetary policy rules that look at inflation, real output, and nominal income. He finds that the preferred monetary policy rule in the countries that he considers is akin to what we would now call a Taylor Rule. In other words, the policy that reduces the variance of output and inflation is a rule that responds to both inflation and the output gap.

However, the canonical Taylor Rule and the one that John Taylor now advocates does not actually appear in the book (the results presented in the book suggest different coefficients on inflation and output). The canonical Taylor Rule in which the coefficient on inflation is equal to 1.5 and the coefficient on the output gap is equal to 0.5 appears in Taylor’s paper “Discretion versus policy rules in practice”:

Thus, as we can see in the excerpt from Taylor’s paper, the reason that he finds this particular policy rule desirable is that it seems to describe monetary policy during a time in which policymakers seemed to be doing well.
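For concreteness, the conventional rule from that paper can be written as a simple function. This is only a sketch using Taylor's 1993 parameterization: a 2% equilibrium real rate and a 2% inflation target; the function name and its default arguments are my own labels.

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Conventional 1993 Taylor Rule (all values in percentage points):
    i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap),
    i.e. a coefficient of 1.5 on inflation and 0.5 on the output gap."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# With inflation at target (2%) and a closed output gap,
# the rule prescribes a 4% federal funds rate.
print(taylor_rule(2.0, 0.0))  # 4.0
```

Note that a one-point rise in inflation raises the prescribed rate by 1.5 points, which is the Taylor principle at work.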

However, Taylor is also quick to point out that the Federal Reserve needn’t adopt this rule, but rather that the rule should be one of the indicators that the Federal Reserve looks at when conducting policy:

Indeed, Taylor’s views on monetary policy do not seem to have changed much from his 1993 paper. He still advocates using the Taylor Rule as a guide to monetary policy rather than as a formula required for monetary policy.

However, what is most important is the following distinction between Taylor’s 1993 book and Taylor’s 1993 paper. In his book, Taylor uses simulation evidence to show that a feedback rule for monetary policy in which the central bank responds to inflation and the output gap (rather than inflation alone or nominal income) is the preferred policy among the three alternatives he considers. In contrast, in his 1993 paper, we begin to see that Taylor views the version of the rule in which the coefficient on inflation is 1.5 and the coefficient on the output gap is 0.5 as a useful benchmark for policy because it seems to describe policy well over the period 1987 – 1992 — a period that Taylor would classify as good policy. In other words, Taylor’s advocacy of the conventional 1.5/0.5 Taylor Rule seems to be informed by the empirical observation that good policy tends to coincide with this rule.

This is also evident in Taylor’s 1999 paper entitled, “A Historical Analysis of Monetary Policy Rules.” In this paper, Taylor does two things. First, he estimates reaction functions for the Federal Reserve to determine the effect of inflation and the output gap on the federal funds rate. In doing so, he shows that the Greenspan era seems to have produced a policy consistent with the conventional 1.5/0.5 version of the Taylor Rule whereas for the pre-1979 period, this was not the case. Again, this provides Taylor with some evidence that when Federal Reserve policy is approximately consistent with the conventional Taylor Rule, the corresponding macroeconomic outcomes seem to be better.
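Estimating such a reaction function amounts to a least-squares regression of the federal funds rate on inflation and the output gap. The sketch below uses synthetic data generated from a known 1.5/0.5 rule; all the parameter values and the noise process are my own assumptions for illustration, not Taylor's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical macro data: inflation around 2%, output gap around 0
inflation = rng.normal(2.0, 1.0, n)
output_gap = rng.normal(0.0, 1.5, n)

# Generate the funds rate from the conventional Taylor Rule,
# i = 1 + 1.5*pi + 0.5*y, plus a small policy shock
ffr = 1.0 + 1.5 * inflation + 0.5 * output_gap + rng.normal(0.0, 0.1, n)

# Least-squares estimate of the reaction function coefficients
X = np.column_stack([np.ones(n), inflation, output_gap])
coef, *_ = np.linalg.lstsq(X, ffr, rcond=None)
print(coef)  # approximately [1.0, 1.5, 0.5]
```

Applied to actual pre-1979 versus Greenspan-era data, this kind of regression is what distinguishes the two regimes in Taylor's 1999 paper.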

This is best illustrated by the second thing that Taylor does in the paper. In the last section of the paper, Taylor plots the path of the federal funds rate if monetary policy had followed a Taylor rule and the actual federal funds rate for the same two eras described above. What the plots of the data show is that during the 1970s, when inflation was high and when nobody would really consider macroeconomic outcomes desirable, the Federal Reserve systematically set the federal funds rate below where they would have set it had they been following the Taylor Rule. In contrast, when Taylor plots the federal funds rate implied by the conventional Taylor Rule and the actual federal funds rate for the Greenspan era (in which inflation was low and the variance of the output gap was low), he finds that policy is very consistent with the Taylor Rule.

He argues on the basis of this empirical observation that the deviations from the Taylor Rule in the earlier period represent “policy mistakes”:

…if one defines policy mistakes as deviations from such a good policy rule, then such mistakes have been associated with either high and prolonged inflation or drawn-out periods of low capacity utilization, much as simple monetary theory would predict. (Taylor, 1999: 340).

Thus, when we think about John Taylor’s position, we should recognize that Taylor’s position on monetary policy and the Taylor Rule is driven much more by empirical evidence than it is by model simulations. He sees periods of good policy as largely consistent with the conventional Taylor Rule and periods of bad policy as inconsistent with the conventional Taylor Rule. This reinforces his view that the Taylor Rule is a good indicator about the stance of monetary policy.

Taylor’s advocacy of the Taylor Rule as a guide for monetary policy is very different from the related New Keynesian literature on optimal monetary policy. That literature, beginning with Rotemberg and Woodford (1999) — incidentally writing in the same volume as Taylor’s 1999 paper, which was edited by Taylor — derives welfare criteria using the utility function of the representative agent in the New Keynesian model. In the context of these models, it is straightforward to show that the optimal monetary policy is one that minimizes the weighted sum of the variance of inflation and the variance of the output gap.

I bring this up because this literature reached different conclusions regarding the coefficients in the Taylor Rule. For example, as Tony Yates explains:

…if you take a modern macro model and work out what is the optimal Taylor Rule – tune the coefficients so that they maximise social welfare, properly defined in model terms, you will get very large coefficients on the term in inflation. Perhaps an order of magnitude greater than JT’s. This same result is manifest in ‘pure’ optimal policies, where we don’t try to calculate the best Taylor Rule, but we calculate the best interest rate scheme in general. In such a model, interest rates are ludicrously volatile. This lead to the common practice of including terms in interest rate volatility in the criterion function that we used to judge policy. Doing that dials down interest rate volatility. Or, in the exercise where we try to find the best Taylor Rule, it dials down the inflation coefficient to something reasonable. This pointed to a huge disconnect between what the models were suggesting should happen, and what central banks were actually doing to tame inflation [and what John Taylor was saying they should do]. JT points out that most agree that the response to inflation should be greater than one for one. But should it be less than 20? Without an entirely arbitary term penalising interest rate volatility, it’s possible to get that answer.

I suspect that if one brought up this point to Taylor, he would suggest that these fine-tuned coefficients are unreasonable. As evidence in favor of his position, he would cite the empirical observations discussed above. Thus, there is a disconnect between what the Taylor Rule literature has to say about Taylor Rules and what John Taylor has to say about Taylor Rules. I suspect the difference is that the literature is primarily based on considering optimal monetary policy in terms of a theoretical model whereas John Taylor’s advocacy of the Taylor Rule is based on his own empirical observations.

Nonetheless, as Tony pointed out to me in conversation, if that is indeed the position that Taylor would take, then quotes like this from Taylor’s recent WSJ op-ed are misleading, “The summary is accurate except for the suggestion that I put the rule forth simply as a description of past policy when in fact the rule emerged from years of research on optimal monetary policy.” I think that what Taylor is really saying is that Taylor Rules, defined generally as rules in which the central bank adjusts the interest rate to changes in inflation and the output gap, are consistent with optimal policy rather than arguing that his exact Taylor Rule is the optimal policy in these models. Nonetheless, I agree with Tony that this statement is misleading regardless of what Taylor meant when he wrote it.

But suppose that we give Taylor the benefit of the doubt and suggest that this statement was unintentionally misleading. There is still this bit about the financial crisis to discuss and it is on this subject that there are questions that need to be asked of Taylor.

In Taylor’s book Getting Off Track, he argues that deviations from the Taylor Rule caused the financial crisis. To demonstrate this, he first shows that from 2003 – 2006, the federal funds rate was approximately 2 percentage points below the rate implied by the conventional Taylor Rule. He then provides empirical evidence regarding the effects of the deviations from the Taylor Rule on housing starts. He constructs a counterfactual to suggest that if the Federal Reserve had followed the Taylor Rule, then housing starts would have been between 200,000 and 400,000 units lower each year between 2003 and 2006 than what we actually observed. He also shows that the deviations from the Taylor Rule in Europe can explain changes in housing investment for a sample that includes Germany, Austria, Italy, the Netherlands, Belgium, Finland, France, Spain, Greece, and Ireland.

Taylor therefore argues that by keeping interest rates too low for too long, the Federal Reserve (and the ECB by following suit with low interest rates) created the housing boom that ultimately went bust and led to a financial crisis.

In a separate post, Tony Yates responds to this hypothesis by making the following points:

2. John’s rule was shown to deliver pretty good results in variations on a narrow class of DSGE models. The crisis has cast much doubt on whether this class is wide enough to embrace the truth. In particular, it typically left out the financial sector. Modifications of the rule such that central bank rates respond to spreads can be shown to deliver good results in prototype financial-inclusive DSGE models. But these models are just a beginning, and certainly not the last word, on how to describe the financial sector. In models in which the Taylor Rule was shown to be good, smallish deviations from it don’t cause financial crises, therefore, because almost none of these models articulate anything that causes a financial crisis. How can you put a financial crisis in real life down to departures from a rule whose benefits were derived in a model that had no finance? There is a story to be told. But it requires much alteration of the original model. Perhaps nominal illusion; misapprehension of risk, learning, and runs. And who knows what the best monetary policy would be in that model.

3. In the models in which the TR is shown to be good, the effects of monetary policy are small and relatively short-lived. To most in the macro profession, the financial crisis looks like a real phenomenon, building up over 2-2.5 decades, accompanying relative nominal stability. Such phenomena don’t have monetary causes, at least not seen through the spectacles of models in which the TR does well. Conversely, if monetary policy is deduced to have two decade long impulses, then we must revise our view about the efficacy of the Taylor Rule.

Thus, we are back to the literature on optimal monetary policy. Again, I suspect that if one raised these points to John Taylor, he might argue that (i) his empirical evidence on the financial crisis trumps the optimal policy literature (which admittedly has issues — like the lack of a financial sector in many of these models), (ii) his empirical analysis suggests that a Taylor Rule might be optimal in a properly modified model, or (iii) regardless of whether the conventional Taylor Rule is optimal, deviations from this type of policy are harmful, as the empirical evidence suggests.

Nonetheless, this brings me to my own questions about/criticisms of Taylor’s approach:

1. Suppose that Taylor believes that point (i) is true. If this is the case, then citing the optimal monetary policy literature as supportive of the Taylor Rule in the WSJ is not simply innocently misleading the readers, it is deliberately misleading the readers by choosing to only cite this literature when it fits with his view. One should not selectively cite literature when it is favorable to one’s view and then not cite the same literature when it is no longer favorable.

2. As Tony Yates points out, point (ii) is impossible to answer.

3. Regarding point (iii), the question is whether empirical evidence is sufficient to establish the Taylor Rule as a desirable policy. For example, as the work of Athanasios Orphanides demonstrates, conclusions about whether the Federal Reserve followed the Taylor principle (i.e. had a coefficient on inflation greater than 1) in the pre- and post-Volcker eras depend on the data one uses in the analysis. When one uses the data that the Federal Reserve had in real time, the problems associated with policy have more to do with the responsiveness of the Fed to the output gap than with the rate of inflation. In other words, the Federal Reserve does not typically do a good job forecasting the output gap in real time. This is a critical flaw in the Taylor Rule because it implies that even if the Taylor Rule is optimal, the central bank might not be able to set policy consistent with the rule.

In other words, if the deviations from the Taylor Rule have such a large effect on economic outcomes and it is very difficult for the central bank to maintain a policy consistent with the Taylor Rule, then perhaps this isn’t a desirable policy after all.

4. One has to stake out a position on models and the data. Taylor’s initial advocacy of this type of rule seems to be driven by the model simulations that he has done. However, his more recent advocacy seems to be driven by the empirical evidence in his 1993 and 1999 papers and his book, Getting Off Track. But the empirical evidence should be consistent with the model simulations, and it is not clear that this is true. In other words, one should not make statements about the empirical importance of a rule when the outcome of deviating from that rule is not even a feature of the model that was used to do the simulations.

5. In addition, the Taylor Rule lacks the intuition of, say, a money growth rule. With a money growth rule, the analysis is simply based on quantity theoretic arguments. If one targets a particular rate of growth in the monetary aggregate (assuming that velocity is stable), we have a good idea about what nominal income growth (or inflation) will be. In addition, the quantity theory is well known (if not always well understood) and can be shown to be consistent with a large number of models (even models with flexible prices). This sort of rule for policy is intuitive. If you know that in the long run money growth causes inflation then the way to prevent inflation is to limit money growth.
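The quantity-theoretic arithmetic behind a money growth rule can be sketched in a few lines. This is a stylized illustration; the growth rates are hypothetical round numbers.

```python
def implied_inflation(money_growth, velocity_growth, output_growth):
    """Equation of exchange MV = PY in growth-rate form:
    inflation = money growth + velocity growth - real output growth."""
    return money_growth + velocity_growth - output_growth

# With a 5% money growth target, stable velocity, and 3% real growth,
# the rule delivers roughly 2% inflation.
pi = implied_inflation(0.05, 0.0, 0.03)
print(round(pi, 4))  # 0.02
```

This is what makes the rule intuitive: given stable velocity, choosing a money growth rate pins down long-run nominal income growth and, hence, inflation.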

It is not so clear what the intuition is behind the Taylor Rule. It says that we need to tighten policy when inflation rises and/or when real GDP is above potential. That part is fairly intuitive. But what are the “correct” parameters? And why is Taylor’s preferred parameterization such a good rule? Is it solely based on his empirical work, given that the optimal monetary policy literature suggests alternatives?

6. Why did things change between the 1970s and the early 2000s? In his 1999 paper, Taylor argues that the Federal Reserve kept interest rates too low for too long and we ended up with stagflation. In his book Getting Off Track, he implies that when the Federal Reserve kept interest rates too low for too long, we ended up with a housing boom and bust. But why wasn’t there inflation/stagflation? Why was the response to interest rates that were too low so different in the early 2000s than in the 1970s? These are questions that empirics alone cannot answer.

In any event, I hope that this post brings some clarity to the debate.

Some Fun Stuff: Seinfeld and Optimal Stopping Times

Sometimes while you are proctoring exams, you realize that an episode of Seinfeld can be understood as an optimal stopping time problem and you write a short paper about it. Enjoy.

Interest Rates and Investment

The conventional way of discussing monetary policy is by referencing the interest rate target of the central bank. This is also the way that monetary policy is communicated in the basic New Keynesian model. The idea is that the transmission of monetary policy is primarily through the interest rate. I would like to argue in this post that this is a problematic way of thinking about monetary policy and that the transmission mechanism of policy is unclear.

In the New Keynesian model, the real interest rate affects the time path of consumption through the consumption Euler equation. In particular, when the real interest rate falls, the household would want to save less and therefore would want to consume more. This increases real economic activity in the current period. If we add capital to the model, a lower interest rate encourages a greater investment in capital. Thus, if monetary policy can affect the real interest rate in the short run, then the interest rate target of the central bank can be used as a stabilization tool.
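To make the mechanism explicit, the consumption Euler equation in the standard model is the textbook condition

$u'(c_t) = \beta (1 + r_t) E_t u'(c_{t+1})$

With CRRA utility, $u(c) = c^{1-\sigma}/(1-\sigma)$, and abstracting from uncertainty, this implies consumption growth of $c_{t+1}/c_t = [\beta (1 + r_t)]^{1/\sigma}$. A lower real interest rate $r_t$ therefore flattens the desired consumption path: the household saves less and consumes more today.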

This investment mechanism, however, is questionable. It ignores how investment is actually done in the real world. We can illustrate this lesson with a simple example.

Suppose that there is a firm. The firm produces a product and is deciding whether to build a new factory to increase its production. Let $V(t)$ denote the value of the factory at time $t$. The initial value of the project is $V(0) = V_0$. Now suppose that the value to the firm of building the factory is growing over time:

${{\dot{V}}\over{V}} = a$

It follows that the value of the factory at some arbitrary date in the future, say time $T$, is

$V(T) = e^{aT} V_0$

Now suppose that the cost to build the factory is some fixed cost, $F$. The firm’s objective is to choose the optimal point in time to build the factory so as to maximize the expected discounted net value of the project:

$\max\limits_{T} e^{-rT} [e^{aT}V_0 - F]$

where $r > a$ is the real interest rate. Differentiating with respect to $T$ and setting the derivative equal to zero yields the first-order condition $(r - a)e^{aT}V_0 = rF$, which implies

$T^* = \max\bigg[{{1}\over{a}} \ln\bigg({{rF}\over{(r-a)V_0}}\bigg),0\bigg]$

Assuming that $T^* > 0$ (i.e. the optimal time to invest is not immediately), it is straightforward to see that when the real interest rate declines, it is beneficial to put off the investment further into the future.

We can understand the intuition behind this result as follows. In a standard model with capital, the marginal product of capital (net of some adjustment cost) is equal to the real interest rate. Thus, when the real interest rate falls, the firm wants to increase its investment in capital, but because it is costly to adjust that capital, it takes time for the capital stock to reach the firm’s desired level. In contrast, the framework presented above suggests that investment is an option and the firm has to decide when to exercise that option. In that case, a lower real interest rate means that the future is more important (all else equal). But if the future is more important, then that increases the opportunity cost of exercising the option today. So the firm would want to wait to exercise the option.
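A quick numerical check of this comparative static; the parameter values below are hypothetical, chosen only so that $T^* > 0$.

```python
import math

def optimal_investment_time(r, a, V0, F):
    """Optimal exercise date for max_T e^{-rT}(e^{aT} V0 - F), with r > a > 0."""
    if not r > a > 0:
        raise ValueError("requires r > a > 0")
    T = (1.0 / a) * math.log(r * F / ((r - a) * V0))
    return max(T, 0.0)

# Hypothetical parameters: project value grows at 5%, V0 = 100, fixed cost F = 200
a, V0, F = 0.05, 100.0, 200.0
T_high = optimal_investment_time(0.08, a, V0, F)  # higher real rate
T_low = optimal_investment_time(0.07, a, V0, F)   # lower real rate

# A lower real interest rate pushes the optimal investment date later
print(T_low > T_high)  # True
```

Note also that when the discounted cost is high enough relative to the project's value (the log term negative), the formula returns $T^* = 0$ and the firm invests immediately.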

So which way is best to think about interest rates and investment? The empirical evidence on the issue (albeit somewhat dated) seems to suggest that price variables, like the real interest rate, are not particularly useful in explaining investment (at least compared to other variables). So is this really the mechanism that should be emphasized in the conduct of monetary policy?

[I should note that this insight is (at least I thought) well known. This example is precisely the example provided by Dixit and Pindyck (1994). Countless other examples can be found in Stokey (2008).]