Category Archives: Macroeconomic Theory

Are Helicopter Drops a Fiscal Operation?

This is meant to be a quick note on what I think is a common misconception about helicopter drops. I am not advocating that the Federal Reserve or any other central bank undertake the actions I am going to describe, nor am I concerned with whether such actions would be legal. All I care about here is helicopter drops at a theoretical level. With that said, let me get to what I believe is the misconception.

First, some context. Typically, when the Federal Reserve wants to increase the money supply, it buys assets on the open market in exchange for bank reserves. These purchases are called open market operations. One potential problem is that the Federal Reserve typically purchases short-term government debt. When short-term nominal interest rates are near the zero lower bound, many believe that open market operations are impotent: the central bank is exchanging one asset that bears no interest (reserves) for another that bears essentially no interest (short-term government debt). Banks are indifferent between the two, and the exchange has no meaningful effect on economic activity.

Given this problem, some have advocated a “helicopter drop” of money. Typically, they don’t mean an actual helicopter flies overhead dropping currency from the sky. What they are referring to is something like the following. Suppose that the U.S. Treasury sends a check to everyone in the United States for $100 and issues bonds to pay for it. The Federal Reserve then buys all of these bonds and holds them to maturity. This is effectively a money-financed tax rebate. Thus, it resembles a helicopter drop because everyone gets $100, which was paid for by an expansion of the money supply. However, many people are quick to point out that this is actually a fiscal operation. The U.S. government is giving everyone a check and the Federal Reserve is simply monetizing the debt.

But are helicopter drops really a fiscal operation? Certainly if we think about helicopter drops as I have described them above, it is correct to note that such action requires monetary-fiscal cooperation. However, let’s consider an alternative scenario.

The Federal Reserve has a balance sheet just like any other bank. The Fed classifies the items on its balance sheet into three categories:

1. Assets. Assets include loans to banks, securities held, foreign currency, gold certificates, SDRs, etc.
2. Liabilities. Liabilities include currency in circulation, bank reserves, repurchase agreements, etc.
3. Capital.

The balance sheet constraint is given as

Assets = Liabilities + Capital

Let’s consider how things change on the balance sheet. Suppose that the Fed took large losses on the Maiden Lane securities purchased during the financial crisis. What would happen? Well, the value of the Fed’s assets would decline. However, the liabilities owed by the Fed would not change. Thus, for the balance sheet to remain in balance, the value of the Fed’s capital would have to decline.

So imagine the following scenario. We all wake up one morning to discover that actual helicopters are lifting off from the rooftops of regional Federal Reserve banks. The helicopters fly through each region dropping currency from the sky. People walk out of their homes and businesses and see money raining down upon them. They quickly scoop up the money and shovel it into their pockets. It is a literal helicopter drop of money!

But how can this be? How could the central bank do such a thing?

If the central bank were to do such a thing, think about what would happen to its balance sheet. Currency in circulation increases thereby increasing Fed liabilities. However, asset values are still the same. So capital declines. (The latest Fed balance sheet suggests that the Fed has $10 billion in surplus capital. This would decline dollar-for-dollar with the increase in the supply of currency.)
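To make the accounting concrete, here is a minimal sketch of the balance-sheet identity at work. All of the numbers are hypothetical, chosen only so that surplus capital starts at the $10 billion figure mentioned above; this is an illustration, not actual Fed data.

```python
# Illustrative Fed balance-sheet arithmetic. All numbers are hypothetical.
assets = 4500.0      # securities, loans, gold certificates, etc. ($ billions)
currency = 1400.0    # currency in circulation ($ billions)
reserves = 3090.0    # bank reserves ($ billions)

# Identity: Assets = Liabilities + Capital  =>  Capital = Assets - Liabilities
capital = assets - (currency + reserves)
print(capital)  # 10.0 -> surplus capital ($ billions)

# A literal helicopter drop: currency in circulation rises, assets do not.
drop = 25.0
currency += drop
capital = assets - (currency + reserves)
print(capital)  # -15.0 -> capital falls dollar-for-dollar with the new currency
```

Nothing on the asset side has to move; the drop is absorbed entirely by the capital account.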

What this implies is that a central bank could (in theory) conduct a helicopter drop by effectively reducing its net worth. In the future, the Federal Reserve could restore its capital by reinvesting its earnings into new assets. Thus, the helicopter drop is a form of direct transfer to the public that is paid for by the Fed’s future earnings.

[Now, some of you might be saying, “Ah ha! If the Fed is retaining earnings these are earnings that would have otherwise gone to the Treasury and so it is still a fiscal operation.” I would argue that (a) this is semantics, and (b) there is no reason to believe this is true. The Fed, for example, could simply have used those earnings to furnish new offices at the Board and all of the regional banks — in that case it would be a transfer of wealth from the staff to the general public.]

Money and Banking

You might be able to teach an entire course on the microeconomics of money and banking based on the following thought experiment.

Imagine the following scenario. I want to start a business, but I need to borrow $10,000 to get started. You offer to provide me with that $10,000. However, since you won’t get to consume with that $10,000 and you won’t get to invest that $10,000 in anything else, you require that I pay you some interest. I give you a piece of paper that promises to pay you back, with interest, at some future date. Intrinsically, that piece of paper is worthless. It is just a piece of paper. However, if that piece of paper represents a legally binding agreement, then we call it a bond. You are willing to accept that piece of paper from me because you anticipate that I am going to do something productive with your money. In the event that I don’t, you will be entitled to the assets of my business. So, the value of the bond is the expected value of the promised repayment over the duration of the loan plus the value of the option to seize my assets in the event that I cannot/do not pay you back. Now, of course, there is some chance that between now and when I have promised to pay you back you will want to spend money. As a result, a market emerges that allows you to sell this piece of paper to other people.

Now imagine the following alternative scenario. Suppose that you want to save, but you don’t want to deal with trying to figure out how to invest that savings. Fortunately, we have a mutual friend who likes to do this sort of thing. So you give your $10,000 to our friend and he promises to give you your money back plus some interest payment. I also pay a visit to our mutual friend, but I ask him to borrow $10,000. He agrees to lend me $10,000, but I have to pay him back with interest (at a rate slightly higher than the one he is offering you). Since our mutual friend knows that you might need cash for unexpected expenditures in the future, he promises to give you the right to show up and demand your $10,000 (or some fraction thereof) at any moment you want. Thus, to our mutual friend, the value of the loan is the expected value of the repayment over the agreed-upon duration plus the expected value of the option to seize my assets in the event that I cannot/do not pay him back. The value of the contract for you is the expected value of the loan that you have given our mutual friend plus the value of the option to get your $10,000 back whenever you want plus the value of the option to seize the assets of our mutual friend in the event that the value of his assets declines below what he owes you.
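Both scenarios rest on the same valuation decomposition, so here is a minimal sketch of it. Every parameter below (the default probability, the recoverable assets, the rates) is hypothetical and chosen purely for illustration.

```python
# Value of a risky claim = discounted expected repayment
#                        + discounted value of the option to seize assets in default.
# All parameters are hypothetical.
principal = 10_000.0
promised_rate = 0.08          # interest the borrower promises to pay
p_default = 0.10              # probability the borrower cannot/does not pay
recoverable_assets = 6_000.0  # what the lender can seize in default
discount_rate = 0.03          # lender's rate of time preference

promised_repayment = principal * (1 + promised_rate)

expected_repayment = (1 - p_default) * promised_repayment
seizure_option = p_default * min(recoverable_assets, promised_repayment)

value = (expected_repayment + seizure_option) / (1 + discount_rate)
print(round(value, 2))  # value of the claim to the lender today
```

In the second scenario the same decomposition applies twice: once to our mutual friend's claim on me, and once to your claim on him, with the value of the on-demand withdrawal option added on top.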

What is the difference between these two scenarios?

Some would say that in the latter scenario the problem is that our mutual friend is offering to give you dollars that he himself does not have to give. Thus, he is “creating dollars out of thin air.” In fact, if he doesn’t have actual dollars, he might give you a piece of paper that promises to give you those dollars in the future. If you are able to trade these pieces of paper in exchange for goods and services, it would appear as though our mutual friend has really created money out of thin air. But has he really? Or is he merely allowing you to transfer some fraction of what he owes you to another individual?

Why might people be willing to accept these pieces of paper printed by our mutual friend and use them in transactions?

Replace “$10,000” with “7.5 ounces of gold.” Do your answers to these questions change?

Reasoning from Interest Rates

A quick note…

We can think of long-term yields as consisting of two components, the average expected future short-term rate and the term premium. However, it is important to note that the average expected future short-term rate itself is a function of the rate of time preference, expectations of future growth, and expectations of inflation. Also, the term premium is a function of duration risk, a liquidity premium, and a safety premium.
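In symbols (standard notation, nothing specific to this post):

$$
y_t^{(n)} = \frac{1}{n}\sum_{j=0}^{n-1} E_t\left[i_{t+j}\right] + TP_t^{(n)},
$$

where $y_t^{(n)}$ is the yield on an $n$-period bond, $i_{t+j}$ are future short-term rates, and $TP_t^{(n)}$ is the term premium. The first term bundles time preference, expected growth, and expected inflation; the second bundles duration risk and the liquidity and safety premia. A movement in $y_t^{(n)}$ alone cannot tell you which component moved.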

So suppose that you see long-term yields change, what can you learn about the stance of monetary policy?

What Are Real Business Cycles?

The real business cycle model is often described as the core of modern business cycle research. What this means is that other business cycle models have the RBC model as a special case (i.e., strip away all of the frictions from your model and it’s an RBC model). The idea that the RBC model is the core of modern business cycle research is somewhat tautological since the RBC model is just a neoclassical model without any frictions. Thus, if we start with a model with frictions and take those frictions away, we have a frictionless model.

The purpose of the original RBC models was not necessarily to argue that these models represented an accurate portrayal of the business cycle, but rather to see how much of the business cycle could be explained without the appeal to frictions. The basic idea is that there could be shocks to tastes and/or technology and that these changes could cause fluctuations in economic activity. Furthermore, since the RBC model was a frictionless model, any such fluctuations would be efficient. This conclusion was important. We typically think of recessions as being inefficient and costly. If this is true, countercyclical policy could be welfare-increasing. However, if the world can be adequately explained by the RBC model, then economic fluctuations represent efficient responses to unexpected changes in tastes and technology. There is no role for countercyclical policy.

There were two critical responses to RBC models. The first criticism was that the model was too simple. The crux of this argument is that if one estimated changes in total factor productivity (TFP; technology in the RBC model) using something like the Solow residual and plugged this into the model, one might be misled into thinking the model had greater predictive power than it did in reality. The basic idea is that the Solow residual is, as the name implies, a residual. Thus, this measure of TFP only captures fluctuations in output that are not explained by changes in labor and capital. Since there are a lot of things besides technology, labor, and capital that might affect output, the Solow residual might not be a good measure of TFP and might attribute a greater percentage of fluctuations to TFP than is true of the actual data generating process.
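For concreteness, a minimal sketch of how the Solow residual is computed, assuming Cobb-Douglas technology and a hypothetical capital share (the data arrays are made up):

```python
import numpy as np

# Under Y = A * K^alpha * L^(1-alpha), the Solow residual is
#   log A = log Y - alpha*log K - (1 - alpha)*log L
alpha = 0.33                          # assumed capital share (hypothetical)
Y = np.array([100.0, 98.0, 101.0])    # output (illustrative)
K = np.array([300.0, 301.0, 302.0])   # capital input
L = np.array([150.0, 147.0, 151.0])   # labor input

log_A = np.log(Y) - alpha * np.log(K) - (1 - alpha) * np.log(L)
print(np.diff(log_A))  # measured "TFP shocks," i.e., whatever K and L don't explain
```

Anything that moves output but is not captured by measured capital and labor, such as variable utilization, lands in log A, which is exactly the critique.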

The second critical response was largely to ridicule and make fun of the model. For example, Franco Modigliani once quipped that RBC-type models were akin to assuming that business cycles were mass outbreaks of laziness. Others would criticize the theory by stating that recessions must be periods of time when society collectively forgets how to use technology. And recently, Paul Romer has suggested that technology shocks be relabeled as phlogiston shocks.

These latter criticisms are certainly witty and no doubt the source of laughter in seminar rooms. Unfortunately, they obscure the more important criticisms and, worse, represent a misunderstanding of what the RBC model is about. As a result, I would like to provide an interpretation of the RBC model and then discuss more substantive criticisms.

The idea behind the real business cycle model is that fluctuations in aggregate productivity are the cause of economic fluctuations. If all firms are identical, then any decline in aggregate productivity must be a decline in the productivity of all the individual firms. But why would firms become less productive? To me, this seems to be the wrong way to interpret the model. My preferred interpretation is as follows. Suppose that you have a bunch of different firms producing different goods and these firms have different levels of productivity. In this case, an aggregate productivity shock is simply the reallocation of inputs from high-productivity firms to low-productivity firms or vice versa. As long as we think of all markets as being competitive, the RBC model is just a reduced-form version of what I’ve just described. In other words, the RBC model essentially suggests that fluctuations in the economy are driven by the reallocation of inputs between firms with different levels of productivity, but since markets are efficient we don’t need to get into the weeds of this reallocation in the model and can simply focus our attention on a representative firm and aggregate productivity.
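Here is a minimal sketch of that interpretation with hypothetical numbers: treat aggregate productivity as an input-share-weighted average of firm-level productivities, so that moving inputs between firms moves measured aggregate TFP even though no firm's technology changes. (The linear weighting is a simplification, but it is enough to make the point.)

```python
import numpy as np

# Two firms with fixed productivities; only the allocation of inputs changes.
productivity = np.array([2.0, 1.0])   # high- and low-productivity firms (hypothetical)

shares_before = np.array([0.7, 0.3])  # input shares before reallocation
shares_after = np.array([0.5, 0.5])   # inputs shifted toward the low-productivity firm

print(productivity @ shares_before)   # 1.7
print(productivity @ shares_after)    # 1.5 -> looks like a negative aggregate TFP shock
```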

I think that my interpretation is important for a couple of reasons. First, it suggests that while “forgetting how to use technology” might get chuckles in the seminar room, it is not particularly useful for thinking about productivity shocks. Second, and more importantly, this interpretation allows for further analysis. For example, how often do we see such reallocation between high-productivity firms and low-productivity firms? How well do such reallocations line up with business cycles in the data? What are the sources of reallocation? If the reallocation is due to changes in demographics and/or preferences, then these reallocations could be interpreted as efficient responses to structural changes in the economy. However, if these reallocations are caused by changes in relative prices due to, say, monetary policy, then the welfare and policy implications are much different.

Thus, to me, rather than denigrate RBC theory, what we should do is try to disaggregate productivity, determine what causes reallocation, and try to assess whether this is an efficient reallocation or should really be considered misallocation. The good news is that economists are already doing this (here and here, for example). Unfortunately, you hear more sneering and name-calling in popular discussions than you do about this interesting and important work.

Finally, I should note that I think one of the reasons that the real business cycle model has been such a point of controversy is that it implies that recessions are efficient responses to fluctuations in productivity and that countercyclical policy is unnecessary. This notion violates the prior beliefs of a great number of economists. As a result, I think that many of these economists are willing to dismiss RBC out of hand. Nonetheless, while I am not inclined to think that recessions are simply efficient responses to taste and technology changes, I do think that this starting point is useful as a thought exercise. Using an RBC model as a starting point for thinking about recessions forces one to think about the potential sources of inefficiencies, how to test the magnitude of such effects, and the appropriate policy response. The better we are able to disaggregate productivity, the more we should be able to learn about fluctuations in aggregate productivity, and therefore about the driving forces of recessions.

The Fed, Populism, and Related Topics

Jon Hilsenrath has quite the article in The Wall Street Journal, the title of which is “Years of Fed Missteps Fueled Disillusion With the Economy and Washington”. The article criticizes Fed policy, suggests these policy failures are at least partially responsible for the rise in populism in the United States, and presents a rather incoherent view of monetary policy. As one should be able to tell, the article is wide-ranging, so I want to do something different than I do in a typical blog post. I am going to go through the article point-by-point and deconstruct the narrative.

Let’s start with the lede:

Once-revered central bank failed to foresee the crisis and has struggled in its aftermath, fostering the rise of populism and distrust of institutions

There is a lot tied up in this lede. First, has the Federal Reserve ever been a revered institution? According to Hilsenrath’s own survey evidence, in 2003 only 53% of the population rated the Fed as “Good” or “Excellent.” In the midst of the Great Moderation, I would hardly call this revered.

Second, I’ve really grown tired of this argument that economists or policymakers or the Fed “failed to foresee the crisis.” The implicit assumption is that if the crisis had been foreseen, steps could have been taken to prevent it or make it less severe. But if we accept this assumption, then the only crises we would ever observe are the ones that were not foreseen; crises that were foreseen and prevented would never show up in the data.

Third, to attribute the rise in populism to Federal Reserve policy presumes that populism is tied to economic factors that the Fed can influence. Sure, if the Fed could have used policy to make real GDP higher today than it was in the past, that might have eased economic concerns. But productivity slowdowns and labor market disruptions caused by trade shocks are not things that the Federal Reserve can correct. To the extent that these factors are what is driving populism, the Fed has only a limited ability to ease such concerns.

But that’s enough about the lede…

So the basis of the article is that Fed policy has been a failure. This policy failure undermined the standing of the institution, created a wave of populism, and caused the Fed to re-think its policies. I’d like to discuss each of these points individually using passages from the article.

Let’s begin by discussing the declining public opinion of the Fed. Hilsenrath shows in his article that the public’s assessment of the Federal Reserve has declined significantly since 2003. He also shows that people have a great deal less confidence in Janet Yellen than they had in Alan Greenspan. What does this tell us? Perhaps the public had an over-inflated view of the Fed to begin with. It is certainly reasonable to think that the public had an over-inflated view of Alan Greenspan. It seems to me that there is a simple negative correlation between what they think of the Fed and a moving average of real GDP growth. It is unclear whether there are implications beyond this simple correlation.

Regarding the rise in populism, everyone has their grand theory of Donald Trump and (to a lesser extent) Bernie Sanders. Here’s Hilsenrath:

For anyone seeking to explain one of the most unpredictable political seasons in modern history, with the rise of Donald Trump and Bernie Sanders, a prime suspect is public dismay in institutions guiding the economy and government. The Fed in particular is a case study in how the conventional wisdom of the late 1990s on a wide range of economic issues, including trade, technology and central banking, has since slowly unraveled.

Do Trump and Sanders supporters have lower opinions of the Fed than the population as a whole? Who knows? We are not told in the article. Also, has the conventional wisdom been upended? Whose conventional wisdom? Economists’? The public’s?

So populism and the reduced standing of the Fed appear to be correlated with things that are themselves only potentially correlated with Fed policy. Hardly the smoking gun suggested by the lede. So what about the re-thinking that is going on at the Fed?

First, officials missed signs that a more complex financial system had become vulnerable to financial bubbles, and bubbles had become a growing threat in a low-interest-rate world.

Secondly, they were blinded to a long-running slowdown in the growth of worker productivity, or output per hour of labor, which has limited how fast the economy could grow since 2004.

Thirdly, inflation hasn’t responded to the ups and downs of the job market in the way the Fed expected.

These are interesting. Let’s take them point-by-point:

1. Could the Fed have prevented the housing bust and the subsequent financial crisis? It is unclear. But even if they completely missed this, could not policy have responded once these effects became apparent?

2. What does this even mean? If there is a productivity slowdown that explains lower growth, then shouldn’t the Federal Reserve get a pass on the low growth of real GDP over the past several years? Shouldn’t we blame low productivity growth?

3. Who believes in the Phillips Curve as a useful guide for policy?

My criticism of Hilsenrath’s article should not be read as a defense of the Fed’s monetary policy. For example, critics might think I’m being a bit hypocritical since I have argued in my own academic work that the maintenance of stable nominal GDP growth likely contributed to the Great Moderation. The collapse of nominal GDP during the most recent recession would therefore seem to indicate a policy failure on the part of the Fed. However, notice how much different that argument is in comparison to the arguments made by Hilsenrath. The list provided by Hilsenrath suggests that the problems with Fed policy are (1) the Fed isn’t psychic, (2) the Fed didn’t understand that slow growth is not due to their policy, and (3) that the Phillips Curve is dead. Only this third component should factor into a re-think. But for most macroeconomists that re-think began taking place as early as Milton Friedman’s 1968 AEA Presidential Address — if not earlier. More recently, during an informal discussion at a conference, I observed Robert Lucas tell Noah Smith rather directly that “the Phillips Curve is dead” (to no objection) — so the Phillips Curve hardly represents conventional wisdom.

In fact, Hilsenrath’s logic regarding productivity is odd. He writes:

Fed officials, failing to see the persistence of this change [in productivity], have repeatedly overestimated how fast the economy would grow. The Fed has projected faster growth than the economy delivered in 13 of the past 15 years and is on track to do so again this year.

Private economists, too, have been baffled by these developments. But Fed miscalculations have consequences, contributing to start-and-stop policies since the crisis. Officials ended bond-buying programs, thinking the economy was picking up, then restarted them when it didn’t and inflation drifted lower.

There are 3 points that Hilsenrath is making here:

1. Productivity caused growth to slow.

2. The slowdown in productivity caused the Fed to over-forecast real GDP growth.

3. This has resulted in a stop-go policy that has hindered growth.

I’m trying to make sense of how these things fit together. Most economists think of productivity as being completely independent of monetary policy. So if low productivity growth is causing low GDP growth, then this is something that policy cannot correct. However, point 3 suggests that low GDP growth is explained by tight monetary policy. This is somewhat of a contradiction. For example, if the Fed over-forecast GDP growth, then the implication seems to be that if they’d forecast growth perfectly, they would have had more expansionary policy, which could have increased growth. But if growth was low due to low productivity, then a more expansionary monetary policy would have had only a temporary effect on real GDP growth. In fact, during the 1970s, the Federal Reserve consistently over-forecast real GDP. However, in contrast to recent policy, the Fed saw these over-forecasts as a failure of its policies rather than as a productivity slowdown and tried to expand monetary policy further. What Athanasios Orphanides’s work has shown is that the big difference between policy in the 1970s and the Volcker-Greenspan era was that policy in the 1970s put much more weight on the output gap. Since the Fed was over-forecasting GDP, it thought it was observing negative output gaps and subsequently conducted expansionary policy. The result was stagflation.

So is Hilsenrath saying he’d prefer that policy be more like the 1970s? One cannot simultaneously argue that growth is low because of low productivity and tight monetary policy. (Even if it is some combination of both, then monetary policy is of second-order importance and that violates Hilsenrath’s thesis.)

In some sense, what is most remarkable is how far the pendulum has swung in 7 years. Back in 2009, very few people argued that tight monetary policy was to blame for the financial crisis or the recession — heck, Scott Sumner started a blog primarily because he didn’t see anyone making the case that tight monetary policy was to blame. Now, in 2016, the Wall Street Journal is publishing stories that blame the Federal Reserve for all of society’s ills. There is a case to be made that monetary policy played a role in causing the recession and/or in explaining the slow recovery. Unfortunately, this article in the WSJ isn’t it.

The New Keynesian Failure

In a previous post, I defended neo-Fisherism. A couple of days ago I wrote a post in which I discussed the importance of monetary semantics. I would like to tie together two of my posts so that I can present a more comprehensive view of my own thinking regarding monetary policy and the New Keynesian model.

My post on neo-Fisherism was intended to provide support for John Cochrane who has argued that the neo-Fisher result is part of the New Keynesian model. Underlying this entire issue, however, is what determines the price level and inflation. In traditional macroeconomics, the quantity theory was always lurking in the background (if not the foreground). Under the quantity theory, the money supply determined the price level. Inflation was always and everywhere a monetary phenomenon.

The New Keynesian model dispenses with money altogether. The initial impulse for doing so was the work of Michael Woodford, who wrote a paper discussing how monetary policy would be conducted in a world without money. The paper (to my knowledge) was not initially an attempt to remove money completely from the analysis, but rather to figure out a role for monetary policy once technology had developed to the point at which the monetary base was arbitrarily small. However, it seems that once people realized that it was possible to exclude money completely, this literature sort of took that ball and ran with it. The case for doing so was further bolstered by the fact that money already seemed to lack any empirical relevance.

Of course, there are a few fundamental problems with this literature. First, my own research shows that the empirical analysis that claims money is unimportant is actually the result of the fact that the Federal Reserve publishes monetary aggregates that are not consistent with index number theory, aggregation theory, or economic theory. When one uses Divisia monetary aggregates, the empirical evidence is consistent with standard monetary predictions. This is not unique to my paper. My colleague, Mike Belongia, found similar results when he re-examined empirical evidence using Divisia aggregates.

Second, while Woodford emphasizes in Interest and Prices that a central bank’s interest rate target could be determined by a channel system, in the United States the rate is still determined through open market operations (although now that the Fed is paying interest on reserves, it could conceivably use a channel system). This distinction might not seem to be important, but as I alluded to in my previous post, the federal funds rate is an intermediate target. How the central bank influences the intermediate target is important for the conduct of policy. If the model presumes that the mechanism is different from reality, this is potentially important.

Third, Ed Nelson has argued that the quantity theory is actually lurking in the background of the New Keynesian model and that New Keynesians don’t seem to realize it.

With all that being said, let’s circle back to neo-Fisherism. Suppose that a central bank announced that they were going to target a short-term nominal interest rate of zero for seven years. How would they accomplish this?

A good quantity theorist would suggest that there are two ways they might try to accomplish this. The first way would be to continue to use open market purchases to prevent the interest rate from ever rising. However, open market purchases would be inflationary. Since higher inflation expectations put upward pressure on nominal interest rates, this sort of policy is unsustainable.

The second way to accomplish the goal of the zero interest rate is to set money growth such that the sum of expected inflation and the real interest rate is equal to zero. In other words, the only sustainable way to commit to an interest rate of zero over the long term is deflation (or low inflation if the real interest rate is negative).
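In terms of the Fisher equation (standard notation):

$$
i = r + E[\pi] = 0 \quad\Longrightarrow\quad E[\pi] = -r,
$$

so a sustained peg at zero requires expected deflation when the real rate $r$ is positive, and merely low inflation when $r$ is negative, which is exactly the claim above.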

The New Keynesians, however, think that the quantity theory is dead and that we can think about policy without money. And in the New Keynesian model, one can supposedly peg the short-term nominal interest rate at zero for a short period of time. Not only is this possible, but it should also lead to an increase in inflation and economic activity. Interestingly, however, as my post on neo-Fisherism demonstrated, this isn’t what happens in their model. According to their model, setting the nominal interest rate at zero leads to a reduction in the rate of inflation. This is so because (1) the nominal interest rate satisfies the Fisher equation, and (2) people have rational expectations. (Michael Woodford has essentially admitted this, but now wants to relax the assumption of rational expectations.)

So why am I bringing all of this up again and why should we care?

Well, it seems that Federal Reserve Bank of St. Louis President Jim Bullard recently gave a talk in which he discussed two competing hypotheses. The first is that lower interest rates should cause higher inflation (the conventional view of New Keynesians and others). The second is that lower interest rates should result in lower inflation. As you can see if you look through his slides, he seems to suggest that the neo-Fisher view is correct since we have a lower interest rate and we have lower inflation.

In my view, however, he has drawn the wrong lesson because he has ignored a third hypothesis. The starting point of his analysis seems to be that the New Keynesian model is the useful framework for analysis and that, given this, the only question is which argument about interest rates is correct: the modified Woodford argument or the neo-Fisherite one.

However, a third hypothesis is that the New Keynesian model is not the correct model to use for analysis. In the quantity theory view, inflation declines when money growth declines. Thus, if you see lower interest rates, the only way that they are sustainable for long periods of time is if money growth (and therefore inflation) declines as well. Below is a graph of Divisia M4 growth from 2004 to the present. Note that the growth rate seems to have permanently declined.

Also, note the following scatterplot between a 1-month lag in money growth and inflation. If you were to fit a line, you would find that the relationship is positive and statistically significant.
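A minimal sketch of that scatterplot exercise is below. The file and column names are placeholders for whatever source one uses (Divisia M4 is published by the Center for Financial Stability); this is the shape of the exercise, not the original analysis.

```python
import pandas as pd
import statsmodels.api as sm

# Placeholder file: a monthly series of Divisia M4 growth and inflation.
df = pd.read_csv("divisia_m4_and_inflation.csv")  # hypothetical file name

df["money_growth_lag1"] = df["m4_growth"].shift(1)  # 1-month lag of money growth
df = df.dropna()

X = sm.add_constant(df["money_growth_lag1"])
fit = sm.OLS(df["inflation"], X).fit()
print(fit.summary())  # check the sign and significance of the slope coefficient
```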

So perhaps money isn’t so useless after all.

To get back to my point from a previous post, it seems that discussions of policy need to take seriously the following. First, the central bank needs to specify its target variable (i.e. a specific numerical value for a variable, such as inflation or nominal GDP). Second, the central bank needs to describe how it is going to adjust its instrument (the monetary base) to hit its target. Third, the central bank needs to specify the transmission mechanism through which this will work. In other words, what intermediate variables will tell the central bank whether or not it is likely to hit its target.

As it currently stands, the short-term nominal interest rate is the Federal Reserve’s preferred intermediate variable. Nonetheless, the federal funds rate has been close to zero for six and a half years (!) and yet inflation has not behaved in the way that policy would predict. At what point do we begin to question using this as an intermediate variable?

The idea that low nominal interest rates are associated with low inflation and high nominal interest rates are associated with high inflation is the Fisher equation. Milton Friedman argued this long ago. The New Keynesian model assumes that the Fisher identity holds, but it has no mechanism to explain why. It’s just true in equilibrium and therefore has to happen. Thus, when the nominal interest rate rises and individuals have rational expectations, they just expect more inflation and it happens. Pardon me if I don’t think that sounds like the world we live in. New Keynesians also don’t seem to think that this sounds like the world we live in, but this is their model!

To me, the biggest problem with the New Keynesian model is the lack of any mechanism. Without understanding the mechanisms through which policy works, how can one begin to offer policy advice and determine the likelihood of success? At the very least one should take steps to ensure that the policy mechanisms they think exist are actually in the model.

But the sheer dominance of the New Keynesian model in policy circles also leads to false dichotomies. Jim Bullard is basically asking: does the world look like what the New Keynesian model says, or like what the New Keynesians themselves say? Maybe the answer is that it doesn’t look like either alternative.

Understanding John Taylor

There has been a great deal of debate regarding Taylor rules recently. The U.S. House of Representatives proposed a bill that would require the Federal Reserve to articulate its policy in the form of a rule, such as the Taylor Rule. This bill created some debate about whether or not the Federal Reserve should adopt the Taylor Rule. In reality, the bill did not require the Federal Reserve to adopt the Taylor Rule, but rather used the Taylor Rule as an example.

In addition, John Taylor has been advocating the Taylor Rule as a guide to policy as well as attributing the recent financial crisis/recession to deviations from the Taylor Rule. While it should not surprise anyone that Taylor has been advocating a rule of his own design that bears his name, he has faced criticism regarding his recent advocacy of the rule and his views on the financial crisis.

Those who know me know that I am no advocate of Taylor Rules or the Taylor Rule interpretation of monetary policy (see here, here, and here). Nonetheless, a number of people have simply dismissed Taylor’s arguments because they think that he is either (a) deliberately misleading the public for ideological reasons, or (b) mistaken about the literature on monetary policy. Neither of these views is charitable to Taylor since they imply that he is either being deliberately obtuse or does not understand the very literature that he is citing. I myself am similarly puzzled by some of Taylor’s comments. Nonetheless, it seems to me that an attempt to better understand Taylor’s position can not only help us to understand Taylor himself, but it might also clarify some of the underlying issues regarding monetary policy. In other words, rather than simply accept the easy (uncharitable) view of Taylor, let’s see if there is something to learn from Taylor’s position. (I am not going to link the dismissive views of Taylor. However, I will address some of the substantive criticism raised by Tony Yates later in the post.)

Let’s begin with Taylor’s position. This is a lengthy quote from Taylor’s blog, but I think that this a very explicit outline of Taylor’s ideas regarding monetary policy history:

Let me begin with a mini history of monetary policy in the United States during the past 50 years. When I first started doing monetary economics in the late 1960s and 1970s, monetary policy was highly discretionary and interventionist. It went from boom to bust and back again, repeatedly falling behind the curve, and then over-reacting. The Fed had lofty goals but no consistent strategy. If you measure macroeconomic performance as I do by both price stability and output stability, the results were terrible. Unemployment and inflation both rose.

Then in the early 1980s policy changed. It became more focused, more systematic, more rules-based, and it stayed that way through the 1990s and into the start of this century. Using the same performance measures, the results were excellent. Inflation and unemployment both came down. We got the Great Moderation, or the NICE period (non-inflationary consistently expansionary) as Mervyn King put it. Researchers like John Judd and Glenn Rudebush at the San Francisco Fed and Richard Clarida, Mark Gertler and Jordi Gali showed that this improved performance was closely associated with more rules-based policy, which they defined as systematic changes in the instrument of policy — the federal funds rate — in response to developments in the economy.

[…]

But then there was a setback. The Fed decided to hold the interest rate very low during 2003-2005, thereby deviating from the rules-based policy that worked well during the Great Moderation. You do not need policy rules to see the change: With the inflation rate around 2%, the federal funds rate was only 1% in 2003, compared with 5.5% in 1997 when the inflation rate was also about 2%. The results were not good. In my view this policy change brought on a search for yield, excesses in the housing market, and, along with a regulatory process which broke rules for safety and soundness, was a key factor in the financial crisis and the Great Recession.

[…]

This deviation from rules-based monetary policy went beyond the United States, as first pointed out by researchers at the OECD, and is now obvious to any observer. Central banks followed each other down through extra low interest rates in 2003-2005 and more recently through quantitative easing. QE in the US was followed by QE in Japan and by QE in the Eurozone with exchange rates moving as expected in each case. Researchers at the BIS showed the deviation went beyond OECD and called it the Global Great Deviation. Rich Clarida commented that “QE begets QE!” Complaints about spillover and pleas for coordination grew. NICE ended in both senses of the word. World monetary policy now seems to have moved into a strategy-free zone.

This short history demonstrates that shifts toward and away from steady predictable monetary policy have made a great deal of difference for the performance of the economy, just as basic macroeconomic theory tells us. This history has now been corroborated by David Papell and his colleagues using modern statistical methods. Allan Meltzer found nearly the same thing in his more detailed monetary history of the Fed.

My reading of this suggests that there are two important points that we can learn about Taylor’s view. First, Taylor’s view of the Great Moderation is actually quite different from the New Keynesian consensus — even though he seems to think that they are quite similar. The typical New Keynesian story about the Great Moderation is that prior to 1979, the Federal Reserve failed to follow the Taylor principle (i.e., raise the nominal interest rate more than one-for-one with an increase in inflation, or in other words, raise the real interest rate when inflation rises). In contrast, Taylor’s view seems to be that the Federal Reserve became more rules-based. However, a Taylor rule with different parameters than Taylor’s original rule can still be consistent with rules-based policy. So what Taylor seems to mean is that if we look at the federal funds rate before and after 1979, it seems to be consistent with his proposed Taylor Rule in the latter period, but there are significant deviations from that rule in the former period.

This brings me to the second point. Taylor’s view about the importance of the Taylor Rule is one based on empirical observation. What this means is that his view is quite different from those working in the New Keynesian wing of the optimal monetary policy literature. To see how Taylor’s view is different from the New Keynesian literature, we need to consider two things that Taylor published in 1993.

The first source that we need to consult is Taylor’s book, Macroeconomic Policy in a World Economy. In that book Taylor presents a rational expectations model and in the latter chapters uses the model to compare monetary policy rules that look at inflation, real output, and nominal income. He finds that the preferred monetary policy rule in the countries that he considers is akin to what we would now call a Taylor Rule. In other words, the policy that reduces the variance of output and inflation is a rule that responds to both inflation and the output gap.

However, the canonical Taylor Rule and the one that John Taylor now advocates does not actually appear in the book (the results presented in the book suggest different coefficients on inflation and output). The canonical Taylor Rule in which the coefficient on inflation is equal to 1.5 and the coefficient on the output gap is equal to 0.5 appears in Taylor’s paper “Discretion versus policy rules in practice”:

[Excerpt from Taylor (1993) presenting the rule r = p + 0.5y + 0.5(p − 2) + 2, where r is the federal funds rate, p is the rate of inflation over the previous four quarters, and y is the percent deviation of real GDP from target.]

Thus, as we can see in the excerpt from Taylor’s paper, the reason that he finds this particular policy rule desirable is that it seems to describe monetary policy during a time in which policymakers seemed to be doing well.

However, Taylor is also quick to point out that the Federal Reserve needn’t adopt this rule, but rather that the rule should be one of the indicators that the Federal Reserve looks at when conducting policy:

[Excerpt from Taylor (1993) cautioning that the rule is meant to serve as one guideline among other indicators rather than to be followed mechanically.]

Indeed, Taylor’s views on monetary policy do not seem to have changed much from his 1993 paper. He still advocates using the Taylor Rule as a guide to monetary policy rather than as a formula required for monetary policy.

However, what is most important is the following distinction between Taylor’s 1993 book and Taylor’s 1993 paper. In his book, Taylor uses evidence from simulations to show that a feedback rule for monetary policy in which the central bank responds to inflation and the output gap (rather than inflation alone or nominal income) is the preferable policy among the three alternatives he considers. In contrast, in his 1993 paper, we begin to see that Taylor views the version of the rule in which the coefficient on inflation is 1.5 and the coefficient on the output gap is 0.5 as a useful benchmark for policy because it seems to describe policy well over the 1987-1992 period, a period that Taylor would classify as one of good policy. In other words, Taylor’s advocacy of the conventional 1.5/0.5 Taylor Rule seems to be informed by the empirical observation that when policy is good, it also tends to coincide with this rule.

This is also evident in Taylor’s 1999 paper entitled, “A Historical Analysis of Monetary Policy Rules.” In this paper, Taylor does two things. First, he estimates reaction functions for the Federal Reserve to determine the effect of inflation and the output gap on the federal funds rate. In doing so, he shows that the Greenspan era seems to have produced a policy consistent with the conventional 1.5/0.5 version of the Taylor Rule whereas for the pre-1979 period, this was not the case. Again, this provides Taylor with some evidence that when Federal Reserve policy is approximately consistent with the conventional Taylor Rule, the corresponding macroeconomic outcomes seem to be better.

This is best illustrated by the second thing that Taylor does in the paper. In the last section of the paper, Taylor plots the path of the federal funds rate if monetary policy had followed a Taylor rule and the actual federal funds rate for the same two eras described above. What the plots of the data show is that during the 1970s, when inflation was high and when nobody would really consider macroeconomic outcomes desirable, the Federal Reserve systematically set the federal funds rate below where they would have set it had they been following the Taylor Rule. In contrast, when Taylor plots the federal funds rate implied by the conventional Taylor Rule and the actual federal funds rate for the Greenspan era (in which inflation was low and the variance of the output gap was low), he finds that policy is very consistent with the Taylor Rule.
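A minimal sketch of that exercise, using the conventional 1.5/0.5 parameterization (the data arrays are illustrative placeholders, not Taylor's actual series):

```python
import numpy as np

def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Conventional Taylor (1993) rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*gap."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Illustrative data, not actual 1970s series.
inflation = np.array([3.0, 4.5, 6.0, 8.0])     # percent
output_gap = np.array([1.0, 0.5, -1.0, -2.0])  # percent of potential
actual_ffr = np.array([4.0, 5.0, 5.5, 6.0])    # percent

implied = taylor_rule(inflation, output_gap)
print(implied - actual_ffr)  # positive values -> the actual rate was below the rule
```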

He argues on the basis of this empirical observation that the deviations from the Taylor Rule in the earlier period represent “policy mistakes”:

…if one defines policy mistakes as deviations from such a good policy rule, then such mistakes have been associated with either high and prolonged inflation or drawn-out periods of low capacity utilization, much as simple monetary theory would predict. (Taylor, 1999: 340).

Thus, when we think about John Taylor’s position, we should recognize that Taylor’s position on monetary policy and the Taylor Rule is driven much more by empirical evidence than it is by model simulations. He sees periods of good policy as largely consistent with the conventional Taylor Rule and periods of bad policy as inconsistent with the conventional Taylor Rule. This reinforces his view that the Taylor Rule is a good indicator about the stance of monetary policy.

Taylor’s advocacy of the Taylor Rule as a guide for monetary policy is very different from the related New Keynesian literature on optimal monetary policy. That literature, beginning with Rotemberg and Woodford (1999) — incidentally writing in the same volume as Taylor’s 1999 paper, which was edited by Taylor — derives welfare criteria using the utility function of the representative agent in the New Keynesian model. In the context of these models, it is straightforward to show that the optimal monetary policy is one that minimizes the weighted sum of the variance of inflation and the variance of the output gap.

I bring this up because this literature reached different conclusions regarding the coefficients in the Taylor Rule. For example, as Tony Yates explains:

…if you take a modern macro model and work out what is the optimal Taylor Rule – tune the coefficients so that they maximise social welfare, properly defined in model terms, you will get very large coefficients on the term in inflation. Perhaps an order of magnitude greater than JT’s. This same result is manifest in ‘pure’ optimal policies, where we don’t try to calculate the best Taylor Rule, but we calculate the best interest rate scheme in general. In such a model, interest rates are ludicrously volatile. This lead to the common practice of including terms in interest rate volatility in the criterion function that we used to judge policy. Doing that dials down interest rate volatility. Or, in the exercise where we try to find the best Taylor Rule, it dials down the inflation coefficient to something reasonable. This pointed to a huge disconnect between what the models were suggesting should happen, and what central banks were actually doing to tame inflation [and what John Taylor was saying they should do]. JT points out that most agree that the response to inflation should be greater than one for one. But should it be less than 20? Without an entirely arbitary term penalising interest rate volatility, it’s possible to get that answer.

I suspect that if one brought up this point to Taylor, he would suggest that these fine-tuned coefficients are unreasonable. As evidence in favor of his position, he would cite the empirical observations discussed above. Thus, there is a disconnect between what the Taylor Rule literature has to say about Taylor Rules and what John Taylor has to say about Taylor Rules. I suspect the difference is that the literature is primarily based on considering optimal monetary policy in terms of a theoretical model whereas John Taylor’s advocacy of the Taylor Rule is based on his own empirical observations.

Nonetheless, as Tony pointed out to me in conversation, if that is indeed the position that Taylor would take, then quotes like this from Taylor’s recent WSJ op-ed are misleading, “The summary is accurate except for the suggestion that I put the rule forth simply as a description of past policy when in fact the rule emerged from years of research on optimal monetary policy.” I think that what Taylor is really saying is that Taylor Rules, defined generally as rules in which the central bank adjusts the interest rate to changes in inflation and the output gap, are consistent with optimal policy rather than arguing that his exact Taylor Rule is the optimal policy in these models. Nonetheless, I agree with Tony that this statement is misleading regardless of what Taylor meant when he wrote it.

But suppose that we give Taylor the benefit of the doubt and say that this statement was unintentionally misleading. There is still the bit about the financial crisis to discuss, and it is on this subject that there are questions that need to be asked of Taylor.

In Taylor’s book Getting Off Track, he argues that deviations from the Taylor Rule caused the financial crisis. To demonstrate this, he first shows that from 2003 to 2006, the federal funds rate was approximately 2 percentage points below the rate implied by the conventional Taylor Rule. He then provides empirical evidence regarding the effects of the deviations from the Taylor Rule on housing starts. He constructs a counterfactual to suggest that if the Federal Reserve had followed the Taylor Rule, then housing starts would have been between 200,000 and 400,000 units lower each year between 2003 and 2006 than what we actually observed. He also shows that deviations from the Taylor Rule in Europe can explain changes in housing investment for a sample that includes Germany, Austria, Italy, the Netherlands, Belgium, Finland, France, Spain, Greece, and Ireland.

Taylor therefore argues that by keeping interest rates too low for too long, the Federal Reserve (and the ECB by following suit with low interest rates) created the housing boom that ultimately went bust and led to a financial crisis.

In a separate post, Tony Yates responds to this hypothesis by making the following points:

2. John’s rule was shown to deliver pretty good results in variations on a narrow class of DSGE models. The crisis has cast much doubt on whether this class is wide enough to embrace the truth. In particular, it typically left out the financial sector. Modifications of the rule such that central bank rates respond to spreads can be shown to deliver good results in prototype financial-inclusive DSGE models. But these models are just a beginning, and certainly not the last word, on how to describe the financial sector. In models in which the Taylor Rule was shown to be good, smallish deviations from it don’t cause financial crises, therefore, because almost none of these models articulate anything that causes a financial crisis. How can you put a financial crisis in real life down to departures from a rule whose benefits were derived in a model that had no finance? There is a story to be told. But it requires much alteration of the original model. Perhaps nominal illusion; misapprehension of risk, learning, and runs. And who knows what the best monetary policy would be in that model.

3. In the models in which the TR is shown to be good, the effects of monetary policy are small and relatively short-lived. To most in the macro profession, the financial crisis looks like a real phenomenon, building up over 2-2.5 decades, accompanying relative nominal stability. Such phenomena don’t have monetary causes, at least not seen through the spectacles of models in which the TR does well. Conversely, if monetary policy is deduced to have two decade long impulses, then we must revise our view about the efficacy of the Taylor Rule.

Thus, we are back to the literature on optimal monetary policy. Again, I suspect that if one raised these points to John Taylor, he might argue that (i) his empirical evidence on the financial crisis trumps the optimal policy literature (which admittedly has issues, like the lack of a financial sector in many of these models), (ii) his empirical analysis suggests that a Taylor Rule might be optimal in a properly modified model, or (iii) regardless of whether the conventional Taylor Rule is optimal, the deviation from this type of policy is harmful, as the empirical evidence suggests.

Nonetheless, this brings me to my own questions about/criticisms of Taylor’s approach:

1. Suppose that Taylor believes that point (i) is true. If this is the case, then citing the optimal monetary policy literature as supportive of the Taylor Rule in the WSJ is not simply innocently misleading the readers, it is deliberately misleading the readers by choosing to only cite this literature when it fits with his view. One should not selectively cite literature when it is favorable to one’s view and then not cite the same literature when it is no longer favorable.

2. As Tony Yates points out, point (ii) is impossible to answer.

3. Regarding point (iii), the question is whether or not empirical evidence is sufficient to establish the Taylor Rule as a desirable policy. For example, as the work of Athanasios Orphanides demonstrates, conclusions about whether the Federal Reserve followed the Taylor principle (i.e., having a coefficient on inflation greater than 1) in the pre- and post-Volcker eras depend on the data that one uses in the analysis. When one uses the data that the Federal Reserve had in real time, the problems associated with policy have more to do with the responsiveness of the Fed to the output gap than they do with the rate of inflation. In other words, the Federal Reserve does not typically do a good job forecasting the output gap in real time. This is a critical flaw in the Taylor Rule because it implies that even if the Taylor Rule is optimal, the central bank might not be able to set policy consistent with the rule.

In other words, if the deviations from the Taylor Rule have such a large effect on economic outcomes and it is very difficult for the central bank to maintain a policy consistent with the Taylor Rule, then perhaps this isn’t a desirable policy after all.

4. One has to stake out a position on models and the data. Taylor’s initial advocacy of this type of rule seems to have been driven by the model simulations that he conducted. However, his more recent advocacy seems to be driven by the empirical evidence in his 1993 and 1999 papers and his book, Getting Off Track. But the empirical evidence should be consistent with the model simulations, and it is not clear that this is true. In other words, one should not make statements about the empirical importance of a rule when the outcome of deviating from that rule is not even a feature of the model that was used to do the simulations.

5. In addition, the Taylor Rule lacks the intuition of, say, a money growth rule. With a money growth rule, the analysis is simply based on quantity-theoretic arguments. If one targets a particular rate of growth of the monetary aggregate (assuming that velocity is stable), we have a good idea about what nominal income growth (or inflation) will be; the identity is written out just after this list. In addition, the quantity theory is well known (if not always well understood) and can be shown to be consistent with a large number of models (even models with flexible prices). This sort of rule for policy is intuitive. If you know that in the long run money growth causes inflation, then the way to prevent inflation is to limit money growth.

It is not so clear what the intuition is behind the Taylor Rule. It says that we need to tighten policy when inflation rises and/or when real GDP is above potential. That part is fairly intuitive. But what are the “correct” parameters? And why is Taylor’s preferred parameterization such a good rule? Is it based solely on his empirical work, given that the optimal monetary policy literature suggests alternatives?

6. Why did things change between the 1970s and the early 2000s? In his 1999 paper, Taylor argues that the Federal Reserve kept interest rates too low for too long and we ended up with stagflation. In his book Getting Off Track, he implies that when the Federal Reserve kept interest rates too low for too long we ended up with a housing boom and bust. But why wasn’t there inflation/stagflation? Why was there such a different response to interest rates being too low in the early 2000s as opposed to the 1970s? These are questions that empirics alone cannot answer.
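To write out the quantity-theoretic intuition from point 5 above, take the equation of exchange $MV = PY$ in growth rates (a standard identity, not anything specific to Taylor):

$$
\%\Delta M + \%\Delta V = \%\Delta P + \%\Delta Y.
$$

With velocity stable ($\%\Delta V \approx 0$), a money growth target pins down nominal income growth directly and, given real growth, the inflation rate.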

In any event, I hope that this post brings some clarity to the debate.