## On What Monetarism Really Is/Was

Paul Krugman has a recent post on why monetarism failed. Subsequently, a number of economics bloggers have replied with their views on monetarism. I don’t have time to summarize all of the viewpoints espoused in these posts, but a fundamental problem runs throughout them: each author’s description of monetarism seems to be merely his or her opinion about the distinct characteristics of monetarism. Many of these opinions do not provide anyone with more than a surface-level view of monetarism (i.e., something one might find in a principles or intermediate macro textbook).

In reality, Old Monetarists not only had views on money and inflation, but also had important views on the monetary transmission mechanism. The role that Old Monetarists saw for money was much more nuanced than the crude quantity theory vision that is often attributed to them. On this note, it is probably more valuable to look to the academic literature that attempts to summarize these ideas and put them into context for a modern reader.

A good place to start for anyone interested in Old Monetarist ideas is the work of Ed Nelson. Nelson is someone who has spent his career studying these ideas and trying to test their importance within modern macroeconomic frameworks. He is also currently working on a book about Milton Friedman’s influence on the monetary policy debate in the United States. To get a sense of what Old Monetarists really believed and why those ideas are relevant, I would recommend Nelson’s 2003 JME paper “The Future of Monetary Aggregates in Monetary Policy Analysis.” Here is the abstract:

This paper considers the role of monetary aggregates in modern macroeconomic models of the New Keynesian type. The focus is on possible developments of these models that are suggested by the monetarist literature, and that in addition seem justified empirically. Both the relation between money and inflation, and between money and aggregate demand, are considered. Regarding the first relation, it is argued that both the mean and the dynamics of inflation in present-day models are governed by money growth. This relationship arises from a conventional aggregate-demand channel; claims that an emphasis on the link between monetary aggregates and inflation requires a direct channel connecting money and inflation, are wide of the mark. The relevance of money for aggregate demand, in turn, lies not via real balance effects (or any other justification for money in the IS equation), but on money’s ability to serve as a proxy for the various substitution effects of monetary policy that exist when many asset prices matter for aggregate demand. This role for monetary aggregates, which is supported by empirical evidence, enhances the value of money to monetary policy.

Here is the working paper version that is not behind a paywall.

## On Public Infrastructure Investment

There are two popular narratives about our infrastructure in the United States. The first is that our infrastructure is crumbling. The second is that our infrastructure spending is allocated based on its political value rather than its economic value. Maybe you believe one of these stories. Maybe you believe both. Maybe you believe neither. Regardless, these narratives are indicative of two important questions. How can we efficiently manage our public infrastructure? And how can we ensure that infrastructure investment isn’t used as a political tool? I have a new paper that proposes an answer to both questions. My proposal is to create a rule of law for public infrastructure based on option values. This rule of law would ensure that infrastructure is maintained efficiently and also that politicians would not be able to use infrastructure spending as a political tool.

The standard way to evaluate public infrastructure projects is to figure out the benefits of the infrastructure over its entire lifespan and then compute the present value of those benefits. Then you do the same thing with the costs. When you subtract the present value of the costs from the present value of the benefits, you get something called the net present value. Infrastructure investments are evaluated using a positive net present value criterion. In other words, as long as the present discounted value of the benefits exceeds the present discounted value of the costs, the project is desirable.
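The net present value criterion described above can be sketched in a few lines of Python. The cash flows and discount rate here are purely hypothetical numbers for illustration.

```python
def npv(flows, rate):
    """Present value of a stream of future flows, discounted at `rate`.

    flows[t] is the net flow received t+1 periods from now.
    """
    return sum(f / (1 + rate) ** (t + 1) for t, f in enumerate(flows))

# Hypothetical project: benefits and costs over a three-year lifespan.
benefits = [100.0, 100.0, 100.0]
costs = [250.0, 10.0, 10.0]   # large up-front construction cost, then upkeep
rate = 0.05

project_npv = npv(benefits, rate) - npv(costs, rate)
# Under the standard criterion, undertake the project if project_npv > 0.
```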

In theory the net present value approach seems like a good idea. Of course we would want the benefits to outweigh the costs. This approach, however, is much different than how a private firm would value their assets. A firm that owns a factory knows that the factory can eventually become outdated. To the firm the value of the factory is the sum of two components. The first component is the value of the factory to the firm if the firm never shuts down the factory or builds a new one. The second component is the value of the option to build a new factory or add to the current factory’s capacity in the future.

The same general concept is true of public infrastructure investment. The value of any existing infrastructure is the value of the infrastructure over its entire lifetime plus the option value of replacing that infrastructure in the future. This option value is associated with a tradeoff. Since infrastructure depreciates over time, the value from existing infrastructure is declining. This means that as time goes by, the opportunity cost of replacing the infrastructure declines and therefore the option value of replacing the infrastructure rises. However, the longer the government waits to replace the infrastructure, the longer society has to wait to receive the benefit of replacement. This reduces the option value. My proposal suggests that the government should choose the value of the current infrastructure that optimally balances this tradeoff. What this ultimately implies is that the government should wait until the value of the current infrastructure is some fraction of the net present value of the proposed replacement project.
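To fix ideas, here is a hypothetical sketch of the decision rule described above. The paper's actual formulas are not reproduced here; the depreciation process, the numbers, and the threshold fraction are all illustrative assumptions, with the optimal fraction treated as a given parameter coming from the model.

```python
def years_until_replacement(current_value, depreciation, replacement_npv, fraction):
    """Simulate geometric decay of the existing infrastructure's value and
    report the first year at which it hits the replacement threshold.

    `fraction` is the threshold share of the replacement project's NPV;
    in the paper it would be pinned down by the option-value tradeoff,
    but here it is simply an assumed input.
    """
    threshold = fraction * replacement_npv
    year = 0
    value = current_value
    while value > threshold:
        value *= (1 - depreciation)  # benefits from existing assets decay
        year += 1
    return year

# Illustrative numbers only: replace once the existing asset's value
# falls to half of the replacement project's NPV.
t = years_until_replacement(current_value=100.0, depreciation=0.08,
                            replacement_npv=120.0, fraction=0.5)
```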

The reason that this option approach is preferable to the net present value approach is as follows. First, even though a current project has a positive net present value, this does not necessarily imply that now is the optimal time to undertake the project. Replacing infrastructure entails an opportunity cost associated with the foregone benefit that society would have received from the existing infrastructure. In other words, society might get greater value from the project if the government chooses to wait a little longer before replacing what is currently there. Second, my approach provides a precise moment at which it is optimal to replace the infrastructure. In contrast, the net present value approach says nothing about optimality; it’s simply a cost-benefit analysis. Given the possibility that society could get an even larger benefit in the future, the option approach should be strictly preferred. Third, the option approach provides an explicit way for the government to maintain an infrastructure fund. In my paper I provide a simple formula for computing the amount of money that needs to be in the fund. This formula is simple; it only needs to take into account the cost of each project and the relative distance that project is from its replacement threshold. This sort of fund is important because it would also allow the government to continue funding infrastructure projects at the optimal time even during a recession when infrastructure budgets, especially at the local level, are often cut.

The final and most significant benefit of my approach, however, is that it would provide the means for establishing a rule of law for public infrastructure projects. This rule of law should appeal to people across the ideological spectrum. I say that for the following reasons. First, if the government adopted this option value approach as a rule of law, this would require that the government fund any and all infrastructure projects that had reached their replacement thresholds. This would ensure that the infrastructure in the United States was maintained efficiently. Second, because the only projects that would receive funding would be those that had reached the replacement threshold, politicians would not be able to use infrastructure spending as a tool for reelection or repayment to supporters. As a result, the option approach would provide the means for a rule of law for infrastructure investment that is both transparent and efficient.

Establishing such a rule of law would be difficult. The politicians who benefit from allocating infrastructure investment for political reasons are the same ones who would have to vote on the legislation to enact this new rule. Nonetheless, there is evidence that politicians vote in favor of infrastructure projects that benefit their constituents, but vote against aggregate investment. If the group of politicians that benefits most from this state of affairs is small, then the legislation might be easier to pass. In addition, there is nothing to stop departments of transportation at both the state and federal level from calculating option values and making the data available to the public. This greater transparency, while not a rule of law, would at least be a step in the direction of more efficient management of our public infrastructure.

## The Importance of Safe Assets

A theme you often hear among bloggers, but a bit less so in seminars, is the idea that the supply of and demand for safe assets matter. David Beckworth is one such blogger who talks about this, but critics often find it hard to think about the macroeconomy in these terms since the role of money has been marginalized within the New Keynesian wing of macroeconomics. I say this because David’s intuitive explanation of safe asset equilibrium seems to be a cross between New Keynesian intuition and Old Monetarist intuition. He is trying to communicate his message to what is essentially the mainstream of the discipline, but by emphasizing something that isn’t generally in their models.

Along these lines, I was happy to stumble upon this paper by Caballero, Farhi, and Gourinchas. In my view this paper is quite similar to David’s views regarding safe assets and monetary policy and so I thought it might be interesting to outline the basic model in the paper and talk about the mechanisms for monetary policy.

The model is a modified version of an IS-LM model. The one modification to the model is a supply and demand condition for safe assets. Formally, the model consists of the following three equations:

$y - \bar{y} = -\delta (r - \bar{r}) - \delta_s (r^s - \bar{r}^s)$
$r^s = \max[\hat{r}^s + \phi(y - \bar{y}), 0]$
$s = \psi_y y + \psi_s r^s - \psi_{\Delta} (r - r^s)$

where $y$ is output, $r$ is the risky interest rate, $r^s$ is the rate on safe assets, $\hat{r}^s$ is the target interest rate, $s$ is the supply of safe assets, $\bar{y}$ is the natural rate of output, $\bar{r}$ is the natural risky interest rate, $\bar{r}^s$ is the natural safe interest rate, and the Greek letters are parameters. Inflation is assumed to be zero such that there is no difference between real and nominal interest rates.

This is a familiar IS-LM framework in which the first equation is an IS equation, the second is a Taylor rule subject to a zero lower bound, and the third determines the safe asset equilibrium.

The best interpretation of the safe asset equilibrium, as they describe it in the paper, is in terms of the flow of safe assets. According to this view, the flow demand for safe assets is a function of output, the rate of return on safe assets, and the risk premium ($r - r^s$). Correspondingly, $s$ in this interpretation is the net increase in the supply of safe assets.

Given that setup, let’s see what the model can tell us.

The first assumption that they make is that the supply of safe assets is unresponsive to the risk premium. In other words, in terms of the model, $\psi_{\Delta} = 0$. Given that many safe assets are exogenously supplied, this seems like a reasonable assumption.

Now, let’s think about the determination of the natural rate of interest. If the central bank sets the interest rate on safe assets equal to the natural rate, then output will be equal to potential (essentially by definition). It then follows from the IS equation that the risky interest rate is also equal to the natural risky interest rate. But how does one determine the natural interest rate?

Consider the equilibrium condition for safe assets. The natural safe interest rate is the rate that prevails when output is equal to potential. From the safe asset equilibrium condition it follows that

$\bar{r}^s = {{s - \psi_y \bar{y}}\over{\psi_s}}$

The central bank then needs to set $r^s = \hat{r}^s = \bar{r}^s$.

However, suppose that the net increase in the supply of safe assets is not high enough to keep up with the demand for new safe assets. In particular, suppose that the net increase in the supply of safe assets is so low that

$s < \psi_y \bar{y}$

In this scenario, the natural interest rate would be negative. However, from the Taylor rule, the market rate of interest is subject to a zero lower bound. As a result, the central bank cannot set the interest rate low enough to clear the market for safe assets. So what happens? Well, the central bank sets the safe interest rate as low as it can go, $r^s = 0$, which implies that output is pinned down by the net increase in the supply of safe assets:

$y = {{s}\over{\psi_y}}$

It then follows that $r > \bar{r}$. In other words, the risky interest rate is “too high” and the risk premium rises. But since the risky rate of interest is higher than the natural risky rate, the IS equation implies that output must fall in order to reduce the demand for safe assets and restore equilibrium.

The policy implication is that to escape this scenario, one needs to increase the supply of safe assets. Doing so raises output toward potential and thereby reduces the risk premium.
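To make these mechanics concrete, here is a small numerical sketch of the safe-asset block under the assumption $\psi_{\Delta} = 0$. The parameter values are arbitrary and purely illustrative.

```python
def solve(s, y_bar, psi_y, psi_s):
    """Solve the safe-asset block of the model when psi_Delta = 0.

    Returns (output, safe rate). If the natural safe rate,
    (s - psi_y * y_bar) / psi_s, is negative, the zero lower bound
    binds and output is pinned down by the supply of safe assets:
    y = s / psi_y.
    """
    r_s_natural = (s - psi_y * y_bar) / psi_s
    if r_s_natural >= 0:
        return y_bar, r_s_natural   # rate set at its natural level
    return s / psi_y, 0.0           # ZLB binds: output falls below potential

y_bar, psi_y, psi_s = 1.0, 0.5, 2.0

# Ample supply of safe assets: output at potential, positive safe rate.
y1, r1 = solve(s=0.6, y_bar=y_bar, psi_y=psi_y, psi_s=psi_s)

# Scarce safe assets (s < psi_y * y_bar): ZLB binds, output below potential.
y2, r2 = solve(s=0.4, y_bar=y_bar, psi_y=psi_y, psi_s=psi_s)
```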

As the authors note, early attempts at quantitative easing in the United States did exactly what the model would prescribe because they swapped the risky assets in the market for safe assets. Fiscal stimulus can also help, not through any sort of production done by the public sector, but because it increases the supply of safe assets (Treasuries).

## Does Monetary Policy Influence the Natural Rate?

Narayana Kocherlakota is now blogging. His most recent post concerns the equilibrium rate of interest, or the natural rate of interest, as it is sometimes called. Kocherlakota argues that those who would like to see higher interest rates should stop harping on the Federal Reserve and instead write their Congressman to encourage more fiscal stimulus. I think that this view is both conventional and also odd. Allow me to explain.

Consider the following simple thought experiment. Suppose that the market rate of interest targeted by the Federal Reserve, the federal funds rate, is equal to the equilibrium rate that would prevail in a perfect, frictionless world. We can think of this equilibrium rate as being the rate consistent with a consumption Euler equation. In particular, this implies that the real rate of interest is given by

Real natural interest rate = Rate of time preference + Expected Growth
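This back-of-the-envelope relation comes from a standard consumption Euler equation. With CRRA utility, so that $u'(c) = c^{-\sigma}$, a discount factor $\beta = 1/(1+\rho)$, and expected consumption growth $g$, the Euler equation $u'(c_t) = \beta (1 + r) u'(c_{t+1})$ implies

$1 + r = (1 + \rho) \left( {{c_{t+1}}\over{c_t}} \right)^{\sigma} \approx 1 + \rho + \sigma g$

so that $r \approx \rho + \sigma g$. With log utility ($\sigma = 1$), this is exactly the rate of time preference plus expected growth.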

Now suppose that the economy enters a recession and expected growth declines. This implies that the natural interest rate declines as well. If the central bank stands firm and does not adjust its target for the federal funds rate, then monetary policy is too tight. The market interest rate is above the natural interest rate. In a standard Wicksellian world, the fact that the market interest rate is “too high” would imply a further reduction in economic activity, which would further reduce the natural rate of interest. Again, if the central bank continues to stand firm, monetary policy actually tightens. The implication is that the central bank can passively tighten even though it hasn’t taken any action. In the pure credit economy of Wicksell, this process would continue to produce a deflationary spiral until the central bank equated the market interest rate with the natural rate.

Note this important point. In the Wicksellian model, there is an accelerationist effect. The accelerationist effect is due to the fact that tight monetary policy actually reduces the natural rate. Thus, to get back to normalcy what the central bank needs to do is not only to lower the market interest rate, but to lower the market rate below the natural rate. Once they do this, economic activity starts to increase and therefore so does the natural rate. To get back to normalcy, the central bank then has to increase the market rate faster than the natural rate is increasing until the two ultimately converge.
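The passive-tightening dynamics can be illustrated with a stylized simulation. To be clear, this is my own toy parameterization of the story above, not a model taken from the literature: the gap between the market rate and the natural rate depresses activity, which in turn drags the natural rate down.

```python
def simulate_natural_rate(r_market, r_natural, feedback, periods):
    """Stylized Wicksellian dynamics: whenever the market rate sits above
    the natural rate, activity contracts and the natural rate falls by
    `feedback` times the gap. A fixed market rate thus amounts to
    ever-tighter policy."""
    path = [r_natural]
    for _ in range(periods):
        r_natural -= feedback * (r_market - r_natural)
        path.append(r_natural)
    return path

# Market rate held fixed above an initially lower natural rate.
path = simulate_natural_rate(r_market=0.03, r_natural=0.02,
                             feedback=0.5, periods=5)
# The natural rate falls every period and the gap widens: the central
# bank passively tightens without taking any action.
```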

Note that this seems to be an odd way to conduct monetary policy. For example, imagine that you have a bow and arrow and there is some target in the distance. Suppose that every time you move the arrow to adjust your aim, the target moves as well. Nonetheless, this is the basic concept behind the Wicksellian model.

Kocherlakota argues in his post that the natural rate of interest is too low and that the market interest rate cannot get low enough to accomplish the task described above to correct for previously tight monetary policy. As a result, we need our Congressmen to go out and pass legislation that will get the economy moving and raise the natural interest rate toward the market interest rate.

I find this view strange for several reasons. First, in a Wicksellian framework, if the natural interest rate is below the market rate, this results in a deflationary spiral. Since this seems to be Kocherlakota’s model of choice, how does he explain the economic recovery? Second, standard economic theory suggests that the natural interest rate is the sum of the rate of time preference and expected growth. Real GDP growth (and expected real GDP growth) has been positive for some time. Even if we ignore my first point, why hasn’t this growth led to an increase in the natural interest rate?

My answer to these questions is that the federal funds rate essentially becomes a useless indicator at the zero lower bound. Quantitative easing is just open market operations by a different name. To demonstrate this, consider that measures of the so-called shadow federal funds rate have actually plummeted far below zero. Estimates of the shadow rate come from the framework initially described by Fischer Black in his paper “Interest Rates as Options.” In that paper, Black pointed out that the benefit of holding short term debt is that it includes an option to switch to currency if the yield ever becomes negative. What this implies, however, is that while the market interest rate can never go below zero, it is possible to estimate a shadow rate when the observed market rate hits the zero lower bound. Estimates of the shadow rate have gone as low as -3%. If we are to believe this methodology, what this says to me is that quantitative easing succeeded in doing what monetary policy was thought not to be able to do.
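Black's option argument implies a simple mapping between the unobserved shadow rate and the observed short rate: the observed rate is the shadow rate floored at zero. A minimal sketch:

```python
def observed_rate(shadow_rate):
    """Fischer Black's option interpretation: holders of short-term debt
    can always switch to currency (which yields zero), so the observed
    short rate is the shadow rate truncated at zero."""
    return max(shadow_rate, 0.0)

# Away from the ZLB the two coincide; at the ZLB the shadow rate can
# keep falling (estimates cited above reached roughly -3%) while the
# observed market rate stays stuck at zero.
rates = [(s, observed_rate(s)) for s in (0.02, 0.0, -0.03)]
```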

One could argue perhaps that the rounds of QE did not go far enough. For example, for the central bank to produce a significant recovery, the Wicksellian model suggests that the central bank must not only reduce the market interest rate, but that they should reduce the market interest rate below the natural rate. If they simply reduce the market rate to the natural rate, then this just stops the decline in economic activity rather than providing some catch-up growth.

Regardless of whether you believe that latter point, this post essentially makes the following claim: if I am correct in saying that the shadow rate is preferable to the federal funds rate as an indicator of monetary policy, then even if you believe in the Wicksellian model, you needn’t believe that we have to rely on fiscal policy to raise the natural rate of interest. What my discussion implies is that the central bank need only lower the shadow interest rate below the natural rate.

## Targets (Goals) Must Be Less Than or Equal to Instruments

In my most recent posts, I discussed the importance of using the proper semantics when discussing monetary policy. Central bankers should have an explicit numerical target for a goal variable. They should then describe how they are going to adjust their instrument to achieve this target, with particular reference to the intermediate variables that will provide guidance at higher frequencies. A related issue is that a central bank is limited in terms of its ultimate target (or targets) by the number of instruments it has at its disposal. This is discussed in an excellent post by Mike Belongia and Peter Ireland:

More than sixty years ago, Jan Tinbergen, a Dutch economist who shared the first Nobel Prize in Economics, derived this result: The number of goals a policymaker can pursue can be no greater than the number of instruments the policymaker can control. Traditionally, the Fed has been seen as a policy institution that has one instrument – the quantity of reserves it supplies to the banking system. More recently, the Fed may have acquired a second instrument when it received, in 2008, legislative authority to pay interest on those reserves.

Tinbergen’s constraint therefore limits the Fed to the pursuit, at most, of two independent objectives. To see the conflict between this constraint and statements made by assorted Fed officials, consider the following alternatives. If the Fed wishes to support U.S. exports by taking actions that reduce the dollar’s value, this implies a monetary easing that will increase output in the short run but lead to more inflation in the long run. Monetary ease might help reverse the stock market’s recent declines – or simply re-inflate bubbles in the eyes of those who see them. Conversely, if the Fed continues to focus on keeping inflation low, this requires a monetary tightening that will be expected, other things the same, to slow output growth, increase unemployment, and raise the dollar’s value with deleterious effects on US exports.

The Tinbergen constraint has led many economists outside the Fed to advocate that the Fed set a path for nominal GDP as its policy objective. Although this is a single variable, the balanced weights it places on output versus prices permit a central bank that targets nominal GDP to achieve modest countercyclical objectives in the short run while ensuring that inflation remains low and stable over longer horizons. But regardless of whether or not they choose this particular alternative, Federal Reserve officials need to face facts: They cannot possibly achieve all of the goals that, in their public statements, they have set for themselves.

## The New Keynesian Failure

In a previous post, I defended neo-Fisherism. A couple of days ago I wrote a post in which I discussed the importance of monetary semantics. I would like to tie together two of my posts so that I can present a more comprehensive view of my own thinking regarding monetary policy and the New Keynesian model.

My post on neo-Fisherism was intended to provide support for John Cochrane who has argued that the neo-Fisher result is part of the New Keynesian model. Underlying this entire issue, however, is what determines the price level and inflation. In traditional macroeconomics, the quantity theory was always lurking in the background (if not the foreground). Under the quantity theory, the money supply determined the price level. Inflation was always and everywhere a monetary phenomenon.

The New Keynesian model dispenses with money altogether. The initial impulse for doing so was the work of Michael Woodford, who wrote a paper discussing how monetary policy would be conducted in a world without money. The paper (to my knowledge) was not initially an attempt to remove money completely from analysis, but rather to figure out a role for monetary policy once technology had developed to a point in which the monetary base was arbitrarily small. However, it seems that once people realized that it was possible to exclude money completely, this literature sort of took that ball and ran with it. The case for doing so was further bolstered by the fact that money already seemed to lack any empirical relevance.

Of course, there are a few fundamental problems with this literature. First, my own research shows that the empirical analysis that claims money is unimportant is actually the result of the fact that the Federal Reserve publishes monetary aggregates that are not consistent with index number theory, aggregation theory, or economic theory. When one uses Divisia monetary aggregates, the empirical evidence is consistent with standard monetary predictions. This is not unique to my paper. My colleague, Mike Belongia, found similar results when he re-examined empirical evidence using Divisia aggregates.

Second, while Woodford emphasizes in Interest and Prices that a central bank’s interest rate target could be determined by a channel system, in the United States the rate is still determined through open market operations (although now that the Fed is paying interest on reserves, it could conceivably use a channel system). This distinction might not seem to be important, but as I alluded to in my previous post, the federal funds rate is an intermediate target. How the central bank influences the intermediate target is important for the conduct of policy. If the model presumes that the mechanism is different from reality, this is potentially important.

Third, Ed Nelson has argued that the quantity theory is actually lurking in the background of the New Keynesian model and that New Keynesians don’t seem to realize it.

With all that being said, let’s circle back to neo-Fisherism. Suppose that a central bank announced that they were going to target a short term nominal interest rate of zero for seven years. How would they accomplish this?

A good quantity theorist would suggest that there are two ways the central bank might try to accomplish this. The first would be to continue to use open market purchases to prevent the interest rate from ever rising. However, open market purchases would be inflationary. Since higher inflation expectations put upward pressure on nominal interest rates, this sort of policy is unsustainable.

The second way to accomplish the goal of the zero interest rate is to set money growth such that the sum of expected inflation and the real interest rate is equal to zero. In other words, the only sustainable way to commit to an interest rate of zero over the long term is deflation (or low inflation if the real interest rate is negative).
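The quantity theorist's arithmetic here is just the Fisher equation combined with a long-run link from money growth to inflation. A hypothetical sketch follows; the one-for-one pass-through from money growth to inflation (velocity constant) is a quantity-theory steady-state assumption, and the numbers are illustrative.

```python
def required_inflation(nominal_target, real_rate):
    """Fisher equation: i = r + expected inflation, so pegging the nominal
    rate at `nominal_target` requires expected inflation of
    (nominal_target - real_rate)."""
    return nominal_target - real_rate

# To peg i = 0 with a real rate of 2%, inflation must run at -2% (deflation):
pi = required_inflation(nominal_target=0.0, real_rate=0.02)

# In a quantity-theory steady state with real output growth g and constant
# velocity, money growth must then equal inflation plus output growth.
g = 0.02
money_growth = pi + g
```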

The New Keynesians, however, think that the quantity theory is dead and that we can think about policy without money. And in the New Keynesian model, one can supposedly peg the short term nominal interest rate at zero for a short period of time. Not only is this possible, but it also should lead to an increase in inflation and economic activity. Interestingly, however, as my post on neo-Fisherism demonstrated, this isn’t what happens in their model. According to their model, setting the nominal interest rate at zero leads to a reduction in the rate of inflation. This is so because (1) the nominal interest rate satisfies the Fisher equation, and (2) people have rational expectations. (Michael Woodford has essentially admitted this, but now wants to relax the assumption of rational expectations.)

So why am I bringing all of this up again and why should we care?

Well, it seems that Federal Reserve Bank of St. Louis President Jim Bullard recently gave a talk in which he discussed two competing hypotheses. The first is that lower interest rates should cause higher inflation (the conventional view of New Keynesians and others). The second is that lower interest rates should result in lower inflation. As you can see if you look through his slides, he seems to suggest that the neo-Fisher view is correct since we have a lower interest rate and we have lower inflation.

In my view, however, he has drawn the wrong lesson because he has ignored a third hypothesis. The starting point of his analysis seems to be that the New Keynesian model is the useful framework for analysis and that, given this is true, the only question is which argument about interest rates is correct: the modified Woodford argument or the neo-Fisherite one?

However, a third hypothesis is that the New Keynesian model is not the correct model to use for analysis. In the quantity theory view, inflation declines when money growth declines. Thus, if you see lower interest rates, the only way that they are sustainable for long periods of time is if money growth (and therefore inflation) declines as well. Below is a graph of Divisia M4 growth from 2004 to the present. Note that the growth rate seems to have permanently declined.

Also, note the following scatterplot between a 1-month lag in money growth and inflation. If you were to fit a line, you would find that the relationship is positive and statistically significant.
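For readers who want to see the mechanics of that exercise, here is a sketch using synthetic placeholder data. The actual series are Divisia M4 growth and an inflation measure; the numbers below are generated with a positive loading on lagged money growth purely to illustrate the fitting step, not as a claim about the data.

```python
import random

random.seed(0)

# Placeholder series standing in for monthly money growth observations.
money_growth = [random.gauss(0.05, 0.01) for _ in range(121)]

# Synthetic inflation: month t+1 inflation responds to month t money
# growth (coefficient 0.4 is arbitrary), plus noise.
inflation = [0.4 * m + random.gauss(0.0, 0.002) for m in money_growth[:-1]]

# OLS slope of inflation on one-month-lagged money growth.
x, y = money_growth[:-1], inflation
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
# A positive slope corresponds to the positive relationship in the scatterplot.
```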

So perhaps money isn’t so useless after all.

To get back to my point from a previous post, it seems that discussions of policy need to take seriously the following. First, the central bank needs to specify its target variable (i.e. a specific numerical value for a variable, such as inflation or nominal GDP). Second, the central bank needs to describe how it is going to adjust its instrument (the monetary base) to hit its target. Third, the central bank needs to specify the transmission mechanism through which this will work. In other words, what intermediate variables will tell the central bank whether or not it is likely to hit its target.

As it currently stands, the short term nominal interest rate is the Federal Reserve’s preferred intermediate variable. Nonetheless, the federal funds rate has been close to zero for six and a half years (!) and yet inflation has not behaved in the way that policy would predict. At what point do we begin to question using this as an intermediate variable?

The idea that low nominal interest rates are associated with low inflation and high nominal interest rates are associated with high inflation is the Fisher equation. Milton Friedman argued this long ago. The New Keynesian model assumes that the Fisher identity holds, but it has no mechanism to explain why. It’s just true in equilibrium and therefore has to happen. Thus, when the nominal interest rate rises and individuals have rational expectations, they just expect more inflation and it happens. Pardon me if I don’t think that sounds like the world we live in. New Keynesians also don’t seem to think that this sounds like the world we live in, but this is their model!

To me, the biggest problem with the New Keynesian model is the lack of any mechanism. Without understanding the mechanisms through which policy works, how can one begin to offer policy advice and determine the likelihood of success? At the very least one should take steps to ensure that the policy mechanisms they think exist are actually in the model.

But the sheer dominance of the New Keynesian model in policy circles also leads to false dichotomies. Jim Bullard is basically asking the question: does the world look like what the conventional New Keynesians say or like what the neo-Fisherites say? Maybe the answer is that it doesn’t look like either alternative.

## On Monetary Semantics

My colleague, Mike Belongia, was kind enough to pass along a book entitled, “Targets and Indicators of Monetary Policy.” The book was published in 1969 and features contributions from Karl Brunner, Allan Meltzer, Anna Schwartz, James Tobin, and others. The book itself was a product of a conference at UCLA held in 1966. There are two overarching themes to the book. The first theme, which is captured implicitly by some papers and is discussed explicitly by others, is the need for clarification in monetary policy discussions regarding indicator variables and target variables. The second theme is that, given these common definitions, economic theory can be used to guide policymakers regarding what variables should be used as indicators and targets. While I’m not going to summarize all of the contributions, there is one paper that I wanted to discuss because of its continued relevance today and that is Tobin’s contribution entitled, “Monetary Semantics.”

Contemporary discussions of monetary policy often begin with a misguided notion. For example, I often hear something to the effect of “the Federal Reserve has one instrument and that is the federal funds rate.” This is incorrect. The federal funds rate is not and never has been an instrument of the Federal Reserve. One might think that this is merely semantics, but this gets to broader issues about the role of monetary policy.

This point is discussed at length in Tobin’s paper. It is useful here to quote Tobin at length:

No subject is engulfed in more confusion and controversy than the measurement of monetary policy. Is it tight? Is it easy? Is it tighter than it was last month, or last year, or ten years ago? Or is it easier? Such questions receive a bewildering variety of answers from Federal Reserve officials, private bankers, financial journalists, politicians, and academic economists…The problem is not only descriptive but normative; that is, we all want an indicator of ease or tightness not just to describe what is happening, but to appraise current policy against some criterion of desirable or optimal policy.

[…]

I begin with some observations about policy making that apply not just to monetary policy, indeed not just to public policy, but quite generally. From the policy maker’s standpoint, there are three kinds of variables on which he obtains statistical or other observations: instruments, targets, and intermediate variables. Instruments are variables he controls completely himself. Targets are variables he is trying to control, that is, to cause to reach certain numerical values, or to minimize fluctuations. Intermediate variables lie in-between. Neither are they under perfect control nor are their values ends in themselves.

This quote is important in and of itself for clarifying language. However, there is a broader importance that can perhaps best be illustrated by a discussion of recent monetary policy.

In 2012, I wrote a very short paper (unpublished, but it can be found here) about one of the main problems with monetary policy in the United States. I argued in that paper that the main problem was that the Federal Reserve lacked an explicit target for monetary policy. Without an explicit target, it was impossible to determine whether monetary policy was too loose, too tight, or just right. (By the time the paper was written, the Fed had announced a 2% target for inflation.) In the paper, I pointed out that folks like Scott Sumner were saying that monetary policy was too tight because nominal GDP had fallen below trend, while people like John Taylor were arguing that monetary policy was too loose because the real federal funds rate was below the level consistent with the Taylor Rule. In case that wasn’t enough, people like Federal Reserve Bank of St. Louis President Jim Bullard claimed that monetary policy was actually just about right since inflation was near its recently announced 2% target. What was more remarkable is that, if one looked at the data, all of these people were correct based on their own criteria for evaluating monetary policy. This is quite disheartening considering that these three ways of evaluating policy had been remarkably consistent in their evaluations of monetary policy in the past.

I only circulated the paper among a small group of people, and much of the response that I received was something to the effect of “the Fed has a mandate to produce low inflation and full employment, so it’s reasonable to think that’s how it should be evaluated.” That sort of response seems reasonable at first glance, but it ignores the main point I was trying to make. Perhaps I made the case poorly, since I did not manage to convince anyone of my broader point. So I will try to clarify my position here.

All of us know the mandate of the Federal Reserve. That mandate consists of two goals (actually three if you include keeping interest rates “moderate” – no comment on that goal): stable prices and maximum employment. However, knowing the mandate doesn’t actually provide any guidance for policy. What does it actually mean to have stable prices and maximum employment? These are goals, not targets. This is like when a politician says, “I’m for improving our schools.” That’s great. I’m for million dollar salaries for economics professors with the last name Hendrickson. Without a plan, these goals are meaningless.

There is nothing wrong with the Federal Reserve having broadly defined goals, but along with these broadly defined goals needs to be an explicit target. Also, the central bank needs a plan to achieve the target. Conceivably, this plan would outline how the Federal Reserve planned to use its instrument to achieve its target, with a description of intermediate variables that it would use to provide guidance to ensure that their policy is successful.

The Federal Reserve has two goals, which conceivably also means that they have two targets (more on that later). So what are the Fed’s targets? According to a press release from the Federal Reserve:

The inflation rate over the longer run is primarily determined by monetary policy, and hence the Committee has the ability to specify a longer-run goal for inflation. The Committee judges that inflation at the rate of 2 percent, as measured by the annual change in the price index for personal consumption expenditures, is most consistent over the longer run with the Federal Reserve’s statutory mandate. Communicating this inflation goal clearly to the public helps keep longer-term inflation expectations firmly anchored, thereby fostering price stability and moderate long-term interest rates and enhancing the Committee’s ability to promote maximum employment in the face of significant economic disturbances.

The maximum level of employment is largely determined by nonmonetary factors that affect the structure and dynamics of the labor market. These factors may change over time and may not be directly measurable. Consequently, it would not be appropriate to specify a fixed goal for employment; rather, the Committee’s policy decisions must be informed by assessments of the maximum level of employment, recognizing that such assessments are necessarily uncertain and subject to revision.

So the Federal Reserve’s targets are 2% inflation and whatever the FOMC thinks the maximum level of employment is. This hardly clarifies the Federal Reserve’s targets. In addition, the Fed provides no guidance as to how they intend to achieve these targets.

The fact that the Federal Reserve has two goals (or one target and one goal) for policy is also problematic because the Fed only has one instrument, the monetary base (the federal funds rate is an intermediate variable).* So how can policy adjust one variable to achieve two targets? Well, it would be possible to do such a thing if the two targets had some explicit relationship. However, at times policy might have to act when these targets are not behaving in a complementary fashion with respect to the dual mandate. The Fed admits as much in the very same press release:

These objectives are generally complementary. However, under circumstances in which the Committee judges that the objectives are not complementary, it follows a balanced approach in promoting them, taking into account the magnitude of the deviations and the potentially different time horizons over which employment and inflation are projected to return to levels judged consistent with its mandate.

I will leave it to the reader to determine whether this clarifies or obfuscates the stance of the FOMC.

Despite the widespread knowledge of the dual mandate and despite the fact that the Federal Reserve has been a bit more forthcoming about an explicit target associated with its mandate, those evaluating Fed policy are stuck relying on other indicators of the stance of policy. In other words, since the Federal Reserve still does not have an explicit target that we can look at to evaluate policy, economists have sought other ways to do it.

John Taylor has chosen to think about policy in terms of the Taylor Rule. He views the Fed as adjusting the monetary base to set the federal funds rate consistent with the Taylor Rule, which has been shown to produce low variability in inflation and output around targets. Empirical evidence exists that shows that when the Federal Reserve has conducted policy broadly consistent with its mandate, the behavior of the federal funds rate looks as it would under the Taylor Rule. As a result, the Taylor Rule becomes a guide for policy in the absence of explicit targets. But even this guidance is only guidance with respect to an intermediate variable.
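To make the Taylor Rule criterion concrete, the original Taylor (1993) specification prescribes a funds rate from inflation and the output gap. Below is a minimal sketch in Python; the function names, the 0.5 coefficients (which follow Taylor's 1993 paper), and the tolerance used to label the stance are illustrative choices, not anything the Fed itself publishes:

```python
def taylor_rule_rate(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Taylor (1993) rule: prescribed nominal funds rate, in percent.

    inflation:  recent inflation rate (percent)
    output_gap: percent deviation of real GDP from potential
    r_star:     assumed equilibrium real rate (percent)
    pi_star:    inflation target (percent)
    """
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

def stance_vs_taylor(actual_rate, prescribed_rate, tol=0.25):
    """Label the stance of policy relative to the rule's prescription.

    A funds rate well below the prescription reads as 'too loose',
    well above as 'too tight'; tol is an arbitrary illustrative band.
    """
    if actual_rate < prescribed_rate - tol:
        return "too loose"
    if actual_rate > prescribed_rate + tol:
        return "too tight"
    return "about right"

# At 2% inflation and a zero output gap, the rule prescribes a 4% funds rate
prescribed = taylor_rule_rate(2.0, 0.0)
```

On this criterion, a funds rate of 1% when the rule prescribes 4% would be read as policy that is too loose, which is the flavor of Taylor's post-2007 argument, although his actual assessments use real data and more careful specifications.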

Scott Sumner has chosen to think about policy in terms of nominal GDP. This follows from a quantity theoretic view of the world. If the central bank promotes stable nominal GDP growth, then inflation expectations will be stable and the price mechanism will function efficiently. In addition, the central bank will respond only to the types of shocks that it can correct. Stable nominal GDP therefore implies low inflation and stable employment. My own research suggests that the Federal Reserve conducted monetary policy as if it were stabilizing nominal GDP growth during the Great Moderation. But even using nominal GDP as a guide is limited in the sense that this is not an official target of the Federal Reserve, so a deviation of nominal GDP from trend (even if it is suboptimal) might be consistent with the Federal Reserve’s official targets, since the latter are essentially unknown.
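The nominal GDP criterion can be sketched the same way: compare actual nominal GDP to a trend path and read the sign of the gap. The numbers below are stylized for illustration, not actual U.S. data:

```python
def ngdp_trend(base_level, growth_rate, years):
    """Trend NGDP implied by compounding growth_rate from a base-year level."""
    return base_level * (1.0 + growth_rate) ** years

def ngdp_gap(actual, trend):
    """Percent deviation of actual NGDP from trend.

    A negative gap means NGDP is below trend, which on this
    criterion reads as policy that is too tight.
    """
    return 100.0 * (actual - trend) / trend

# Stylized example: a 5% trend growth path from a base of 100, two years out
trend = ngdp_trend(100.0, 0.05, 2)  # 110.25
gap = ngdp_gap(104.0, trend)        # negative: NGDP has fallen below trend
```

The point of the sketch is that this criterion and the Taylor Rule criterion are computed from entirely different variables, so there is nothing forcing them to agree about the stance of policy at any given moment.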

Nonetheless, the development of these different approaches (and others) was the necessary outgrowth of the desire to understand and evaluate monetary policy. Such an approach is only necessary when the central bank has broad goals without explicit targets and without an explicit description of how they are going to achieve those targets.

We therefore end up back at Tobin’s original questions. How do we know when policy is too loose? Or too tight?

During the Great Inflation and the Great Moderation, both the Taylor Rule and stable growth in nominal GDP provided a good way to evaluate policy. During the Great Inflation, both evaluation methods suggest that policy was too loose (although this is less clear for the Taylor Rule with real-time data). During the Great Moderation, both evaluation methods suggest that policy was conducted well. What is particularly problematic, however, is that in the most recent period, since 2007, the Taylor Rule and nominal GDP have given opposite conclusions about the stance of monetary policy. This has further clouded the discussion surrounding policy because advocates of each approach can point to historical evidence as supportive of their approach.

With an explicit target, evaluating the stance of policy would be simple. If the FOMC adopted a 2% inflation target (and nothing else), then whenever inflation was above 2% (give or take some measurement error), policy would be deemed too loose. Whenever inflation was below 2%, policy would be deemed too tight. Since neither the federal funds rate prescribed by the Taylor Rule nor nominal GDP is an official target of the Fed, it’s not immediately obvious how to judge the stance of policy based solely on these criteria. (And what is optimal is another issue.)

If we want to have a better understanding of monetary policy, we need to emphasize the proper monetary semantics. First, we as economists need to use consistent language regarding instruments, targets, and intermediate variables. No more referring to the federal funds rate as the instrument of policy. Second, the Federal Reserve’s mandate needs to be modified to have only one target for monetary policy. The Federal Reserve needs to provide a specific numerical goal for this target variable and then describe how it will use its instrument to achieve that goal. Taking monetary semantics seriously is about more than language; it is about creating clear guidelines and accountability at the FOMC.

* One could argue that the Federal Reserve now has two instruments, the interest rate on reserves and the monetary base. While they do have direct control over these things, it is also important to remember that these variables must be compatible in equilibrium.