The Importance of Safe Assets

A theme you often hear among bloggers, but a bit less so in seminars, is the idea that the supply of and demand for safe assets matter. David Beckworth is one such blogger who talks about this, but critics often find it hard to think about the macroeconomy in these terms since the role of money has been marginalized within the New Keynesian wing of macroeconomics. I say this because David’s intuitive explanation of safe asset equilibrium seems to be a cross between New Keynesian intuition and Old Monetarist intuition. He is trying to communicate his message to what is essentially the mainstream of the discipline, but by emphasizing something that isn’t generally in their models.

Along these lines, I was happy to stumble upon this paper by Caballero, Farhi, and Gourinchas. In my view, the paper’s treatment of safe assets and monetary policy is quite similar to David’s, so I thought it might be interesting to outline the basic model in the paper and talk about the mechanisms for monetary policy.

The model is a modified version of an IS-LM model. The one modification to the model is a supply and demand condition for safe assets. Formally, the model consists of the following three equations:

y - \bar{y} = -\delta (r - \bar{r}) - \delta_s (r^s - \bar{r}^s)
r^s = \max[\hat{r}^s + \phi(y - \bar{y}), 0]
s = \psi_y y + \psi_s r^s - \psi_{\Delta} (r - r^s)

where y is output, r is the risky interest rate, r^s is the rate on safe assets, \hat{r}^s is the target interest rate, s is the supply of safe assets, \bar{y} is the natural rate of output, \bar{r} is the natural risky interest rate, \bar{r}^s is the natural safe interest rate, and the Greek letters are parameters. Inflation is assumed to be zero such that there is no difference between real and nominal interest rates.

This is a familiar IS-LM framework in which the first equation is an IS equation, the second equation is a Taylor Rule subject to a zero lower bound, and the third equation determines the safe asset equilibrium.
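To make the mechanics concrete, here is a minimal numerical sketch of the model. The function name and all parameter values are my own invented placeholders rather than anything calibrated in the paper, and the sketch imposes \psi_{\Delta} = 0 (an assumption discussed below) along with a central bank that targets the natural safe rate:

```python
def solve_model(s, y_bar=1.0, r_bar=0.02, delta=1.0, delta_s=1.0,
                psi_y=0.5, psi_s=10.0):
    """Solve for (y, r, r_s) given the flow supply of safe assets s.

    Imposes psi_delta = 0 and assumes the central bank sets its target
    equal to the natural safe rate implied by the safe asset condition.
    """
    # Natural safe rate: s = psi_y * y_bar + psi_s * rs_nat at y = y_bar.
    rs_nat = (s - psi_y * y_bar) / psi_s
    if rs_nat >= 0:
        # Normal regime: the Taylor rule hits its target and y = y_bar.
        y, r_s = y_bar, rs_nat
    else:
        # Zero lower bound: r_s = 0 and output is pinned down by s.
        y, r_s = s / psi_y, 0.0
    # Back out the risky rate from the IS equation:
    # y - y_bar = -delta*(r - r_bar) - delta_s*(r_s - rs_nat).
    r = r_bar - ((y - y_bar) + delta_s * (r_s - rs_nat)) / delta
    return y, r, r_s

print(solve_model(0.6))  # output at potential, r_s at its (positive) natural rate
print(solve_model(0.4))  # safety trap: y falls below y_bar, r rises above r_bar
```

With these invented numbers, s = 0.6 yields a positive natural safe rate and output at potential, while s = 0.4 pushes the natural safe rate below zero, so the zero lower bound binds, output falls to s/\psi_y = 0.8, and the risky rate rises above its natural level.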

The best interpretation of the safe asset equilibrium, as they describe it in the paper, is in terms of the flow of safe assets. According to this view, the flow demand for safe assets is a function of output, the rate of return on safe assets, and the risk premium (r - r^s). Correspondingly, s in this interpretation is a flow: the net increase in the supply of safe assets rather than the stock.

Given that setup, let’s see what the model can tell us.

The first assumption that they make is that the supply of safe assets is unresponsive to the risk premium. In other words, in terms of the model, \psi_{\Delta} = 0. Given that many safe assets are exogenously supplied, this seems like a reasonable assumption.

Now, let’s think about the determination of the natural rate of interest. If the central bank sets the interest rate on safe assets equal to the natural rate, then output will be equal to potential (essentially by definition). It then follows from the IS equation that the risky interest rate is also equal to the natural risky interest rate. But how does one determine the natural interest rate?

Consider the equilibrium condition for safe assets. The natural safe interest rate is the rate that clears the safe asset market when output is equal to potential. From the safe asset equilibrium condition (with \psi_{\Delta} = 0) it follows that

\bar{r}^s = {{s - \psi_y \bar{y}}\over{\psi_s}}

The central bank then needs to set r^s = \hat{r}^s = \bar{r}^s.

However, suppose that the net increase in the supply of safe assets is not high enough to keep up with the demand for new safe assets. In particular, suppose that the net increase in the supply of safe assets is so low that

s < \psi_y \bar{y}

In this scenario, the natural interest rate would be negative. However, from the Taylor rule, the market rate of interest is subject to a zero lower bound. As a result, the central bank cannot set the interest rate low enough to clear the market for safe assets. So what happens? Well, the central bank sets the safe interest rate as low as it can go, r^s = 0, which implies that output is pinned down by the net increase in the supply of safe assets:

y = {{s}\over{\psi_y}}

It then follows that r > \bar{r}. In other words, the risky interest rate is “too high” and the risk premium rises. But since the risky rate of interest is higher than the natural risky rate, the IS equation implies that output must fall in order to reduce the demand for safe assets and restore equilibrium.

The policy implication is that to escape this scenario, one needs to increase the supply of safe assets. Increasing the supply of safe assets raises output toward potential and thereby reduces the risk premium.
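A quick back-of-the-envelope check of this implication, using an invented value for \psi_y: at the zero lower bound output is y = s/\psi_y, so each unit of additional safe asset supply raises output by 1/\psi_y.

```python
psi_y = 0.5                      # invented illustrative value
s_before, s_after = 0.40, 0.45   # net flow of safe assets, before and after
y_before = s_before / psi_y      # output at the ZLB: y = s / psi_y
y_after = s_after / psi_y
# Each unit of extra safe asset supply raises output by 1 / psi_y = 2 units.
print(y_after - y_before)
```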

As the authors note, early rounds of quantitative easing in the United States did exactly what the model would prescribe because they removed risky assets from the market in exchange for safe assets. Fiscal stimulus can also help, not through any sort of production done by the public sector, but because it increases the supply of safe assets (Treasuries).

Does Monetary Policy Influence the Natural Rate?

Narayana Kocherlakota is now blogging. His most recent post concerns the equilibrium rate of interest, or the natural rate of interest as it is sometimes called. Kocherlakota argues that those who would like to see higher interest rates should stop harping on the Federal Reserve and instead write their Congressman to encourage more fiscal stimulus. I think that this view is both conventional and odd. Allow me to explain.

Consider the following simple thought experiment. Suppose that the market rate of interest targeted by the Federal Reserve, the federal funds rate, is equal to the equilibrium rate that would prevail in a perfect, frictionless world. We can think of this equilibrium rate as being the rate consistent with a consumption Euler equation. In particular, this implies that the real rate of interest is given by

Real natural interest rate = Rate of time preference + Expected Growth
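This decomposition follows from a standard consumption Euler equation. As a sketch, with log utility and a discount factor \beta = 1/(1 + \rho):

{{1}\over{c_t}} = \beta (1 + r) {{1}\over{c_{t+1}}} \implies 1 + r = (1 + \rho) {{c_{t+1}}\over{c_t}} \implies r \approx \rho + g

where \rho is the rate of time preference and g is expected consumption growth, so the real natural rate is approximately the sum of the two.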

Now suppose that the economy enters a recession and expected growth declines. This implies that the natural interest rate declines also. If the central bank stands firm and does not adjust its target for the federal funds rate, then monetary policy is too tight. The market interest rate is above the natural interest rate. In a standard Wicksellian world the fact that the market interest rate is “too high” would imply a further reduction in economic activity, which would further reduce the natural rate of interest. Again, if the central bank continues to stand firm, monetary policy actually tightens. The implication is that the central bank can passively tighten even though it hasn’t taken any action. In the pure credit economy of Wicksell, this process would continue to produce a deflationary spiral until the central bank equated the market interest rate with the natural rate.

Note this important point. In the Wicksellian model, there is an accelerationist effect. The accelerationist effect is due to the fact that tight monetary policy actually reduces the natural rate. Thus, to get back to normalcy what the central bank needs to do is not only to lower the market interest rate, but to lower the market rate below the natural rate. Once they do this, economic activity starts to increase and therefore so does the natural rate. To get back to normalcy, the central bank then has to increase the market rate faster than the natural rate is increasing until the two ultimately converge.

Note that this seems to be an odd way to conduct monetary policy. For example, imagine that you have a bow and arrow and there is some target in the distance. Suppose that every time you move the arrow to adjust your aim, the target moves as well. Nonetheless, this is the basic concept behind the Wicksellian model.

Kocherlakota argues in his post that the natural rate of interest is too low and that the market interest rate cannot get low enough to accomplish the task described above to correct for previously tight monetary policy. As a result, we need our Congressmen to go out and pass legislation that will get the economy moving and raise the natural interest rate toward the market interest rate.

I find this view strange for several reasons. First, in a Wicksellian framework if the natural interest rate is below the market rate, this results in a deflationary spiral. Since this seems to be Kocherlakota’s model of choice, how does he explain the economic recovery? Second, standard economic theory suggests that the natural interest rate is the sum of the rate of time preference and expected growth. Real GDP growth (and expected real GDP growth) has been positive for some time. Even if we ignore my first point, why hasn’t this increase in growth led to an increase in the natural interest rate?

My answer to these questions is that the federal funds rate essentially becomes a useless indicator at the zero lower bound. Quantitative easing is just open market operations by a different name. To demonstrate this, consider that measures of the so-called shadow federal funds rate have actually plummeted far below zero. Estimates of the shadow rate come from the framework initially described by Fischer Black in his paper “Interest Rates as Options.” In that paper, Black pointed out that the benefit of holding short term debt is that it includes an option to switch to currency if the yield ever becomes negative. What this implies, however, is that while the market interest rate can never go below zero, it is possible to estimate a shadow rate when the observed market rate hits the zero lower bound. Estimates of the shadow rate have gone as low as -3%. If we are to believe this methodology, what this says to me is that quantitative easing succeeded in doing what monetary policy was thought not to be able to do.
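Black’s option argument is easy to illustrate mechanically. The toy example below shows only the truncation at zero, not the term structure model actually used to estimate the shadow rate, and the path of shadow rates is hypothetical:

```python
def observed_rate(shadow_rate):
    """The observed nominal short rate is the shadow rate truncated at zero,
    because holders can always switch into currency yielding zero."""
    return max(shadow_rate, 0.0)

# A hypothetical path for the shadow rate, in percent:
shadow_path = [2.0, 0.5, -1.0, -3.0, -0.5]
observed_path = [observed_rate(r) for r in shadow_path]
print(observed_path)  # [2.0, 0.5, 0.0, 0.0, 0.0]
```

The observed rate sits at zero for the last three periods even though the shadow rate ranges from -0.5% to -3%, which is why the funds rate alone is uninformative at the lower bound.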

One could argue perhaps that the rounds of QE did not go far enough. For example, for the central bank to produce a significant recovery, the Wicksellian model suggests that the central bank must not only reduce the market interest rate, but that they should reduce the market interest rate below the natural rate. If they simply reduce the market rate to the natural rate, then this just stops the decline in economic activity rather than providing some catch-up growth.

Regardless of whether you believe that latter claim, this post essentially makes the following point. If I am correct in saying that the shadow rate is preferable to the federal funds rate as an indicator of monetary policy, then even if you believe in the Wicksellian model, you needn’t believe that we have to rely on fiscal policy to raise the natural rate of interest. What my discussion implies is that the central bank need only lower the shadow interest rate below the natural rate.

Targets (Goals) Must Be Less Than or Equal to Instruments

In my most recent posts, I discussed the importance of using the proper semantics when discussing monetary policy. Central bankers should have an explicit numerical target for a goal variable. They should then describe how they are going to adjust their instrument to achieve this target, with particular reference to the intermediate variables that will provide guidance at higher frequencies. A related issue is that a central bank is limited in terms of its ultimate target (or targets) by the number of instruments it has at its disposal. This is discussed in an excellent post by Mike Belongia and Peter Ireland:

More than sixty years ago, Jan Tinbergen, a Dutch economist who shared the first Nobel Prize in Economics, derived this result: The number of goals a policymaker can pursue can be no greater than the number of instruments the policymaker can control. Traditionally, the Fed has been seen as a policy institution that has one instrument – the quantity of reserves it supplies to the banking system. More recently, the Fed may have acquired a second instrument when it received, in 2008, legislative authority to pay interest on those reserves.

Tinbergen’s constraint therefore limits the Fed to the pursuit, at most, of two independent objectives. To see the conflict between this constraint and statements made by assorted Fed officials, consider the following alternatives. If the Fed wishes to support U.S. exports by taking actions that reduce the dollar’s value, this implies a monetary easing that will increase output in the short run but lead to more inflation in the long run. Monetary ease might help reverse the stock market’s recent declines – or simply re-inflate bubbles in the eyes of those who see them. Conversely, if the Fed continues to focus on keeping inflation low, this requires a monetary tightening that will be expected, other things the same, to slow output growth, increase unemployment, and raise the dollar’s value with deleterious effects on US exports.

The Tinbergen constraint has led many economists outside the Fed to advocate that the Fed set a path for nominal GDP as its policy objective. Although this is a single variable, the balanced weights it places on output versus prices permit a central bank that targets nominal GDP to achieve modest countercyclical objectives in the short run while ensuring that inflation remains low and stable over longer horizons. But regardless of whether or not they choose this particular alternative, Federal Reserve officials need to face facts: They cannot possibly achieve all of the goals that, in their public statements, they have set for themselves.

The New Keynesian Failure

In a previous post, I defended neo-Fisherism. A couple of days ago I wrote a post in which I discussed the importance of monetary semantics. I would like to tie together two of my posts so that I can present a more comprehensive view of my own thinking regarding monetary policy and the New Keynesian model.

My post on neo-Fisherism was intended to provide support for John Cochrane who has argued that the neo-Fisher result is part of the New Keynesian model. Underlying this entire issue, however, is what determines the price level and inflation. In traditional macroeconomics, the quantity theory was always lurking in the background (if not the foreground). Under the quantity theory, the money supply determined the price level. Inflation was always and everywhere a monetary phenomenon.

The New Keynesian model dispenses with money altogether. The initial impulse for doing so was the work of Michael Woodford, who wrote a paper discussing how monetary policy would be conducted in a world without money. The paper (to my knowledge) was not initially an attempt to remove money completely from analysis, but rather to figure out a role for monetary policy once technology had developed to a point in which the monetary base was arbitrarily small. However, it seems that once people realized that it was possible to exclude money completely, this literature sort of took that ball and ran with it. The case for doing so was further bolstered by the fact that money already seemed to lack any empirical relevance.

Of course, there are a few fundamental problems with this literature. First, my own research shows that the empirical analysis that claims money is unimportant is actually the result of the fact that the Federal Reserve publishes monetary aggregates that are not consistent with index number theory, aggregation theory, or economic theory. When one uses Divisia monetary aggregates, the empirical evidence is consistent with standard monetary predictions. This is not unique to my paper. My colleague, Mike Belongia, found similar results when he re-examined empirical evidence using Divisia aggregates.

Second, while Woodford emphasizes in Interest and Prices that a central bank’s interest rate target could be determined by a channel system, in the United States the rate is still determined through open market operations (although now that the Fed is paying interest on reserves, it could conceivably use a channel system). This distinction might not seem to be important, but as I alluded to in my previous post, the federal funds rate is an intermediate target. How the central bank influences the intermediate target is important for the conduct of policy. If the model presumes that the mechanism is different from reality, this is potentially important.

Third, Ed Nelson has argued that the quantity theory is actually lurking in the background of the New Keynesian model and that New Keynesians don’t seem to realize it.

With all that being said, let’s circle back to neo-Fisherism. Suppose that a central bank announced that they were going to target a short term nominal interest rate of zero for seven years. How would they accomplish this?

A good quantity theorist would suggest that there are two ways they might try to accomplish this. The first way would be to continue to use open market purchases to prevent the interest rate from ever rising. However, open market purchases would be inflationary. Since higher inflation expectations put upward pressure on nominal interest rates, this sort of policy is unsustainable.

The second way to accomplish the goal of the zero interest rate is to set money growth such that the sum of expected inflation and the real interest rate is equal to zero. In other words, the only sustainable way to commit to an interest rate of zero over the long term is deflation (or low inflation if the real interest rate is negative).
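In terms of the Fisher equation, with i the nominal rate, r the real rate, and \pi^e expected inflation:

i = r + \pi^e = 0 \implies \pi^e = -r

For example, a real rate of 2% requires 2% deflation to sustain a zero nominal rate, while a real rate of -1% is consistent with 1% inflation.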

The New Keynesians, however, think that the quantity theory is dead and that we can think about policy without money. And in the New Keynesian model, one can supposedly peg the short term nominal interest rate at zero for a short period of time. Not only is this possible, but it also should lead to an increase in inflation and economic activity. Interestingly, however, as my post on neo-Fisherism demonstrated, this isn’t what happens in their model. According to their model, setting the nominal interest rate at zero leads to a reduction in the rate of inflation. This is so because (1) the nominal interest rate satisfies the Fisher equation, and (2) people have rational expectations. (Michael Woodford has essentially admitted this, but now wants to relax the assumption of rational expectations.)

So why am I bringing all of this up again and why should we care?

Well, it seems that Federal Reserve Bank of St. Louis President Jim Bullard recently gave a talk in which he discussed two competing hypotheses. The first is that lower interest rates should cause higher inflation (the conventional view of New Keynesians and others). The second is that lower interest rates should result in lower inflation. As you can see if you look through his slides, he seems to suggest that the neo-Fisher view is correct since we have a lower interest rate and we have lower inflation.

In my view, however, he has drawn the wrong lesson because he has ignored a third hypothesis. The starting point of his analysis seems to be that the New Keynesian model is the useful framework for analysis and that, given this, the only question is which argument about interest rates is correct: the modified Woodford argument or the neo-Fisherite one.

However, a third hypothesis is that the New Keynesian model is not the correct model to use for analysis. In the quantity theory view, inflation declines when money growth declines. Thus, if you see lower interest rates, the only way that they are sustainable for long periods of time is if money growth (and therefore inflation) declines as well. Below is a graph of Divisia M4 growth from 2004 to the present. Note that the growth rate seems to have permanently declined.

Also, note the following scatterplot between a 1-month lag in money growth and inflation. If you were to fit a line, you would find that the relationship is positive and statistically significant.
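For readers who want to run this kind of exercise themselves, here is a minimal sketch of the lagged regression. The two series below are synthetic placeholders constructed (by assumption) to embed a positive lagged relationship; they merely stand in for the actual Divisia M4 growth and inflation data:

```python
import random

random.seed(0)  # reproducible synthetic series

# Synthetic placeholder series, in percent at annual rates.
n = 120
money_growth = [random.gauss(5.0, 2.0) for _ in range(n)]
# By construction here, inflation responds to last month's money growth.
inflation = [2.0] + [0.4 * m + random.gauss(0.0, 0.5) for m in money_growth[:-1]]

# OLS slope of inflation on a 1-month lag of money growth.
x = money_growth[:-1]
y = inflation[1:]
mean_x, mean_y = sum(x) / len(x), sum(y) / len(y)
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
print(slope > 0)  # True: the fitted line slopes upward
```

With actual data one would also want standard errors; the point here is only the mechanics of lagging one series and fitting the line.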

So perhaps money isn’t so useless after all.

To get back to my point from a previous post, it seems that discussions of policy need to take seriously the following. First, the central bank needs to specify its target variable (i.e. a specific numerical value for a variable, such as inflation or nominal GDP). Second, the central bank needs to describe how it is going to adjust its instrument (the monetary base) to hit its target. Third, the central bank needs to specify the transmission mechanism through which this will work. In other words, what intermediate variables will tell the central bank whether or not it is likely to hit its target.

As it currently stands, the short term nominal interest rate is the Federal Reserve’s preferred intermediate variable. Nonetheless, the federal funds rate has been close to zero for six and a half years (!) and yet inflation has not behaved in the way that policy would predict. At what point do we begin to question using this as an intermediate variable?

The idea that low nominal interest rates are associated with low inflation and high nominal interest rates are associated with high inflation is the Fisher equation. Milton Friedman argued this long ago. The New Keynesian model assumes that the Fisher identity holds, but it has no mechanism to explain why. It’s just true in equilibrium and therefore has to happen. Thus, when the nominal interest rate rises and individuals have rational expectations, they just expect more inflation and it happens. Pardon me if I don’t think that sounds like the world we live in. New Keynesians also don’t seem to think that this sounds like the world we live in, but this is their model!

To me, the biggest problem with the New Keynesian model is the lack of any mechanism. Without understanding the mechanisms through which policy works, how can one begin to offer policy advice and determine the likelihood of success? At the very least one should take steps to ensure that the policy mechanisms they think exist are actually in the model.

But the sheer dominance of the New Keynesian model in policy circles also leads to false dichotomies. Jim Bullard is basically asking the question: does the world look the way the conventional New Keynesians say, or the way the neo-Fisherites say? Maybe the answer is that it doesn’t look like either alternative.

On Monetary Semantics

My colleague, Mike Belongia, was kind enough to pass along a book entitled, “Targets and Indicators of Monetary Policy.” The book was published in 1969 and features contributions from Karl Brunner, Allan Meltzer, Anna Schwartz, James Tobin, and others. The book itself was a product of a conference at UCLA held in 1966. There are two overarching themes to the book. The first theme, which is captured implicitly by some papers and is discussed explicitly by others, is the need for clarification in monetary policy discussions regarding indicator variables and target variables. The second theme is that, given these common definitions, economic theory can be used to guide policymakers regarding what variables should be used as indicators and targets. While I’m not going to summarize all of the contributions, there is one paper that I wanted to discuss because of its continued relevance today and that is Tobin’s contribution entitled, “Monetary Semantics.”

Contemporary discussions of monetary policy often begin with a misguided notion. For example, I often hear something to the effect of “the Federal Reserve has one instrument and that is the federal funds rate.” This is incorrect. The federal funds rate is not and never has been an instrument of the Federal Reserve. One might think that this is merely semantics, but this gets to broader issues about the role of monetary policy.

This point is discussed at length in Tobin’s paper. It is useful here to quote Tobin at length:

No subject is engulfed in more confusion and controversy than the measurement of monetary policy. Is it tight? Is it easy? Is it tighter than it was last month, or last year, or ten years ago? Or is it easier? Such questions receive a bewildering variety of answers from Federal Reserve officials, private bankers, financial journalists, politicians, and academic economists…The problem is not only descriptive but normative; that is, we all want an indicator of ease or tightness not just to describe what is happening, but to appraise current policy against some criterion of desirable or optimal policy.


I begin with some observations about policy making that apply not just to monetary policy, indeed not just to public policy, but quite generally. From the policy maker’s standpoint, there are three kinds of variables on which he obtains statistical or other observations: instruments, targets, and intermediate variables. Instruments are variables he controls completely himself. Targets are variables he is trying to control, that is, to cause to reach certain numerical values, or to minimize fluctuations. Intermediate variables lie in-between. Neither are they under perfect control nor are their values ends in themselves.

This quote is important in and of itself for clarifying language. However, there is a broader importance that can perhaps best be illustrated by a discussion of recent monetary policy.

In 2012, I wrote a very short paper (unpublished, but can be found here) about one of the main problems with monetary policy in the United States. I argued in that paper that the main problem was that the Federal Reserve lacked an explicit target for monetary policy. Without an explicit target, it was impossible to determine whether monetary policy was too loose, too tight, or just right. (By the time the paper was written, the Fed had announced a 2% target for inflation.) In the paper, I pointed out that folks like Scott Sumner were saying that monetary policy was too tight because nominal GDP had fallen below trend while people like John Taylor were arguing that monetary policy was too loose because the real federal funds rate was below the level consistent with the Taylor Rule. In case that wasn’t enough, people like Federal Reserve Bank of St. Louis President Jim Bullard claimed that monetary policy was actually just about right since inflation was near its recently announced 2% target. What was more remarkable is that if one looked at the data, all of these people were correct based on their criteria for evaluating monetary policy. This is actually quite disheartening considering the fact that these three ways of evaluating policy had been remarkably consistent in their evaluations of monetary policy in the past.

I only circulated the paper among a small group of people and much of the response that I received was something to the effect of “the Fed has a mandate to produce low inflation and full employment, it’s reasonable to think that’s how they should be evaluated.” That sort of response seems reasonable on first glance, but that view ignores the main point I was trying to make. Perhaps I did make the case poorly since I did not manage to convince anyone of my broader point. So I will try to clarify my position here.

All of us know the mandate of the Federal Reserve. That mandate consists of two goals (actually three if you include keeping interest rates “moderate” – no comment on that goal): stable prices and maximum employment. However, knowing the mandate doesn’t actually provide any guidance for policy. What does it actually mean to have stable prices and maximum employment? These are goals, not targets. This is like when a politician says, “I’m for improving our schools.” That’s great. I’m for million dollar salaries for economics professors with the last name Hendrickson. Without a plan, these goals are meaningless.

There is nothing wrong with the Federal Reserve having broadly defined goals, but along with these broadly defined goals needs to be an explicit target. Also, the central bank needs a plan to achieve the target. Conceivably, this plan would outline how the Federal Reserve planned to use its instrument to achieve its target, with a description of intermediate variables that it would use to provide guidance to ensure that their policy is successful.

The Federal Reserve has two goals, which conceivably also means that they have two targets (more on that later). So what are the Fed’s targets? According to a press release from the Federal Reserve:

The inflation rate over the longer run is primarily determined by monetary policy, and hence the Committee has the ability to specify a longer-run goal for inflation. The Committee judges that inflation at the rate of 2 percent, as measured by the annual change in the price index for personal consumption expenditures, is most consistent over the longer run with the Federal Reserve’s statutory mandate. Communicating this inflation goal clearly to the public helps keep longer-term inflation expectations firmly anchored, thereby fostering price stability and moderate long-term interest rates and enhancing the Committee’s ability to promote maximum employment in the face of significant economic disturbances.

The maximum level of employment is largely determined by nonmonetary factors that affect the structure and dynamics of the labor market. These factors may change over time and may not be directly measurable. Consequently, it would not be appropriate to specify a fixed goal for employment; rather, the Committee’s policy decisions must be informed by assessments of the maximum level of employment, recognizing that such assessments are necessarily uncertain and subject to revision.

So the Federal Reserve’s targets are 2% inflation and whatever the FOMC thinks the maximum level of employment is. This hardly clarifies the Federal Reserve’s targets. In addition, the Fed provides no guidance as to how they intend to achieve these targets.

The fact that the Federal Reserve has two goals (or one target and one goal) for policy is also problematic because the Fed only has one instrument, the monetary base (the federal funds rate is an intermediate variable).* So how can policy adjust one variable to achieve two targets? Well, it would be possible to do such a thing if the two targets had some explicit relationship. However, at times policy might have to act when these targets are not behaving in a complementary fashion with respect to the dual mandate. The Fed admits as much in the very same press release:

These objectives are generally complementary. However, under circumstances in which the Committee judges that the objectives are not complementary, it follows a balanced approach in promoting them, taking into account the magnitude of the deviations and the potentially different time horizons over which employment and inflation are projected to return to levels judged consistent with its mandate.

I will leave it to the reader to determine whether this clarifies or obfuscates the stance of the FOMC.

Despite the widespread knowledge of the dual mandate and despite the fact that the Federal Reserve has been a bit more forthcoming about an explicit target associated with its mandate, those evaluating Fed policy are stuck relying on other indicators of the stance of policy. In other words, since the Federal Reserve still does not have an explicit target that we can look at to evaluate policy, economists have sought other ways to do it.

John Taylor has chosen to think about policy in terms of the Taylor Rule. He views the Fed as adjusting the monetary base to set the federal funds rate consistent with the Taylor Rule, which has been shown to produce low variability in inflation and output around targets. Empirical evidence exists that shows that when the Federal Reserve has conducted policy broadly consistent with its mandate, the behavior of the federal funds rate looks as it would under the Taylor Rule. As a result, the Taylor Rule becomes a guide for policy in the absence of explicit targets. But even this guidance is only guidance with respect to an intermediate variable.
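For reference, the canonical Taylor (1993) rule can be stated in a few lines. The coefficients of 0.5 and the 2% equilibrium real rate are Taylor’s original choices; this is the textbook version, not the Fed’s actual reaction function:

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Taylor (1993): i = r* + pi + 0.5*(pi - pi*) + 0.5*gap, in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

print(taylor_rule(2.0, 0.0))   # 4.0: at target inflation with a closed gap
print(taylor_rule(1.0, -2.0))  # 1.5: below-target inflation and slack call for easing
```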

Scott Sumner has chosen to think about policy in terms of nominal GDP. This follows from a quantity theoretic view of the world. If the central bank promotes stable nominal GDP growth, then inflation expectations will be stable and the price mechanism will function efficiently. In addition, the central bank will respond only to the types of shocks that it can correct. Stable nominal GDP therefore implies low inflation and stable employment. My own research suggests that the Federal Reserve conducted monetary policy as if it were stabilizing nominal GDP growth during the Great Moderation. But even using nominal GDP as a guide is limited in the sense that this is not an official target of the Federal Reserve, so a deviation of nominal GDP from trend (even if it is suboptimal) might be consistent with the Federal Reserve’s official targets since those targets are essentially unknown.

Nonetheless, the development of these different approaches (and others) was the necessary outgrowth of the desire to understand and evaluate monetary policy. Such an approach is only necessary when the central bank has broad goals without explicit targets and without an explicit description of how they are going to achieve those targets.

We therefore end up back at Tobin’s original questions. How do we know when policy is too loose? Or too tight?

During the Great Inflation and the Great Moderation, both the Taylor Rule and stable growth in nominal GDP provided a good way to evaluate policy. During the Great Inflation, both evaluation methods suggest that policy was too loose (although this is less clear for the Taylor Rule when using real-time data). During the Great Moderation, both evaluation methods suggest that policy was conducted well. What is particularly problematic, however, is that in the most recent period, since 2007, the Taylor Rule and nominal GDP have given opposite conclusions about the stance of monetary policy. This has further clouded the discussion surrounding policy because advocates of each approach can point to historical evidence as supportive of their approach.

With an explicit target, evaluating the stance of policy would be simple. If the FOMC adopted a 2% inflation target (and nothing else), then whenever inflation was above 2% (give or take some measurement error), policy would be deemed too loose. Whenever inflation was below 2%, policy would be deemed too tight. Since neither the federal funds rate prescribed by the Taylor Rule nor nominal GDP is an official target of the Fed, it is not immediately obvious how to judge the stance of policy based solely on these criteria. (And what is optimal is another issue.)

If we want to have a better understanding of monetary policy, we need to emphasize the proper monetary semantics. First, we as economists need to use consistent language regarding instruments, targets, and intermediate variables. No more referring to the federal funds rate as the instrument of policy. Second, the Federal Reserve’s mandate needs to be modified to have only one target for monetary policy. If the mandate itself does not specify one, the Federal Reserve needs to provide a specific numerical goal for this target variable and then describe how it is going to use its instrument to achieve that goal. Taking monetary semantics seriously is about more than language; it is about creating clear guidelines and accountability at the FOMC.

* One could argue that the Federal Reserve now has two instruments, the interest rate on reserves and the monetary base. While they do have direct control over these things, it is also important to remember that these variables must be compatible in equilibrium.

Some Thoughts on Cryptocurrencies and the Block Chain

Much of the discussion about cryptocurrencies has naturally centered around Bitcoin. Also, this discussion has been particularly focused on the role of Bitcoin as an alternative currency. However, I think that the most important aspect of Bitcoin (and cryptocurrencies more generally) is not necessarily the alternative currency arrangement, but the block chain. It seems to me that the future viability of cryptocurrencies themselves is not as an alternative to existing currencies, but as assets that are redeemable in a particular currency with payments settled much more efficiently using block chain technology.

For those who know little about cryptocurrencies, the block chain can be understood as follows. A block chain is a data store: a database distributed across a computer network, with the information stored on multiple nodes. In the case of a cryptocurrency such as Bitcoin, the block chain is used as a ledger of all transactions. Since every node has access to the block chain, there is no need for any centralized record-keeper or database. Transactions that are carried out using Bitcoin have to be verified by the nodes. A successful transaction is then added to the block chain. Individuals using the system must therefore have balances of the cryptocurrency recorded on the transaction ledger in order to transfer these balances to someone else. In addition, once the nodes verify the transferred balance, the transaction is time-stamped. This avoids scenarios in which people try to double-spend a given balance.
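
A minimal sketch of this ledger idea might look like the following. This is my own toy illustration, not Bitcoin's actual protocol, which adds proof-of-work mining, Merkle trees, and digital signatures on top of the basic structure:

```python
import hashlib
import json
import time

class Ledger:
    """Toy block chain ledger: an append-only, hash-linked transaction record."""

    def __init__(self):
        self.chain = []     # the block chain itself
        self.balances = {}  # balances implied by the recorded transaction history

    def _hash(self, block):
        # Each block is identified by a hash of its contents, which include the
        # previous block's hash; this is what chains the blocks together and
        # makes the recorded history tamper-evident.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_block(self, transactions):
        # "Verification": reject any transfer that would double-spend a balance
        # not recorded on the ledger. ("COINBASE" stands in for new issuance.)
        for sender, receiver, amount in transactions:
            if sender != "COINBASE" and self.balances.get(sender, 0) < amount:
                raise ValueError(f"{sender} lacks the balance to spend {amount}")
        block = {
            "index": len(self.chain),
            "timestamp": time.time(),  # time-stamping orders the transactions
            "transactions": transactions,
            "prev_hash": self._hash(self.chain[-1]) if self.chain else "0" * 64,
        }
        self.chain.append(block)
        for sender, receiver, amount in transactions:
            if sender != "COINBASE":
                self.balances[sender] -= amount
            self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return block
```

Because every node holds a copy of `chain`, any attempt to spend the same balance twice fails verification against the recorded history rather than against a central record-keeper.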

This technology is what creates value for Bitcoin. One explanation for why money exists is that people cannot commit to future actions. This lack of commitment makes credit infeasible. Money is an alternative means of carrying out exchange because money is a record-keeping device. The block chain associated with Bitcoin is quite literally a record-keeping device. It has value because it provides a record of transactions. In addition, this simplifies the settlement process and therefore reduces the cost of transfers and settlement.

The benefit of using Bitcoin is thus the value of the record-keeping system, or the block chain. However, in order to be able to benefit from the use of the block chain, you need to have Bitcoins. This is problematic since there are a number of reasons that you might not want Bitcoins. For example, maybe you are perfectly happy with dollars or perhaps you’ve noticed that there are not a whole lot of places willing to accept Bitcoins just yet. Also, you might have noticed that the exchange rate between Bitcoins and dollars is quite volatile.

So if you are unwilling to trade your dollars for Bitcoins, then you don’t have access to the block chain and cannot take advantage of the more efficient settlement. This, it seems to me, is a critical flaw with Bitcoin.

Nonetheless, the technology embodied in Bitcoin is available to all and can therefore be adapted in other ways. Thus, the critical flaw in Bitcoin is not a critical flaw for cryptocurrencies more generally. The value of these cryptocurrencies is in the block chain, and the true value of the block chain lies in figuring out how to use this technology to make transactions and banking better and more efficient. There are two particular alternatives that I think are on the right track: NuBits and Ripple.

Think back to pre-central banking days. Prior to central banks, there were individual banks that each issued their own notes. Each bank agreed to redeem its bank notes for a particular commodity, often gold or silver. Bank notes were priced in terms of that commodity. In other words, one dollar would be defined as a particular quantity of gold or silver. This therefore implied that the price of the commodity was fixed in terms of the dollar. In order to maintain this exchange rate, the bank had to make sure not to issue too many bank notes. If the bank issued too many notes, it would see a wave of redemptions, which would reduce its reserves of the commodity. In order to prevent losses of reserves, the bank would therefore have an incentive to reduce the notes in circulation. The peg to the commodity therefore provided an anchor for the value of the bank notes and represented a natural mechanism to prevent over-issuance. Thus, fluctuations in the value of the bank notes tended to result from changes in the relative value of gold. (The U.S. experience was actually much different: due to the existence of a unit banking system, notes often didn’t trade at par. Let’s ignore that for now.)

The way that NuBits works is a lot like the way these old banks worked (without the lending – we’ll have to get to that in a different post). The NuBits system consists of those who own NuShares and those who own NuBits. Those who own NuShares are like equity owners in the system, whereas those who own NuBits are like holders of bank notes. The NuBits are redeemable in terms of U.S. dollars. In particular, one dollar is equal to one NuBit. If I own a NuBit, I can redeem that NuBit for one dollar. So how does NuBits manage to do this when Bitcoin clearly experiences a volatile exchange rate? It does so by putting trust in the equity owners. Owners of NuShares have an incentive to maintain the stability of this exchange rate. If nobody is willing to use NuBits, then there is little value of ownership in the protocol and the shares will have little, if any, value. Thus, the NuBits system provides an incentive for NuShares holders to maintain the stability of the exchange rate and gives these shareholders the ability to do so. For example, if the demand for NuBits falls, this will show up as a wave of redemptions, a signal that there are too many NuBits in circulation. In order to maintain the exchange rate, NuShares holders have an incentive to reduce the quantity of NuBits in circulation. They can do this by parking some of the NuBits (i.e., preventing people from using NuBits in transactions). This is not done forcibly, but rather by offering interest to those who are willing to forgo engaging in transactions. Similarly, if there is an increase in demand, then new NuBits can be created.
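
The parking mechanism can be sketched as a simple supply-adjustment rule. This is my own stylization of the idea, not the actual Nu protocol, which implements parking through shareholder-voted interest rates:

```python
def adjust_supply(circulating, demand, parked=0.0):
    """Toy sketch of a NuBits-style peg: the $1-per-NuBit peg holds when the
    circulating supply matches demand, so supply adjusts toward demand."""
    if demand < circulating:
        # A wave of redemptions signals over-issuance: offer interest to
        # "park" the excess, taking those NuBits out of circulation.
        to_park = circulating - demand
        parked += to_park
        circulating -= to_park
    elif demand > circulating:
        # Rising demand: release parked NuBits first, then create new ones.
        shortfall = demand - circulating
        parked -= min(parked, shortfall)
        circulating += shortfall
    return circulating, parked
```

The shareholders' incentive to run this rule is that the equity in the system is only valuable while the peg holds.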

But while NuBits has solved the volatility problem in a unique and interesting way, it still suffers from the same problem as Bitcoin. In order to benefit from the technology, you need to hold NuBits, and there is perhaps even less of an incentive to hold NuBits since it is much harder to use them for normal transactions. Until cryptocurrencies like NuBits can be used in regular, everyday transactions, there is little incentive to hold them. NuBits thus gets part of the way to where the technology needs to go, but still suffers from a problem similar to Bitcoin’s.

This brings me to Ripple. Ripple is a much different system. With Ripple, one can set up an account using dollars, euros, Bitcoins, or even Ripple’s own cryptocurrency. One can then transfer funds using block chain technology, but the transfers do not have to take place using Ripple’s cryptocurrency or Bitcoin. In other words, I can transfer dollars or euros just like I would transfer cryptocurrencies in other systems. I can do this by setting up an account and transferring the funds to another person with an account through an update to the public ledger that is distributed across the nodes of the system. This streamlines the payment process without the need to adopt a particular cryptocurrency. One can even use dollars to pay someone in euros. The transaction is carried out by finding traders on the system who are willing to trade dollars for euros and then transferring the euros to the desired party. This service seems to me to be immediately more valuable than any other service in this space.
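
A toy version of that dollars-for-euros payment might look like this. It is my stylization of the idea only; Ripple's actual path-finding and trust-line mechanics are considerably more involved:

```python
def cross_currency_transfer(balances, offers, sender, receiver, usd_amount):
    """Settle a USD -> EUR payment on a shared ledger by matching a trader.

    `offers` is a list of (trader, eur_per_usd_rate, eur_available) tuples:
    parties on the system willing to sell euros for dollars.
    """
    for trader, rate, eur_available in offers:
        eur_needed = usd_amount * rate
        if eur_available >= eur_needed and balances[sender]["USD"] >= usd_amount:
            # One ledger update settles all three legs at once: sender pays
            # dollars to the trader, and the trader's euros go to the receiver.
            balances[sender]["USD"] -= usd_amount
            balances[trader]["USD"] = balances[trader].get("USD", 0) + usd_amount
            balances[trader]["EUR"] -= eur_needed
            balances[receiver]["EUR"] = balances[receiver].get("EUR", 0) + eur_needed
            return eur_needed
    raise ValueError("no trader can fill the order")
```

The point of the sketch is that neither the sender nor the receiver ever has to hold a cryptocurrency; the shared ledger simply records the exchange and the transfer in one step.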

So where do I see this going?

Suppose that you are Citibank or JP Morgan Chase. You could actually combine the types of services that are offered by NuBits and Ripple. You have the deposit infrastructure and are already offering online payment systems for bill paying and peer-to-peer exchanges. The major banks have two possible options. First, they could offer JP Morgan Bits (or whatever you want to call them) and have them redeemable 1-for-1 with the dollar. They could then partner with retailers (both online and brick and mortar) to offer a service in which JP Morgan deposit holders could carry around something akin to a debit card or even an app on their phone that allowed them to transact by transferring the JP Morgan Bits from the individual to the firm, charging a very small fee for the transfer. They could partner with firms for online bill paying as well. Alternatively, they could skip the issuance of their own “bank bits” and simply use their block chain to transfer dollar balances and other existing currencies. Whether or not the banks decide to have their own cryptocurrency for settling payments would be determined by whether there are advantages to developing brand loyalty and/or whether retailers saw this as a way to generate greater competition for cheaper payments while maintaining the stability of purchasing power with the “bank bits.”

The basic point here is that banks could see a profit opportunity by eliminating the middleman and transferring funds between customers using the block chain. The payments would be faster and cheaper. In addition, it would provide retailers with much better protection from fraud.

Citibank is apparently already exploring the possibilities, developing a block chain with a cryptocurrency called “Citicoin.”

Regardless of what ultimately happens, it is an interesting time to be a monetary economist.

Bitcoin Papers

My paper with Thomas Hogan and Will Luther, “The Political Economy of Bitcoin,” is now forthcoming in Economic Inquiry. A working paper version can still be found here.

Also, on the same topic, Aaron Yelowitz was kind enough to send me his recent paper that uses Google Trends data to identify the characteristics of Bitcoin users. A link to that paper can be found here.