
On What Monetarism Really Is/Was

Paul Krugman has a recent post on why monetarism failed, and a number of economics bloggers have since replied with their views on monetarism. I don’t have time to summarize all of the viewpoints espoused in these posts, but a fundamental problem throughout them is that each author’s description of monetarism seems to be merely his or her opinion about the distinct characteristics of monetarism. Many of these opinions do not provide anyone with more than a surface-level view of monetarism (i.e., something one might find in a principles or intermediate macro textbook).

In reality, Old Monetarists not only had views on money and inflation, but also had important views on the monetary transmission mechanism. The role that Old Monetarists saw for money was much more nuanced than the crude quantity theory vision that is often attributed to them. On this note, it is probably more valuable to look to the academic literature that attempts to summarize these ideas and put them into context for a modern reader.

A good place to start for anyone interested in Old Monetarist ideas is the work of Ed Nelson. Nelson is someone who has spent his career studying these ideas and trying to test their importance within modern macroeconomic frameworks. He is also currently working on a book about Milton Friedman’s influence on the monetary policy debate in the United States. To get a sense of what Old Monetarists really believed and why those ideas are relevant, I would recommend Nelson’s 2003 JME paper “The Future of Monetary Aggregates in Monetary Policy Analysis.” Here is the abstract:

This paper considers the role of monetary aggregates in modern macroeconomic models of the New Keynesian type. The focus is on possible developments of these models that are suggested by the monetarist literature, and that in addition seem justified empirically. Both the relation between money and inflation, and between money and aggregate demand, are considered. Regarding the first relation, it is argued that both the mean and the dynamics of inflation in present-day models are governed by money growth. This relationship arises from a conventional aggregate-demand channel; claims that an emphasis on the link between monetary aggregates and inflation requires a direct channel connecting money and inflation, are wide of the mark. The relevance of money for aggregate demand, in turn, lies not via real balance effects (or any other justification for money in the IS equation), but on money’s ability to serve as a proxy for the various substitution effects of monetary policy that exist when many asset prices matter for aggregate demand. This role for monetary aggregates, which is supported by empirical evidence, enhances the value of money to monetary policy.

Here is the working paper version that is not behind a paywall.

On Public Infrastructure Investment

There are two popular narratives about our infrastructure in the United States. The first is that our infrastructure is crumbling. The second is that our infrastructure spending is allocated based on its political value rather than its economic value. Maybe you believe one of these stories. Maybe you believe both. Maybe you believe neither. Regardless, these narratives are indicative of two important questions. How can we efficiently manage our public infrastructure? And how can we ensure that infrastructure investment isn’t used as a political tool? I have a new paper that proposes an answer to both questions. My proposal is to create a rule of law for public infrastructure based on option values. This rule of law would ensure that infrastructure is maintained efficiently and also that politicians would not be able to use infrastructure spending as a political tool.

The standard way to evaluate public infrastructure projects is to figure out the benefits of the infrastructure over its entire lifespan and then compute the present value of those benefits. Then you do the same thing with the costs. When you subtract the present value of the costs from the present value of the benefits you get something called a net present value. Infrastructure investments are evaluated using a positive net present value criterion. In other words, as long as the present discounted value of the benefits exceeds the present discounted value of the costs, the project is desirable.
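As a simple illustration of the net present value criterion (a minimal sketch with invented numbers, not figures from any actual project), suppose a project costs $100 million today and yields $8 million in benefits each year for 30 years:

def present_value(flows, rate):
    # Discount a list of annual flows; flows[0] arrives one year from now
    return sum(f / (1 + rate) ** (t + 1) for t, f in enumerate(flows))

benefits = [8.0] * 30                       # $8 million of benefits per year for 30 years
cost = 100.0                                # $100 million construction cost paid today
npv = present_value(benefits, 0.05) - cost  # discount at 5 percent
print(round(npv, 1))                        # roughly 23.0 > 0, so the project passes the NPV test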

In theory the net present value approach seems like a good idea. Of course we would want the benefits to outweigh the costs. This approach, however, is quite different from how a private firm would value its assets. A firm that owns a factory knows that the factory can eventually become outdated. To the firm, the value of the factory is the sum of two components. The first component is the value of the factory to the firm if the firm never shuts down the factory or builds a new one. The second component is the value of the option to build a new factory or add to the current factory’s capacity in the future.

The same general concept is true of public infrastructure investment. The value of any existing infrastructure is the value of the infrastructure over its entire lifetime plus the option value of replacing that infrastructure in the future. This option value is associated with a tradeoff. Since infrastructure depreciates over time, the value from existing infrastructure is declining. This means that as time goes by, the opportunity cost of replacing the infrastructure declines and therefore the option value of replacing the infrastructure rises. However, the longer the government waits to replace the infrastructure, the longer society has to wait to receive the benefit of replacement. This reduces the option value. My proposal suggests that the government should choose the value of the current infrastructure that optimally balances this tradeoff. What this ultimately implies is that the government should wait until the value of the current infrastructure is some fraction of the net present value of the proposed replacement project.
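To restate the timing rule schematically (this is a symbolic restatement of the idea, not the precise formula derived in the paper), the prescription is to replace the existing infrastructure at the first date t at which

V_{existing}(t) \leq \beta \, NPV_{replacement}, \qquad 0 < \beta < 1

where \beta is a fraction whose value depends on the discount rate, the depreciation rate, and the other parameters of the problem.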

The reason that this option approach is preferable to the net present value approach is as follows. First, even though a current project has a positive net present value, this does not necessarily imply that now is the optimal time to undertake the project. Replacing infrastructure entails an opportunity cost associated with the foregone benefit that society would have received from the existing infrastructure. In other words, society might get greater value from the project if the government chooses to wait a little longer before replacing what is currently there. Second, my approach provides a precise moment at which it is optimal to replace the infrastructure. In contrast, the net present value approach says nothing about optimality; it’s simply a cost-benefit analysis. Given the possibility that society could get an even larger benefit in the future, the option approach should be strictly preferred. Third, the option approach provides an explicit way for the government to maintain an infrastructure fund. In my paper I provide a simple formula for computing the amount of money that needs to be in the fund. This formula is simple; it only needs to take into account the cost of each project and the relative distance that project is from its replacement threshold. This sort of fund is important because it would also allow the government to continue funding infrastructure projects at the optimal time even during a recession when infrastructure budgets, especially at the local level, are often cut.

The final and most significant benefit of my approach, however, is that it would provide the means for establishing a rule of law for public infrastructure projects. This rule of law should appeal to people across the ideological spectrum. I say that for the following reasons. First, if the government adopted this option value approach as a rule of law, this would require that the government fund any and all infrastructure projects that had reached their replacement thresholds. This would ensure that the infrastructure in the United States was maintained efficiently. Second, because the only projects that would receive funding would be those that had reached the replacement threshold, politicians would not be able to use infrastructure spending as a tool for reelection or repayment to supporters. As a result, the option approach would provide the means for a rule of law for infrastructure investment that is both transparent and efficient.

Establishing such a rule of law would be difficult. The politicians who benefit from allocating infrastructure investment for political reasons are the same ones who would have to vote on the legislation to enact this new rule. Nonetheless, there is evidence that politicians vote in favor of infrastructure projects that benefit their constituents, but vote against aggregate investment. If the group of politicians that benefits most from this state of affairs is small, then the legislation might be easier to pass. In addition, there is nothing to stop departments of transportation at both the state and federal level from calculating option values and making the data available to the public. This greater transparency, while not a rule of law, would at least be a step in the direction of more efficient management of our public infrastructure.

The Importance of Safe Assets

A theme you often hear among bloggers, but a bit less so in seminars, is the idea that the supply of and demand for safe assets matter. David Beckworth is one such blogger who talks about this, but critics often find it hard to think about the macroeconomy in these terms since the role of money has been marginalized within the New Keynesian wing of macroeconomics. I say this because David’s intuitive explanation of safe asset equilibrium seems to be a cross between New Keynesian intuition and Old Monetarist intuition. He is trying to communicate his message to what is essentially the mainstream of the discipline, but by emphasizing something that isn’t generally in their models.

Along these lines, I was happy to stumble upon this paper by Caballero, Farhi, and Gourinchas. In my view the paper formalizes something quite similar to David’s views regarding safe assets and monetary policy, so I thought it might be interesting to outline the basic model in the paper and talk about the mechanisms for monetary policy.

The model is a modified version of an IS-LM model. The one modification to the model is a supply and demand condition for safe assets. Formally, the model consists of the following three equations:

y - \bar{y} = -\delta (r - \bar{r}) - \delta_s (r^s - \bar{r}^s)
r^s = \max[\hat{r}^s + \phi(y - \bar{y}), 0]
s = \psi_y y + \psi_s r^s - \psi_{\Delta} (r - r^s)

where y is output, r is the risky interest rate, r^s is the rate on safe assets, \hat{r}^s is the target interest rate, s is the supply of safe assets, \bar{y} is the natural rate of output, \bar{r} is the natural risky interest rate, \bar{r}^s is the natural safe interest rate, and the Greek letters are parameters. Inflation is assumed to be zero such that there is no difference between real and nominal interest rates.

This is a familiar IS-LM framework in which the first equation is an IS equation, the second equation is a Taylor Rule subject to a zero lower bound, and the third equation determines the safe asset equilibrium.

The best interpretation of the safe asset equilibrium, as they describe it in the paper, is in terms of the flow of safe assets. According to this view, the flow demand for safe assets is a function of output, the rate of return on safe assets, and the risk premium (r - r^s). The supply s, in this interpretation, is the net increase in the stock of safe assets.

Given that setup, let’s see what the model can tell us.

The first assumption that they make is that the supply of safe assets is unresponsive to the risk premium. In other words, in terms of the model, \psi_{\Delta} = 0. Given that many safe assets are exogenously supplied, this seems like a reasonable assumption.

Now, let’s think about the determination of the natural rate of interest. If the central bank sets the interest rate on safe assets equal to the natural safe rate, then output will be equal to potential (essentially by definition). It then follows from the IS equation that the risky interest rate is also equal to the natural risky interest rate. But how does one determine the natural safe rate?

Consider the equilibrium condition for safe assets. The natural safe rate is the rate that clears the market for safe assets when output is equal to potential. Setting y = \bar{y} and r^s = \bar{r}^s in the safe asset equilibrium condition (with \psi_{\Delta} = 0) gives s = \psi_y \bar{y} + \psi_s \bar{r}^s, and therefore

\bar{r}^s = {{s - \psi_y \bar{y}}\over{\psi_s}}

The central bank then needs to set r^s = \hat{r}^s = \bar{r}^s.

However, suppose that the net increase in the supply of safe assets is not high enough to keep up with the demand for new safe assets. In particular, suppose that the net increase in the supply of safe assets is so low that

s < \psi_y \bar{y}

In this scenario, the natural interest rate would be negative. However, from the Taylor rule, the market rate of interest is subject to a zero lower bound. As a result, the central bank cannot set the interest rate low enough to clear the market for safe assets. So what happens? Well, the central bank sets the safe interest rate as low as it can go, r^s = 0, which implies that output is pinned down by the net increase in the supply of safe assets:

y = {{s}\over{\psi_y}}

It then follows that r > \bar{r}. In other words, the risky interest rate is “too high” and the risk premium rises. But since the risky rate of interest is higher than the natural risky rate, the IS equation implies that output must fall in order to reduce the demand for safe assets and restore equilibrium.

The policy implication is that to escape this scenario, one needs to increase the supply of safe assets. Increasing the supply of safe assets raises output toward potential and thereby reduces the risk premium.
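To make the two regimes concrete, here is a minimal numerical sketch of the model in Python (the parameter values are invented for illustration and are not taken from the paper):

def equilibrium(s, ybar=1.0, rbar=0.04, delta=1.0, delta_s=1.0, psi_y=1.0, psi_s=10.0):
    # Natural safe rate from the safe asset condition evaluated at potential output
    rbar_s = (s - psi_y * ybar) / psi_s
    if rbar_s >= 0:
        # Normal regime: the central bank can set r^s at its natural level
        return {"y": ybar, "r_s": rbar_s, "r": rbar}
    # ZLB regime: r^s = 0, so output is pinned down by the flow of safe assets
    y = s / psi_y
    # Back out the risky rate from the IS equation with r^s = 0
    r = rbar + (ybar - y + delta_s * rbar_s) / delta
    return {"y": y, "r_s": 0.0, "r": r}

print(equilibrium(s=1.2))  # ample safe assets: y = 1.0 (potential), r^s = 0.02, r = rbar
print(equilibrium(s=0.9))  # safe asset shortage: y = 0.9 < potential and r = 0.13 > rbar

The first case corresponds to an ample supply of safe assets; the second is the shortage scenario, in which output falls below potential and the risk premium rises.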

As the authors note, early attempts at quantitative easing in the United States did exactly what the model would prescribe because they swapped risky assets held by the market for safe assets. Fiscal stimulus can also help, not through any sort of production done by the public sector, but because it increases the supply of safe assets (Treasuries).

Does Monetary Policy Influence the Natural Rate?

Narayana Kocherlakota is now blogging. His most recent post concerns the equilibrium rate of interest, or the natural rate of interest as it is sometimes called. Kocherlakota argues that those who would like to see higher interest rates should stop harping on the Federal Reserve and instead write their Congressman to encourage more fiscal stimulus. I think that this view is both conventional and odd. Allow me to explain.

Consider the following simple thought experiment. Suppose that the market rate of interest targeted by the Federal Reserve, the federal funds rate, is equal to the equilibrium rate that would prevail in a perfect, frictionless world. We can think of this equilibrium rate as being the rate consistent with a consumption Euler equation. In particular, this implies that the real rate of interest is given by

Real natural interest rate = Rate of time preference + Expected Growth
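In standard notation, with CRRA utility and no uncertainty, the consumption Euler equation implies (approximately)

r^{n} \approx \rho + \sigma g^{e}

where \rho is the rate of time preference, \sigma is the inverse of the intertemporal elasticity of substitution, and g^{e} is expected consumption growth; with log utility (\sigma = 1) this is exactly the sum written above.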

Now suppose that the economy enters a recession and expected growth declines. This implies that the natural interest rate declines as well. If the central bank stands firm and does not adjust its target for the federal funds rate, then monetary policy is too tight. The market interest rate is above the natural interest rate. In a standard Wicksellian world, the fact that the market interest rate is “too high” would imply a further reduction in economic activity, which would further reduce the natural rate of interest. Again, if the central bank continues to stand firm, monetary policy actually tightens. The implication is that the central bank can passively tighten even though it hasn’t taken any action. In the pure credit economy of Wicksell, this process would continue to produce a deflationary spiral until the central bank equated the market interest rate with the natural rate.

Note this important point. In the Wicksellian model, there is an accelerationist effect: tight monetary policy actually reduces the natural rate. Thus, to get back to normalcy, the central bank needs not only to lower the market interest rate, but to lower the market rate below the natural rate. Once it does this, economic activity starts to increase and therefore so does the natural rate. The central bank then has to increase the market rate faster than the natural rate is increasing until the two ultimately converge.
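A toy simulation (my own illustration of this passive tightening, not a formal Wicksellian model) makes the point: hold the market rate fixed after a shock to the natural rate, let the natural rate respond to the gap, and the two drift further apart.

market_rate = 0.04
natural_rate = 0.04

# A negative shock lowers expected growth and hence the natural rate
natural_rate -= 0.01

for period in range(5):
    gap = market_rate - natural_rate   # policy is "too tight" when the gap is positive
    natural_rate -= 0.5 * gap          # tight policy depresses activity and the natural rate
    print(period, round(gap, 4), round(natural_rate, 4))
# The gap widens every period unless the central bank cuts the market rate below the natural rate.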

Note that this seems to be an odd way to conduct monetary policy. For example, imagine that you have a bow and arrow and there is some target in the distance. Suppose that every time you move the arrow to adjust your aim, the target moves as well. Nonetheless, this is the basic concept behind the Wicksellian model.

Kocherlakota argues in his post that the natural rate of interest is too low and that the market interest rate cannot get low enough to accomplish the task described above to correct for previously tight monetary policy. As a result, we need our Congressmen to go out and pass legislation that will get the economy moving and raise the natural interest rate toward the market interest rate.

I find this view strange for several reasons. First, in a Wicksellian framework, if the natural interest rate is below the market rate, this results in a deflationary spiral. Since this seems to be Kocherlakota’s model of choice, how does he explain the economic recovery? Second, standard economic theory suggests that the natural interest rate is the sum of the rate of time preference and expected growth. Real GDP growth (and expected real GDP growth) has been positive for some time. Even if we ignore my first point, why hasn’t this growth led to an increase in the natural interest rate?

My answer to these questions is that the federal funds rate essentially becomes a useless indicator at the zero lower bound. Quantitative easing is just open market operations by a different name. To demonstrate this, consider that measures of the so-called shadow federal funds rate have actually plummeted far below zero. Estimates of the shadow rate come from the framework initially described by Fischer Black in his paper “Interest Rates as Options.” In that paper, Black pointed out that the benefit of holding short term debt is that it includes an option to switch to currency if the yield ever becomes negative. What this implies, however, is that while the market interest rate can never go below zero, it is possible to estimate a shadow rate when the observed market rate hits the zero lower bound. Estimates of the shadow rate have gone as low as -3%. If we are to believe this methodology, what this says to me is that quantitative easing succeeded in doing what monetary policy was thought not to be able to do.
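In Black’s framework, the ability to hold currency puts a floor under the nominal short rate, so the observed rate is effectively an option on an unobserved shadow rate:

r^{observed}_t = \max(r^{shadow}_t, 0)

The observed rate sits at zero whenever the shadow rate is negative, which is why the shadow rate, rather than the federal funds rate itself, summarizes the stance of policy at the zero lower bound.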

One could argue perhaps that the rounds of QE did not go far enough. For example, for the central bank to produce a significant recovery, the Wicksellian model suggests that the central bank must not only reduce the market interest rate, but that they should reduce the market interest rate below the natural rate. If they simply reduce the market rate to the natural rate, then this just stops the decline in economic activity rather than providing some catch-up growth.

Regardless of whether you believe that latter claim, this post essentially makes the following point. If I am correct in saying that the shadow rate is preferable to the federal funds rate as an indicator of monetary policy, then even if you believe in the Wicksellian model, you needn’t believe that we have to rely on fiscal policy to raise the natural rate of interest. What my discussion implies is that the central bank needs only to lower the shadow interest rate below the natural rate.

Targets (Goals) Must Be Less Than or Equal to Instruments

In my most recent posts, I discussed the importance of using the proper semantics when discussing monetary policy. Central bankers should have an explicit numerical target for a goal variable. They should then describe how they are going to adjust their instrument to achieve this target, with particular reference to the intermediate variables that will provide guidance at higher frequencies. A related issue is that a central bank is limited in terms of its ultimate target (or targets) by the number of instruments it has at its disposal. This is discussed in an excellent post by Mike Belongia and Peter Ireland:

More than sixty years ago, Jan Tinbergen, a Dutch economist who shared the first Nobel Prize in Economics, derived this result: The number of goals a policymaker can pursue can be no greater than the number of instruments the policymaker can control. Traditionally, the Fed has been seen as a policy institution that has one instrument – the quantity of reserves it supplies to the banking system. More recently, the Fed may have acquired a second instrument when it received, in 2008, legislative authority to pay interest on those reserves.

Tinbergen’s constraint therefore limits the Fed to the pursuit, at most, of two independent objectives. To see the conflict between this constraint and statements made by assorted Fed officials, consider the following alternatives. If the Fed wishes to support U.S. exports by taking actions that reduce the dollar’s value, this implies a monetary easing that will increase output in the short run but lead to more inflation in the long run. Monetary ease might help reverse the stock market’s recent declines – or simply re-inflate bubbles in the eyes of those who see them. Conversely, if the Fed continues to focus on keeping inflation low, this requires a monetary tightening that will be expected, other things the same, to slow output growth, increase unemployment, and raise the dollar’s value with deleterious effects on US exports.

The Tinbergen constraint has led many economists outside the Fed to advocate that the Fed set a path for nominal GDP as its policy objective. Although this is a single variable, the balanced weights it places on output versus prices permit a central bank that targets nominal GDP to achieve modest countercyclical objectives in the short run while ensuring that inflation remains low and stable over longer horizons. But regardless of whether or not they choose this particular alternative, Federal Reserve officials need to face facts: They cannot possibly achieve all of the goals that, in their public statements, they have set for themselves.

On Monetary Semantics

My colleague, Mike Belongia, was kind enough to pass along a book entitled, “Targets and Indicators of Monetary Policy.” The book was published in 1969 and features contributions from Karl Brunner, Allan Meltzer, Anna Schwartz, James Tobin, and others. The book itself was a product of a conference held at UCLA in 1966. There are two overarching themes to the book. The first theme, which is captured implicitly by some papers and discussed explicitly by others, is the need for clarification in monetary policy discussions regarding indicator variables and target variables. The second theme is that, given these common definitions, economic theory can be used to guide policymakers regarding which variables should be used as indicators and targets. While I’m not going to summarize all of the contributions, there is one paper that I wanted to discuss because of its continued relevance today and that is Tobin’s contribution entitled, “Monetary Semantics.”

Contemporary discussions of monetary policy often begin with a misguided notion. For example, I often hear something to the effect of “the Federal Reserve has one instrument and that is the federal funds rate.” This is incorrect. The federal funds rate is not and never has been an instrument of the Federal Reserve. One might think that this is merely semantics, but this gets to broader issues about the role of monetary policy.

This point is discussed at length in Tobin’s paper. It is useful here to quote Tobin at length:

No subject is engulfed in more confusion and controversy than the measurement of monetary policy. Is it tight? Is it easy? Is it tighter than it was last month, or last year, or ten years ago? Or is it easier? Such questions receive a bewildering variety of answers from Federal Reserve officials, private bankers, financial journalists, politicians, and academic economists…The problem is not only descriptive but normative; that is, we all want an indicator of ease or tightness not just to describe what is happening, but to appraise current policy against some criterion of desirable or optimal policy.

[…]

I begin with some observations about policy making that apply not just to monetary policy, indeed not just to public policy, but quite generally. From the policy maker’s standpoint, there are three kinds of variables on which he obtains statistical or other observations: instruments, targets, and intermediate variables. Instruments are variables he controls completely himself. Targets are variables he is trying to control, that is, to cause to reach certain numerical values, or to minimize fluctuations. Intermediate variables lie in-between. Neither are they under perfect control nor are their values ends in themselves.

This quote is important in and of itself for clarifying language. However, there is a broader importance that can perhaps best be illustrated by a discussion of recent monetary policy.

In 2012, I wrote a very short paper (unpublished, but it can be found here) about one of the main problems with monetary policy in the United States. I argued in that paper that the main problem was that the Federal Reserve lacked an explicit target for monetary policy. Without an explicit target, it was impossible to determine whether monetary policy was too loose, too tight, or just right. (By the time the paper was written, the Fed had announced a 2% target for inflation.) In the paper, I pointed out that folks like Scott Sumner were saying that monetary policy was too tight because nominal GDP had fallen below trend, while people like John Taylor were arguing that monetary policy was too loose because the real federal funds rate was below the level consistent with the Taylor Rule. In case that wasn’t enough, people like Federal Reserve Bank of St. Louis President Jim Bullard claimed that monetary policy was actually just about right since inflation was near its recently announced 2% target. What was more remarkable is that if one looked at the data, all of these people were correct based on their own criteria for evaluating monetary policy. This is quite disheartening considering that these three ways of evaluating policy had been remarkably consistent in their evaluations of monetary policy in the past.

I only circulated the paper among a small group of people and much of the response that I received was something to the effect of “the Fed has a mandate to produce low inflation and full employment, it’s reasonable to think that’s how they should be evaluated.” That sort of response seems reasonable at first glance, but it ignores the main point I was trying to make. Perhaps I made the case poorly since I did not manage to convince anyone of my broader point. So I will try to clarify my position here.

All of us know the mandate of the Federal Reserve. That mandate consists of two goals (actually three if you include keeping interest rates “moderate” – no comment on that goal): stable prices and maximum employment. However, knowing the mandate doesn’t actually provide any guidance for policy. What does it actually mean to have stable prices and maximum employment? These are goals, not targets. This is like when a politician says, “I’m for improving our schools.” That’s great. I’m for million dollar salaries for economics professors with the last name Hendrickson. Without a plan, these goals are meaningless.

There is nothing wrong with the Federal Reserve having broadly defined goals, but along with these broadly defined goals there needs to be an explicit target. Also, the central bank needs a plan to achieve the target. Conceivably, this plan would outline how the Federal Reserve intends to use its instrument to achieve its target, with a description of the intermediate variables that it would use to provide guidance and ensure that its policy is successful.

The Federal Reserve has two goals, which conceivably also means that they have two targets (more on that later). So what are the Fed’s targets? According to a press release from the Federal Reserve:

The inflation rate over the longer run is primarily determined by monetary policy, and hence the Committee has the ability to specify a longer-run goal for inflation. The Committee judges that inflation at the rate of 2 percent, as measured by the annual change in the price index for personal consumption expenditures, is most consistent over the longer run with the Federal Reserve’s statutory mandate. Communicating this inflation goal clearly to the public helps keep longer-term inflation expectations firmly anchored, thereby fostering price stability and moderate long-term interest rates and enhancing the Committee’s ability to promote maximum employment in the face of significant economic disturbances.

The maximum level of employment is largely determined by nonmonetary factors that affect the structure and dynamics of the labor market. These factors may change over time and may not be directly measurable. Consequently, it would not be appropriate to specify a fixed goal for employment; rather, the Committee’s policy decisions must be informed by assessments of the maximum level of employment, recognizing that such assessments are necessarily uncertain and subject to revision.

So the Federal Reserve’s targets are 2% inflation and whatever the FOMC thinks the maximum level of employment is. This hardly clarifies the Federal Reserve’s targets. In addition, the Fed provides no guidance as to how they intend to achieve these targets.

The fact that the Federal Reserve has two goals (or one target and one goal) for policy is also problematic because the Fed only has one instrument, the monetary base (the federal funds rate is an intermediate variable).* So how can policy adjust one variable to achieve two targets? Well, it would be possible to do such a thing if the two targets had some explicit relationship. However, at times policy might have to act when these targets are not behaving in a complementary fashion with respect to the dual mandate. The Fed admits as much in the very same press release:

These objectives are generally complementary. However, under circumstances in which the Committee judges that the objectives are not complementary, it follows a balanced approach in promoting them, taking into account the magnitude of the deviations and the potentially different time horizons over which employment and inflation are projected to return to levels judged consistent with its mandate.

I will leave it to the reader to determine whether this clarifies or obfuscates the stance of the FOMC.

Despite the widespread knowledge of the dual mandate and despite the fact that the Federal Reserve has been a bit more forthcoming about an explicit target associated with its mandate, those evaluating Fed policy are stuck relying on other indicators of the stance of policy. In other words, since the Federal Reserve still does not have an explicit target that we can look at to evaluate policy, economists have sought other ways to do it.

John Taylor has chosen to think about policy in terms of the Taylor Rule. He views the Fed as adjusting the monetary base to set the federal funds rate consistent with the Taylor Rule, which has been shown to produce low variability in inflation and output around targets. Empirical evidence exists that shows that when the Federal Reserve has conducted policy broadly consistent with its mandate, the behavior of the federal funds rate looks as it would under the Taylor Rule. As a result, the Taylor Rule becomes a guide for policy in the absence of explicit targets. But even this guidance is only guidance with respect to an intermediate variable.
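For reference, Taylor’s original (1993) rule sets the federal funds rate according to

i_t = r^* + \pi_t + 0.5(\pi_t - \pi^*) + 0.5(y_t - \bar{y}_t)

where r^* is the equilibrium real rate (Taylor used 2 percent), \pi^* is the inflation target (also 2 percent), and y_t - \bar{y}_t is the output gap in percent.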

Scott Sumner has chosen to think about policy in terms of nominal GDP. This follows from a quantity theoretic view of the world. If the central bank promotes stable nominal GDP growth, then inflation expectations will be stable and the price mechanism will function efficiently. In addition, the central bank will respond only to the types of shocks that it can correct. Stable nominal GDP therefore implies low inflation and stable employment. My own research suggests that the Federal Reserve conducted monetary policy as if it were stabilizing nominal GDP growth during the Great Moderation. But even using nominal GDP as a guide is limited in the sense that this is not an official target of the Federal Reserve, so a deviation of nominal GDP from trend (even if it is suboptimal) might be consistent with the Federal Reserve’s official targets since the latter are essentially unknown.
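The quantity theoretic logic can be seen by writing the equation of exchange, MV = PY, in growth rates:

\Delta m + \Delta v = \Delta p + \Delta y

The right-hand side is nominal GDP growth, so a central bank that offsets movements in velocity with money growth stabilizes nominal GDP and, with it, the sum of inflation and real growth.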

Nonetheless, the development of these different approaches (and others) was the necessary outgrowth of the desire to understand and evaluate monetary policy. Such an approach is only necessary when the central bank has broad goals without explicit targets and without an explicit description of how they are going to achieve those targets.

We therefore end up back at Tobin’s original questions. How do we know when policy is too loose? Or too tight?

During the Great Inflation and the Great Moderation, both the Taylor Rule and stable growth in nominal GDP provided a good way to evaluate policy. During the Great Inflation, both evaluation methods suggest that policy was too loose (although this is less clear regarding the Taylor Rule with real-time data). During the Great Moderation, both evaluation methods suggest that policy was conducted well. What is particularly problematic, however, is that in the most recent period, since 2007, the Taylor Rule and nominal GDP have given opposite conclusions about the stance of monetary policy. This has further clouded the discussion surrounding policy because advocates of each approach can point to historical evidence as supportive of their approach.

With an explicit target, evaluating the stance of policy would be simple. If the FOMC adopted a 2% inflation target (and nothing else), then whenever inflation was above 2% (give or take some measurement error), policy would be deemed too loose. Whenever inflation was below 2%, policy would be deemed too tight. Since neither the federal funds rate prescribed by the Taylor Rule nor nominal GDP is an official target of the Fed, it’s not immediately obvious how to judge the stance based solely on these criteria. (And what is optimal is another issue.)

If we want to have a better understanding of monetary policy, we need to emphasize the proper monetary semantics. First, we as economists need to use consistent language regarding instruments, targets, and intermediate variables. No more referring to the federal funds rate as the instrument of policy. Second, the Federal Reserve’s mandate needs to be modified to have only one target for monetary policy. If that target is not specified in the mandate itself, the Federal Reserve needs to provide a specific numerical goal for the target variable and then describe how it is going to use its instrument to achieve that goal. Taking monetary semantics seriously is about more than language; it is about creating clear guidelines and accountability at the FOMC.

* One could argue that the Federal Reserve now has two instruments, the interest rate on reserves and the monetary base. While they do have direct control over these things, it is also important to remember that these variables must be compatible in equilibrium.

Some Thoughts on Cryptocurrencies and the Block Chain

Much of the discussion about cryptocurrencies has naturally centered on Bitcoin, and in particular on the role of Bitcoin as an alternative currency. However, I think that the most important aspect of Bitcoin (and cryptocurrencies more generally) is not the alternative currency arrangement, but the block chain. It seems to me that the future viability of cryptocurrencies is not as an alternative to existing currencies, but as assets that are redeemable in a particular currency with payments settled much more efficiently using block chain technology.

For those who know little about cryptocurrencies, the block chain can be understood as follows. A block chain is a distributed data store: a computer network in which information is stored on multiple nodes. In the case of a cryptocurrency such as Bitcoin, the block chain is used as a ledger of all transactions. Since every node has access to the block chain, there is no need for any centralized record-keeper or database. Transactions that are carried out using Bitcoin have to be verified by the nodes. A successful transaction is then added to the block chain. Individuals using the system must therefore have balances of the cryptocurrency recorded on the transaction ledger in order to transfer these balances to someone else. In addition, once the nodes verify the transferred balance, the transaction is time-stamped. This avoids scenarios in which people try to double spend a given balance.
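A toy example (a bare-bones sketch that ignores proof of work, distributed consensus, and everything else that makes Bitcoin’s actual protocol work) shows the basic record-keeping idea: each block records a transaction, a time stamp, and the hash of the previous block, so the history cannot be quietly rewritten.

import hashlib, json, time

def block_hash(block):
    # Deterministic hash of a block's contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "time": 0.0, "tx": None, "prev": "0" * 64}]  # genesis block

def record(sender, receiver, amount):
    # In a real system the nodes would first verify that the sender has the balance
    chain.append({"index": len(chain),
                  "time": time.time(),                 # time stamp deters double spending
                  "tx": {"from": sender, "to": receiver, "amount": amount},
                  "prev": block_hash(chain[-1])})      # link to the previous block

record("alice", "bob", 5)
record("bob", "carol", 2)
# Any tampering with an earlier block breaks the chain of hashes
print(all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain))))  # True

Bitcoin’s actual ledger batches many transactions per block and uses proof of work to decide which node appends the next one; the point here is only that the data structure itself is the record-keeping device.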

This technology is what creates value for Bitcoin. One explanation for why money exists is that people cannot commit to future actions. This lack of commitment makes credit infeasible. Money is an alternative means of carrying out exchange because money is a record-keeping device. The block chain associated with Bitcoin is quite literally a record-keeping device. It has value because it provides a record of transactions. In addition, it simplifies the settlement process and therefore reduces the cost of transfers and settlement.

The benefit of using Bitcoin is thus the value of the record-keeping system, or the block chain. However, in order to be able to benefit from the use of the block chain, you need to have Bitcoins. This is problematic since there are a number of reasons that you might not want Bitcoins. For example, maybe you are perfectly happy with dollars or perhaps you’ve noticed that there are not a whole lot of places willing to accept Bitcoins just yet. Also, you might have noticed that the exchange rate between Bitcoins and dollars is quite volatile.

So if you are unwilling to trade your dollars for Bitcoins, then you don’t have access to the block chain and cannot take advantage of the more efficient settlement. This, it seems to me, is a critical flaw with Bitcoin.

Nonetheless, the technology embodied in Bitcoin is available to all and can therefore be adapted in other ways. Thus, the critical flaw in Bitcoin is not a critical flaw for cryptocurrencies more generally. The value of these cryptocurrencies is in the block chain, and the true value of the block chain is in figuring out how to use this technology to make transactions and banking better and more efficient. There are two particular alternatives that I think are on the right track: NuBits and Ripple.

Think back to pre-central banking days. Prior to central banks, there were individual banks that each issued their own notes. Each bank agreed to redeem its bank notes for a particular commodity, often gold or silver. Bank notes were priced in terms of the commodity. In other words, one dollar would be defined as a particular quantity of gold or silver. This therefore implied that the price of the commodity was fixed in terms of the dollar. In order to maintain this exchange rate, the bank had to make sure not to issue too many bank notes. If the bank issued too many notes, it would see a wave of redemptions, which would reduce its reserves of the commodity. In order to prevent losses of reserves, the bank would therefore have an incentive to reduce the notes in circulation. The peg to the commodity therefore provided an anchor for the value of the bank notes and represented a natural mechanism to prevent over-issuance. Thus, fluctuations in the value of the bank notes tended to result from changes in the relative value of gold. (The U.S. experience was actually much different. Due to the existence of a unit banking system, notes often didn’t trade at par. Let’s ignore that for now.)

The way that NuBits works is a lot like the way these old banks worked (without the lending – we’ll have to get to that in a different post). The NuBits system consists of those who own NuShares and those who own NuBits. Those who own NuShares are like equity owners in the system whereas those who own NuBits are like holders of bank notes. The NuBits are redeemable in terms of U.S. dollars. In particular, one dollar is equal to one NuBit. If I own a NuBit, I can redeem that NuBit for one dollar. So how does NuBits manage to do this when Bitcoin clearly experiences a volatile exchange rate? It does so by placing trust in the equity owners. Owners of NuShares have an incentive to maintain the stability of this exchange rate. If nobody is willing to use NuBits, then there is little value to ownership in the protocol and the shares will have little, if any, value. Thus, the NuBits system provides an incentive for NuShares holders to maintain the stability of the exchange rate and gives these shareholders the ability to do so. For example, if the demand for NuBits falls, this will be seen in a wave of redemptions. This is a signal that there are too many NuBits in circulation. In order to maintain the exchange rate, NuShares holders have an incentive to reduce the quantity of NuBits in circulation. They can do this by parking some of the NuBits (i.e. preventing people from using NuBits in transactions). This is not done forcibly, but rather by offering interest to those who are willing to forgo engaging in transactions. Similarly, if there is an increase in demand then new NuBits can be created.
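A stripped-down sketch of that stabilization logic (my own illustration of the mechanism described above, not the actual Nu protocol) might look like this:

def shareholder_response(nubit_price, circulating_supply):
    # NuShares holders act to keep the NuBit price pegged at one dollar
    if nubit_price < 1.0:
        # Redemptions signal excess supply: offer parking interest to pull NuBits out of circulation
        circulating_supply *= 0.95
        action = "offer parking interest"
    elif nubit_price > 1.0:
        # Excess demand: create new NuBits
        circulating_supply *= 1.05
        action = "issue new NuBits"
    else:
        action = "do nothing"
    return action, circulating_supply

print(shareholder_response(0.98, 1_000_000))  # ('offer parking interest', 950000.0)
print(shareholder_response(1.02, 1_000_000))  # ('issue new NuBits', 1050000.0)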

But while NuBits has solved the volatility problem in a unique and interesting way, it still suffers from the same problem as Bitcoin. In order to benefit from the technology, you need to hold NuBits, and there is perhaps even less of an incentive to hold NuBits since it is much harder to use them for normal transactions. Until cryptocurrencies like NuBits can be used in regular everyday transactions, there is little incentive to hold them. Thus, NuBits gets part of the way to where the technology needs to go, but still suffers from a problem similar to Bitcoin’s.

This brings me to Ripple. Ripple is a much different system. With Ripple, one can set up an account using dollars, euros, Bitcoins, or even Ripple’s own cryptocurrency. One can then transfer funds using block chain technology, but the transfers do not have to take place using Ripple’s cryptocurrency or Bitcoin. In other words, I can transfer dollars or euros just like I transfer cryptocurrencies in other systems. I do this by setting up an account and transferring the funds to another person with an account through an update to the public ledger that is distributed across the nodes of the system. This streamlines the payment process without the need to adopt a particular cryptocurrency. One can even use dollars to pay someone in euros. The transaction is carried out by finding traders on the system who are willing to trade dollars for euros and then transferring the euros to the desired party. This service seems to be immediately more valuable than any other service in this space.

So where do I see this going?

Suppose that you are Citibank or JP Morgan Chase. You could actually combine the types of services that are offered by NuBits and Ripple. You have the deposit infrastructure and are already offering online payment systems for bill paying and peer-to-peer exchanges. The major banks have two possible approaches. First, they could offer JP Morgan Bits (or whatever you want to call them) and have them redeemable 1-for-1 with the dollar. They could then partner with retailers (both online and brick-and-mortar) to offer a service in which JP Morgan deposit holders could carry around something akin to a debit card, or even an app on their phone, that allowed them to transact by transferring the JP Morgan Bits from the individual to the firm, charging a very small fee for the transfer. They could partner with firms for online bill paying as well. Alternatively, they could skip the issuance of their own “bank bits” and simply use their block chain to transfer dollar balances and other existing currencies. Whether or not the banks decide to have their own cryptocurrency for settling payments would be determined by whether there are advantages to developing brand loyalty and/or whether retailers saw this as a way to generate greater competition for cheaper payments while maintaining the stability of purchasing power with the “bank bits.”

The basic point here is that banks could see a profit opportunity by eliminating the middleman and transferring funds between customers using the block chain. The payments would be faster and cheaper. In addition, it would provide retailers with much better protection from fraud.

Citibank is apparently already exploring the possibilities, developing a block chain with a cryptocurrency called “Citicoin.”

Regardless of what ultimately happens, it is an interesting time to be a monetary economist.