Targets (Goals) Must Be Less Than or Equal to Instruments

In my most recent posts, I discussed the importance of using the proper semantics when discussing monetary policy. Central bankers should have an explicit numerical target for a goal variable. They should then describe how they are going to adjust their instrument to achieve this target, with particular reference to the intermediate variables that will provide guidance at higher frequencies. A related issue is that a central bank is limited in terms of its ultimate target (or targets) by the number of instruments it has at its disposal. This is discussed in an excellent post by Mike Belongia and Peter Ireland:

More than sixty years ago, Jan Tinbergen, a Dutch economist who shared the first Nobel Prize in Economics, derived this result: The number of goals a policymaker can pursue can be no greater than the number of instruments the policymaker can control. Traditionally, the Fed has been seen as a policy institution that has one instrument – the quantity of reserves it supplies to the banking system. More recently, the Fed may have acquired a second instrument when it received, in 2008, legislative authority to pay interest on those reserves.

Tinbergen’s constraint therefore limits the Fed to the pursuit, at most, of two independent objectives. To see the conflict between this constraint and statements made by assorted Fed officials, consider the following alternatives. If the Fed wishes to support U.S. exports by taking actions that reduce the dollar’s value, this implies a monetary easing that will increase output in the short run but lead to more inflation in the long run. Monetary ease might help reverse the stock market’s recent declines – or simply re-inflate bubbles in the eyes of those who see them. Conversely, if the Fed continues to focus on keeping inflation low, this requires a monetary tightening that will be expected, other things the same, to slow output growth, increase unemployment, and raise the dollar’s value with deleterious effects on US exports.

The Tinbergen constraint has led many economists outside the Fed to advocate that the Fed set a path for nominal GDP as its policy objective. Although this is a single variable, the balanced weights it places on output versus prices permit a central bank that targets nominal GDP to achieve modest countercyclical objectives in the short run while ensuring that inflation remains low and stable over longer horizons. But regardless of whether or not they choose this particular alternative, Federal Reserve officials need to face facts: They cannot possibly achieve all of the goals that, in their public statements, they have set for themselves.
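To fix ideas, here is a minimal sketch of the counting argument behind Tinbergen's result in the static, linear case (my own formalization of the textbook version, not Belongia and Ireland's):

```latex
% Instruments x in R^k, targets y in R^m, and a linear policy
% transmission mechanism, with u collecting everything outside the
% policymaker's control:
\[
  y = A x + u, \qquad A \in \mathbb{R}^{m \times k}.
\]
% Hitting an arbitrary vector of desired values y^* requires solving
\[
  A x = y^{*} - u,
\]
% which has a solution for every y^* only if \operatorname{rank}(A) = m.
% Since \operatorname{rank}(A) \le \min(m,k), this requires m \le k:
% the number of independent targets can be no greater than the number
% of instruments.
```

With one instrument (the quantity of reserves) the Fed can hit one target; with interest on reserves as a genuine second instrument, it can hit at most two.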

On Monetary Semantics

My colleague, Mike Belongia, was kind enough to pass along a book entitled, “Targets and Indicators of Monetary Policy.” The book was published in 1969 and features contributions from Karl Brunner, Allan Meltzer, Anna Schwartz, James Tobin, and others. The book itself was a product of a conference held at UCLA in 1966. There are two overarching themes to the book. The first theme, which is captured implicitly by some papers and discussed explicitly by others, is the need for clarification in monetary policy discussions regarding indicator variables and target variables. The second theme is that, given these common definitions, economic theory can be used to guide policymakers regarding what variables should be used as indicators and targets. While I’m not going to summarize all of the contributions, there is one paper that I want to discuss because of its continued relevance today, and that is Tobin’s contribution entitled, “Monetary Semantics.”

Contemporary discussions of monetary policy often begin with a misguided notion. For example, I often hear something to the effect of “the Federal Reserve has one instrument and that is the federal funds rate.” This is incorrect. The federal funds rate is not and never has been an instrument of the Federal Reserve. One might think that this is merely semantics, but this gets to broader issues about the role of monetary policy.

This point is discussed at length in Tobin’s paper. It is useful here to quote Tobin at length:

No subject is engulfed in more confusion and controversy than the measurement of monetary policy. Is it tight? Is it easy? Is it tighter than it was last month, or last year, or ten years ago? Or is it easier? Such questions receive a bewildering variety of answers from Federal Reserve officials, private bankers, financial journalists, politicians, and academic economists…The problem is not only descriptive but normative; that is, we all want an indicator of ease or tightness not just to describe what is happening, but to appraise current policy against some criterion of desirable or optimal policy.


I begin with some observations about policy making that apply not just to monetary policy, indeed not just to public policy, but quite generally. From the policy maker’s standpoint, there are three kinds of variables on which he obtains statistical or other observations: instruments, targets, and intermediate variables. Instruments are variables he controls completely himself. Targets are variables he is trying to control, that is, to cause to reach certain numerical values, or to minimize fluctuations. Intermediate variables lie in-between. Neither are they under perfect control nor are their values ends in themselves.

This quote is important in and of itself for clarifying language. However, there is a broader importance that can perhaps best be illustrated by a discussion of recent monetary policy.

In 2012, I wrote a very short paper (unpublished, but it can be found here) about one of the main problems with monetary policy in the United States. I argued in that paper that the main problem was that the Federal Reserve lacked an explicit target for monetary policy. Without an explicit target, it was impossible to determine whether monetary policy was too loose, too tight, or just right. (By the time the paper was written, the Fed had announced a 2% target for inflation.) In the paper, I pointed out that folks like Scott Sumner were saying that monetary policy was too tight because nominal GDP had fallen below trend, while people like John Taylor were arguing that monetary policy was too loose because the real federal funds rate was below the level consistent with the Taylor Rule. In case that wasn’t enough, people like Federal Reserve Bank of St. Louis President Jim Bullard claimed that monetary policy was actually just about right since inflation was near its recently announced 2% target. What was more remarkable was that, if one looked at the data, all of these people were correct based on their own criteria for evaluating monetary policy. This is quite disheartening considering that these three ways of evaluating monetary policy had been remarkably consistent with one another in the past.

I only circulated the paper among a small group of people, and much of the response I received was something to the effect of “the Fed has a mandate to produce low inflation and full employment; it’s reasonable to think that’s how they should be evaluated.” That sort of response seems reasonable at first glance, but it ignores the main point I was trying to make. Perhaps I made the case poorly, since I did not manage to convince anyone of my broader point. So I will try to clarify my position here.

All of us know the mandate of the Federal Reserve. That mandate consists of two goals (actually three if you include keeping interest rates “moderate” – no comment on that goal): stable prices and maximum employment. However, knowing the mandate doesn’t actually provide any guidance for policy. What does it actually mean to have stable prices and maximum employment? These are goals, not targets. This is like when a politician says, “I’m for improving our schools.” That’s great. I’m for million dollar salaries for economics professors with the last name Hendrickson. Without a plan, these goals are meaningless.

There is nothing wrong with the Federal Reserve having broadly defined goals, but along with these broadly defined goals there needs to be an explicit target. The central bank also needs a plan to achieve the target. Conceivably, this plan would outline how the Federal Reserve intends to use its instrument to achieve its target, with a description of the intermediate variables it would use to provide guidance and ensure that its policy is successful.

The Federal Reserve has two goals, which conceivably also means that they have two targets (more on that later). So what are the Fed’s targets? According to a press release from the Federal Reserve:

The inflation rate over the longer run is primarily determined by monetary policy, and hence the Committee has the ability to specify a longer-run goal for inflation. The Committee judges that inflation at the rate of 2 percent, as measured by the annual change in the price index for personal consumption expenditures, is most consistent over the longer run with the Federal Reserve’s statutory mandate. Communicating this inflation goal clearly to the public helps keep longer-term inflation expectations firmly anchored, thereby fostering price stability and moderate long-term interest rates and enhancing the Committee’s ability to promote maximum employment in the face of significant economic disturbances.

The maximum level of employment is largely determined by nonmonetary factors that affect the structure and dynamics of the labor market. These factors may change over time and may not be directly measurable. Consequently, it would not be appropriate to specify a fixed goal for employment; rather, the Committee’s policy decisions must be informed by assessments of the maximum level of employment, recognizing that such assessments are necessarily uncertain and subject to revision.

So the Federal Reserve’s targets are 2% inflation and whatever the FOMC thinks the maximum level of employment is. This hardly clarifies the Federal Reserve’s targets. In addition, the Fed provides no guidance as to how they intend to achieve these targets.

The fact that the Federal Reserve has two goals (or one target and one goal) for policy is also problematic because the Fed only has one instrument, the monetary base (the federal funds rate is an intermediate variable).* So how can policy adjust one variable to achieve two targets? Well, it would be possible to do such a thing if the two targets had some explicit relationship. However, at times policy might have to act when these targets are not behaving in a complementary fashion with respect to the dual mandate. The Fed admits as much in the very same press release:

These objectives are generally complementary. However, under circumstances in which the Committee judges that the objectives are not complementary, it follows a balanced approach in promoting them, taking into account the magnitude of the deviations and the potentially different time horizons over which employment and inflation are projected to return to levels judged consistent with its mandate.

I will leave it to the reader to determine whether this clarifies or obfuscates the stance of the FOMC.
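For what it’s worth, one standard way to formalize a “balanced approach” (my gloss, not the FOMC’s own formulation) is a quadratic loss function over the two mandate variables:

```latex
% pi = inflation, pi^* = the 2 percent target, u = unemployment,
% u^n = the (unobserved) level consistent with maximum employment,
% lambda = the relative weight placed on employment deviations.
\[
  L_t = (\pi_t - \pi^{*})^{2} + \lambda \, (u_t - u_t^{n})^{2}.
\]
% "Taking into account the magnitude of the deviations" corresponds to
% the squared terms. The unstated value of lambda, and of u^n (which
% the Fed itself says it cannot pin down), is exactly what leaves the
% stance of policy open to interpretation.
```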

Despite the widespread knowledge of the dual mandate and despite the fact that the Federal Reserve has been a bit more forthcoming about an explicit target associated with its mandate, those evaluating Fed policy are stuck relying on other indicators of the stance of policy. In other words, since the Federal Reserve still does not have an explicit target that we can look at to evaluate policy, economists have sought other ways to do it.

John Taylor has chosen to think about policy in terms of the Taylor Rule. He views the Fed as adjusting the monetary base to set the federal funds rate consistent with the Taylor Rule, which has been shown to produce low variability of inflation and output around their targets. Empirical evidence shows that when the Federal Reserve has conducted policy broadly consistent with its mandate, the behavior of the federal funds rate looks as it would under the Taylor Rule. As a result, the Taylor Rule becomes a guide for policy in the absence of explicit targets. But even this guidance is only guidance with respect to an intermediate variable.
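For reference, the rule as Taylor (1993) originally wrote it prescribes a setting for the federal funds rate of

```latex
% i = federal funds rate, pi = inflation over the previous four
% quarters, (y - y^*) = percent deviation of real GDP from potential,
% with an assumed 2 percent equilibrium real rate and a 2 percent
% inflation target:
\[
  i_t = \pi_t + 2 + 0.5\,(\pi_t - 2) + 0.5\,(y_t - y_t^{*}).
\]
```

On this view, policy is too loose when the actual funds rate sits below the prescription and too tight when it sits above it.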

Scott Sumner has chosen to think about policy in terms of nominal GDP. This follows from a quantity theoretic view of the world. If the central bank promotes stable nominal GDP growth, then inflation expectations will be stable and the price mechanism will function efficiently. In addition, the central bank will respond only to the types of shocks that it can correct. Stable nominal GDP therefore implies low inflation and stable employment. My own research suggests that the Federal Reserve conducted monetary policy as if it were stabilizing nominal GDP growth during the Great Moderation. But even using nominal GDP as a guide is limited in the sense that it is not an official target of the Federal Reserve, so a deviation of nominal GDP from trend (even if it is suboptimal) might be consistent with the Federal Reserve’s official targets, since those targets are essentially unknown.
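The quantity theoretic logic can be written compactly using the equation of exchange (standard notation, nothing specific to Sumner’s framework):

```latex
% M = money supply, V = velocity, P = price level, Y = real output,
% so PY is nominal GDP. In growth rates:
\[
  M_t V_t = P_t Y_t
  \quad\Longrightarrow\quad
  \Delta \ln M_t + \Delta \ln V_t = \Delta \ln P_t + \Delta \ln Y_t.
\]
% A central bank that adjusts money growth to offset velocity shocks
% stabilizes the left-hand side, and therefore nominal GDP growth on
% the right-hand side; demand-driven swings in nominal spending are
% offset rather than transmitted to inflation and real output.
```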

Nonetheless, the development of these different approaches (and others) was the necessary outgrowth of the desire to understand and evaluate monetary policy. Such an approach is only necessary when the central bank has broad goals without explicit targets and without an explicit description of how they are going to achieve those targets.

We therefore end up back at Tobin’s original questions. How do we know when policy is too loose? Or too tight?

During the Great Inflation and the Great Moderation, both the Taylor Rule and stable growth in nominal GDP provided a good way to evaluate policy. During the Great Inflation, both evaluation methods suggest that policy was too loose (although this is less clear for the Taylor Rule using real-time data). During the Great Moderation, both evaluation methods suggest that policy was conducted well. What is particularly problematic, however, is that in the most recent period, since 2007, the Taylor Rule and nominal GDP have given opposite conclusions about the stance of monetary policy. This has further clouded the discussion surrounding policy because advocates of each approach can point to historical evidence as supportive of their approach.

With an explicit target, evaluating the stance of policy would be simple. If the FOMC adopted a 2% inflation target (and nothing else), then whenever inflation was above 2% (give or take some measurement error), policy would be deemed too loose. Whenever inflation was below 2%, policy would be deemed too tight. Since neither the federal funds rate prescribed by the Taylor Rule nor nominal GDP is an official target of the Fed, it is not immediately obvious how to judge the stance of policy based solely on these criteria. (And what is optimal is another issue.)

If we want to have a better understanding of monetary policy, we need to emphasize the proper monetary semantics. First, we as economists need to use consistent language regarding instruments, targets, and intermediate variables. No more referring to the federal funds rate as the instrument of policy. Second, the Federal Reserve’s mandate needs to be modified so that there is only one target for monetary policy. Whatever that target is, the Federal Reserve needs to provide a specific numerical goal for the target variable and then describe how it is going to use its instrument to achieve that goal. Taking monetary semantics seriously is about more than language; it is about creating clear guidelines and accountability at the FOMC.

* One could argue that the Federal Reserve now has two instruments, the interest rate on reserves and the monetary base. While they do have direct control over these things, it is also important to remember that these variables must be compatible in equilibrium.

Some Thoughts on Cryptocurrencies and the Block Chain

Much of the discussion about cryptocurrencies has naturally centered around Bitcoin, and that discussion has been particularly focused on the role of Bitcoin as an alternative currency. However, I think that the most important aspect of Bitcoin (and cryptocurrencies more generally) is not necessarily the alternative currency arrangement, but the block chain. It seems to me that the future of cryptocurrencies lies not in serving as alternatives to existing currencies, but in serving as assets that are redeemable in a particular currency, with payments settled much more efficiently using block chain technology.

For those who know little about cryptocurrencies, the block chain can be understood as follows. A block chain is a distributed data store: a computer network in which information is stored on multiple nodes. In the case of a cryptocurrency such as Bitcoin, the block chain is used as a ledger of all transactions. Since every node has access to the block chain, there is no need for any centralized record-keeper or database. Transactions carried out using Bitcoin have to be verified by the nodes. A successful transaction is then added to the block chain. Individuals using the system must therefore have balances of the cryptocurrency recorded on the transaction ledger in order to transfer those balances to someone else. In addition, once the nodes verify the transferred balance, the transaction is time-stamped. This prevents scenarios in which people try to double-spend a given balance.
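A toy example might make this concrete. The sketch below is mine and is drastically simplified relative to Bitcoin’s actual protocol (no proof of work, no distributed consensus, a single in-memory copy of the chain), but it shows the ledger idea: each block records a time-stamped transaction along with the hash of the previous block, so the history cannot be quietly rewritten, and a balance check keeps anyone from spending the same balance twice.

```python
import hashlib
import json
import time


class ToyBlockChain:
    """A drastically simplified transaction ledger: hash-linked,
    time-stamped blocks plus a balance check. Not Bitcoin."""

    def __init__(self, initial_balances):
        self.balances = dict(initial_balances)
        # A genesis block anchors the chain.
        self.chain = [{"prev_hash": "0" * 64, "tx": None, "time": time.time()}]

    def _hash(self, block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def transfer(self, sender, receiver, amount):
        # Verification step: the sender must actually hold the balance
        # being spent, which rules out double spending in this toy.
        if self.balances.get(sender, 0) < amount:
            raise ValueError(f"{sender} cannot spend {amount}: insufficient balance")
        block = {
            "prev_hash": self._hash(self.chain[-1]),  # link to all prior history
            "tx": {"from": sender, "to": receiver, "amount": amount},
            "time": time.time(),  # the time stamp
        }
        self.chain.append(block)
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount


ledger = ToyBlockChain({"alice": 10})
ledger.transfer("alice", "bob", 4)      # verified and added to the chain
# ledger.transfer("alice", "carol", 8)  # would fail: alice only has 6 left
```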

This technology is what creates value for Bitcoin. One explanation for why money exists is that people cannot commit to future actions. This lack of commitment makes credit infeasible. Money is an alternative for carrying out exchange because money is a record-keeping device. The block chain associated with Bitcoin is quite literally a record-keeping device. It has value because it provides a record of transactions. In addition, it simplifies the settlement process and therefore reduces the cost of transfers and settlement.

The benefit of using Bitcoin is thus the value of the record-keeping system, or the block chain. However, in order to be able to benefit from the use of the block chain, you need to have Bitcoins. This is problematic since there are a number of reasons that you might not want Bitcoins. For example, maybe you are perfectly happy with dollars or perhaps you’ve noticed that there are not a whole lot of places willing to accept Bitcoins just yet. Also, you might have noticed that the exchange rate between Bitcoins and dollars is quite volatile.

So if you are unwilling to trade your dollars for Bitcoins, then you don’t have access to the block chain and cannot take advantage of the more efficient settlement. This, it seems to me, is a critical flaw with Bitcoin.

Nonetheless, the technology embodied in Bitcoin is available to all and can therefore be adapted in other ways. Thus, the critical flaw in Bitcoin is not a critical flaw for cryptocurrencies more generally. The value of these cryptocurrencies is in the block chain, and the true value of the block chain lies in figuring out how to use this technology to make transactions and banking better and more efficient. There are two particular alternatives that I think are on the right track: NuBits and Ripple.

Think back to the days before central banking. Prior to central banks, individual banks each issued their own notes. Each bank agreed to redeem its bank notes for a particular commodity, often gold or silver. Bank notes were priced in terms of that commodity. In other words, one dollar would be defined as a particular quantity of gold or silver. This implied that the price of the commodity was fixed in terms of the dollar. In order to maintain this exchange rate, the bank had to make sure not to issue too many bank notes. If the bank issued too many notes, it would see a wave of redemptions, which would reduce its reserves of the commodity. In order to prevent losses of reserves, the bank would therefore have an incentive to reduce the notes in circulation. The peg to the commodity thus provided an anchor for the value of the bank notes and represented a natural mechanism to prevent over-issuance. Fluctuations in the value of the bank notes tended to result from changes in the relative value of gold. (The U.S. experience was actually much different. Due to the existence of a unit banking system, notes often didn’t trade at par. Let’s ignore that for now.)

The way NuBits works is a lot like the way these old banks worked (without the lending – we’ll have to get to that in a different post). The NuBits system consists of those who own NuShares and those who own NuBits. Those who own NuShares are like equity owners in the system, whereas those who own NuBits are like holders of bank notes. The NuBits are redeemable in terms of U.S. dollars. In particular, one dollar is equal to one NuBit. If I own a NuBit, I can redeem that NuBit for one dollar. So how does NuBits manage to do this when Bitcoin clearly experiences a volatile exchange rate? It does so by putting the trust in the equity owners. Owners of NuShares have an incentive to maintain the stability of this exchange rate. If nobody is willing to use NuBits, then there is little value in ownership of the protocol and the shares will have little, if any, value. Thus, the NuBits system provides an incentive for NuShares holders to maintain the stability of the exchange rate and gives these shareholders the ability to do so. For example, if the demand for NuBits falls, this will show up as a wave of redemptions. That is a signal that there are too many NuBits in circulation. In order to maintain the exchange rate, NuShares holders have an incentive to reduce the quantity of NuBits in circulation. They can do this by parking some of the NuBits (i.e., taking NuBits out of use in transactions). This is not done forcibly, but rather by offering interest to those who are willing to forgo engaging in transactions. Similarly, if there is an increase in demand, then new NuBits can be created.
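As I understand the mechanism, the feedback rule amounts to something like the toy sketch below. This is my illustration only, not the actual Nu protocol (in practice parking is voluntary, rates are set by shareholder vote, and the numbers here are made up):

```python
def adjust_nubit_supply(circulating, demand, park_rate=0.0, park_rate_step=0.25):
    """Toy sketch of the peg-maintenance logic described above.

    Quantities are in NuBits, where one NuBit is pegged at one dollar.
    Redemptions reveal when supply exceeds demand; shareholders respond
    by raising the parking rate to pull NuBits out of circulation, and
    they issue new NuBits when demand exceeds supply.
    """
    if circulating > demand:
        # A wave of redemptions: too many NuBits in circulation. Offer
        # interest so holders voluntarily park balances instead of
        # transacting with them.
        return {"action": "park", "amount": circulating - demand,
                "park_rate": park_rate + park_rate_step}
    if demand > circulating:
        # Demand has risen: create new NuBits to keep the price at $1.
        return {"action": "issue", "amount": demand - circulating,
                "park_rate": park_rate}
    return {"action": "none", "amount": 0, "park_rate": park_rate}


# Example: 1.2 million NuBits circulating, demand for only 1.0 million.
print(adjust_nubit_supply(1_200_000, 1_000_000))
# {'action': 'park', 'amount': 200000, 'park_rate': 0.25}
```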

But while NuBits has solved the volatility problem in a unique and interesting way, it still suffers from the same problem as Bitcoin. In order to benefit from the technology, you need to hold NuBits, and there is perhaps even less of an incentive to hold NuBits since it is much harder to use them for normal transactions. Until cryptocurrencies like NuBits can be used in regular everyday transactions, there is little incentive to hold them. Thus, NuBits gets part of the way to where the technology needs to go, but still suffers from a similar problem as Bitcoin.

This brings me to Ripple. Ripple is a much different system. With Ripple, one can set up an account using dollars, euros, Bitcoins, or even Ripple’s own cryptocurrency. One can then transfer funds using block chain technology, but the transfers do not have to take place using Ripple’s cryptocurrency or Bitcoin. In other words, I can transfer dollars or euros just as I would transfer cryptocurrencies in other systems. I do this by setting up an account and transferring the funds to another person with an account through an update to the public ledger that is distributed across the nodes of the system. This streamlines the payment process without the need to adopt a particular cryptocurrency. One can even use dollars to pay someone in euros. The transaction is carried out by finding traders on the system who are willing to trade dollars for euros and then transferring the euros to the desired party. This service seems to be immediately more valuable than any other service in this space.
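To illustrate the idea, and only the idea (this is my own toy, not Ripple’s actual protocol or API), a cross-currency payment can be thought of as a pair of updates to a shared multi-currency ledger, routed through a trader who is willing to take the other side of the dollars-for-euros trade:

```python
class ToyMultiCurrencyLedger:
    """A toy shared ledger holding balances in several currencies.
    Illustrative only; not how Ripple actually works."""

    def __init__(self):
        self.balances = {}  # balances[account][currency] = amount

    def fund(self, account, currency, amount):
        self.balances.setdefault(account, {}).setdefault(currency, 0.0)
        self.balances[account][currency] += amount

    def _move(self, sender, receiver, currency, amount):
        if self.balances.get(sender, {}).get(currency, 0.0) < amount:
            raise ValueError(f"{sender} lacks {amount} {currency}")
        self.balances[sender][currency] -= amount
        self.balances.setdefault(receiver, {}).setdefault(currency, 0.0)
        self.balances[receiver][currency] += amount

    def pay_cross_currency(self, payer, payee, trader, usd_amount, usd_per_eur):
        """Payer holds dollars, payee wants euros, and a trader on the
        ledger is willing to sell euros for dollars at the quoted rate."""
        eur_amount = usd_amount / usd_per_eur
        self._move(payer, trader, "USD", usd_amount)   # payer's dollars to the trader
        self._move(trader, payee, "EUR", eur_amount)   # trader's euros to the payee


ledger = ToyMultiCurrencyLedger()
ledger.fund("alice", "USD", 500.0)
ledger.fund("trader", "EUR", 1000.0)
ledger.pay_cross_currency("alice", "bob", "trader", 125.0, usd_per_eur=1.25)
print(ledger.balances["bob"])  # {'EUR': 100.0}
```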

So where do I see this going?

Suppose that you are Citibank or JP Morgan Chase. You could actually combine the types of services offered by NuBits and Ripple. You have the deposit infrastructure and are already offering online payment systems for bill paying and peer-to-peer exchanges. The major banks have two possible options. First, they could offer JP Morgan Bits (or whatever you want to call them) that are redeemable 1-for-1 with the dollar. They could then partner with retailers (both online and brick-and-mortar) to offer a service in which JP Morgan deposit holders carry around something akin to a debit card, or even an app on their phone, that allows them to transact by transferring JP Morgan Bits from the individual to the firm, with the bank charging a very small fee for the transfer. They could partner with firms for online bill paying as well. Alternatively, they could skip the issuance of their own “bank bits” and simply use their block chain to transfer dollar balances and other existing currencies. Whether or not a bank decides to have its own cryptocurrency for settling payments would be determined by whether there are advantages to developing brand loyalty and/or whether retailers see this as a way to generate greater competition for cheaper payments while maintaining the stability of purchasing power with the “bank bits.”

The basic point here is that banks could see a profit opportunity by eliminating the middleman and transferring funds between customers using the block chain. The payments would be faster and cheaper. In addition, it would provide retailers with much better protection from fraud.

Citibank is apparently already exploring the possibilities, developing a block chain with a cryptocurrency called “Citicoin.”

Regardless of what ultimately happens, it is an interesting time to be a monetary economist.

Bitcoin Papers

My paper with Thomas Hogan and Will Luther, “The Political Economy of Bitcoin,” is now forthcoming in Economic Inquiry. A working paper version can still be found here.

Also, on the same topic, Aaron Yelowitz was kind enough to send me his recent paper that uses Google Trends data to identify the characteristics of Bitcoin users. A link to that paper can be found here.

Some Fun Stuff: Seinfeld and Optimal Stopping Times

Sometimes while you are proctoring exams, you realize that an episode of Seinfeld can be understood as an optimal stopping time problem and you write a short paper about it. Enjoy.

Review of Piketty’s Capital in the 21st Century

My review of Piketty’s Capital in the 21st Century will run in the May 5 issue of National Review. In the meantime, here is a link to a longer version of the review.

What I’m Reading

1. The New Dynamic Public Finance by Narayana Kocherlakota

2. The Redistribution Recession by Casey Mulligan

3. The Bretton Woods Transcripts, edited by Kurt Schuler and Andrew Rosenberg

4. Misunderstanding Financial Crises by Gary Gorton