Yellen, Optimal Control, and Dynamic Inconsistency

For much of his career, Milton Friedman advocated a constant rate of money growth — the so-called k-percent rule. According to this rule, the central bank would increase the money supply at a constant rate, k, every year. In this case, there would be no need for an FOMC. A computer could conduct monetary policy.

The k-percent rule has often been derided as a sub-optimal policy. Suppose, for example, that there was an increase in money demand. Without a corresponding increase in the money supply, there would be excess money demand that even Friedman believed would cause a reduction in both nominal income and real economic activity. So why would Friedman advocate such a policy?

The reason Friedman advocated the k-percent rule was not because he believed that it was the optimal policy in the modern sense of the phrase, but rather because it limited the damage done by activist monetary policy. In Friedman's view, shaped by his empirical work on monetary history, central banks tended to be a greater source of business cycle fluctuations than a source of stability. Thus, the k-percent rule would eliminate recessions caused by bad monetary policy.

The purpose of this discussion is not to take a position on the k-percent rule, but rather to point out the fundamental nature of discretionary monetary policy. A central bank that uses discretion has the ability to deviate from its traditional approach or pre-announced policy if it believes that doing so would be in the best interest of the economy. In other words, the central bank can respond to unique events with unique policy actions. There are certainly desirable characteristics of this approach. However, Friedman’s point was that there are very undesirable characteristics of discretion. Just because a central bank has discretion doesn’t necessarily mean that the central bank will use it wisely. This is true even of central banks that have the best intentions (more on this point later).

The economic literature on rules versus discretion is now quite extensive. In fact, a substantial amount of research within the New Keynesian paradigm is dedicated to identifying the optimal monetary policy rule and examining the robustness of this rule to different assumptions about the economy. In addition, there has been a substantial amount of work on credible commitment on the part of policymakers.

Much of the modern emphasis on rules versus discretion traces back to the work of Kydland and Prescott and the idea of dynamic inconsistency. The basic idea is that when the central bank cannot perfectly commit to future plans, we end up with suboptimal outcomes. The idea is important because Kydland and Prescott’s work was largely a response to those who viewed optimal control theory as a proper way to determine the stance of monetary policy. The optimal control approach can be summarized as follows:

The Federal Open Market Committee (FOMC) targets an annual inflation rate of 2% over the long run and an unemployment rate of 6% (the latter number an estimate of the economy’s “natural” unemployment rate).

Under the optimal control approach, the central bank would then use a model to calculate the optimal path of short-term interest rates in order to hit these targets.
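To see what this involves mechanically, here is a minimal sketch in Python of an optimal control exercise: choose the path of the policy rate that minimizes a discounted quadratic loss around the 2% inflation and 6% unemployment targets, given some assumed law of motion for the economy. The Phillips curve, the IS-type unemployment equation, and every parameter value below are my own illustrative assumptions, not an actual Fed model.

```python
# A minimal optimal control sketch: choose the interest-rate path that
# minimizes discounted squared deviations from the inflation and
# unemployment targets. All dynamics and parameters are hypothetical.
import numpy as np
from scipy.optimize import minimize

T = 20                             # planning horizon (quarters)
disc = 0.99                        # discount factor
lam = 1.0                          # weight on the unemployment objective
pi_star, u_star = 2.0, 6.0         # targets from the text
kappa, gamma, rho = 0.3, 0.4, 0.6  # assumed model parameters
r_nat = 2.0                        # assumed neutral real rate

def loss(i_path, pi0=1.0, u0=7.5):
    pi, u, total = pi0, u0, 0.0
    for t in range(T):
        # assumed Phillips curve: inflation falls when u exceeds u_star
        pi = pi - kappa * (u - u_star)
        # assumed IS relation: unemployment rises with the real-rate gap
        u = u_star + rho * (u - u_star) + gamma * (i_path[t] - pi - r_nat)
        total += disc**t * ((pi - pi_star)**2 + lam * (u - u_star)**2)
    return total

res = minimize(loss, x0=np.full(T, 4.0), method="L-BFGS-B")
print("optimal rate path (first 8 quarters):", np.round(res.x[:8], 2))
```

As an engineering exercise there is nothing controversial here; the Kydland and Prescott critique discussed below concerns what happens once private expectations react to the computed path.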

In short, optimal control theory seems to have a lot of desirable characteristics in that policy is based on the explicit dynamics of a particular economic framework. In addition, it is possible to consider what the path of policy should look like given different paths for the model's state variables. Given these characteristics, the story describing optimal control linked above is somewhat favorable to this approach and notes that the optimal control approach to monetary policy is favored by incoming Fed chair Janet Yellen. Thus, it is particularly useful to understand the criticisms of optimal control levied by Kydland and Prescott.

As noted above, the basic conclusion that Kydland and Prescott reached was that when the central bank has discretionary power and uses optimal control theory to determine policy, the result will often be suboptimal policy. Their critique of optimal control theory rests on the belief that economic agents form expectations about the future and those expectations influence their current decision-making. In addition, since these expectations are formed based in part on expectations of future policy, this results in a breakdown of the optimal control framework. The reason is the way in which optimal control theory is used. In particular, optimal control theory chooses the current policy (or the expected future path of policy, if you prefer) based on the current state variables and the history of policy. If expectations about future policy affect current outcomes, then this violates the assumptions of optimal control theory.

Put differently, optimal control theory generates a path for the policy instrument for the present policy decision and the future path of policy. This expected future path of the monetary policy instrument is calculated taking all information available today as given — including past expectations. However, this means that the value of the policy instrument tomorrow is based, in part, on the decisions made today, which are based, in part, on the expectations about policy tomorrow.

There are two problems here. First, if the central bank could perfectly commit to future actions, then this wouldn’t necessarily be a problem. The central bank could, for example, announce some state-contingent policy and perfectly commit to that policy. If the central bank’s commitment was seen as credible, this would help to anchor expectations thereby reinforcing the policy commitment and allowing the central bank to remain on its stated policy path. However, central banks cannot perfectly commit (this is why Friedman not only wanted a k-percent rule, but also sometimes advocated that it be administered by a computer). Thus, when a central bank has some degree of discretion, using optimal control theory to guide policy will result in suboptimal outcomes.
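A minimal numerical illustration of this commitment problem, in the spirit of the Barro-Gordon formulation (a standard textbook rendering of Kydland and Prescott's point, not their original setup), is sketched below; all parameter values are hypothetical.

```python
# Barro-Gordon-style sketch of dynamic inconsistency. Output is
# y = y_n + b*(pi - pi_e); the policymaker dislikes inflation and
# output below y_star > y_n. Parameters are hypothetical.
b, lam = 1.0, 0.5        # assumed Phillips slope and preference weight
y_n, y_star = 0.0, 1.0   # natural and desired (log) output

# Discretion: minimize pi**2 + lam*(y_n + b*(pi - pi_e) - y_star)**2
# taking pi_e as given; rational expectations then impose pi = pi_e,
# leaving pi = lam*b*(y_star - y_n): an inflation bias.
pi_discretion = lam * b * (y_star - y_n)
pi_commitment = 0.0      # a credible announced rule: pi = 0

# In equilibrium pi = pi_e under both regimes, so output is y_n
# either way; discretion delivers higher inflation and no extra output.
for name, pi in [("commitment", pi_commitment), ("discretion", pi_discretion)]:
    total_loss = pi**2 + lam * (y_n - y_star)**2
    print(f"{name}: inflation = {pi:.2f}, loss = {total_loss:.2f}")
```

The discretionary outcome is strictly worse even though the policymaker optimizes at every point in time, which is precisely the sense in which optimal control under discretion is suboptimal.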

In addition, discretion creates additional problems if there is some uncertainty about the structure of the economy. If the central bank has imperfect information about the structure of the macroeconomy or an inability to foresee all possible future states of the world, then optimal control theory will not be a useful guide for policy. (To see an illustration of this, see this post by Marcus Nunes.) But note that while this assertion casts further doubt on the ability of optimal control theory to be a useful guide for policy, it is not a necessary condition for suboptimal policy.

In short, Kydland and Prescott expanded and bolstered Friedman's argument. Whereas Friedman had argued that rules were necessary to prevent central banks from making errors due to mistimed actions and ignorance of the lags with which policy takes effect, Kydland and Prescott showed that even when the central bank knows the model of the economy and tries to maximize an explicit social welfare function known to everyone, using optimal control theory to guide policy can still be suboptimal. This is a remarkable insight and an important factor in Kydland and Prescott receiving the Nobel Prize. Most importantly, it should give one pause about the favored approach to policy of the incoming chair of the Fed.

My Two Cents on QE and Deflation

Steve Williamson has caused quite the controversy in the blogosphere regarding his argument that quantitative easing is reducing inflation. Unfortunately, I think that much of the debate surrounding this claim can be summarized as: “Steve, of course you’re wrong. Haven’t you read an undergraduate macro text?” I think that this is unfair. Steve is a good economist. He is curious about the world and he likes to think about problems within the context of frameworks that he is familiar with. Sometimes this gives him fairly standard conclusions. Sometimes it doesn’t. Nonetheless, this is what we should all do. And we should evaluate claims based on their merit rather than whether they reinforce our prior beliefs. Thus, I would much rather try to figure out what Steve is saying and then evaluate what he has to say based on its merits.

My commentary on this is going to be somewhat short because I have identified the point that I think is the source of disagreement. If I am wrong, hopefully Steve or someone else will point out the error in my understanding.

The crux of Steve’s argument seems to be that there is a distinct equilibrium relationship between the rate of inflation and the liquidity premium on money. For example, he writes:

Similarly, for money to be held,

(2) 1 – L(t) = B[u'(c(t+1))/u'(c(t))][p(t)/p(t+1)],

where L(t) is the liquidity premium on money. For example, L(t) is associated with a binding cash-in-advance constraint in a cash-in-advance model, or with some inefficiency of exchange in a deeper model of money.

He then explains why QE might cause a reduction in inflation using this equation:

…the effect of QE is to lower the liquidity premium (collateral constraints are relaxed) which … will lower inflation and increase the real interest rate.

Like Steve, I agree that such a relationship between inflation and the liquidity premium exists. However, where I differ with Steve seems to be in the interpretation of causation. Steve seems to be arguing that causation runs from the liquidity premium to inflation. In addition, since the liquidity premium is determined by the relative supplies of alternative transaction assets, monetary policy controls inflation by controlling the liquidity premium. My thinking is distinct from this. I tend to think of the supply of public transaction assets determining the price level (and thereby the rate of inflation) with the liquidity premium determined given the relative supply of assets and the rate of inflation. Thus, we both seem to think that there is this important equilibrium relationship between the rate of inflation and the liquidity premium, but I tend to see causation running in the opposite direction.

But rather than simply conclude here, let me outline what I am saying within the context of a simple model. Consider the equilibrium condition for money in a monetary search model:

E_t{{p_{t+1}}\over{\beta p_t}} = \sigma E_t[{{u'(q_{t+1})}\over{c'(q_{t+1})}} - 1] + 1

where p_t is the price level, \beta is the discount factor, q_t is consumption, and \sigma is the probability that a buyer and a seller are matched. Thus, the term in brackets measures the value of spending money balances and \sigma the probability that those balances are spent. The product of these two terms we will refer to as the liquidity premium, \ell. Thus, the equation can be written:

E_t{{p_{t+1}}\over{\beta p_t}} = 1 + \ell

So here we have the same relationship between the liquidity premium and the inflation rate that we have in Williamson’s framework. In fact, I think that it is through this equation that I can explain our differences on policy.
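To put some illustrative numbers on this condition, here is a small sketch assuming particular functional forms, u(q) = q^(1-g)/(1-g) and a linear cost c(q) = q, evaluated at a deterministic steady state. The functional forms and parameter values are my assumptions for illustration only.

```python
# Sketch of the equilibrium condition p_{t+1}/(beta*p_t) = 1 + ell,
# assuming u(q) = q**(1-g)/(1-g) and c(q) = q, so u'(q) = q**(-g)
# and c'(q) = 1. All parameter values are hypothetical.
beta, sigma, g = 0.96, 0.5, 2.0

def liquidity_premium(q):
    # ell = sigma * (u'(q)/c'(q) - 1)
    return sigma * (q**(-g) - 1.0)

for q in [0.6, 0.8, 1.0]:
    ell = liquidity_premium(q)
    gross_inflation = beta * (1.0 + ell)   # implied p_{t+1}/p_t
    print(f"q = {q:.1f}: ell = {ell:.3f}, p_{{t+1}}/p_t = {gross_inflation:.3f}")
```

Note that as q rises toward the efficient level, the liquidity premium falls and the implied gross inflation rate falls toward \beta, which is the Friedman rule outcome discussed next.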

For example, let's use our equilibrium expression to illustrate the Friedman rule. The Friedman rule is designed to eliminate a friction, namely the friction that arises because currency pays zero interest. As a result, individuals economize on money balances, and this is inefficient. Milton Friedman recommended maintaining a market interest rate of zero to eliminate this inefficiency. Doing so would also eliminate the liquidity premium on money. In terms of the equation above, it is important to note that the left-hand side can be re-written as:

E_t{{p_{t+1}}\over{\beta p_t}} = (1 + E_t \pi_{t + 1})(1 + r) = 1 + i

where \pi is the inflation rate and r is the rate of time preference. Equating this with our earlier expression gives 1 + i = 1 + \ell. Thus, it is clear that by setting i = 0, it follows that \ell = 0 as well.

Steve seems to be thinking about policy within this context. The Fed is pushing the federal funds rate down toward the zero lower bound. Thus, in the context of our discussion above, this should result in a reduction in inflation. If the nominal interest rate is zero, this reduces the liquidity premium on money. From the expression above, if the liquidity premium falls, then the inflation rate must fall to maintain equilibrium.

HOWEVER, there seems to be one thing that is missing. That one thing is how the policy is implemented. Friedman argued that to maintain a zero percent market interest rate the central bank would have to conduct policy such that the inflation rate was negative. In particular, in the context of our basic framework, the central bank would reduce the interest rate to zero by setting

{{p_{t+1}}\over{p_t}} = \beta

Since 0 < \beta < 1, this implies deflation. More specifically, Friedman argued that the way in which the central bank could produce deflation was by shrinking the money supply. In other words, Friedman argued that the way to produce a zero percent interest rate was by reducing the money supply and producing deflation.
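As a quick numerical check of this logic (with an assumed discount factor), deflating at the rate of time preference indeed delivers a zero nominal rate:

```python
# Friedman rule check: with p_{t+1}/p_t = beta and r = 1/beta - 1,
# the nominal rate (1 + pi)(1 + r) - 1 is exactly zero.
beta = 0.96                      # assumed discount factor
r = 1.0 / beta - 1.0             # rate of time preference
gross_pi = beta                  # Friedman rule: gross inflation = beta
i = gross_pi * (1.0 + r) - 1.0   # (1 + pi)(1 + r) - 1
print(f"inflation = {gross_pi - 1.0:.2%}, nominal rate = {i:.2%}")
# -> inflation = -4.00%, nominal rate = 0.00%
```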

In practice, the current Federal Reserve policy has been to conduct large scale asset purchases, which have substantially increased the monetary base and have more modestly increased broader measures of the money supply.

In Williamson's framework, it doesn't seem to matter how we get to the zero lower bound on nominal interest rates. All that matters is that we are there, which reduces the liquidity premium on money and therefore must reduce inflation to satisfy our equilibrium condition.

In my view, it is the rate of money growth that determines the rate of inflation and the liquidity premium on money then adjusts. Of course, my view requires a bit more explanation of why we are at the zero lower bound despite LSAPs and positive rates of inflation. The lazy answer is that \beta changed. However, if one allows for the non-neutrality of money, then it is possible that the liquidity premium not only adjusts to the relative supplies of different assets, but also to changes in real economic activity (i.e. q_t above). In particular, if LSAPs increase real economic activity, this could reduce the liquidity premium (given standard assumptions about the shape and slope of the functions u and c).

This, I think, is the fundamental area of disagreement between Williamson and his critics, whether the critics know it or not. If you tend to think that non-neutralities are important and persistent, then you are likely to think that Williamson is wrong. If you think that non-neutralities are relatively unimportant or not very persistent, then you are likely to think that Williamson might be on to something.

In any event, the blogosphere could stand to spend more time trying to identify the source of disagreement and less time bickering over prior beliefs.

On SNAP Eligibility and Spending

William Galston has an op-ed in the Wall Street Journal that begins as follows:

We are entering a divisive debate on the Supplemental Nutrition Assistance Program (SNAP), popularly known as food stamps. Unless facts drive the debate, it will be destructive as well.

I certainly agree with this statement. Unfortunately, I found the op-ed misleading and vague (a vague op-ed can be somewhat forgiven since word counts are limited).

The basic premise of Galston’s op-ed is that critics of the increased spending on food stamps are misguided in their criticisms. For example, he explains:

The large increase in the program’s cost over the past decade mostly reflects worsening economic conditions rather than looser eligibility standards, increased benefits, or more waste, fraud and abuse.

[...]

The food-stamp program’s costs have soared since 2000, and especially since 2007. Here’s why.

First, there are many more poor people than there were at the end of the Clinton administration. Since 2000, the number of individuals in poverty has risen to 46.5 million from 31.6 million—to 15% of the total population from 11.3%. During the same period, the number of households with annual incomes under $25,000 rose to 30.2 million (24.7% of total households) from 21.9 million (21.2%).

Critics complain that beneficiaries and costs have continued to rise, even though the Great Recession officially ended in 2009. They’re right, but the number of poor people and low-income households has continued to rise as well.

Thus, according to Galston, much of the increase in food stamp spending is explained by the rise in poverty over the last 13 years (and especially the last 6 years). If Galston is correct, then the ratio of households receiving SNAP benefits to the number of people below the poverty line should be constant (or at least roughly so). In other words, as the number of people below the poverty line increased, the number of households receiving SNAP benefits would increase in direct proportion.

Such a comparison, however, casts doubt on Galston's claim. Casey Mulligan, in his book The Redistribution Recession, has taken great effort to actually calculate such ratios. What Mulligan found is that from 2007 to 2010, the number of families below 125% of the federal poverty level increased by 16%. That is indeed a large increase. However, the number of households receiving SNAP benefits increased by 58%. This means that the SNAP recipiency ratio, or the ratio of households receiving SNAP to families below 125% of the poverty line (a more generous threshold than the one Galston uses), rose by 37%.
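The arithmetic behind that figure is straightforward (the small difference from the computation below reflects rounding in the published growth rates):

```python
# Recipiency-ratio arithmetic from Mulligan's figures, 2007-2010.
poverty_growth = 0.16   # families below 125% of the poverty line: +16%
snap_growth = 0.58      # households receiving SNAP: +58%
ratio_change = (1 + snap_growth) / (1 + poverty_growth) - 1
print(f"recipiency ratio rose by {ratio_change:.1%}")  # ~36%
```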

So what can explain the fact that recipients are rising so much faster than poverty? One possible explanation is eligibility requirements. Since 2008, there have been several changes to eligibility for food stamps. For example, the Farm Bill passed in 2008 increased the maximum benefit that beneficiaries could receive, excluded some income from the formula used to determine eligibility, and weakened the evaluation of assets of potential enrollees. In addition, the American Recovery and Reinvestment Act also loosened eligibility requirements by once again increasing the maximum benefit that one could receive, giving states the ability to waive the work requirement, and further loosening income requirements.

Galston, however, downplays most of these changes and argues that macroeconomic trends explain the vast majority of the rise in SNAP spending. The problem with this type of explanation is that it takes the actual increase in recipients as given and then explains the increase in spending ex post. To understand why this is misleading, consider the following example. Suppose that there is an individual who lost his job in 2009. Prior to 2007, he would not have been eligible for SNAP, whereas after the changes he is now eligible. Thus, after 2007, this increases the number of recipients of SNAP. Galston might claim that this change is the result of macroeconomic trends because this person would not have enrolled in SNAP had he not lost his job. Others might say that this change is due to eligibility requirements because if the worker had lost his job two years prior, he would not have been eligible. While I certainly understand Galston's perspective on this, the relevant comparison is to the counterfactual. In other words, we can't explain the rise in SNAP recipients ex post; we need to compare what actually happened to what would have happened in the absence of a policy change.

So what do the counterfactuals say?

Again, Casey Mulligan has constructed these counterfactuals. What he finds is that between 2007 and 2010, the increase in per capita SNAP spending was 100%, adjusted for inflation. He then constructs two counterfactuals. The first counterfactual takes macroeconomic trends as given and computes the increase in per capita SNAP spending under 2007 eligibility rules. The second counterfactual does the same thing assuming that in addition to maintaining 2007 eligibility rules, the government had maintained constant real benefit rules (i.e. would not have increased the after-inflation maximum benefit).

The first counterfactual suggests that from 2007 to 2010 per capita SNAP spending would have increased by only 60%, adjusted for inflation. The second counterfactual suggests that per capita SNAP spending would have increased by only 24%, adjusted for inflation. Had no policy changes been enacted in 2008 and 2009, per capita spending on SNAP would have been 62% of what it actually was in 2010. Put differently, 38% of per capita 2010 spending is attributable to changes in eligibility and benefit rules. Thus, contrary to the claims of Galston, a very large fraction of the increase in SNAP spending is explained by changes in eligibility.
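For transparency, the decomposition implied by these numbers works as follows:

```python
# Counterfactual decomposition of real per capita SNAP spending,
# 2007-2010, using the growth rates reported by Mulligan.
actual = 2.00               # actual spending: +100%
cf_rules = 1.60             # 2007 eligibility rules only: +60%
cf_rules_benefits = 1.24    # 2007 rules and constant real benefits: +24%

share_no_policy = cf_rules_benefits / actual
print(f"no-policy-change spending: {share_no_policy:.0%} of actual")       # 62%
print(f"share attributable to policy changes: {1 - share_no_policy:.0%}")  # 38%
```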

An entirely separate question is whether or not this increased spending is worth it. Answering that question is certainly beyond the scope of this post. However, it is important to be mindful that such analysis must consider both the costs and the benefits of the expansion. The benefits are obvious. Households receive assistance in purchasing food and feeding their families. The costs, however, are more complex. A significant fraction of the increase in spending can be explained by changes in eligibility. Thus, we need to consider the counterfactual. One big issue is how much of the increased benefits are going to those who would not have qualified under the old asset tests. Another issue to consider is the effect of changes in eligibility on the labor supply of those at or near the new eligibility thresholds, especially given the work requirement waiver. And there is the obvious monetary cost to the taxpayer. Too often those on each side of the debate focus on only the benefits or only the costs.

Whether the policy changes are worth it depends on a careful analysis of these questions. I will remain agnostic with regards to that type of analysis. However, to argue that those concerned about the expansion of spending due to changes in eligibility are misguided and driven by “anti-government ideology”, as Galston does, is an unfair criticism of those who have carefully looked at the data.

Monetarism, Debt, and Observational Equivalence

I have heard a number of people say over the years that one of the best things about reading Adam Smith and Henry Thornton and other classical economists is that they argued their points fairly. In particular, Smith and Thornton argued in favor of their own views and against opposing views while taking these opposing views at face value. They did not attack straw men. They did not caricature their intellectual adversaries (in fact, Thornton and Smith were intellectual adversaries to some extent in their views on the role of bank notes, bills of exchange, and the operation of the monetary system).

This characteristic is, at times, missing from contemporary discourse. This doesn’t mean that modern disagreements are fraught with malice. However, sometimes ideas are not given the proper understanding sufficient for critique. Franco Modigliani, for example, once joked that what we would now call real business cycle theory blamed recessions on mass outbreaks of laziness. Similarly, when Casey Mulligan published his most recent book on the recession in which he argued that expansions of the social safety net can explain a significant fraction of the increase in unemployment, others shrugged this off by saying that this was akin to saying that soup lines caused the Great Depression.

My point is not to defend Casey Mulligan or the real business cycle theorists. It is perfectly reasonable to view real business cycle theory as unconvincing without referencing mass outbreaks of laziness. Rather my point is that more care needs to be taken to understand opposing theories and views of business cycles, growth, etc. so that one can adequately articulate criticisms and rebuttals to such views.

The fact that there is little understanding of (or perhaps just little credit given to) opposing viewpoints is never more apparent than when predictions of two different theories are observationally equivalent. To give an example, consider two explanations of the cause of the most recent recession. Please note that these are not the only two explanations and that the explanations that I give are sufficiently broad to encapsulate a number of more nuanced views.

The first explanation of the recession is what I will refer to as the Debt Theory. According to this view, the expansion that preceded the recession was fueled by an unsustainable accumulation of debt. There are many varieties of this theory that emphasize different factors that caused the run-up of debt, such as monetary policy, policies that subsidize housing, etc. Regardless of the reason that “too much” debt was accumulated, the debt eventually reached a point (most often argued as the beginning of the collapse in housing prices) that was unsustainable and hence the beginning of a recession. The recession is largely the result of de-leveraging.

The second explanation is what I will refer to as the Money Theory. According to this view, it is a deviation between the supply of and demand for money (broadly defined) that ultimately results in reduced spending and, as a result, a lower level of real economic activity. Thus, when the large haircuts became apparent in the market for mortgage-backed securities, this reduced the supply of transaction assets, thereby causing a deviation between the supply of and demand for money. The Federal Reserve, in its failure to provide a sufficient quantity of transaction assets, allowed this deviation to persist, resulting in a decline in nominal, and ultimately real, spending.

As these brief descriptions imply, there doesn’t appear to be much overlap between the two views. However, they actually produce a number of observationally equivalent implications. For example, advocates of the Money Theory point to the negative rates of money growth in broad measures of the money supply as evidence that the Federal Reserve failed to provide adequate liquidity. Nonetheless, this observation is consistent with the Debt Theory. According to this view, de-leveraging reduces the demand for credit and therefore reduces the need of financial intermediaries to create new debt instruments that are used as transaction assets. Thus, we would expect a decline in money growth in both cases.

On the other hand, advocates of the Debt Theory point out that there is a strong relationship between counties that had higher levels of debt prior to the recession and the reductions in consumption during the recession. Nonetheless, this observation is also consistent with the Money Theory. Most advocates of the Money Theory are intellectual descendants of Milton Friedman. In Friedman’s theory of money demand, money is considered similar to a durable good in that individuals hold a stock of money to get the flow of services that come from holding money. Thus, contra the transactions view of money demand, individuals do not draw down money balances during a recession. Instead individuals make adjustments to different parts of their portfolio, most notably consumer debt. In other words, we would observe de-leveraging under both frameworks.

To distinguish between the two views it is not sufficient to point to characteristics that they have in common (although those observations are still important). It is also necessary to find areas in which the theories differ so that one is able to develop an empirical approach to assess each framework’s validity.

The examples given above are obviously simplifications, but this is what makes being an economist difficult. It is not enough to use inductive reasoning to support one’s theory. One must be able to differentiate between other theories that would produce observationally equivalent results. Admittedly, this is a problem that exists to a greater extent in the blogosphere than it does in academic journals. The reason is obvious. If one submits a paper to an academic journal, a good reviewer is able to spot the ambiguities between testing the predictions of a particular theory and contrasting the predictions of theories with observationally equivalent predictions. In the blogosphere, the “reviewers” are commenters and colleagues. However, the differences don’t often get resolved. Perhaps this is because there is no gatekeeper that prevents the blog post from being published. (Ironically, the lack of a gatekeeper is perhaps the best quality of the blogosphere because it allows discourse to take place in public view.) Nonetheless, given the degree to which blog posts and debates in the blogosphere ultimately spill over into the popular financial press and public debate, it is important to be careful and considerate regarding opposing views.

[Note: For an example of someone who tries to disentangle the issues surrounding the Debt View and the Money View, see Robert Hetzel's The Great Recession: Market Failure or Policy Failure?]

Are Capital Requirements Meaningless?

Yes, essentially.

The push for stricter capital requirements has become very popular among economists and policy pundits. To understand the calls for stricter capital requirements, consider a basic textbook analysis of a consolidated bank balance sheet. On the asset side, banks have things like loans, securities, reserves, etc. On the liability side, a traditional commercial bank has deposits and something called equity capital. In this example, equity capital is defined as the difference between bank assets and deposits (note that banks don't actually “hold” capital).

So why do we care about capital?

Suppose that assets were exactly equal to deposits. In this case the equity capital of the bank would be non-existent. As a result, any loss on the asset side of the bank's balance sheet would leave the bank with insufficient assets to cover outstanding liabilities. The bank would be insolvent.

Now suppose instead that the bank's assets exceed its deposits and the bank experiences the same loss. If this loss is less than the bank's equity capital, then the bank remains solvent and there is no loss to depositors.
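A minimal sketch of this solvency logic, with made-up balance-sheet numbers:

```python
# Equity capital = assets - deposits. A bank stays solvent after a
# loss if the remaining assets still cover deposits. Numbers are
# purely illustrative.
def solvent_after_loss(assets, deposits, loss):
    equity = assets - deposits
    return (assets - loss) >= deposits, equity

for assets in [100.0, 110.0]:   # deposits fixed at 100, loss of 5
    ok, equity = solvent_after_loss(assets, deposits=100.0, loss=5.0)
    print(f"assets = {assets:.0f}, equity = {equity:.0f}: "
          f"{'solvent' if ok else 'insolvent'} after a loss of 5")
```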

The call for capital requirements is driven by examples like those above coupled with the institutional environment in which banks operate. For example, banks have limited liability. This means that, in the event that a bank becomes insolvent, shareholders' losses are limited to their initial investment. Put differently, bank shareholders are not assessed for the losses to depositors. Since the private cost of insolvency is less than the public cost, shareholders have an incentive to favor riskier assets than they would otherwise. This notion is presumably well-understood, since the shift to limited liability for banks in the early 1930s was coupled with the creation of government deposit insurance. However, while deposit insurance insulates deposits from losses due to insolvency, it also has the effect of encouraging banks to take on more risk since depositors have little incentive to monitor the bank balance sheet.

Within this environment capital requirements are thought to reduce the risk of insolvency. By requiring that banks have equity capital greater than or equal to some percentage of their assets, this should make banks less likely to become insolvent. This is because, all else equal, a greater amount of capital means that a bank can withstand larger losses on the asset side of their balance sheet without becoming insolvent.

There is nothing logically wrong with the call for greater capital requirements. In fact, calls for greater capital requirements represent a seemingly simple and intuitive solution to the risk of financial instability. So why then does the title of this post ask if capital requirements are meaningless? The answer is that calls for higher capital requirements ignore some of the realities of how banks actually operate.

The first problem with capital requirements is that they impose a substantial information cost on bank regulators. How should value be reported? Should assets be marked-to-market or considered at book value? Some assets are opaque. More importantly, this practice shifts the responsibility of risk management from the bank to the regulator. This makes the actual regulation of banks difficult.

Second, and more importantly, capital requirements provide banks with an incentive to circumvent the intentions of the regulation while appearing compliant. In particular, a great deal of banking over the last couple of decades has been pushed off the bank balance sheet. Capital requirements provide banks with an incentive to move even more assets off their balance sheets. Similarly, firms can do bank-like things without actually being considered banks, thereby avoiding capital requirements altogether, and this provides an opportunity for non-banks to enter the market to provide bank-like services. In other words, the effect of capital requirements is to make the official banking sector smaller without necessarily changing what we would consider banking activity. Put simply, capital requirements become less meaningful when there are substitutes for bank loans.

Advocates of capital requirements certainly have arguments against these criticisms. They would perhaps be correct to conclude that the costs and imperfections associated with the actual regulation would be worth it if capital requirements brought greater stability to the system. However, the second point that I made above would seemingly render this point moot.

For example, advocates of higher capital requirements seem to think that redefining what we call a bank and adopting better general accounting practices would eliminate the second and most important problem that I highlighted above. I remain doubtful that such actions would have any meaningful impact. First, redefining what a bank is to ensure that banks and non-banks in the current sense remain on equal footing regarding capital requirements is at best a static solution. Over time, firms that would like to be non-banks will figure out how to avoid being considered banks. Changing the definition of a bank only gives them an incentive to change the definition of their firm. In addition, I remain unconvinced that banks will be unable to circumvent changes to general accounting practices. Banks are already quite adept at circumventing accounting practices and hiding loans off of their balance sheets.

Those who advocate capital requirements are likely to find this criticism wanting. If so, I am happy to have that debate. In addition, I would like to point out that my skepticism about capital requirements should not be seen as advocacy of the status quo. In reality, I favor a different change to the banking system that would provide banks with better incentives. I have written about this alternative here and I will be writing another post on this topic soon.

Some Thoughts on Liquidity

The quantity theory relates not so much to money as to the whole array of financial assets exogenously supplied by the government. If the government debt is doubled in the absence of a government-determined monetary base the price level doubles just as well as in the case of a doubling of the monetary base in the absence of government debt. — Jurg Niehans, 1982

Seemingly lost in the discussion of monetary policy's various QEs is a meaningful resolution of our understanding of the monetary transmission mechanism. Sure, New Keynesians argue that forward guidance about the time path of the short-term nominal interest rate is the mechanism, Bernanke argues that long-term interest rates are the mechanism, and skeptics of the effectiveness of QE argue that the interest rate on excess reserves is the mechanism. I actually think that these are not the correct ways to think about monetary policy. For example, there are an infinite number of paths for the money supply consistent with a zero lower bound on interest rates. Even in the New Keynesian model, which purportedly recuses money from monetary policy, the rate of inflation is pinned down by the rate of money growth (see Ed Nelson's paper on this). It follows that it is the path of the money supply that is more important to the central bank's intermediate- and long-term goals. In addition, it must be the case that the time path of the interest rate outlined by the central bank is consistent with expectations about the future time path of interest rates. The mechanism advocated by Bernanke is also flawed because the empirical evidence suggests that long-term interest rates just don't matter all that much for investment.

I see the monetary transmission mechanism differently because you could consider me an Old Monetarist dressed in New Monetarist clothes with Market Monetarist policy leanings (see why labels are hard in macro). Given my Old Monetarist sympathies, it shouldn't be surprising that I think the aforementioned mechanisms are not very important. Old Monetarists long favored quantity targets rather than price targets (i.e. the money supply rather than the interest rate). I remain convinced that the quantity of money is a much better indicator of the stance of monetary policy. The reason is not based on conjecture, but on actual empirical work that I have done. For example, in my forthcoming paper in Macroeconomic Dynamics, I show that many of the supposed problems with using money as an indicator of the stance of monetary policy are the result of researchers using simple sum aggregates. I show that if one uses the Divisia monetary aggregates, monetary variables turn out to be a good indicator of policy. In addition, changes in real money balances are a good predictor of the output gap (interestingly enough, when you use real balances as an indicator variable, the real interest rate, the favored mechanism of New Keynesians, is statistically insignificant).

Where my New Monetarist sympathies arise is from the explicit nature in which New Monetarism discusses and analyzes the role of money, collateral, bonds, and other assets.  This literature asks important macroeconomic questions using rich microfoundations (as an aside, many of the critics of the microfoundations of modern macro are either not reading the correct literature or aren’t reading the literature at all).  Why do people hold money?  Why do people hold money when other assets that are useful in transactions have a higher yield?  Using frameworks that explicitly provide answers to these questions, New Monetarists then ask bigger questions. What is the cost associated with inflation? What is the optimal monetary policy? How do open market operations work?  The importance of the strong microfoundations is that one is able to answer these latter questions by being explicit about the microeconomic assumptions.  Thus, it is possible to make predictions about policy with an explicit understanding of the underlying mechanisms.

An additional insight of the New Monetarist literature is that the way in which we define “money” has changed substantially over time.  A number of assets such as bonds, mortgage-backed securities, and agency securities are effectively money because of the shadow banking system and the corresponding prevalence of repurchase agreements.  As a result, if one cares about quantitative targets, then one must expand the definition of money.  David Beckworth and I have been working on this issue in various projects.  In our paper on transaction assets shortages, we suggest that the definition of transaction assets needs to be expanded to include Treasuries and privately produced assets that serve as collateral in repurchase agreements.  In addition, we show that the haircuts of private assets significantly reduced the supply of transaction assets and that this decline in transaction assets explains a significant portion of the decline in both nominal and real GDP observed over the most recent recession.

The reason that I bring this up is because this framework allows us not only to suggest a mechanism through which transaction assets shortages emerge and to examine the role of these shortages in the context of the most recent recession, but also because the theoretical framework can provide some insight into how monetary policy works.  So briefly I’d like to explain how monetary policy would work in our model and then discuss how my view of this mechanism is beginning to evolve and what the implications are for policy.

A standard New Monetarist model employs the monetary search framework of Lagos and Wright (2005).  In this framework, economic agents interact in two different markets — a decentralized market and a centralized market.  The terms of trade negotiated in the decentralized market can illustrate the effect of monetary policy on the price level. (I am going to focus my analysis on nominal variables for the time being.  If you want to imagine these policy changes having real effects, just imagine that there is market segmentation between the decentralized market and centralized market such that there are real balance effects from changes in policy.)  In particular the equilibrium condition can be written quite generally as:

P = (M+B)/z(q)

where P is the price level, M is the money supply, B is the supply of bonds, and z is money demand as a function of consumption q. I am abstracting from the existence of private assets, but the implications are similar to those of bonds. There are a couple of important things to note here. First, it is the interaction of the supply and demand for money that determines the price level. Second, it is the total supply of transaction assets that determines the price level. This is true regardless of how money is defined. Third, note that as this equation is presented, it is only the total supply of transaction assets that determines the price level and not the composition of those assets. In other words, as presented above, an exchange of money for bonds does not change the price level. Open market operations are irrelevant. However, this point deserves further comment. While I am not going to derive the conditions in a blog post, the equilibrium terms of trade in the decentralized market will only include the total stock of bonds in the event that all bonds are held for transaction purposes. In other words, if someone is holding bonds, they are only doing so to finance a transaction. In this case, money and bonds are perfect substitutes for liquidity. This implication, however, implies that bonds cannot yield interest. If bonds yield interest and are just as liquid as money, why would anyone hold money? New Monetarists have a variety of reasons why this might not be the case. For example, it is possible that bonds are imperfectly recognizable (i.e. they could be counterfeited at low cost). Alternatively, there might simply be legal restrictions that prevent bonds from being used in particular transactions, or, since bonds are book-entry items, they might not circulate as easily. And there are many other explanations as well. Any of these reasons will suffice for our purposes, so let's assume that there is a fixed fraction v of bonds that can be used in transactions. The equilibrium condition from the terms of trade can now be re-written:

P = (M + vB)/z(q)

It remains true that the total stock of transaction assets (holding money demand constant) determines the price level. It is now also true that open market operations are effective in influencing the price level. To summarize, in order for money to circulate alongside interest-bearing government debt (or any other asset, for that matter) that can be used in transactions, it must be the case that money yields more liquidity services than bonds. The difference in the liquidity of the two assets, however, makes them imperfect substitutes and implies that open market operations are effective. It is similarly important to note that nothing has been said about the role of the interest rate. Money and bonds are not necessarily perfect substitutes even when the nominal interest rate on bonds is close to zero. Thus, open market operations can be effective even if the short-term interest rate is arbitrarily close to zero. In addition, this doesn't require any assumption about expectations.
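To make the irrelevance result and its breakdown concrete, here is a small sketch of the condition P = (M + vB)/z(q) with hypothetical numbers and money demand z(q) held fixed:

```python
# Price level P = (M + v*B)/z(q). When v = 1 (bonds fully liquid),
# an open market operation that swaps money for bonds leaves P
# unchanged; when v < 1 it raises P. Numbers are hypothetical.
def price_level(M, B, v, z=100.0):
    return (M + v * B) / z

M, B = 50.0, 50.0
for v in [1.0, 0.5]:
    before = price_level(M, B, v)
    after = price_level(M + 10.0, B - 10.0, v)  # buy 10 of bonds with money
    print(f"v = {v}: P = {before:.2f} before OMO, {after:.2f} after")
```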

The ability of the central bank to hit its nominal target is an important point, but it is also important to examine the implications of alternative nominal targets. Old Monetarists wanted to target the money supply. While I'm not opposed to the central bank using money as an intermediate target, I think that there are much better policy targets. Most central banks target the inflation rate. Recently, some have advocated targeting the price level and, of course, advocacy for nominal income targeting has similarly been growing. As I indicated above, my policy leanings are more in line with the Market Monetarist approach, which is to target nominal GDP (preferably the level rather than the growth rate). The reason that I advocate nominal income targeting, however, differs from some of the traditional arguments.

We live in a world of imperfect information and imperfect markets. As a result, some people face borrowing constraints. Often these borrowing constraints mean that individuals have to post collateral. In addition, lending is often constrained by expected income over the course of the loan. The fact that we have imperfect information, imperfect markets, and subjective preferences means that these debt contracts are often written in nominal terms and that the relevant measure of income used in screening for loans is nominal income. A monetary policy that targets nominal income can potentially play an important role in two ways. First, a significant decline in nominal income can be harmful in the aggregate. While there are often claims that households have “too much debt”, a collapse in nominal income can cause a significant increase in defaults and household deleveraging that reduces output in the short run. Second, because banks have a dual role in intermediation and money creation, default and deleveraging can reduce the stock of transaction assets. This is especially problematic in the event of a financial crisis in which the demand for such assets is rising. Targeting nominal income would therefore potentially prevent widespread default and deleveraging (holding other factors constant) as well as allow for corresponding stability in the stock of privately-produced transaction assets.

Postscript:  Overall, this represents my view on money and monetary policy.  However, recently I have begun to think about the role and the effectiveness of monetary policy more deeply, particularly with regards to the recent recession.  In the example given above, it is assumed that the people using money and bonds for transactions are the same people.  In reality, this isn’t strictly the case.  Bonds are predominantly used in transactions by banks and other firms whereas money is used to some extent by firms, but its use is more prevalent among households.  David Beckworth and I have shown in some of our work together that significant recessions associated with declines in nominal income can be largely explained through monetary factors.  However, in our most recent work, it seems that this particular recession is unique.  Previous monetary explanations can largely be thought of as currency shortages in which households seek to turn deposits into currency and banks seek to build reserves.  The most recent recession seems to be better characterized as a collateral shortage, in particular with respect to privately produced assets.  If that is the case, this calls into question the use of traditional open market operations.  While I don’t doubt the usefulness of these traditional measures, the effects of such operations might be reduced in the present environment since OMOs effectively remove collateral from the system.  It would seem to me that the policy implications are potentially different.  Regardless, I think this is an important point and one worth thinking about.

How Much Capital?

Recently, it has become very popular to argue that the best means of financial reform is to require banks to hold more capital. Put differently, banks should finance using more equity relative to debt. This idea is certainly not without merit. In a Modigliani-Miller world, banks should be indifferent between debt and equity. I would like to take a step back from the policy response and ask why banks overwhelmingly finance their activities with debt. It is my hope that the answer to this question will provide some way to focus the debate.

It is clear that when banks finance primarily using equity, adverse shocks to the asset side of a bank’s balance sheet primarily affect shareholders. This seems at least to be socially desirable if not privately desirable. The imposition of capital requirements would therefore seem to imply that there is some market failure (i.e. the private benefit from holding more capital is less than the social benefit). Even if this is true, however, one needs to consider what makes it so.

One hypothesis for why banks hold too little capital is that they don't internalize the total cost of a bank failure. For example, banks are limited liability corporations and are covered by federal deposit insurance. Thus, if the bank takes on too much risk and becomes insolvent, shareholders lose their initial investment. Depositors are made whole through deposit insurance. It is this latter characteristic that is key. If bank shareholders were responsible not only for their initial investment, but also for the losses to depositors, banks would have different incentives. In fact, this was the case under the U.S. system of double liability that lasted from just after the Civil War until the Banking Act of 1933. (I have written about this previously here.) Under that system bank shareholders had a stronger incentive to finance using equity. In fact, evidence shows that banks with double liability took on less leverage and less risk than their limited liability counterparts.

Along similar lines, the existence of Too Big To Fail similarly creates greater incentives toward risk-taking and leverage because, in the event that the bank becomes insolvent, it will be rescued by the government. Finally, the U.S. tax system treats debt finance more favorably than equity finance.

Of course, a first-best policy solution to these incentive problems would be to eliminate deposit insurance, Too Big to Fail, and the favorable tax treatment of debt finance. However, such reform is either politically infeasible or, in the case of eliminating Too Big to Fail, relies on a strong commitment mechanism by the government. Thus, a second-best policy prescription is to impose higher capital requirements.

This second-best policy solution, however, is contingent upon the characteristics above being the only source of the socially inefficient level of capital. I would argue that even in the absence of these characteristics banks might still be biased toward debt finance and that imposing capital requirements could actually result in a loss in efficiency along a different dimension of welfare.

The reason that capital requirements could be welfare-reducing has to do with the unique nature of bank liabilities. Banks issue debt in the form of deposits (and, historically, bank notes), which circulate as a medium of exchange. Thus, bank debt serves a social purpose over and above the private purpose of debt finance. This social function is important. In a world that consists entirely of base money, for example, individuals will economize on money balances because money does not earn a pecuniary yield. As a result, the equilibrium quantity of consumption and production will not equal the socially optimum quantity. Bank money, or inside money, has the potential to be welfare improving. In fact, the main result of Cavalcanti and Wallace was that feasible allocations with outside (or base) money are a strict subset of those with inside money. Imposing strict capital requirements would reduce the set of feasible allocations and thereby reduce welfare along this dimension.

Now some might be quick to dismiss this particular welfare criterion. After all, greater stability of the financial system would seem to be more important than whether the equilibrium quantity of production is the socially optimal quantity. However, this ignores the potential interaction between the two. Caballero, for example, has argued that there is a shortage of safe assets. This claim is consistent with what I argued above. If the supply of media of exchange is not sufficient to allow for the socially optimal quantity of output, then there is a transaction asset shortage. As a result, there is a strong incentive for banks to create more transaction assets. This can explain why interest rates were low in the early part of the decade and can similarly explain the expansion in the use of highly-rated tranches of MBS in repurchase agreements prior to the financial crisis.

In other words, the shortage of transaction assets described above creates an incentive for banks to create new such assets in the form of new debt finance. Thus, it is possible that banks have a bias toward debt finance that would exist even independent of Too Big To Fail, deposit insurance, limited liability, and the tax system. In addition, one could argue that the desire to create such transaction assets played an important role in the subsequent financial crisis as some of the assets that were previously considered safe became information-sensitive and thereby less useful in this role.

To the extent that one believes that the transaction asset shortage is significant, policymakers face a difficult decision with respect to capital requirements. While imposing stronger capital requirements might lead to greater financial stability by imposing greater losses on shareholders, this requirement can also exacerbate the shortage of transaction assets. Banks and other financial institutions will then have a strong incentive to attempt to mitigate this shortage and will likely try to do so through off-balance sheet activities.

This is not meant to be a critique of capital requirements in general. However, in my view, it is not obvious that they are sufficient to produce the desired result. One must be mindful of the role that banks play in the creation of transaction assets. It would be nice to have an explicit framework in which to examine these issues more carefully. In the meantime, hopefully this provides some food for thought.

P.S. Miles Kimball has suggested to me that capital requirements coupled with a sovereign wealth fund could assist in financial stability and fill the gap in transaction assets. I am still thinking this over. I hope to have some thoughts on this soon.