Category Archives: Politics

What is Fair?

I recently read Thomas Piketty’s Capital in the Twenty-First Century (my review of which will soon be published by National Review, for those interested). An implicit theme of the book is fairness. Throughout the text, Piketty argues that his evidence on inequality suggests that inheritance is of growing importance in the determination of income and that this trend is likely to continue. Piketty sees this as problematic because it undermines meritocracy and even democracy. Nonetheless, when we start talking about there being too much inequality or too great an importance of inheritance, this necessarily raises the question: How much is too much?

Economists have common ways of dealing with that question. There are vast literatures on optimal policies of all different types. The literature on optimal policy has a very consistent theme. First, the economist writes down a set of assumptions. Second, the economist solves for the efficient allocation given those assumptions. Third, the economist considers whether a decentralized system can produce the efficient allocation. If the decentralized allocation is inefficient, then there is a role for policy. The optimal policy is the one that produces the efficient allocation.

When Piketty and others talk about inequality and policy, however, they aren’t really talking about efficiency. Meritocracy-type arguments are about fairness. Economists, however, often shy away from discussing fairness. The reason is often simple: who defines what is fair? Let’s consider an example. Suppose there are two workers, Adam and Steve, who are identical in every possible way and to this point have had the exact same economic outcomes. In addition, assume that we only observe what happens to these individuals at an annual frequency. Now suppose that this year Adam receives an entirely random increase in pay, where random simply means something that was completely unanticipated by everyone. This year, however, Steve loses his job for an entirely random reason (e.g. a clerical error removed Steve from the payroll and it cannot be fixed until next year). After this year, Adam and Steve go back to being identical (the clerical error is fixed!) and experience the same outcomes for the rest of their lives.

This is clearly a ridiculously stylized example. However, we can use it to illustrate the difference between how advocates of meritocracy and how economists typically evaluate policies. For someone concerned with a meritocratic view of fairness, the ideal policy in the example above is quite clear. Adam, through no actions of his own, has received a windfall in income. Steve, through no fault of his own, has lost his income for an entire year. Someone concerned only with meritocracy would argue that the ideal policy is therefore to tax the extra income of Adam and give it to Steve.

Most economists, armed with the same example, would not necessarily agree that the meritocratic policy is ideal. The most frequently used method of welfare analysis is the idea of Pareto optimality. According to Pareto optimality, a welfare improvement occurs when at least one person can be made better off without making anyone else worse off. In our example above, Pareto optimality implies that the optimal policy is to do nothing, because taxing Adam and giving the money to Steve makes Adam worse off.
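To make the Pareto logic concrete, here is a minimal sketch in Python; the dollar figures and the size of the transfer are hypothetical, chosen only for illustration:

```python
# Hypothetical incomes after the random shocks (in dollars).
adam_income = 60_000   # Adam received a random raise
steve_income = 0       # Steve lost his job for the year

def is_pareto_improvement(before, after):
    """A reallocation is a Pareto improvement if no one is worse off
    and at least one person is better off."""
    no_one_worse = all(a >= b for a, b in zip(after, before))
    someone_better = any(a > b for a, b in zip(after, before))
    return no_one_worse and someone_better

status_quo = (adam_income, steve_income)

# The meritocratic policy: tax Adam's windfall and give it to Steve.
transfer = 10_000
meritocratic = (adam_income - transfer, steve_income + transfer)

print(is_pareto_improvement(status_quo, meritocratic))  # False: Adam is made worse off
```

The transfer fails the Pareto test no matter how small it is, which is exactly why the criterion recommends doing nothing here.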

Advocates of meritocracy, however, are unlikely to be convinced by such an argument. And there is reason to believe that they shouldn’t be convinced. If Adam and Steve both knew ex ante that there was some random probability of unemployment, they might have chosen to behave differently. For example, they might each have purchased insurance against the risk of losing their job. If we assume that a third-party insurer can costlessly issue insurance and earns zero economic profit, then when Steve became unemployed, he would receive his premium back plus what is effectively a transfer from Adam.

Of course, in this example, there still isn’t any role for policy. Private insurance, rather than policy, can solve the problem. Nonetheless, as I detail below, this does give us a potentially better starting place for discussing fairness, efficiency, and inequality.

Suppose that inequality is entirely driven by random idiosyncratic shocks to individuals and that these events are uninsurable (e.g. one cannot insure oneself against being born to poor parents). There is a potential role for policy here that is both fair and efficient. In particular, the policy would correspond to what economists traditionally think of as ex ante efficiency. In other words, a fair policy is the policy that individuals would choose before they knew the realization of these random shocks.

As it turns out, there is a sizable literature in economics that examines these very issues and derives optimal policy. The conclusions of this literature are important because (1) they take the meritocratic view seriously, and (2) they arrive at policy conclusions that are often at odds with those proposed by advocates of meritocracy.

It is easy to make an argument for meritocracy. If people make deliberate decisions that improve their well-being, then it is easy to make the case that they are “deserving” of the spoils. However, if people’s well-being is entirely determined by sheer luck, then those who are worse off than others are simply victims of bad luck, and a case can be made that this is unfair. Unfortunately for advocates of meritocracy, all we observe in reality are equilibrium outcomes. In addition, individual success is often determined by both deliberate decision-making and luck. (No amount of anecdotes about Paris Hilton can prove otherwise.) I say this is unfortunate for advocates of meritocracy because it makes it difficult to determine how much success is due to luck and how much is due to deliberate action. (Of course, this is further muddled by the fact that when I say luck, I am referring to entirely random events, not the definition of the person who once told me that “luck is when preparation meets opportunity.”)

Nevertheless, our economic definition of fairness allows us to discuss issues of inequality and policy without having to disentangle the complex empirical relationships between luck, deliberate action, and success. Chris Phelan, for example, has made a number of contributions to this literature. One of his papers examines the equality of opportunity and the equality of outcome using a definition of fairness consistent with that described above. Rather than examining policy, he examines the equality of opportunity and outcome within a contracting framework. What he shows is that inequality of both opportunity and outcome is consistent with this notion of fairness in a dynamic context. In addition, even extreme inequality of result is consistent with this definition of fairness (extreme inequality of opportunity, however, is not supported, so long as people care about future generations).

Now, of course, this argument is not in any way the definitive word on the subject. However, the main point is that a high degree of inequality is not prima facie evidence of unfairness. In other words, not only is it difficult to disentangle the effects of luck and deliberate action in determining an individual’s income and/or wealth, it is actually quite difficult to figure out whether a particular society is fair simply by looking at aggregate statistics on inequality.

This point is especially important when one thinks about what types of policies should be pursued. Advocates of a meritocracy, for example, often promote punitive policies, especially policies pertaining to wealth and inheritance. Piketty, for example, advocates a global, progressive tax on wealth. The idea behind the tax is to forestall the growing importance of inheritance in the determination of income and wealth. While this policy might be logically consistent with that aim, it completely ignores the types of things that we care about when thinking about optimal policy.

For example, consider the Mirrlees approach to optimal taxation. The basic starting point in this type of analysis is to assume that skills are stochastic and that the government levies taxes on income. The government therefore faces a trade-off. It could tax income highly and redistribute that income to those with lower skill realizations, which represents a type of insurance against having low skills. On the other hand, high taxes on income discourage high-skill workers from producing. The optimal policy is the one that best balances this trade-off. As I note in my review of Piketty in National Review, this literature also considers the optimal taxation of inheritance. The trade-off here is that high taxes on inheritance discourage wealth accumulation but provide insurance to those who are born to poor parents. The optimal policy is the one that best balances these incentives. As Farhi and Werning point out in their work on inheritance, the optimal tax system for inheritance turns out to be progressive. However, the tax rates in this progressive system are negative (i.e. we subsidize inheritance, with the subsidy getting smaller as the size of the inheritance gets larger). The intuition is simple: this system provides insurance without reducing incentives for wealth accumulation.
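To illustrate what a progressive system with negative rates looks like, here is a stylized sketch; the functional form and all of the numbers are my own invention for illustration, not Farhi and Werning’s actual schedule:

```python
def marginal_rate(inheritance):
    """Stylized schedule in the spirit of a progressive tax with negative
    rates: every inheritance is subsidized, but the subsidy rate shrinks
    as the inheritance grows. All numbers are purely illustrative."""
    base_subsidy = 0.20  # 20% subsidy at the bottom of the distribution
    fade = min(inheritance / 1_000_000, 1.0)  # subsidy fades out by $1M
    return -base_subsidy * (1.0 - fade)  # negative rate = a subsidy

for size in (10_000, 100_000, 500_000, 1_000_000):
    print(size, round(marginal_rate(size), 3))
```

Every rate is weakly negative (a subsidy), and the rates rise toward zero as the inheritance grows, which is the sense in which the system is progressive.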

Economists are often criticized as being unconcerned with fairness. This is at least partially untrue. Economists are typically accustomed to thinking about optimality in the context of Pareto efficiency. As a result, economists looking at two different outcomes will be hesitant to suggest that one policy is better than another if neither represents a Pareto improvement. Nonetheless, this doesn’t mean that economists are unconcerned with the issue of fairness, nor does it suggest that economists are incapable of thinking about fairness. In fact, economists are capable of producing a definition of fairness and deriving the policy implications thereof. The problem for those most concerned with fairness is that the economic outcomes and policy conclusions consistent with this definition might not reinforce their ideological priors.

On SNAP Eligibility and Spending

William Galston has an op-ed in the Wall Street Journal that begins as follows:

We are entering a divisive debate on the Supplemental Nutrition Assistance Program (SNAP), popularly known as food stamps. Unless facts drive the debate, it will be destructive as well.

I certainly agree with this statement. Unfortunately, I found the op-ed misleading and vague (a vague op-ed can be somewhat forgiven since word counts are limited).

The basic premise of Galston’s op-ed is that critics of the increased spending on food stamps are misguided in their criticisms. For example, he explains:

The large increase in the program’s cost over the past decade mostly reflects worsening economic conditions rather than looser eligibility standards, increased benefits, or more waste, fraud and abuse.

[...]

The food-stamp program’s costs have soared since 2000, and especially since 2007. Here’s why.

First, there are many more poor people than there were at the end of the Clinton administration. Since 2000, the number of individuals in poverty has risen to 46.5 million from 31.6 million—to 15% of the total population from 11.3%. During the same period, the number of households with annual incomes under $25,000 rose to 30.2 million (24.7% of total households) from 21.9 million (21.2%).

Critics complain that beneficiaries and costs have continued to rise, even though the Great Recession officially ended in 2009. They’re right, but the number of poor people and low-income households has continued to rise as well.

Thus, according to Galston, much of the increase in food stamp spending can be explained by the rise in poverty over the last 13 years (and especially the last 6 years). If Galston is correct, then the ratio of households receiving SNAP benefits to the number of people below the poverty line should be constant (or at least roughly so). In other words, as the number of people below the poverty line increased, the number of households receiving SNAP benefits would increase in direct proportion.

Such a comparison, however, casts doubt on Galston’s claim. Casey Mulligan, in his book The Redistribution Recession, has taken great effort to actually calculate such ratios. What Mulligan found is that from 2007 to 2010, the number of families below 125% of the federal poverty level increased by 16%. That is indeed a large increase. However, the number of households receiving SNAP benefits increased by 58%. This means that the SNAP recipiency ratio, or the ratio of households receiving SNAP to the number of families below 125% of the poverty line (a higher threshold than the one Galston uses), rose by 37%.
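The arithmetic behind that 37% figure is straightforward:

```python
# Growth rates reported by Mulligan for 2007-2010.
poverty_growth = 0.16  # families below 125% of the federal poverty level
snap_growth = 0.58     # households receiving SNAP benefits

# If recipients had tracked poverty one-for-one, the recipiency ratio
# (SNAP households / families below 125% of the line) would not move.
ratio_change = (1 + snap_growth) / (1 + poverty_growth) - 1
print(round(ratio_change, 2))  # 0.36, i.e. roughly a 37% rise in the ratio
```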

So what can explain the fact that recipients are rising so much faster than poverty? One possible explanation is eligibility requirements. Since 2008, there have been several changes to eligibility for food stamps. For example, the Farm Bill passed in 2008 increased the maximum benefit that beneficiaries could receive, excluded some income from the formula used to determine eligibility, and weakened the evaluation of the assets of potential enrollees. In addition, the American Recovery and Reinvestment Act loosened eligibility requirements by once again increasing the maximum benefit that one could receive, giving states the ability to loosen the work requirement, and further loosening income requirements.

Galston, however, downplays most of these changes and argues that macroeconomic trends explain the vast majority of the rise in SNAP spending. The problem with this type of explanation is that it takes the actual increase in recipients and then explains the increase in spending ex post. To understand why this is misleading, consider the following example. Suppose that there is an individual who lost his job in 2009. Under 2007 eligibility rules he would not have been eligible for SNAP, whereas after the changes he is now eligible, increasing the number of SNAP recipients. Galston might claim that this change is the result of macroeconomic trends, because this person would not have enrolled in SNAP had he not lost his job. Others might say that the change is due to eligibility requirements, because if the worker had lost his job two years earlier, he would not have been eligible. While I certainly understand Galston’s perspective on this, the relevant comparison is to the counterfactual. In other words, we can’t explain the rise in SNAP recipients ex post; we need to compare what actually happened to what would have happened in the absence of a policy change.

So what do the counterfactuals say?

Again, Casey Mulligan has constructed these counterfactuals. What he finds is that between 2007 and 2010, the increase in per capita SNAP spending was 100%, adjusted for inflation. He then constructs two counterfactuals. The first counterfactual takes macroeconomic trends as given and computes the increase in per capita SNAP spending under 2007 eligibility rules. The second counterfactual does the same thing assuming that in addition to maintaining 2007 eligibility rules, the government had maintained constant real benefit rules (i.e. would not have increased the after-inflation maximum benefit).

The first counterfactual suggests that from 2007 to 2010 per capita SNAP spending would have increased by only 60%, adjusted for inflation. The second counterfactual suggests that per capita SNAP spending would have increased by only 24%, adjusted for inflation. Had no policy changes been enacted in 2008 and 2009, per capita spending on SNAP would have been 62% of what it actually was in 2010. Put differently, 38% of 2010 per capita spending is attributable to changes in eligibility and benefit rules. Thus, contrary to Galston’s claims, a very large fraction of the increase in SNAP spending is explained by changes in eligibility.
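Indexing 2007 per capita spending to 1.00, the decomposition implied by Mulligan’s counterfactuals works out as follows:

```python
actual = 2.00          # actual 2010 spending: a 100% real increase over 2007
cf_eligibility = 1.60  # counterfactual 1: 2007 eligibility rules (+60%)
cf_full = 1.24         # counterfactual 2: 2007 eligibility rules and
                       # constant real benefits (+24%)

share_without_changes = cf_full / actual
share_attributable = 1 - share_without_changes

print(round(share_without_changes, 2))  # 0.62: spending would have been 62% of actual
print(round(share_attributable, 2))     # 0.38: share attributable to policy changes
```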

An entirely separate question is whether or not this increased spending is worth it. Answering that question is certainly beyond the scope of this post. However, it is important to be mindful that such an analysis must consider both the costs and the benefits of the expansion. The benefits are obvious: households receive assistance in purchasing food and feeding their families. The costs, however, are more complex. A significant fraction of the increase in spending can be explained by changes in eligibility, so we need to consider the counterfactual. One big issue is how much of the increased benefits are going to those who would not have qualified under the asset tests. Another issue to consider is the effect of changes in eligibility on the labor supply of those at or near the new eligibility thresholds, especially given the work requirement waiver. And there is the obvious monetary cost to the taxpayer. Too often, those on each side of the debate focus on only the benefits or only the costs.

Whether the policy changes are worth it depends on a careful analysis of these questions. I will remain agnostic with regard to that type of analysis. However, to argue, as Galston does, that those concerned about the expansion of spending due to changes in eligibility are misguided and driven by “anti-government ideology” is unfair to those who have carefully looked at the data.

Are Capital Requirements Meaningless?

Yes, essentially.

The push for stricter capital requirements has become very popular among economists and policy pundits. To understand the calls for stricter capital requirements, consider a basic textbook analysis of a consolidated bank balance sheet. On the asset side, banks have things like loans, securities, reserves, etc. On the liability side, a traditional commercial bank has deposits and something called equity capital. In this example, equity capital is defined as the difference between the bank’s assets and its deposits (note that banks don’t actually “hold” capital).

So why do we care about capital?

Suppose that assets were exactly equal to deposits. In this case, the equity capital of the bank would be non-existent. As a result, any loss on the asset side of the bank’s balance sheet would leave the bank with insufficient assets to cover its outstanding liabilities. The bank would be insolvent.

Now suppose instead that the bank’s assets exceed its deposits and the bank experiences the same loss. If this loss is less than the bank’s equity capital, then the bank remains solvent and there is no loss to depositors.
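The textbook balance-sheet logic can be sketched with hypothetical numbers:

```python
# Stylized consolidated balance sheet (all figures hypothetical).
assets = 100.0
deposits = 92.0
equity_capital = assets - deposits  # a residual, not something the bank "holds"

loss = 5.0  # a loss on the asset side of the balance sheet

if loss <= equity_capital:
    print("solvent: shareholders absorb the entire loss")
else:
    print(f"insolvent: depositors face a shortfall of {loss - equity_capital:.1f}")
```

With equity capital of 8, the bank absorbs a loss of 5; had deposits been 100, the same loss would have made the bank insolvent.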

The call for capital requirements is driven by examples like those above, coupled with the institutional environment in which banks operate. For example, banks have limited liability. This means that, in the event that a bank becomes insolvent, shareholders lose only their initial investment. Put differently, bank shareholders are not assessed for the losses to depositors. Since the private cost of insolvency is less than the public cost, shareholders have an incentive to favor riskier assets than they otherwise would. Presumably this notion is well understood, since the shift to limited liability for banks in the early 1930s was coupled with the creation of government deposit insurance. However, while deposit insurance insulates deposits from losses due to insolvency, it also has the effect of encouraging banks to take on more risk, since depositors have little incentive to monitor the bank balance sheet.

Within this environment, capital requirements are thought to reduce the risk of insolvency. Requiring that banks have equity capital greater than or equal to some percentage of their assets should make banks less likely to become insolvent. This is because, all else equal, a greater amount of capital means that a bank can withstand larger losses on the asset side of its balance sheet without becoming insolvent.

There is nothing logically wrong with the call for greater capital requirements. In fact, calls for greater capital requirements represent a seemingly simple and intuitive solution to the risk of financial instability. So why then does the title of this post ask if capital requirements are meaningless? The answer is that calls for higher capital requirements ignore some of the realities of how banks actually operate.

The first problem with capital requirements is that they impose a substantial information cost on bank regulators. How should value be reported? Should assets be marked to market or carried at book value? Some assets are opaque. More importantly, this practice shifts the responsibility of risk management from the bank to the regulator. This makes the actual regulation of banks difficult.

Second, and more importantly, capital requirements provide banks with an incentive to circumvent the intent of the regulation while appearing compliant. In particular, a great deal of banking over the last couple of decades has been pushed off the bank balance sheet. Capital requirements give banks an incentive to move even more assets off their balance sheets. Similarly, banks can always start doing bank-like things without actually being considered banks, thereby avoiding capital requirements altogether. In addition, this provides an opportunity for non-banks to enter the market to provide bank-like services. In other words, the effect of capital requirements is to make the official banking sector smaller without necessarily changing what we would consider banking activity. Put simply, capital requirements become less meaningful when there are substitutes for bank loans.

Advocates of capital requirements certainly have arguments against these criticisms. They would perhaps be correct to conclude that the costs and imperfections associated with actual regulation would be worth it if capital requirements brought greater stability to the system. However, the second point that I made above would seem to render this point moot.

For example, advocates of higher capital requirements seem to think that redefining what we call a bank and adopting better general accounting practices would eliminate the second and most important problem that I highlighted above. I remain doubtful that such actions would have any meaningful impact. First, redefining what a bank is to ensure that banks and non-banks in the current sense remain on equal footing with regard to capital requirements is at best a static solution. Over time, firms that would like to be non-banks will figure out how to avoid being considered banks. Changing the definition of a bank only gives them an incentive to change the definition of their firm. In addition, I remain unconvinced that banks will be unable to circumvent changes to general accounting practices. Banks are already quite adept at circumventing accounting practices and hiding loans off their balance sheets.

Those who advocate capital requirements are likely to find this criticism wanting. If so, I am happy to have that debate. In addition, I would like to point out that my skepticism about capital requirements should not be seen as advocacy of the status quo. In reality, I favor a different change to the banking system that would provide banks with better incentives. I have written about this alternative here and I will be writing another post on this topic soon.

How Much Capital?

Recently, it has become very popular to argue that the best means of financial reform is to require banks to hold more capital. Put differently, banks should finance using more equity relative to debt. This idea is certainly not without merit. In a Modigliani-Miller world, banks should be indifferent between debt and equity. I would like to take a step back from the policy response and ask why banks overwhelmingly finance their activities with debt. It is my hope that the answer to this question will provide some way to focus the debate.

It is clear that when banks finance primarily using equity, adverse shocks to the asset side of a bank’s balance sheet primarily affect shareholders. This seems at least to be socially desirable if not privately desirable. The imposition of capital requirements would therefore seem to imply that there is some market failure (i.e. the private benefit from holding more capital is less than the social benefit). Even if this is true, however, one needs to consider what makes it so.

One hypothesis for why banks hold too little capital is that they don’t internalize the total cost of a bank failure. For example, banks are limited liability corporations and are covered by federal deposit insurance. Thus, if the bank takes on too much risk and becomes insolvent, shareholders lose their initial investment. Depositors are made whole through deposit insurance. It is this latter characteristic that is key. If bank shareholders were responsible not only for their initial investment but also for the losses to depositors, banks would have different incentives. In fact, this was the case under the U.S. system of double liability, which lasted from just after the Civil War until the Banking Act of 1933. (I have written about this previously here.) Under that system, bank shareholders had a stronger incentive to finance using equity. In fact, the evidence shows that banks with double liability took on less leverage and less risk than their limited liability counterparts.
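A hypothetical failure illustrates the difference in shareholder exposure under the two regimes; the function and all figures are my own illustration of the assessment rule described above, not a model of any particular historical case:

```python
def shareholder_loss(investment, par_value, depositor_shortfall, double_liability):
    """Loss borne by a shareholder when the bank fails.

    Under limited liability the loss is capped at the initial investment.
    Under double liability the receiver can additionally assess the
    shareholder, up to the par value of the shares, to compensate depositors."""
    loss = investment
    if double_liability:
        loss += min(par_value, depositor_shortfall)
    return loss

# Hypothetical failure: $100 invested, $100 par value, $60 owed to depositors.
print(shareholder_loss(100, 100, 60, double_liability=False))  # 100
print(shareholder_loss(100, 100, 60, double_liability=True))   # 160
```

Because the downside is larger under double liability, shareholders have a stronger incentive to limit leverage and monitor risk ex ante.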

Along similar lines, the existence of Too Big To Fail creates greater incentives toward risk-taking and leverage because, in the event that the bank becomes insolvent, it will be rescued by the government. Finally, the U.S. tax system treats debt finance more favorably than equity finance.

Of course, a first-best policy solution to these incentive problems would be to eliminate deposit insurance, Too Big to Fail, and the favorable tax treatment of debt finance. However, such reform is either politically infeasible or, in the case of eliminating Too Big to Fail, relies on a strong commitment mechanism by the government. Thus, a second-best policy prescription is to impose higher capital requirements.

This second-best policy solution, however, is contingent upon the characteristics above being the only source of the socially inefficient level of capital. I would argue that even in the absence of these characteristics banks might still be biased toward debt finance and that imposing capital requirements could actually result in a loss in efficiency along a different dimension of welfare.

The reason that capital requirements could be welfare-reducing has to do with the unique nature of bank liabilities. Banks issue debt in the form of deposits (and, historically, bank notes), which circulate as a medium of exchange. Thus, bank debt serves a social purpose over and above the private purpose of debt finance. This social function is important. In a world that consists entirely of base money, for example, individuals will economize on money balances because money does not earn a pecuniary yield. As a result, the equilibrium quantity of consumption and production will not equal the socially optimum quantity. Bank money, or inside money, has the potential to be welfare improving. In fact, the main result of Cavalcanti and Wallace was that feasible allocations with outside (or base) money are a strict subset of those with inside money. Imposing strict capital requirements would reduce the set of feasible allocations and thereby reduce welfare along this dimension.

Now some might be quick to dismiss this particular welfare criterion. After all, greater stability of the financial system would seem to be more important than whether the equilibrium quantity of production is the socially optimum quantity. However, this ignores the potential interaction between the two. Caballero, for example, has argued that there is a shortage of safe assets. This claim is consistent with what I argued above. If the supply of media of exchange is not sufficient to allow for the socially optimum quantity of output, then there is a transaction asset shortage. As a result, there is a strong incentive for banks to create more transaction assets. This can explain why interest rates were low in the early part of the decade and can similarly explain the expansion in the use of highly rated tranches of MBS in repurchase agreements prior to the financial crisis.

In other words, the shortage of transaction assets described above creates an incentive for banks to create new such assets in the form of new debt finance. Thus, it is possible that banks have a bias toward debt finance that would exist even independent of Too Big To Fail, deposit insurance, limited liability, and the tax system. In addition, one could argue that the desire to create such transaction assets played an important role in the subsequent financial crisis, as some of the assets that were previously considered safe became information-sensitive and thereby less useful in this role.

To the extent that one believes that the transaction asset shortage is significant, policymakers face a difficult decision with respect to capital requirements. While imposing stronger capital requirements might lead to greater financial stability by imposing greater losses on shareholders, this requirement can also exacerbate the shortage of transaction assets. Banks and other financial institutions will then have a strong incentive to attempt to mitigate this shortage and will likely try to do so through off-balance sheet activities.

This is not meant to be a critique of capital requirements in general. However, in my view, it is not obvious that they are sufficient to produce the desired result. One must be mindful of the role that banks play in the creation of transaction assets. It would be nice to have an explicit framework in which to examine these issues more carefully. In the meantime, hopefully this provides some food for thought.

P.S. Miles Kimball has suggested to me that capital requirements coupled with a sovereign wealth fund could assist in financial stability and fill the gap in transaction assets. I am still thinking this over. I hope to have some thoughts on this soon.

Re-Thinking Financial Reform

Over at National Review Online I advocate reviving double liability for banks. Here is an excerpt:

The banking system in the U.S. hasn’t always been like this. Between the Civil War and the Great Depression, banks did not have limited liability. Instead, they had double liability. When a bank became insolvent, shareholders lost their initial investment (just as they do under limited liability today). But in addition, a receiver would assess the value of the asset holdings of the bank to determine the par value of the outstanding shares. Shareholders had to pay an amount that could be as high as the current value of their shares in compensation to depositors and creditors.

Shareholders and bank managers (who were often shareholders themselves) thus had a stronger incentive than they do today to assess the risk of investments accurately, because they were risking not just their initial investment but the total value of the banks’ assets. Shareholders also had an incentive to better monitor bank managers and the bank balance sheet.

Monetary Theory and the Platinum Coin

Yesterday I argued that the platinum coin is a bad idea. In doing so I received a substantial amount of pushback. Some have argued that while the platinum coin might be a dumb idea, it is preferable to being held hostage by recalcitrant Republicans. Others argued that my claims about the potential inflationary effect of the platinum coin were overblown. With regards to the first claim, I have very little to add other than the fact that I don’t subscribe to the “two wrongs make a right” theory of public policy. The second claim, however, is more substantive. It is also something about which economic theory has something to say.

In many contemporary models, money is either excluded completely or is introduced using a reduced-form approach, such as including real money balances in the utility function. These models are ill-equipped to tackle the effects of the introduction of the platinum coin because they either assume that money always has value (it generates utility) or that it has no value whatsoever. An analysis of the effects of the platinum coin should be backed by an understanding of what gives money value in a world of fiat money and of the conditions necessary to ensure a unique equilibrium in which money has value. One can then show that having the Fed conduct open market sales to offset the increase in the monetary base from the minting of the platinum coin (i.e., holding the money supply constant) might not be sufficient to prevent a significant inflation.

To illustrate the properties of money, I am going to employ the monetary search model of Lagos and Wright. (If you’re allergic to math, scroll down a bit.) I am employing this approach because it is built on first principles, it is explicit about the conditions under which a monetary equilibrium exists, and it can be used to derive a dynamic equilibrium condition that sheds light on the value of money.

The basic setup is as follows. Time is discrete and continues forever. There are two types of agents, buyers and sellers. Each time period is divided into two subperiods. In the first subperiod, buyers and sellers are matched pairwise and anonymously to trade (we will call this the decentralized market, or DM). In the second subperiod, buyers and sellers all meet in a centralized (Walrasian) market (we will call this the centralized market, or CM). What makes buyers and sellers different are their preferences. Buyers want to purchase goods in the DM, but cannot produce in that subperiod. Sellers want to purchase goods in the CM, but cannot produce in that subperiod. Thus, there is a basic absence of double-coincidence of wants problem. The anonymity of buyers and sellers in the DM means that money is essential for trade. Given this basic setup, we can examine the conditions under which money has value and this will allow us to discuss the implications of the platinum coin. (Note that we can confine our analysis to buyers since sellers will never carry money into the DM since they never consume in the DM.)

Suppose that buyers have preferences:

E_0 \sum_{t = 0}^{\infty} \beta^t [u(q_t) - x_t]

where \beta is the discount factor, q is the quantity of goods purchased in the DM, and x is the quantity of goods produced by the buyer in the CM. Consumption of the DM good provides utility to the buyer and production of the CM good generates disutility of production. Here, the utility function satisfies u'>0 ; u''<0.

The evolution of money balances for the buyer is given by:

\phi_t m' = \phi_t m + x_t

where \phi denotes the price of money in terms of goods, m denotes money balances, and the apostrophe denotes an end of period value. Now let's denote the value function for buyers in the DM as V_t(m) and the value function for buyers entering the CM as W_t(m).

Thus, entering the CM, the buyer's value function satisfies:

W_t(m) = \max_{x,m'} [-x_t + \beta V_{t + 1}(m')]

Using the evolution of money balances equation, we can re-write this as

W_t(m) = \phi_t m + \max_{m'} [-\phi_t m' + \beta V_{t + 1}(m')]

In the DM, buyers and sellers are matched pairwise. Once matched, the buyers offer money in exchange for goods. For simplicity, we assume that buyers make take-it-or-leave-it offers to sellers such that \phi_t d = c(q_t) where d \in [0,m] represents the quantity of money balances offered for trade and c(q_t) represents the disutility generated by sellers from producing the DM good. The value function for buyers in the DM is given as

V_t(m) = u(q_t) + W_t(m - d)

Using the linearity of W and the conditions of the buyers' offer, this can be re-written as:

V_t(m) = u(q_t) - c(q_t) + \phi_t m

Iterating this expression forward and substituting into W, we can then write the buyer's problem as:

\max_{m} \bigg[-\bigg({{\phi_t/\phi_{t + 1}}\over{\beta}} - 1\bigg)\phi_{t + 1} m + u(q_{t+1}) - c(q_{t+1}) \bigg]

[If you're trying to skip the math, pick things up here.]

From this last expression, we can now place conditions on whether anyone will actually hold fiat money. It follows from the maximization problem above that the necessary condition for a monetary equilibrium is that \phi_t \geq \beta \phi_{t + 1}. Intuitively, this means that the value of holding fiat money today is greater than or equal to the discounted value of holding money tomorrow. If this condition were violated, buyers would prefer to postpone spending their money indefinitely, and no monetary equilibrium could exist.

Thus, let's suppose that this condition is satisfied. If so, this also means that money is costly to hold (i.e., there is an opportunity cost of holding money). As a result, buyers will only hold an amount of money necessary to finance consumption (in mathematical terms, d = m). This means that the buyers' offer can now be written \phi_t m = c(q_t). This gives us the necessary envelope conditions to solve the maximization problem above. Doing so yields our equilibrium difference equation, which will allow us to talk about the effects of the platinum coin. The difference equation is given as

\phi_t = \beta \phi_{t + 1}\bigg[ \bigg(u'(q_{t + 1})/c'(q_{t + 1}) - 1 \bigg) + 1 \bigg]

Since money is neutral in our framework, we can assume that there is a steady state solution such that q_t = q \forall t. Thus, the difference equation can be written:

\phi_t = \beta \phi_{t + 1}\bigg[ \bigg(u'(q)/c'(q) - 1 \bigg) + 1 \bigg]

This difference equation now governs the dynamics of the price of money. We can now use it to assess claims that the platinum coin would not have any inflationary effect.

Suppose that u and c have standard functional forms. Specifically, assume that u(q) = {{q^{1 - \gamma}}\over{1 - \gamma}} and c(q) = q. [I should note that the conclusions here are robust to more general functional forms as well.] Since the buyers' offer implies q_{t + 1} = \phi_{t + 1} m whenever buyers spend all of their money balances, the mapping from \phi_{t + 1} to \phi_t is convex up to the point at which buyers can afford the efficient quantity, after which it becomes linear. The convex portion is what is important for our purposes. The fact that the difference equation is convex implies that it intersects the 45-degree line used to plot the steady-state equilibrium in two different places. This means that there are multiple equilibria. One equilibrium, which we will call \phi_{ss}, is the equilibrium assumed by advocates of the platinum coin. They assume that if we begin in this equilibrium, the Federal Reserve can simply hold the money supply constant through open market operations and in so doing prevent the price of money (i.e., the inverse of the price level) from fluctuating.

However, this suggestion ignores the fact that the difference equation also intersects the 45-degree line at the origin. Coupled with the range of convexity of the difference equation, this implies that there are multiple equilibrium paths that converge to an equilibrium in which money does not have value (i.e., \phi = 0). Put in economic terms, there are equilibrium paths along which \phi is falling, which means that the price level is rising. It is therefore possible to have inflation even with a constant money supply. The beliefs of economic agents are self-fulfilling.
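To see how such a path plays out, here is a minimal numerical sketch of the difference equation under the functional forms above. The parameter values are illustrative, not calibrated; starting just below the monetary steady state with a constant money supply, the price of money falls toward zero:

```python
# Iterate the equilibrium condition phi_t = beta * phi_{t+1} * u'(q_{t+1})/c'(q_{t+1})
# forward, with u(q) = q**(1-gamma)/(1-gamma), c(q) = q, and the constrained
# buyers' offer q_{t+1} = phi_{t+1} * m. Illustrative parameters only.
beta, gamma, m = 0.96, 0.5, 1.0

# Monetary steady state solves phi_ss = beta * phi_ss**(1-gamma) * m**(-gamma).
phi_ss = (beta * m**(-gamma)) ** (1.0 / gamma)

def next_phi(phi_t):
    # Solve phi_t = beta * (phi_{t+1} * m)**(-gamma) * phi_{t+1} for phi_{t+1}.
    return (phi_t * m**gamma / beta) ** (1.0 / (1.0 - gamma))

phi = 0.99 * phi_ss   # beliefs start marginally below the "good" equilibrium
path = [phi]
for _ in range(30):
    phi = next_phi(phi)
    path.append(phi)

# With the money supply held constant, phi falls toward zero, i.e. the
# price level (1/phi) rises without bound: a self-fulfilling inflation.
print(round(phi_ss, 4), round(path[0], 4), path[-1] < 1e-6)
```

Nothing about the money supply changes along this path; the entire decline in the value of money is driven by beliefs, which is precisely the point of the multiple-equilibria argument.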

In terms of the platinum coin, this implies that the explicit monetization of the debt by minting the platinum coin can potentially have disastrous effects even if the president states that the infusion is temporary and even if the Federal Reserve conducts open market operations to offset the increase in the monetary base caused by the deposit of the coin by the Treasury. In short, if the debt monetization were to have a significant impact on inflation expectations, the United States could experience significant inflation even if the Federal Reserve tried to hold the money supply constant. The very fact that this is a possible outcome should be enough to render the platinum coin a bad idea.

The Debt Ceiling, Platinum Coins, and Other Nonsense

In the coming months, it is very likely that the president and Congressional Republicans will once again go to battle over the debt ceiling. Like many others, I am already lamenting the idea of more “negotiations” between the president and Congress. However, unlike others, I see this as a problem with the debt ceiling itself, not with the Congressional Republicans. So long as it is within their power to use the debt ceiling as a bargaining chip, they should be free to do so if they wish. (They should recognize, of course, that this is not as strong a bargaining chip as they think. A refusal to raise the debt ceiling without spending concessions from the president is simply a game of chicken, and anti-coordination games are unlikely to be the best strategy for achieving one’s objective.)

Nonetheless, a growing subset of individuals who believe that the Congressional Republicans are recalcitrant have suggested that the president authorize the Treasury department to mint a $1 trillion platinum coin (because this is within constitutional authority) and deposit it with the Federal Reserve to enable the payment of the federal government debt. The argument is that in doing so the president can circumvent the debt ceiling within constitutional limits. In addition, advocates argue that, since the coin will never circulate, the minting of the coin will not be inflationary.

If this idea sounds ludicrous, that is because it is.

Minting a platinum coin sufficient to finance the deficit is what is traditionally known as monetizing the debt. To put it bluntly, large-scale debt monetization is bad; it is traditionally how hyperinflations start. Nonetheless, we are told that we needn’t be concerned because the coin won’t circulate. This ignores two factors: (1) the point of the coin is to pay for the debt, and (2) money is fungible. If the Treasury minted a $1 trillion platinum coin and deposited it at the Federal Reserve, the entire point of doing so would be to allow the Federal Reserve to make payments on behalf of the Treasury for government spending that exceeds tax revenue. Even if the coin itself doesn’t circulate (how could it?), the money supply can still increase substantially as the Treasury writes checks out of its account at the Federal Reserve.

Advocates, however, dismiss this possibility. Josh Barro, for example, argues:

[Inflation] is a more serious objection, and it gets at what the platinum coin strategy really is — financing the federal government’s operations by printing money instead of borrowing it. The trillion-dollar coin will never circulate, but it will be used to back cash payments coming from the Treasury that would have otherwise been financed by bond purchases.

If the government financed itself this way in general, that would absolutely be inflationary. But the president can hold inflation expectations steady by making absolutely clear that the policy will not lead to a net change in the money supply over the long term. Obama should pledge that once Congress authorizes additional borrowing, he will direct the Treasury to issue bonds to cover the government’s coin-backed spending and then to melt the coin.

I similarly believe that expectations are important. However, Barro seems to fall into the growing category of folks who think that expectations are all that matter and that policymakers can affect them perfectly. An announcement from the president that the increase in the money supply isn’t permanent does not guarantee that the minting of the coin is seen as temporary. To believe that the money supply would not increase, we would have to believe not only that the Treasury would commit to borrowing money in the future once the debt ceiling was lifted, but also that it would borrow enough to cover the previously coin-backed spending so that the $1 trillion coin could be withdrawn. In other words, we would have to believe that the Treasury could perfectly commit itself to actions it would prefer not to take. Alternatively, we would have to assume that the Federal Reserve would conduct large-scale asset sales to prevent increases in the money supply. Put differently, in the midst of conducting large-scale asset purchases, the Fed would have to commit to large-scale asset sales to prevent the money supply from growing by more than it wishes as a result of the minting of the coin. The policy would not only tie the hands of monetary policymakers; forcing the Federal Reserve to conduct such policy is a threat to its independence. And if inflation expectations became unanchored, this could exacerbate the effects of the increased money supply and the coin could be particularly harmful.

Advocates think that it gives the president an upper hand in debt ceiling negotiations. However, all it does is increase the stakes of the chicken game. The platinum coin is a bad idea.

On Fiscal Policy

In recent weeks, there seems to have been a resurgence in the discussion of the relative effectiveness of counter-cyclical fiscal policy. This discussion is clouded by the fact that there are some whose political ideology seems to get in the way of reasonable discussion of evidence (and who believe that only those who disagree with them are biased!). In this post I would like to make the following points: (1) there is no such thing as “the” fiscal multiplier, (2) empirical and theoretical estimates are highly sensitive to assumptions about monetary policy — assumptions that seem to be violated by the behavior of central banks, and (3) New Keynesian models are flawed models for estimating a fiscal multiplier (especially in the context of log-linearized equations).

The most fundamental point surrounding the discussion of the fiscal multiplier is that there is, in fact, no such thing as “the” fiscal multiplier. Put differently, the fiscal multiplier is not a structural parameter that can be identified through careful theoretical or empirical work. To the extent that it is possible for a fiscal multiplier to exist, such a multiplier is likely to be dependent on a number of other factors such as the monetary regime and the composition of spending, to name two.

This point is important as it pertains to interpretations of empirical work designed to measure the magnitude of response of a change in fiscal policy. For example, in order to empirically estimate the magnitude of the effect of fiscal policy on output, one needs to find some sort of exogenous change in government purchases to avoid problems of endogeneity in estimation. To avoid the problem of endogeneity, many researchers have used military purchases since military build-ups in the face of war can be considered exogenous (i.e. the government isn’t building tanks to increase GDP, but to fight a war). These types of studies provide estimates of a multiplier effect of military purchases on real output. However, it is important to note that these estimates do not necessarily provide an estimate of a fiscal multiplier that corresponds with all forms of government spending. The composition of spending matters.

This point is particularly important when we consider the differences between these estimates and the likely effects of the American Recovery and Reinvestment Act (ARRA), commonly referred to as “the stimulus package.” The ARRA does not include a significant chunk of military spending. In fact, a significant portion of the ARRA consists of transfer payments. Even in the Keynesian income-expenditure model that is unfortunately still taught to undergraduates, transfer payments have no effect on GDP. Thus, the multiplier effect of these provisions is zero. It follows that it would be incorrect to take an estimate of a fiscal multiplier from studies that use military spending as an explanatory variable and apply that multiplier to the total amount of spending. Nor is there any obvious reason to apply this multiplier to the non-transfer fraction of the ARRA, as it is not obvious that the marginal impact on real output from building a road, a bridge, or a school, or buying a new fleet of government vehicles, is equal to the marginal impact of military spending.
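As a crude illustration of why composition matters, consider applying a single multiplier estimate to the whole package versus only its non-transfer portion. The multiplier and transfer share below are hypothetical placeholders of my own, not estimates; $787 billion is the ARRA's commonly cited headline figure:

```python
# Hypothetical numbers: applying a military-spending multiplier to an
# entire package that is partly transfers overstates the implied effect.
package = 787.0          # ARRA headline figure, $bn
transfer_share = 0.4     # hypothetical share consisting of transfer payments
multiplier = 0.8         # hypothetical multiplier from military-spending studies

naive_effect = multiplier * package
# Transfers carry a multiplier of zero in the income-expenditure logic above,
# so at most the multiplier applies to the non-transfer portion:
adjusted_effect = multiplier * package * (1 - transfer_share)

print(round(naive_effect, 1), round(adjusted_effect, 1))  # 629.6 vs 377.8
```

Even this adjusted figure is generous, since it still assumes the military-spending multiplier carries over to roads, bridges, and schools.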

Even if we ignore the issue of the composition of spending, it is necessary to consider the effects of fiscal policy in light of monetary policy. If monetary policy responds actively to changes in economic conditions, then a purportedly effective fiscal policy will cause monetary policy to be more contractionary than it would have been otherwise. Put differently, monetary policy will offset, either in whole or in part, the effects of fiscal policy.

Recent theoretical and empirical work appreciates this point, but argues that at the zero lower bound on nominal interest rates, monetary policy is ineffective and therefore fiscal policy can be effective. But how valid is this assumption? Central bankers certainly don’t believe that monetary policy is ineffective at the zero lower bound; if they did, there would be no quantitative easing to debate because none would have taken place. In addition, this assumption requires that monetary policy work solely through the nominal interest rate (or the expected time path of the nominal interest rate). If that were the case, monetary policy would always be relatively ineffective, because interest rates do not have strong marginal effects on variables like investment. Empirical work on monetary policy over the last 20 years seems to refute that ineffectiveness proposition. In fact, Ben Bernanke’s work on the credit channel is motivated by the very fact that the federal funds rate seems insufficient to understand the transmission of monetary policy. Once we dispense with the notion that monetary policy is ineffective at the zero lower bound, we realize that empirical studies that estimate a fiscal multiplier by holding monetary policy constant are really estimating a strict upper bound.

These empirical estimates, however, have been informed by the predominant framework for monetary policy and business cycle analysis, the New Keynesian model. In the NK model, monetary policy works solely through changes in the interest rate. As a result, at the zero lower bound, fiscal policy can be effective — quite effective in some cases. Nonetheless, there are reasons to doubt these estimates of the fiscal multiplier. First, if monetary policy works through alternative transmission mechanisms, then the assumption that we can hold monetary policy constant is flawed. Second, even if we believe that the zero lower bound is a legitimate constraint on policy there is reason to believe that the estimated marginal effect of fiscal policy in the NK model is flawed.

The most compelling reason to doubt the multipliers that come from NK models, even imposing the constraint of the zero lower bound, is that these estimates are driven by the particular way in which the models are solved. For example, Gauti Eggertsson (and others) have pointed out that in the NK model at the zero lower bound, there is something called the paradox of toil. Intuitively, the paradox of toil refers to the result that labor supply actually declines following a decrease in taxes. A paradox indeed! (Upon hearing this, a commenter, who shall remain nameless, at a recent conference at the St. Louis Fed found it interesting that it would presumably be possible to increase government spending, fund the increase through higher taxes on labor income, and still generate a multiplier effect.) This characteristic is part of a broader conceptualization of the world at the zero lower bound. In short, things look profoundly different than when the interest rate is positive.

But is the world really that different at the zero lower bound? The answer turns out to be no. As Tony Braun and his co-authors have shown, the funny business that goes on at the zero lower bound (i.e., the conclusions that run counter to the conventional wisdom in the discipline) is an artifact of the way in which NK models are solved. In particular, the standard way to solve these models is to take the set of non-linear equations that summarize equilibrium and log-linearize around the steady state. One can then generate theoretical impulse response functions from the log-linearized solution. The impact multiplier from a change in government spending in the NK model is therefore a theoretical estimate of the fiscal multiplier. However, it turns out that when the models are solved with non-linear methods, the counter-intuitive results disappear and the theoretical estimates of the multiplier are substantially lower — again, even imposing the zero lower bound as a constraint.

The general takeaway from all of this is that there is reason to be skeptical about the discussions and the purported precision of estimates of the fiscal multiplier — whether theoretical or empirical. (And that is to say nothing about the political constraints that go into devising the composition and allocation of spending!) However, what I have written does NOT necessarily imply that there is no role for fiscal policy during a recession. If some form of infrastructure investment by the government passes the cost-benefit test, I think that it is certainly reasonable to move such projects closer to the present because even in the absence of a multiplier effect these projects provide something of value to society. If there is an additional effect on output, then all the better.

We Are Not Entitled to Our Own Facts

Contrarianism is running rampant. Go to a local bookstore and you will find countless “what you know just ain’t so”-type arguments. I am beginning to wonder whether this trend downplays serious analysis and leaves us open to any argument, regardless of whether it is blatantly incorrect. A case in point is a recent op-ed in the New York Times entitled “Why Chavez Was Re-Elected.”

The op-ed contends that Chavez was re-elected because his policies have been successful. According to the op-ed:

Since the Chávez government got control over the national oil industry, poverty has been cut by half, and extreme poverty by 70 percent. College enrollment has more than doubled, millions of people have access to health care for the first time and the number of people eligible for public pensions has quadrupled.

According to survey evidence by the World Bank, poverty has fallen in Venezuela. However, it is important to put this in context. The earliest data that we have available on poverty from the World Bank is from 2002 and according to the survey the poverty rate was around 60%. It has come down appreciably since. However, this statistic and the corresponding claims in the op-ed ignore what is driving these measured changes and the long-run implications thereof.

Throughout Chavez’s tenure, he has seized thousands of businesses and imposed controls on foreign exchange as well as strict price controls. Any improvement in economic statistics under this type of regime is meaningless. The reason is that Venezuela is experiencing extractive growth. We know from economic theory and historical experience that extractive growth cannot last. (We need only look to the last 15 or so years of research by Daron Acemoglu and James Robinson to understand this conclusion.) The improvements in the statistics that the author highlights are merely the result of the fact that Chavez has seized control of the oil companies and uses their revenues to finance social programs. Extractive growth, however, deters foreign direct investment and reduces the incentives to innovate and re-invest in existing businesses. More broadly, higher risks of expropriation lead to lower income per capita. Meanwhile, according to Bloomberg, price controls have created shortages in “everything from electricity to sugar and beef.”

The author, however, seems to believe that Chavez and others like him have discovered some alternative to “neoliberalism” for fostering growth. This view is misplaced. Where so-called neoliberalism has been tried and purportedly failed is in countries with insufficiently inclusive societal institutions. Yet the author seems to accept correlation as causation in these cases.

If this is where the op-ed ended, I would conclude that the author’s assertions were misguided, but would be content to agree to disagree. However, the author’s claims only become more dubious as the op-ed proceeds. For example, he argues:

Not surprisingly, the leftist leaders have seen Venezuela as part of a team that has brought more democracy, national sovereignty and economic and social progress to the region. Yes, democracy: even the much-maligned Venezuela is recognized by many scholars to be more democratic than it was in the pre-Chávez era.

I’m not sure what to make of such a dubious statement. Chavez controls the voter rolls. There have been no external audits of the election. In addition, according to this piece in the Wall Street Journal, there are 10,000 voters registered between the ages of 111 and 129. Perhaps Chavez has also improved longevity!

Markets and migration also tell a different story. Since 2000, approximately 120,000 Venezuelans have migrated to the United States. To put that in perspective, that represents a 125% increase in the number of Venezuelans in the United States. In addition, Chavez’s victory was met with a sharp decline in the price of Venezuelan bonds, which had previously rallied on the prospect of his defeat.

Curiously, there was also no mention of lawlessness, violence, and kidnappings under the current regime. One Venezuelan criminologist says that there have been over 155,000 murders in Venezuela during Chavez’s tenure. Gangs rule the streets of Caracas and many crimes go unsolved. This is evident in news reports, but I also know this from talking to Venezuelans who have left.

The op-ed seems impervious to facts as well:

After recovering from a recession that began in 2009, the Venezuelan economy has been growing for two-and-a-half years now and inflation has fallen sharply while growth has accelerated.

According to the World Bank, inflation has been very high. Over the last three years, consumer prices have risen by 27.1%, 28.2%, and 26.1%, respectively. When measured by the GDP deflator, inflation has been even worse: over the last five years, annual inflation by this metric has been 15.4%, 30.1%, 7.8%, 45.9%, and 28.1%, respectively. And all of this is in the context of a regime of strict price controls, so there is reason to believe that these numbers understate the actual inflation rate. To put this in perspective, Bloomberg reports that, out of all of the countries it tracks, only Iran and Belarus have higher rates of inflation than Venezuela.
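Compounding the three CPI figures quoted above shows just how severe this is: prices roughly doubled over three years. A quick check:

```python
# Compound the annual CPI inflation rates cited above (27.1%, 28.2%, 26.1%).
rates = [0.271, 0.282, 0.261]

price_level = 1.0
for r in rates:
    price_level *= 1 + r

# The price level more than doubles over the three years.
print(round(price_level, 2))  # 2.05
```

In other words, something that cost 100 bolivars three years ago costs roughly 205 bolivars today, even by the official (and likely understated) figures.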

Venezuela is not prospering and Chavez has not discovered an alternative path toward economic growth and prosperity. The Chavez regime is an extractive regime. There are no incentives for long-run growth, inflation is high, lawlessness is rampant, and there are serious reasons to doubt the validity of the election process. This is the reality. And all of this is contrary to the recent New York Times op-ed.

On Administrative Costs in Health Insurance

In a recent post, Garett Jones asks, “Will ACA’s cost-cutters outcut private insurers?” The post was inspired by a new paper in the New England Journal of Medicine that presents an argument in favor of the ACA. I would like to offer some comments of my own.

One thing that the paper emphasizes is the role of administrative costs. One argument often made in favor of a single-payer system is that there are lower administrative costs with one insurer. This is thought to be true both of the insurers and of the providers, who would only have to negotiate payment rates with one insurer rather than many. Typically, single-payer advocates use this to argue that higher administrative costs represent wasted resources. Nonetheless, there are important reasons to question these claims.

First, the game is rigged. Estimates of administrative costs for government-provided insurance never include any estimate of the deadweight loss from taxation that would result from switching individuals on private insurance plans to a public plan.

Second, and substantially more important, this argument treats the problem as static rather than dynamic. Insurance companies have an incentive to reduce these costs. If these firms innovate in eliminating some of these costs, the innovations will also leak over into other areas of the economy. To the extent that insurance companies are marginalized, such innovations become less likely, forgoing the positive externalities that innovation generates.

Third, there seems to be either a misunderstanding or a lack of curiosity with respect to administrative costs on the insurer side. For example, if the government exhibits economies of scale and the private sector doesn’t, then the government can provide the service more efficiently. However, the observation of lower administrative costs on the part of the government does NOT imply greater efficiency. Suppose that administrative costs are predominantly variable costs (the more claims, the higher the cost). It is possible that each individual firm’s variable cost curve lies below the government’s variable cost curve, but that the sum of the variable costs across all private firms is above the government’s total. Since we generally compare the aggregate costs of the private sector with those of the public sector, this is consistent with the observation that administrative costs in the private sector are above those in the public sector, but it does not imply any gain in efficiency from switching to the government.
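One way the aggregation point above can arise is under economies of scale. In this sketch (cost curves and numbers are entirely hypothetical), each private insurer's variable-cost curve lies strictly below the government's, yet splitting the claims across ten firms yields a higher total cost:

```python
# Numeric illustration: with economies of scale (a concave cost curve),
# each private insurer's variable-cost curve can lie strictly below the
# government's, yet the SUM of private costs exceeds the government's
# cost of handling all claims at once. All numbers are hypothetical.
from math import sqrt

def gov_cost(claims):
    return 10 * sqrt(claims)   # government's variable cost curve

def firm_cost(claims):
    return 9 * sqrt(claims)    # each private firm's curve: strictly lower

total_claims, n_firms = 100, 10
gov_total = gov_cost(total_claims)                          # 10*sqrt(100) = 100
private_total = n_firms * firm_cost(total_claims / n_firms) # 10 * 9*sqrt(10) ≈ 284.6

assert firm_cost(50) < gov_cost(50)  # the private curve is below at any volume
print(round(gov_total, 1), round(private_total, 1))  # 100.0 284.6
```

The comparison of aggregate administrative costs is therefore uninformative on its own; it confounds the shape of the cost curves with how claims volume is divided among insurers.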

Finally, on the provider side, the claim is that providers waste resources by negotiating with multiple insurers. But this argument begs the question: why don’t providers simply negotiate rates multilaterally with insurers? Why do they choose to negotiate individually with insurers with different characteristics, like size? To the extent that we believe that health care providers are profit-seeking, why wouldn’t they explore other arrangements? The observation that providers voluntarily choose to negotiate different rates with different insurers suggests not that these negotiations are a waste of resources, but rather that they are beneficial. Thus, in this instance, “waste of resources” seems to mean “not using resources the way we want them to.”