The Fed, Populism, and Related Topics

Jon Hilsenrath has quite the article in The Wall Street Journal, the title of which is “Years of Fed Missteps Fueled Disillusion With the Economy and Washington”. The article criticizes Fed policy, suggests these policy failures are at least partially responsible for the rise in populism in the United States, and presents a rather incoherent view of monetary policy. As one should be able to tell, the article is wide-ranging, so I want to do something different from what I do in a typical blog post: I am going to go through the article point-by-point and deconstruct the narrative.

Let’s start with the lede:

Once-revered central bank failed to foresee the crisis and has struggled in its aftermath, fostering the rise of populism and distrust of institutions

There is a lot tied up in this lede. First, has the Federal Reserve ever been a revered institution? According to Hilsenrath’s own survey evidence, in 2003 only 53% of the population rated the Fed as “Good” or “Excellent”. In the midst of the Great Moderation, I would hardly call this revered.

Second, I’ve really grown tired of this argument that economists or policymakers or the Fed “failed to foresee the crisis.” The implicit assumption is that if the crisis had been foreseen, steps could have been taken to prevent it or make it less severe. But if we accept this assumption, then we would only ever observe crises that weren’t foreseen; crises that were foreseen and prevented would never show up in the data. Pointing out that an observed crisis went unforeseen therefore tells us very little.

Third, to attribute the rise in populism to Federal Reserve policy presumes that the populism is tied to economic factors that the Fed can influence. Sure, if the Fed could have used policy to make real GDP today higher than it otherwise would have been, that might have eased economic concerns. But productivity slowdowns and labor market disruptions caused by trade shocks are not things that the Federal Reserve can correct. To the extent that these factors are what is driving populism, the Fed has only a limited ability to ease such concerns.

But that’s enough about the lede…

So the basis of the article is that Fed policy has been a failure. This policy failure undermined the standing of the institution, created a wave of populism, and caused the Fed to re-think its policies. I’d like to discuss each of these points individually using passages from the article.

Let’s begin by discussing the declining public opinion of the Fed. Hilsenrath shows in his article that the public’s assessment of the Federal Reserve has declined significantly since 2003. He also shows that people have a great deal less confidence in Janet Yellen than in Alan Greenspan. What does this tell us? Perhaps the public had an over-inflated view of the Fed to begin with. It is certainly reasonable to think that the public had an over-inflated view of Alan Greenspan. It seems to me that there is a simple negative correlation between what the public thinks of the Fed and a moving average of real GDP growth. It is unclear whether there are implications beyond this simple correlation.

Regarding the rise in populism, everyone has their grand theory of Donald Trump and (to a lesser extent) Bernie Sanders. Here’s Hilsenrath:

For anyone seeking to explain one of the most unpredictable political seasons in modern history, with the rise of Donald Trump and Bernie Sanders, a prime suspect is public dismay in institutions guiding the economy and government. The Fed in particular is a case study in how the conventional wisdom of the late 1990s on a wide range of economic issues, including trade, technology and central banking, has since slowly unraveled.

Do Trump and Sanders supporters have lower opinions of the Fed than the population as a whole? Who knows? We are not told in the article. Also, has the conventional wisdom been upended? Whose conventional wisdom? Economists’? The public’s?

So the populism and the reduced standing of the Fed appear to be correlated with things that are themselves only potentially correlated with Fed policy. Hardly the smoking gun suggested by the lede. So what about the re-thinking that is going on at the Fed?

First, officials missed signs that a more complex financial system had become vulnerable to financial bubbles, and bubbles had become a growing threat in a low-interest-rate world.

Secondly, they were blinded to a long-running slowdown in the growth of worker productivity, or output per hour of labor, which has limited how fast the economy could grow since 2004.

Thirdly, inflation hasn’t responded to the ups and downs of the job market in the way the Fed expected.

These are interesting. Let’s take them point-by-point:

1. Could the Fed have prevented the housing bust and the subsequent financial crisis? It is unclear. But even if they completely missed this, could not policy have responded once these effects became apparent?

2. What does this even mean? If there is a productivity slowdown that explains lower growth, then shouldn’t the Federal Reserve get a pass on the low growth of real GDP over the past several years? Shouldn’t we blame low productivity growth?

3. Who believes in the Phillips Curve as a useful guide for policy?

My criticism of Hilsenrath’s article should not be read as a defense of the Fed’s monetary policy. For example, critics might think I’m being a bit hypocritical since I have argued in my own academic work that the maintenance of stable nominal GDP growth likely contributed to the Great Moderation. The collapse of nominal GDP during the most recent recession would therefore seem to indicate a policy failure on the part of the Fed. However, notice how different that argument is from the arguments made by Hilsenrath. The list provided by Hilsenrath suggests that the problems with Fed policy are (1) the Fed isn’t psychic, (2) the Fed didn’t understand that slow growth is not due to their policy, and (3) that the Phillips Curve is dead. Only this third component should factor into a re-think. But for most macroeconomists that re-think began taking place as early as Milton Friedman’s 1968 AEA Presidential Address, if not earlier. More recently, during an informal discussion at a conference, I observed Robert Lucas tell Noah Smith rather directly that “the Phillips Curve is dead” (to no objection), so the Phillips Curve hardly represents conventional wisdom.

In fact, Hilsenrath’s logic regarding productivity is odd. He writes:

Fed officials, failing to see the persistence of this change [in productivity], have repeatedly overestimated how fast the economy would grow. The Fed has projected faster growth than the economy delivered in 13 of the past 15 years and is on track to do so again this year.

Private economists, too, have been baffled by these developments. But Fed miscalculations have consequences, contributing to start-and-stop policies since the crisis. Officials ended bond-buying programs, thinking the economy was picking up, then restarted them when it didn’t and inflation drifted lower.

There are three points that Hilsenrath is making here:

1. Productivity caused growth to slow.

2. The slowdown in productivity caused the Fed to over-forecast real GDP growth.

3. This has resulted in a stop-go policy that has hindered growth.

I’m trying to make sense of how these things fit together. Most economists think of productivity as being completely independent of monetary policy. So if low productivity growth is causing low GDP growth, then this is something that policy cannot correct. However, point 3 suggests that low GDP growth is explained by tight monetary policy. This is somewhat of a contradiction. For example, if the Fed over-forecast GDP growth, then the implication seems to be that if they’d forecast growth perfectly, they would have had more expansionary policy, which could have increased growth. But if growth was low due to low productivity, then a more expansionary monetary policy would have had only a temporary effect on real GDP growth. In fact, during the 1970s, the Federal Reserve consistently over-forecast real GDP. However, in contrast to recent policy, the Fed saw these over-forecasts as a failure of its policies rather than as a productivity slowdown and tried to expand monetary policy further. What Athanasios Orphanides’s work has shown is that the big difference between policy in the 1970s and the Volcker-Greenspan era was that policy in the 1970s put much more weight on the output gap. Since the Fed was over-forecasting GDP, it thought it was observing negative output gaps and subsequently conducted expansionary policy. The result was stagflation.
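To make Orphanides’s mechanism concrete, here is a minimal sketch using a standard Taylor-type rule with made-up numbers (the coefficients and the three-percentage-point forecast error below are illustrative assumptions, not estimates from his work):

```python
# A Taylor-type rule reacting to the *perceived* output gap. If potential
# output is over-forecast, the perceived gap is too negative and the rule
# prescribes a looser policy rate than the data actually warrant.

def taylor_rule(inflation, perceived_gap, r_star=2.0, pi_target=2.0,
                phi_pi=0.5, phi_gap=0.5):
    """Nominal policy rate from a standard Taylor (1993)-style rule."""
    return (r_star + inflation
            + phi_pi * (inflation - pi_target)
            + phi_gap * perceived_gap)

inflation = 6.0        # 1970s-style inflation
true_gap = 0.0         # the economy is actually at potential
forecast_error = 3.0   # potential output over-estimated by 3 percentage points
perceived_gap = true_gap - forecast_error   # the Fed "sees" a -3% gap

rate_correct = taylor_rule(inflation, true_gap)           # 10.0
rate_mismeasured = taylor_rule(inflation, perceived_gap)  # 8.5

print(rate_correct, rate_mismeasured)   # 10.0 8.5
```

The larger the weight on the output gap (`phi_gap`), the bigger the easing induced by the same forecast error, which is exactly why a heavy weight on a mismeasured gap produced persistently loose policy in the 1970s.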

So is Hilsenrath saying he’d prefer that policy be more like the 1970s? One cannot simultaneously argue that growth is low because of low productivity and tight monetary policy. (Even if it is some combination of both, then monetary policy is of second-order importance and that violates Hilsenrath’s thesis.)

In some sense, what is most remarkable is how far the pendulum has swung in seven years. Back in 2009, very few people argued that tight monetary policy was to blame for the financial crisis or the recession; heck, Scott Sumner started a blog primarily because he didn’t see anyone making the case that tight monetary policy was to blame. Now, in 2016, the Wall Street Journal is publishing stories that blame the Federal Reserve for all of society’s ills. There is a case to be made that monetary policy played a role in causing the recession and/or in explaining the slow recovery. Unfortunately, this article in the WSJ isn’t it.

On Adam Smith’s Straw Man

One way to interpret Adam Smith’s Wealth of Nations is as a critique of and rebuttal to what he called the “mercantile system” or today what we would call mercantilism. One critique that Smith made in the book is that mercantilists had an incorrect notion of wealth. In Smith’s view, mercantilists confused money and wealth. According to Smith, this misconception led many mercantilists to see trade surpluses as desirable because it was a way to accumulate gold (money) and therefore make the country richer. As it turns out, this is likely a straw man of Smith’s own construction.

I have recently been reading Mercantilism Reimagined and Carl Wennerlind has an interesting chapter on 17th century views on money in England. Here are some highlights:

  • J.D. Gould’s work in the Journal of Economic History suggests that to understand the literature on money and trade during the 1620s, one needs to understand the circumstances in which the writers were writing. He argues that this writing must be understood in the context of a significant downturn in economic activity that was largely blamed on a shortage of money. It is unclear whether this was due to an undervalued sterling or incorrect mint ratios, but a trade surplus was seen as a way to correct this shortage. In other words, these writers were not advocating trade surpluses for their own sake, but rather to replenish the money stock.
  • Smith’s attacks were on these writers of the 1620s, but he either ignored or was ignorant of a literature that emerged in the 1640s and 1650s associated with a group known as the Hartlib circle.
  • Members of this group thought that the expansion of scientific knowledge would lead to permanent expansions in economic activity. This therefore required an expanding money supply to prevent deflation and other problems with insufficient liquidity.
  • At least two writers within the Hartlib circle denied that the value of money came from the commodity itself (recall that gold and silver were money at this time). Wennerlind quotes Sir Cheney Culpeper, for example, as writing that “Money it self is nothing else but a kind of securitie which men receive upon parting with their commodities, as a ground of hope or assurance that they shall be repayed in some other commoditie.”
  • Culpeper advocated for parliament to create a law that would allow a bill of credit to be transferred from one person to another rather than waiting for repayment.
  • Another Hartlibian, William Potter, had a much more ambitious proposal that called for tradesmen to set up a firm and print bills that could be borrowed with sufficient collateral. The tradesmen would agree to accept these bills in exchange for their production. At any time, a bill holder could request that it be redeemed. At that point, a bond would be issued that had to be paid by the borrower of the bill within 6 months. Since the bills were backed by collateral, the only threat to the ability to redeem a bill was a sudden decline in the value of the collateral, although Potter argued that insurance companies could be used to insure against such outcomes.
  • Wennerlind argues that both the Bank of England and the South Sea Company were the outgrowth of Hartlib ideas about money and credit.

The fundamental point here is that there seems to have been an influential group of individuals writing in the 1640s and 1650s who were either ignored by Adam Smith or whose existence he simply did not know of. However, the omission is important. One would hardly consider the views of the Hartlibians mercantilist. This group viewed scientific advancement as the key to economic prosperity, not trade surpluses and/or the accumulation of money. Culpeper, as evidenced by his quote, did not confuse money with wealth. His quote is consistent with a Kiyotaki-Wright model of money. Similarly, Potter clearly viewed credit and collateral as important for trade and prosperity (perhaps too much so; he predicted that under his plan the English would be 500,000 times wealthier in less than half a century, which is quite the multiplier!).

In short, this raises questions about the prevalence of mercantilist views in the time before Adam Smith. The critique by Smith that previous writers confused money and wealth might simply be a straw man.

On a Pascalian Theory of Political Economy

Throughout his career, Earl Thompson often argued that we needed a more Pascalian theory of political economy. His argument was based on the following quote from French mathematician Blaise Pascal: “The heart has its reasons of which reason knows nothing.”

Based on this idea, Thompson developed a theory of what he called “effective democracy.” The central idea behind effective democracy was a sort of “wisdom of crowds” argument. Namely, he argued that the collective decision-making that takes place through the electoral process is very often efficient – even in ways that are not immediately obvious to economists.

Economists who are reading this are likely already rolling their eyes at this idea. Economists tend to think of collective decision-making as difficult. When the social benefits of a particular good exceed the private benefits, the market will tend to under provide the good. When the social costs associated with a good exceed the private costs, markets will tend to overprovide the good. If individuals cannot be excluded from using a particular good or service, the good will tend to be under-provided or over-consumed. Principles of economics textbooks are filled with examples of these sorts of scenarios and the optimal policy response. Yet, when we look at the world, there are many instances in which democracies fail to adopt the appropriate policy responses.

Economists are also likely rolling their eyes because voters often have very different opinions on issues than economists. For example, economists tend to think that free trade is a net benefit to society. The general public is less inclined to believe that statement.

What made Thompson’s work interesting, however, is that he often argued that democracies tend to understand externalities and collective action problems better than economists realize. For example, he noted that we don’t see factories at the end of a neighborhood. Why not? Well, we typically don’t see factories at the end of a neighborhood because of zoning restrictions. But why zoning restrictions? Why not just have Pigouvian taxation to internalize the social costs? In general, economists don’t tend to advocate quantity regulations, so why do they occur?

What Thompson argued is that Pigouvian taxation is insufficient. A factory imposes a social cost beyond the private cost (a negative externality) because it creates pollution (and possibly even because it is not fun to look at). Given this additional social cost, standard economic theory would suggest imposing a welfare-improving Pigouvian tax on the factory. This would force the factory to internalize the cost associated with the pollution, thereby giving society the optimal amount of pollution. What Thompson pointed out is that this tax is inadequate. People in society might not just want to reduce pollution; they might want to limit their proximity to the pollution. A Pigouvian tax doesn’t solve this latter problem. To understand why, consider the following. Suppose there is a neighborhood that is not yet completed. Society imposes a Pigouvian tax to limit pollution. A company decides to open a factory in town and wants to put it in this near-complete neighborhood. The people who live in the neighborhood do not want the unsightly, noisy, smelly, polluting factory next to their homes. However, even if paying the Pigouvian tax would make operating the factory unprofitable, the company has an incentive to purchase the land in the neighborhood and announce that it intends to build its factory unless the individuals in the neighborhood agree to buy the land back from it. As a result, democratic societies have adopted zoning restrictions to prevent factories from being built in neighborhoods. (As anecdotal evidence, something similar to this happened in my own neighborhood in Mississippi, where in certain parts of the state the word “zoning” is considered profane. So perhaps effective democracy hasn’t yet reached Mississippi.)
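The holdout threat can be illustrated with some hypothetical numbers (every figure below is made up for illustration; this is a sketch of the incentive, not anything from Thompson’s papers):

```python
# Even when building the factory would lose money after the Pigouvian tax,
# the *threat* to build can still be profitable, so the tax alone does not
# keep the factory out of the neighborhood.

factory_profit = 100   # operating profit from the factory, gross of tax
pigouvian_tax = 120    # tax equal to the pollution externality
land_price = 30        # cost of the neighborhood lot

# Payoff from actually building: negative, so the tax "works" on its face.
build_payoff = factory_profit - pigouvian_tax - land_price   # -50

# Residents would lose, say, 200 in home values and amenities if the
# factory goes up, so they will pay up to 200 to buy the land back.
residents_harm = 200
buyback_price = 150    # any price between land_price and residents_harm

# Payoff from buying the land and merely threatening to build: positive.
threat_payoff = buyback_price - land_price   # 120

print(build_payoff, threat_payoff)   # -50 120
```

The tax deters the factory but not the threat, which is why, on Thompson’s account, a quantity restriction like zoning, which removes the threat entirely, can do something a Pigouvian tax cannot.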

Thompson had many other examples of what he called effective democratic institutions. He argued, for example, that the lives of individuals tend to produce positive externalities for their friends and family and that this can explain why we subsidize health insurance and have costly safety regulations in the workplace, that the Interstate Commerce Act of 1887 was an efficient democratic response to the transaction costs of complex state regulations and corresponding local lawsuits for firms (especially railroads), and that Workmen’s Compensation Laws were democratically efficient responses to the significant transactions costs associated with the slew of private lawsuits brought by workers against firms.

Whether or not one accepts Thompson’s arguments, they are unique in the sense that they provide efficiency-based arguments for policies that, in general, economists see as inefficient. It is easy to follow Thompson’s intellectual development. He began by developing his theory of effective democracy, motivated by the Pascal quote above: democracies tend to produce efficient policies even if the constituents of that democracy have a hard time articulating why the policies are efficient. He then went out in search of empirical evidence that supported his view. In doing so, he would examine policies that economists often considered inefficient and try to understand why an effective democracy would adopt such a policy. In other words, he would ask: what characteristics would have to exist in order for an economist to consider the policy efficient? This is in sharp contrast to the typical way that economists examine policy, which is to start with a basic model and determine whether the policy is efficient within that model.

I am writing about this because I believe that there is a critical element to Thompson’s analysis that should be incorporated into political economy – regardless of whether one believes that Thompson’s effective democracy theory is correct. The critical element is the presumption that there is some underlying reason that a particular policy emerged and that the policy might be an efficient democratic response. In other words, the working assumption when any policy or institution is analyzed is that the policy or institution was designed as an efficient response to some problem. Note that this doesn’t mean that economists should always conclude that the policies and institutions are efficient. The tools used by economists are the precise tools needed to determine whether something is indeed an efficient response to the problem. Thus, rather than start with a generic standard model and consider whether the policy is efficient in that context, perhaps economists should ask themselves: what would have to be true for this policy to be considered a constrained efficient response? In some cases this will be difficult to do – and that in and of itself might indicate the inefficiency of the policy. Other times, however, certain conditions might emerge that could justify a particular policy. These conditions would then generate testable hypotheses.

A Pascalian approach would hopefully lead to more humility among economists. For example, the minimum wage is a very popular policy despite the standard economic arguments against it. But why does the minimum wage exist? Even if one believes that the disemployment effects are small enough for the benefits to exceed the costs, this still raises the question of why the minimum wage is chosen over other attempts to help low-wage workers, such as the Earned Income Tax Credit. Economists typically explain away the existence of the minimum wage as a way for politicians to signal that they care about low-wage workers without bearing the cost. But this argument is rather weak. If there is a better alternative, wouldn’t the public eventually realize this? At the very least, wouldn’t the signal sent by the politician eventually be seen for exactly what it is? All too often, economists simply conclude that the general public just needs to learn more economics (how convenient a conclusion for economists to reach). My brief sketch of a theory of why the minimum wage exists (here) was an attempt to approach the topic from this Pascalian perspective.

Most recently, a seeming majority of economists (as well as financial and political pundits) expressed absolute shock at the decision of U.K. voters to leave the European Union. As a result, many have concluded that those who voted to leave did so because they don’t understand the costs (again, the argument is that the dullards just need to learn economics). Others have concluded that the decision to leave is just a manifestation of xenophobia. But perhaps economists are wrong about the costs associated with leaving. Or perhaps economists have miscalculated the long-run viability of the European experiment. Or perhaps individuals place values on things that are often left out of standard cost-benefit analysis because they’re hard to measure or hard to identify. Of course it is also possible that those who supported the decision to leave are indeed economically ignorant bigots. But even if this is the case, shouldn’t we fall back on this conclusion only after all other possible explanations have been exhausted?

A Pascalian view of political economy takes as given that we have imperfect knowledge of the complex nature of economic and social interactions. Studying the emergence of policies and institutions under the presumption that they were designed to efficiently deal with a particular problem forces economists to think hard about why the policies and institutions exist. And the tools at any economist’s disposal are up to the task.

Rather than seeing ourselves as the wise elders passing down advice and judgment to those who fail to understand price theory, let’s be humble. Let’s take our craft seriously. And let’s realize that we might be somewhat ignorant of the complex nature through which democracies create policies and institutions.

On Revolutions

A paper that I wrote with Alexander Salter entitled, “A Theory of Why the Ruthless Revolt” is now forthcoming in Economics & Politics. Here is the abstract:

We examine whether ruthless members of society are more likely to revolt against an existing government. The decision of whether to participate can be analyzed in the same way as the decision to exercise an option. We consider this decision when there are two groups in society: the ruthless and average citizens. We assume that the ruthless differ from the average citizens because they invest in fighting technology and therefore face a lower cost of participation. The participation decision then captures two important (and conflicting) incentives. The first is that, since participation is costly, there is value in waiting to participate. The second is that there is value in being the first-mover and capturing a greater share of the “spoils of war” if the revolution is successful. Our model generates the following implications. First, since participation is costly, there is some positive threshold for the net benefit. Second, if the ruthless do not have a significant cost advantage, then one cannot predict, a priori, that the ruthless lead the revolt. Third, when the ruthless have a significant cost advantage, they have a lower threshold and always enter the conflict first. Finally, existing regimes can delay revolution among one or both groups by increasing the cost of participation.

On What Econ 101 Actually Is (And Says)

There has been much recent discussion within the econo-blogosphere about the usefulness (or lack thereof) of “Econ 101.” This discussion seems to have started with Noah Smith’s Bloomberg column, in which he suggests that most of what you learn in Econ 101 is wrong. Mark Thoma then took this a bit further and argued that the problem with Econ 101 is ideological. In particular, Thoma argues that Econ 101 has a conservative bias. Both of these arguments rely on either a mischaracterization of Econ 101 or a really poor teaching of the subject.

Noah Smith’s dislike of Econ 101 seems to come from the discussion of the minimum wage. His basic argument is that Econ 101 says that the minimum wage increases unemployment. However, he argues that

That’s theory. Reality, it turns out, is very different. In the last two decades, empirical economists have looked at a large number of minimum wage hikes, and concluded that in most cases, the immediate effect on employment is very small.

This is a bizarre argument in a number of respects. First, Noah seems to move the goal posts. The theory is wrong because the magnitude of these effects is small? The prediction is about direction, not magnitude. Second, David Neumark and William Wascher’s survey of the literature suggests that there are indeed disemployment effects associated with the minimum wage and that these results are strongest when researchers have examined low-skilled workers.

Setting the evidence aside, let’s suppose that Noah is correct that the Econ 101 discussion of the minimum wage is empirically invalid. Even in this case, the idea that Econ 101 is fundamentally flawed is without basis. When I teach students about price controls, I am careful to note the difference between positive and normative statements. For example, many students tend to see price controls as a “bad” thing. When I teach students about price controls, however, I am quick to point out that saying something is “bad” is a normative statement. In other words, “bad” implies that things should be different, and what “should be” is normative. The only positive (“what is”) statement that we can make about price controls is that they reduce efficiency. Whether or not this is a good or a bad thing depends on factors that are beyond an Econ 101 course, and I provide some examples of these factors.

Further, by emphasizing the effects on efficiency and the difference between positive and normative statements, this gives students a more complete picture of both the effects of price controls as well as why they might exist. In fact, it is precisely this lesson about efficiency and allocation that is an essential part of what students should learn in Econ 101.

For example, it is common for economists to discuss rent control when they discuss price ceilings. When societies put a binding maximum price on rent, this creates excess demand. However, one would not test whether this is a useful description of reality by examining the effects of rent control on homelessness. On the contrary, economists emphasize that in the absence of the price mechanism, other allocation mechanisms must substitute for price. Non-price rationing comes in a variety of forms: quality reduction, nepotism, discrimination, etc.

Similar arguments can be made for the minimum wage. For example, the basic point is that the minimum wage creates a scenario in which the quantity of labor demanded is less than the quantity of labor supplied. The ultimate outcome could come in a variety of forms. This could lead to the standard account of higher unemployment. Alternatively, it could simply cause a reduction in hours worked. Finally, in the case in which the firm faces some constraint (at least in the short to medium term), there is another way that firms can adjust. Standard Econ 101, for example, suggests that the nominal wage should equal the marginal revenue product of the worker. If the nominal wage is forced higher, there are two ways to raise the marginal revenue product to match: reduce labor or increase the price of the product.
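Here is a minimal sketch of those two margins with hypothetical numbers (the square-root production technology and all parameter values are illustrative assumptions):

```python
import math

# Competitive benchmark: wage = marginal revenue product = price * MPL.
# If a wage floor pushes the wage above the MRP, the firm can restore the
# equality by cutting labor (which raises the MPL) or by raising the price.

def marginal_product(L, A=10.0):
    # Marginal product of labor for the technology Y = A * sqrt(L),
    # i.e. MPL = A / (2 * sqrt(L)), which falls as employment rises.
    return A / (2.0 * math.sqrt(L))

price = 2.0
L0 = 100.0
market_wage = price * marginal_product(L0)   # 2 * 0.5 = 1.0

min_wage = 1.25   # wage floor pushed above the marginal revenue product

# Margin 1: cut employment until price * MPL equals the minimum wage.
L1 = (price * 10.0 / (2.0 * min_wage)) ** 2   # 64 workers, down from 100

# Margin 2: keep all 100 workers and raise the product price instead.
p1 = min_wage / marginal_product(L0)          # price rises from 2.0 to 2.5

print(market_wage, L1, p1)   # 1.0 64.0 2.5
```

Which margin dominates in practice is an empirical question, but the sketch shows that a small measured employment effect is consistent with the Econ 101 story once the price margin is allowed.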

The value of Econ 101 is the very process of thinking through these possible effects. What effect we actually observe is an empirical question, but it is of secondary importance to teaching students how to logically think through these sorts of examples.

Noah’s view of Econ 101, however, seems to come from his belief that economists want Econ 101 to be as simple as possible. And his argument is that this is misguided because simple often dispenses with the important. Mark Thoma, on the other hand, makes the argument that Econ 101 has a conservative bias:

The conservative bias in economics begins with the baseline theoretical model, what is often called “Economics 101.” This model of perfect competition describes a world that agrees with Republican ideology. In this model, there is no role for government intervention in the economy beyond setting the institutional structure for free markets to operate. There is nothing government can do to improve the ability of market to provide the goods and services people desire at the lowest possible price, or to help markets respond to shocks.

I think this is both wrong about Econ 101 and a strange view of conservatism.

First, I am not a conservative. However, it seems to me that many conservatives like government intervention. A number of conservatives think that child tax credits are a good idea and that marriage should be encouraged through subsidization. Justifying these sorts of policies on economic grounds requires believing that children and marriage generate positive externalities for society. While it is true that Republicans have been particularly obstructionist, Republican does not equal conservative. In addition, obstructionism might not have as much to do with economic beliefs as it does with political motivations about who gets the credit, the lobbying of special interest groups, the desire to imperil the image of the competing party, and so on, regardless of the rhetoric.

Which brings me to my second point. If you are a student who only learned the perfectly competitive model in Econ 101, then you should politely ask for a refund. Econ 101 routinely includes the discussion of externalities, public goods, monopoly, oligopoly, etc. All of these topics address issues that the competitive market model is ill-equipped to explain. And it is hard to argue that any of these topics have any sort of ideological bias.

On What Monetarism Really Is/Was

Paul Krugman has a recent post on why monetarism failed. Subsequently a number of economics bloggers have replied with their views on monetarism. I don’t have time to summarize all of the viewpoints espoused in these posts, but a fundamental problem throughout these posts is that each author’s description of monetarism seems to be merely their opinion about the distinct characteristics of monetarism. The problem is that many of these opinions do not provide anyone with more than a surface-level view of monetarism (i.e. something one might find in a principles or intermediate macro textbook).

In reality, Old Monetarists not only had views on money and inflation, but also had important views on the monetary transmission mechanism. The role that Old Monetarists saw for money was much more nuanced than the crude quantity theory vision that is often attributed to them. On this note, it is probably more valuable to look to the academic literature that attempts to summarize these ideas and put them into context for a modern reader.

A good place to start for anyone interested in Old Monetarist ideas is the work of Ed Nelson. Nelson is someone who has spent his career studying these ideas and trying to test their importance within modern macroeconomic frameworks. He is also currently working on a book about Milton Friedman’s influence on the monetary policy debate in the United States. To get a sense of what Old Monetarists really believed and why those ideas are relevant, I would recommend Nelson’s 2003 JME paper “The Future of Monetary Aggregates in Monetary Policy Analysis.” Here is the abstract:

This paper considers the role of monetary aggregates in modern macroeconomic models of the New Keynesian type. The focus is on possible developments of these models that are suggested by the monetarist literature, and that in addition seem justified empirically. Both the relation between money and inflation, and between money and aggregate demand, are considered. Regarding the first relation, it is argued that both the mean and the dynamics of inflation in present-day models are governed by money growth. This relationship arises from a conventional aggregate-demand channel; claims that an emphasis on the link between monetary aggregates and inflation requires a direct channel connecting money and inflation, are wide of the mark. The relevance of money for aggregate demand, in turn, lies not via real balance effects (or any other justification for money in the IS equation), but on money’s ability to serve as a proxy for the various substitution effects of monetary policy that exist when many asset prices matter for aggregate demand. This role for monetary aggregates, which is supported by empirical evidence, enhances the value of money to monetary policy.

Here is the working paper version that is not behind a paywall.

On Public Infrastructure Investment

There are two popular narratives about our infrastructure in the United States. The first is that our infrastructure is crumbling. The second is that our infrastructure spending is allocated based on its political value rather than its economic value. Maybe you believe one of these stories. Maybe you believe both. Maybe you believe neither. Regardless, these narratives are indicative of two important questions. How can we efficiently manage our public infrastructure? And how can we ensure that infrastructure investment isn’t used as a political tool? I have a new paper that proposes an answer to both questions. My proposal is to create a rule of law for public infrastructure based on option values. This rule of law would ensure that infrastructure is maintained efficiently and also that politicians would not be able to use infrastructure spending as a political tool.

The standard way to evaluate public infrastructure projects is to figure out the benefits of the infrastructure over its entire lifespan and then compute the present value of those benefits. Then you do the same thing with the costs. When you subtract the present value of the costs from the present value of the benefits you get something called the net present value. Infrastructure investments are evaluated using a positive net present value criterion. In other words, as long as the present discounted value of the benefits exceeds the present discounted value of the costs, the project is desirable.
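The net present value criterion is straightforward to express in code. Here is a minimal sketch, where the cash flows and discount rate are purely illustrative numbers and not drawn from any actual project:

```python
def npv(benefits, costs, rate):
    """Net present value: discounted benefits minus discounted costs.

    benefits, costs: per-period cash flows (period 0 first).
    rate: per-period discount rate.
    """
    present_value = lambda flows: sum(f / (1 + rate) ** t
                                      for t, f in enumerate(flows))
    return present_value(benefits) - present_value(costs)

# Illustrative project: pay 100 up front, receive 30 per year for 5 years.
project_npv = npv(benefits=[0, 30, 30, 30, 30, 30], costs=[100], rate=0.05)
# project_npv is positive, so the project passes the standard criterion.
```

Under this criterion, any project with a positive `project_npv` gets a green light, no matter when it is undertaken — which is exactly the weakness discussed below.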

In theory the net present value approach seems like a good idea. Of course we would want the benefits to outweigh the costs. This approach, however, is quite different from how a private firm values its assets. A firm that owns a factory knows that the factory can eventually become outdated. To the firm, the value of the factory is the sum of two components. The first component is the value of the factory to the firm if the firm never shuts down the factory or builds a new one. The second component is the value of the option to build a new factory or add to the current factory’s capacity in the future.

The same general concept is true of public infrastructure investment. The value of any existing infrastructure is the value of the infrastructure over its entire lifetime plus the option value of replacing that infrastructure in the future. This option value is associated with a tradeoff. Since infrastructure depreciates over time, the value from existing infrastructure is declining. This means that as time goes by, the opportunity cost of replacing the infrastructure declines and therefore the option value of replacing the infrastructure rises. However, the longer the government waits to replace the infrastructure, the longer society has to wait to receive the benefit of replacement. This reduces the option value. My proposal suggests that the government should choose the value of the current infrastructure that optimally balances this tradeoff. What this ultimately implies is that the government should wait until the value of the current infrastructure is some fraction of the net present value of the proposed replacement project.
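The tradeoff described above can be illustrated with a toy calculation. The numbers and functional forms here are my own illustrative assumptions (geometric decay of the old asset’s benefit, a constant benefit from the replacement), not the model from the paper:

```python
def best_replacement_year(old_benefit, decay, new_benefit, cost, rate,
                          horizon=50):
    """Toy illustration of optimal replacement timing.

    The existing asset yields old_benefit * (1 - decay)**t in year t.
    Replacing in year T costs `cost` and yields new_benefit per year
    thereafter (out to a finite horizon). Returns the replacement year T
    that maximizes total discounted value.
    """
    def value_if_replaced_at(T):
        d = lambda t: 1 / (1 + rate) ** t  # discount factor
        old = sum(old_benefit * (1 - decay) ** t * d(t) for t in range(T))
        new = sum(new_benefit * d(t) for t in range(T, horizon)) - cost * d(T)
        return old + new

    return max(range(horizon), key=value_if_replaced_at)

# Illustrative numbers: the replacement project has a large positive net
# present value today, yet the value-maximizing replacement date is not now.
T_star = best_replacement_year(old_benefit=24, decay=0.10,
                               new_benefit=25, cost=100, rate=0.05)
```

The point of the example is that waiting has value: delaying replacement keeps the (still productive) old asset’s benefits and postpones the cost, and it only becomes optimal to replace once the old asset has depreciated enough that those gains no longer cover the foregone benefit of the new asset.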

The reason that this option approach is preferable to the net present value approach is as follows. First, even though a current project has a positive net present value, this does not necessarily imply that now is the optimal time to undertake the project. Replacing infrastructure entails an opportunity cost associated with the foregone benefit that society would have received from the existing infrastructure. In other words, society might get greater value from the project if the government chooses to wait a little longer before replacing what is currently there. Second, my approach provides a precise moment at which it is optimal to replace the infrastructure. In contrast, the net present value approach says nothing about optimality; it’s simply a cost-benefit analysis. Given the possibility that society could get an even larger benefit in the future, the option approach should be strictly preferred. Third, the option approach provides an explicit way for the government to maintain an infrastructure fund. In my paper I provide a simple formula for computing the amount of money that needs to be in the fund. The formula only needs to take into account the cost of each project and how far that project is from its replacement threshold. This sort of fund is important because it would allow the government to continue funding infrastructure projects at the optimal time even during a recession, when infrastructure budgets, especially at the local level, are often cut.

The final and most significant benefit of my approach, however, is that it would provide the means for establishing a rule of law for public infrastructure projects. This rule of law should appeal to people across the ideological spectrum. I say that for the following reasons. First, if the government adopted this option value approach as a rule of law, this would require that the government fund any and all infrastructure projects that had reached their replacement thresholds. This would ensure that the infrastructure in the United States was maintained efficiently. Second, because the only projects that would receive funding would be those that had reached the replacement threshold, politicians would not be able to use infrastructure spending as a tool for reelection or repayment to supporters. As a result, the option approach would provide the means for a rule of law for infrastructure investment that is both transparent and efficient.

Establishing such a rule of law would be difficult. The politicians who benefit from allocating infrastructure investment for political reasons are the same ones who would have to vote on the legislation enacting this new rule. Nonetheless, there is evidence that politicians vote in favor of infrastructure projects that benefit their constituents, but vote against aggregate investment. If the group of politicians that benefit most from this state of affairs is small, then the legislation might be easier to pass. In addition, there is nothing to stop departments of transportation at both the state and federal level from calculating option values and making the data available to the public. This greater transparency, while not a rule of law, would at least be a step in the direction of more efficient management of our public infrastructure.