On Adam Smith’s Straw Man

One way to interpret Adam Smith’s Wealth of Nations is as a critique of and rebuttal to what he called the “mercantile system,” or what today we would call mercantilism. One critique that Smith made in the book is that mercantilists had an incorrect notion of wealth: in Smith’s view, they confused money with wealth. According to Smith, this misconception led many mercantilists to see trade surpluses as desirable because they were a way to accumulate gold (money) and thereby make the country richer. As it turns out, this is likely a straw man of Smith’s own construction.

I have recently been reading Mercantilism Reimagined, and Carl Wennerlind has an interesting chapter on 17th-century views on money in England. Here are some highlights:

  • J.D. Gould’s work in the Journal of Economic History suggests that the literature on money and trade during the 1620s must be understood in light of the circumstances in which its writers were writing: a significant downturn in economic activity that was largely blamed on a shortage of money. Whether the shortage stemmed from an undervalued sterling or from incorrect mint ratios is unclear, but a trade surplus was seen as a way to correct it. In other words, these writers were not advocating trade surpluses for their own sake, but rather to replenish the money stock.
  • Smith’s attacks were on these writers of the 1620s, but he either ignored or was ignorant of a literature that emerged in the 1640s and 1650s associated with a group known as the Hartlib circle.
  • Members of this group thought that the expansion of scientific knowledge would lead to permanent expansions in economic activity. This therefore required an expanding money supply to prevent deflation and other problems with insufficient liquidity.
  • At least two writers within the Hartlib circle denied that the value of money came from the commodity itself (recall that gold and silver were money at this time). Wennerlind quotes Sir Cheney Culpeper, for example, as writing that “Money it self is nothing else but a kind of securitie which men receive upon parting with their commodities, as a ground of hope or assurance that they shall be repayed in some other commoditie.”
  • Culpeper advocated for parliament to create a law that would allow a bill of credit to be transferred from one person to another rather than waiting for repayment.
  • Another Hartlibian, William Potter, had a much more ambitious proposal that called for tradesmen to set up a firm and print bills that could be borrowed against sufficient collateral. The tradesmen would agree to accept these bills in exchange for their production. At any time, a bill holder could request that it be redeemed. At that point, a bond would be issued that had to be paid by the borrower of the bill within six months. Since the bills were backed by collateral, the only threat to the ability to redeem a bill was a sudden decline in the value of the collateral — although Potter argued that insurance companies could be used to insure against such outcomes.
  • Wennerlind argues that both the Bank of England and the South Sea Company were outgrowths of Hartlib ideas about money and credit.

The fundamental point here is that there was an influential group of individuals writing in the 1640s and 1650s who were either ignored by Adam Smith or simply unknown to him. This omission is important. One would hardly consider the views of the Hartlibians mercantilist. This group viewed scientific advancement, not trade surpluses or the accumulation of money, as the key to economic prosperity. Culpeper, as evidenced by his quote, did not confuse money with wealth; his quote is consistent with a Kiyotaki-Wright model of money. Similarly, Potter clearly viewed credit and collateral as important for trade and prosperity (perhaps too much so: he predicted that under his plan the English would be 500,000 times wealthier in less than half a century. That’s quite the multiplier!).

In short, this raises questions about the prevalence of mercantilist views in the time before Adam Smith. The critique by Smith that previous writers confused money and wealth might simply be a straw man.

On a Pascalian Theory of Political Economy

Throughout his career, Earl Thompson often argued that we needed a more Pascalian theory of political economy. His argument was based on the following quote from French mathematician Blaise Pascal: “The heart has its reasons of which reason knows nothing.”

Based on this idea, Thompson developed a theory of what he called “effective democracy.” The central idea behind effective democracy was a sort of “wisdom of crowds” argument. Namely, he argued that the collective decision-making that takes place through the electoral process is very often efficient – even in ways that are not immediately obvious to economists.

Economists who are reading this are likely already rolling their eyes at this idea. Economists tend to think of collective decision-making as difficult. When the social benefits of a particular good exceed the private benefits, the market will tend to underprovide the good. When the social costs associated with a good exceed the private costs, markets will tend to overprovide it. If individuals cannot be excluded from using a particular good or service, the good will tend to be underprovided or overconsumed. Principles of economics textbooks are filled with examples of these sorts of scenarios and the optimal policy responses. Yet, when we look at the world, there are many instances in which democracies fail to adopt the appropriate policy responses.

Economists are also likely rolling their eyes because voters often have very different opinions on issues than economists. For example, economists tend to think that free trade is a net benefit to society. The general public is less inclined to believe that statement.

What made Thompson’s work interesting, however, is that he often argued that democracies tend to understand externalities and collective action problems better than economists realize. For example, he noted that we don’t see factories at the end of a neighborhood street. Why not? Typically because of zoning restrictions. But why zoning restrictions? Why not just use Pigouvian taxation to internalize the social costs? Economists don’t generally advocate quantity regulations, so why do they occur?

What Thompson argued is that Pigouvian taxation is insufficient. A factory imposes a social cost beyond the private cost (a negative externality) because it creates pollution (and possibly because it is not fun to look at). Given this additional social cost, standard economic theory would suggest imposing a welfare-improving Pigouvian tax on the factory. This would force the factory to internalize the cost associated with the pollution, thereby giving society the optimal amount of pollution. What Thompson pointed out is that this tax is inadequate. People might not just want to reduce pollution; they might also want to limit their proximity to it. A Pigouvian tax doesn’t solve this latter problem. To understand why, consider the following. Suppose there is a neighborhood that is not yet complete. Society imposes a Pigouvian tax to limit pollution. A company decides to open a factory in town and wants to put it in this near-complete neighborhood. The people who live in the neighborhood do not want the unsightly, noisy, smelly, polluting factory next to their homes. Yet even if operating the factory there would produce losses after paying the Pigouvian tax bill, the company has an incentive to purchase the land and announce that it intends to build the factory unless the residents agree to buy the land back. As a result, democratic societies have adopted zoning restrictions to prevent factories from being built in neighborhoods. (As anecdotal evidence for this, something similar happened near my own neighborhood in Mississippi, where in certain parts the word “zoning” is considered profane. So perhaps effective democracy hasn’t yet reached Mississippi.)

Thompson had many other examples of what he called effective democratic institutions. He argued, for example, that the lives of individuals tend to produce positive externalities for their friends and family, which can explain why we subsidize health insurance and have costly safety regulations in the workplace; that the Interstate Commerce Act of 1887 was an efficient democratic response to the transaction costs that complex state regulations and the corresponding local lawsuits imposed on firms (especially railroads); and that Workmen’s Compensation Laws were democratically efficient responses to the significant transaction costs associated with the slew of private lawsuits brought by workers against firms.

Whether or not one accepts Thompson’s arguments, they are unique in the sense that they provide efficiency-based arguments for policies that economists generally see as inefficient. It is easy to follow Thompson’s intellectual development. He began by developing his theory of effective democracy, motivated by the Pascal quote above: democracies tend to produce efficient policies even if their constituents have a hard time articulating why those policies are efficient. He then went in search of empirical evidence that supported his view. In doing so, he would examine policies that economists often considered inefficient and try to understand why an effective democracy would adopt such a policy. In other words, he would ask: what characteristics would have to exist for an economist to consider the policy efficient? This is in sharp contrast to the typical way that economists examine policy, which is to start with a basic model and determine whether the policy is efficient within that model.

I am writing about this because I believe that there is a critical element to Thompson’s analysis that should be incorporated into political economy – regardless of whether one believes that Thompson’s effective democracy theory is correct. The critical element is the presumption that there is some underlying reason that a particular policy emerged and that the policy might be an efficient democratic response. In other words, the working assumption when any policy or institution is analyzed is that the policy or institution was designed as an efficient response to some problem. Note that this doesn’t mean that economists should always conclude that the policies and institutions are efficient. The tools used by economists are the precise tools needed to determine whether something is indeed an efficient response to the problem. Thus, rather than start with a generic standard model and consider whether the policy is efficient in that context, perhaps economists should ask themselves: what would have to be true for this policy to be considered a constrained efficient response? In some cases this will be difficult to do – and that in and of itself might indicate the inefficiency of the policy. Other times, however, certain conditions might emerge that could justify a particular policy. These conditions would then generate testable hypotheses.

A Pascalian approach would hopefully lead to more humility among economists. For example, the minimum wage is a very popular policy despite the standard economic arguments against it. But why does the minimum wage exist? Even if one believes that the disemployment effects are small enough for the benefits to exceed the costs, this still raises the question of why the minimum wage is chosen over other attempts to help low-wage workers, such as the Earned Income Tax Credit. Economists typically explain away the existence of the minimum wage as a way for politicians to signal that they care about low-wage workers without bearing the cost. But this argument is rather weak. If there is a better alternative, wouldn’t the public eventually realize this? At the very least, wouldn’t the signal sent by the politician eventually be seen for exactly what it is? All too often, economists simply conclude that the general public just needs to learn more economics (how convenient a conclusion for economists to reach). My brief sketch of a theory of why the minimum wage exists (here) was an attempt to approach the topic from this Pascalian perspective.

Most recently, a seeming majority of economists (as well as financial and political pundits) expressed absolute shock at the decision of U.K. voters to leave the European Union. As a result, many have concluded that those who voted to leave did so because they don’t understand the costs (again, the argument is that the dullards just need to learn economics). Others have concluded that the decision to leave is just a manifestation of xenophobia. But perhaps economists are wrong about the costs associated with leaving. Or perhaps economists have miscalculated the long-run viability of the European experiment. Or perhaps individuals place values on things that are often left out of standard cost-benefit analysis because they’re hard to measure or hard to identify. Of course it is also possible that those who supported the decision to leave are indeed economically ignorant bigots. But even if this is the case, shouldn’t we fall back on this conclusion only after all other possible explanations have been exhausted?

A Pascalian view of political economy takes as given that we have imperfect knowledge of the complex nature of economic and social interactions. Studying the emergence of policies and institutions under the presumption that they were designed to deal efficiently with a particular problem forces economists to think hard about why those policies and institutions exist. Fortunately, the tools at the economist’s disposal are up to the task.

Rather than seeing ourselves as the wise elders passing down advice and judgment to those who fail to understand price theory, let’s be humble. Let’s take our craft seriously. And let’s realize that we might be somewhat ignorant of the complex nature through which democracies create policies and institutions.

On Revolutions

A paper that I wrote with Alexander Salter entitled “A Theory of Why the Ruthless Revolt” is now forthcoming in Economics & Politics. Here is the abstract:

We examine whether ruthless members of society are more likely to revolt against an existing government. The decision of whether to participate can be analyzed in the same way as the decision to exercise an option. We consider this decision when there are two groups in society: the ruthless and average citizens. We assume that the ruthless differ from the average citizens because they invest in fighting technology and therefore face a lower cost of participation. The participation decision then captures two important (and conflicting) incentives. The first is that, since participation is costly, there is value in waiting to participate. The second is that there is value in being the first-mover and capturing a greater share of the “spoils of war” if the revolution is successful. Our model generates the following implications. First, since participation is costly, there is some positive threshold for the net benefit. Second, if the ruthless do not have a significant cost advantage, then one cannot predict, a priori, that the ruthless lead the revolt. Third, when the ruthless have a significant cost advantage, they have a lower threshold and always enter the conflict first. Finally, existing regimes can delay revolution among one or both groups by increasing the cost of participation.

On What Econ 101 Actually Is (And Says)

There has been much recent discussion within the econo-blogosphere about the usefulness (or lack thereof) of “Econ 101.” This discussion seems to have started with Noah Smith’s Bloomberg column, in which he suggests that most of what you learn in Econ 101 is wrong. Mark Thoma then took this a bit further and argued that the problem with Econ 101 is ideological. In particular, Thoma argues that Econ 101 has a conservative bias. Both of these arguments rely on either a mischaracterization of Econ 101 or a really poor teaching of the subject.

Noah Smith’s dislike of Econ 101 seems to come from the discussion of the minimum wage. His basic argument is that Econ 101 says that the minimum wage increases unemployment. However, he argues that

That’s theory. Reality, it turns out, is very different. In the last two decades, empirical economists have looked at a large number of minimum wage hikes, and concluded that in most cases, the immediate effect on employment is very small.

This is a bizarre argument in a number of respects. First, Noah seems to move the goalposts. The theory is wrong because the magnitude of these effects is small? The prediction is about direction, not magnitude. Second, David Neumark and William Wascher’s survey of the literature suggests that there are indeed disemployment effects associated with the minimum wage, and that these results are strongest in studies of low-skilled workers.

Setting aside the evidence, let’s suppose that Noah is correct that the Econ 101 discussion of the minimum wage is empirically invalid. Even in this case, the idea that Econ 101 is fundamentally flawed is without basis. When I teach students about price controls, I am careful to note the difference between positive and normative statements. Many students tend to see price controls as a “bad” thing, and I am quick to point out that saying something is “bad” is a normative statement: “bad” implies that things should be different, and what “should be” is normative. The only positive (“what is”) statement that we can make about price controls is that they reduce efficiency. Whether or not this is a good or a bad thing depends on factors that are beyond an Econ 101 course — and I provide some examples of these factors.

Further, by emphasizing the effects on efficiency and the difference between positive and normative statements, this gives students a more complete picture of both the effects of price controls as well as why they might exist. In fact, it is precisely this lesson about efficiency and allocation that is an essential part of what students should learn in Econ 101.

For example, it is common for economists to discuss rent control when they discuss price ceilings. When societies put a binding maximum price on rent, this creates excess demand. However, one would not test whether this is a useful description of reality by examining the effects of rent control on homelessness. On the contrary, economists emphasize that in the absence of the price mechanism, other allocation mechanisms must substitute for price. Non-price rationing comes in a variety of forms: quality reduction, nepotism, discrimination, etc.

Similar arguments can be made for the minimum wage. The basic point is that the minimum wage creates a scenario in which the quantity of labor demanded is less than the quantity of labor supplied. The ultimate outcome could take a variety of forms. It could produce the standard account of higher unemployment. Alternatively, it could simply reduce hours worked. Finally, when the firm faces some constraint on these margins (at least in the short to medium term), there is another way that firms can adjust. Standard Econ 101 suggests that the nominal wage should equal the worker’s marginal revenue product. If the nominal wage is forced higher, there are two ways to raise the marginal revenue product to match: reduce labor or increase the price of the product.

The value of Econ 101 is the very process of thinking through these possible effects. What effect we actually observe is an empirical question, but it is of secondary importance to teaching students how to logically think through these sorts of examples.

Noah’s view of Econ 101, however, seems to come from his belief that economists want Econ 101 to be as simple as possible. And his argument is that this is misguided because simple often dispenses with the important. Mark Thoma, on the other hand, makes the argument that Econ 101 has a conservative bias:

The conservative bias in economics begins with the baseline theoretical model, what is often called “Economics 101.” This model of perfect competition describes a world that agrees with Republican ideology. In this model, there is no role for government intervention in the economy beyond setting the institutional structure for free markets to operate. There is nothing government can do to improve the ability of market to provide the goods and services people desire at the lowest possible price, or to help markets respond to shocks.

I think this is both wrong about Econ 101 and a strange view of conservatism.

First, I am not a conservative. However, it seems to me that many conservatives like government intervention. A number of conservatives think that child tax credits are a good idea and that marriage should be encouraged through subsidization. For these sorts of things to be justified on economic grounds requires that they believe that children and marriage generate positive externalities for society. While it is true that Republicans have been particularly obstructionist, Republican does not equal conservative. In addition, obstructionism might not have as much to do with economic beliefs as it does with political motivations about who gets the credit, the lobbying of special interest groups, the desire to imperil the image of the competing party, etc. — regardless of the rhetoric.

Which brings me to my second point. If you are a student who only learned the perfectly competitive model in Econ 101, then you should politely ask for a refund. Econ 101 routinely includes the discussion of externalities, public goods, monopoly, oligopoly, etc. All of these topics address issues that the competitive market model is ill-equipped to explain. And it is hard to argue that any of these topics have any sort of ideological bias.

On What Monetarism Really Is/Was

Paul Krugman has a recent post on why monetarism failed. Subsequently a number of economics bloggers have replied with their views on monetarism. I don’t have time to summarize all of the viewpoints espoused in these posts, but a fundamental problem throughout these posts is that each author’s description of monetarism seems to be merely their opinion about the distinct characteristics of monetarism. The problem is that many of these opinions do not provide anyone with more than a surface-level view of monetarism (i.e. something one might find in a principles or intermediate macro textbook).

In reality, Old Monetarists not only had views on money and inflation, but also had important views on the monetary transmission mechanism. The role that Old Monetarists saw for money was much more nuanced than the crude quantity theory vision that is often attributed to them. On this note, it is probably more valuable to look to the academic literature that attempts to summarize these ideas and put them into context for a modern reader.

A good place to start for anyone interested in Old Monetarist ideas is the work of Ed Nelson. Nelson is someone who has spent his career studying these ideas and trying to test their importance within modern macroeconomic frameworks. He is also currently working on a book about Milton Friedman’s influence on the monetary policy debate in the United States. To get a sense of what Old Monetarists really believed and why those ideas are relevant, I would recommend Nelson’s 2003 JME paper “The Future of Monetary Aggregates in Monetary Policy Analysis.” Here is the abstract:

This paper considers the role of monetary aggregates in modern macroeconomic models of the New Keynesian type. The focus is on possible developments of these models that are suggested by the monetarist literature, and that in addition seem justified empirically. Both the relation between money and inflation, and between money and aggregate demand, are considered. Regarding the first relation, it is argued that both the mean and the dynamics of inflation in present-day models are governed by money growth. This relationship arises from a conventional aggregate-demand channel; claims that an emphasis on the link between monetary aggregates and inflation requires a direct channel connecting money and inflation, are wide of the mark. The relevance of money for aggregate demand, in turn, lies not via real balance effects (or any other justification for money in the IS equation), but on money’s ability to serve as a proxy for the various substitution effects of monetary policy that exist when many asset prices matter for aggregate demand. This role for monetary aggregates, which is supported by empirical evidence, enhances the value of money to monetary policy.

Here is the working paper version that is not behind a paywall.

On Public Infrastructure Investment

There are two popular narratives about our infrastructure in the United States. The first is that our infrastructure is crumbling. The second is that our infrastructure spending is allocated based on its political value rather than its economic value. Maybe you believe one of these stories. Maybe you believe both. Maybe you believe neither. Regardless, these narratives are indicative of two important questions. How can we efficiently manage our public infrastructure? And how can we ensure that infrastructure investment isn’t used as a political tool? I have a new paper that proposes an answer to both questions. My proposal is to create a rule of law for public infrastructure based on option values. This rule of law would ensure that infrastructure is maintained efficiently and also that politicians would not be able to use infrastructure spending as a political tool.

The standard way to evaluate a public infrastructure project is to estimate the benefits of the infrastructure over its entire lifespan and compute their present value, then do the same for the costs. Subtracting the present value of the costs from the present value of the benefits yields something called a net present value. Infrastructure investments are evaluated using a positive net present value criterion. In other words, as long as the present discounted value of the benefits exceeds the present discounted value of the costs, the project is desirable.
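As a rough sketch of this criterion (every number below is made up for illustration; nothing here comes from the paper), the calculation looks like:

```python
# Net present value of a hypothetical infrastructure project.
# Benefits and costs are annual flows over the project's lifespan;
# all figures are invented for illustration.

def present_value(flows, rate):
    """Discount a list of annual flows back to the present."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))

benefits = [50.0] * 30         # $50M of benefits per year for 30 years
costs = [300.0] + [5.0] * 29   # $300M to build, then $5M annual upkeep
r = 0.04                       # discount rate

npv = present_value(benefits, r) - present_value(costs, r)
# The standard criterion: undertake the project if npv > 0.
```

Under the positive net present value criterion, this hypothetical project would be undertaken, since its discounted benefits exceed its discounted costs.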

In theory the net present value approach seems like a good idea. Of course we would want the benefits to outweigh the costs. This approach, however, is very different from how a private firm would value its assets. A firm that owns a factory knows that the factory can eventually become outdated. To the firm, the value of the factory is the sum of two components. The first component is the value of the factory if the firm never shuts it down or builds a new one. The second component is the value of the option to build a new factory or add to the current factory’s capacity in the future.

The same general concept is true of public infrastructure investment. The value of any existing infrastructure is the value of the infrastructure over its entire lifetime plus the option value of replacing that infrastructure in the future. This option value is associated with a tradeoff. Since infrastructure depreciates over time, the value from existing infrastructure is declining. This means that as time goes by, the opportunity cost of replacing the infrastructure declines and therefore the option value of replacing the infrastructure rises. However, the longer the government waits to replace the infrastructure, the longer society has to wait to receive the benefit of replacement. This reduces the option value. My proposal suggests that the government should choose the value of the current infrastructure that optimally balances this tradeoff. What this ultimately implies is that the government should wait until the value of the current infrastructure falls to some fraction of the net present value of the proposed replacement project.

The reason that this option approach is preferable to the net present value approach is as follows. First, even though a current project has a positive net present value, it does not follow that now is the optimal time to undertake the project. Replacing infrastructure entails an opportunity cost: the foregone benefit that society would have received from the existing infrastructure. In other words, society might get greater value from the project if the government waits a little longer before replacing what is currently there. Second, my approach provides a precise moment at which it is optimal to replace the infrastructure. In contrast, the net present value approach says nothing about optimality; it is simply a cost-benefit analysis. Given the possibility that society could get an even larger benefit in the future, the option approach should be strictly preferred. Third, the option approach provides an explicit way for the government to maintain an infrastructure fund. In my paper I provide a simple formula for computing the amount of money that needs to be in the fund; it requires only the cost of each project and the relative distance of that project from its replacement threshold. This sort of fund is important because it would allow the government to keep funding infrastructure projects at the optimal time even during a recession, when infrastructure budgets, especially at the local level, are often cut.
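The formal threshold rule from the paper is not reproduced here, but the timing logic can be illustrated with a toy simulation. The depreciation rate, replacement NPV, and threshold fraction below are all hypothetical:

```python
# Toy illustration of the option-timing logic described above.
# The paper's actual formula is not reproduced; all parameter
# values are hypothetical.

def replacement_year(initial_value, decay, replacement_npv, fraction):
    """Return the first year in which the existing infrastructure's
    (geometrically depreciating) value falls below `fraction` times
    the net present value of the proposed replacement project."""
    value = initial_value
    year = 0
    while value >= fraction * replacement_npv:
        value *= (1 - decay)   # infrastructure loses value each year
        year += 1
    return year

# An existing road worth 100 today, depreciating 6% per year; the
# replacement project has an NPV of 120, and we (arbitrarily) replace
# once the old road is worth less than 40% of that NPV.
year = replacement_year(100.0, 0.06, 120.0, 0.40)
```

The point of the exercise is that a positive-NPV replacement can still be worth delaying: the rule waits until the existing asset has depreciated enough that the foregone benefit of scrapping it is small.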

The final and most significant benefit of my approach, however, is that it would provide the means for establishing a rule of law for public infrastructure projects. This rule of law should appeal to people across the ideological spectrum. I say that for the following reasons. First, if the government adopted this option value approach as a rule of law, this would require that the government fund any and all infrastructure projects that had reached their replacement thresholds. This would ensure that the infrastructure in the United States was maintained efficiently. Second, because the only projects that would receive funding would be those that had reached the replacement threshold, politicians would not be able to use infrastructure spending as a tool for reelection or repayment to supporters. As a result, the option approach would provide the means for a rule of law for infrastructure investment that is both transparent and efficient.

Establishing such a rule of law would be difficult. The politicians who benefit from allocating infrastructure investment for political reasons are the very ones who would have to vote on the legislation enacting the new rule. Nonetheless, there is evidence that politicians vote in favor of infrastructure projects that benefit their constituents but against aggregate investment. If the group of politicians who benefit most from the current state of affairs is small, the legislation might be easier to pass. In addition, there is nothing to stop departments of transportation at both the state and federal level from calculating option values and making the data available to the public. This greater transparency, while not a rule of law, would at least be a step toward more efficient management of our public infrastructure.

The Importance of Safe Assets

A theme you often hear among bloggers, but a bit less so in seminars, is the idea that the supply of and demand for safe assets matter. David Beckworth is one such blogger who talks about this, but critics often find it hard to think about the macroeconomy in these terms since the role of money has been marginalized within the New Keynesian wing of macroeconomics. I say this because David’s intuitive explanation of safe asset equilibrium seems to be a cross between New Keynesian intuition and Old Monetarist intuition. He is trying to communicate his message to what is essentially the mainstream of the discipline, but by emphasizing something that isn’t generally in their models.

Along these lines, I was happy to stumble upon this paper by Caballero, Farhi, and Gourinchas. In my view the framework in this paper is quite similar to David’s views regarding safe assets and monetary policy, and so I thought it might be interesting to outline the basic model in the paper and talk about the mechanisms for monetary policy.

The model is a modified version of an IS-LM model. The one modification to the model is a supply and demand condition for safe assets. Formally, the model consists of the following three equations:

y - \bar{y} = -\delta (r - \bar{r}) - \delta_s (r^s - \bar{r}^s)
r^s = \max[\hat{r}^s + \phi(y - \bar{y}), 0]
s = \psi_y y + \psi_s r^s - \psi_{\Delta} (r - r^s)

where y is output, r is the risky interest rate, r^s is the rate on safe assets, \hat{r}^s is the target interest rate, s is the supply of safe assets, \bar{y} is the natural rate of output, \bar{r} is the natural risky interest rate, \bar{r}^s is the natural safe interest rate, and the Greek letters are parameters. Inflation is assumed to be zero such that there is no difference between real and nominal interest rates.

This is a familiar IS-LM framework in which the first equation is an IS equation, the second is a Taylor rule subject to a zero lower bound, and the third determines the safe asset equilibrium.

The best interpretation of the safe asset equilibrium, as they describe it in the paper, is in terms of the flow of safe assets. According to this view, the flow demand for safe assets is a function of output, the rate of return on safe assets, and the risk premium (r - r^s). Correspondingly, s in this interpretation is the net increase in the supply of safe assets.

Given that setup, let’s see what the model can tell us.

The first assumption that they make is that the supply of safe assets is unresponsive to the risk premium. In other words, in terms of the model, \psi_{\Delta} = 0. Given that many safe assets are exogenously supplied, this seems like a reasonable assumption.
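To fix ideas, the three equations can be coded up directly. This is a minimal numerical sketch with purely illustrative parameter values (none of these numbers come from the paper), imposing \psi_{\Delta} = 0 as just discussed:

```python
# Minimal sketch of the three-equation model.
# All parameter values are illustrative, not taken from the paper.

delta, delta_s = 1.0, 0.5   # IS-curve sensitivities to r and r^s
phi = 1.5                   # Taylor-rule response to the output gap
psi_y, psi_s = 0.2, 2.0     # safe asset demand parameters (psi_delta = 0)
y_bar, r_bar = 1.0, 0.02    # natural output and natural risky rate

def is_curve(r, r_s, r_s_bar):
    """Output implied by the IS equation."""
    return y_bar - delta * (r - r_bar) - delta_s * (r_s - r_s_bar)

def taylor_rule(y, r_s_hat):
    """Safe rate set by the central bank, subject to the zero lower bound."""
    return max(r_s_hat + phi * (y - y_bar), 0.0)

def safe_asset_demand(y, r_s):
    """Flow demand for safe assets with psi_delta = 0."""
    return psi_y * y + psi_s * r_s
```

Note that when r = \bar{r} and r^s = \bar{r}^s, the IS curve returns y = \bar{y}, which is the benchmark used in what follows.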

Now, let’s think about the determination of the natural rate of interest. If the central bank sets the interest rate on safe assets equal to the natural rate, then output will be equal to potential (essentially by definition). It then follows from the IS equation that the risky interest rate is also equal to the natural risky interest rate. But how does one determine the natural safe interest rate?

Consider the equilibrium condition for safe assets. The natural safe interest rate is the rate that clears the safe asset market when output is equal to potential. From the safe asset equilibrium condition it follows that

\bar{r}^s = {{s - \psi_y \bar{y}}\over{\psi_s}}

The central bank then needs to set r^s = \hat{r}^s = \bar{r}^s.
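As a quick check of the formula, take some hypothetical numbers, say \psi_y = 0.2, \psi_s = 2.0, \bar{y} = 1.0, and a flow supply s = 0.24 (chosen only for illustration):

```python
# Hypothetical parameter values, chosen only for illustration.
psi_y, psi_s, y_bar = 0.2, 2.0, 1.0
s = 0.24  # assumed flow supply of safe assets

r_s_bar = (s - psi_y * y_bar) / psi_s
print(round(r_s_bar, 6))  # 0.02

# At (y_bar, r_s_bar), flow demand equals flow supply:
assert abs(psi_y * y_bar + psi_s * r_s_bar - s) < 1e-12
```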

However, suppose that the net increase in the supply of safe assets is not high enough to keep up with the demand for new safe assets. In particular, suppose that the net increase in the supply of safe assets is so low that

s < \psi_y \bar{y}

In this scenario, the natural safe interest rate would be negative. However, from the Taylor rule, the market rate of interest is subject to a zero lower bound. As a result, the central bank cannot set the interest rate low enough to clear the market for safe assets. So what happens? Well, the central bank sets the safe interest rate as low as it can go, r^s = 0, which implies that output is pinned down by the net increase in the supply of safe assets:

y = {{s}\over{\psi_y}}
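Numerically, suppose (again with made-up numbers, \psi_y = 0.2, \psi_s = 2.0, \bar{y} = 1.0) that the flow supply falls to s = 0.18, which is below \psi_y \bar{y} = 0.2:

```python
# Safe asset shortage at the zero lower bound; illustrative numbers only.
psi_y, psi_s, y_bar = 0.2, 2.0, 1.0
s = 0.18  # flow supply now below psi_y * y_bar = 0.2

r_s_bar = (s - psi_y * y_bar) / psi_s
print(round(r_s_bar, 6))  # -0.01: the natural safe rate is negative

r_s = max(r_s_bar, 0.0)   # the zero lower bound binds
y = s / psi_y             # output pinned down by the flow supply
print(round(y, 6))        # 0.9: a 10% shortfall from potential
```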

It then follows that r > \bar{r}. In other words, the risky interest rate is “too high” and the risk premium rises. But since the risky rate of interest is higher than the natural risky rate, the IS equation implies that output must fall in order to reduce the demand for safe assets and restore equilibrium.

The policy implication is that to escape this scenario, one needs to increase the supply of safe assets. Increasing the supply of safe assets raises output toward potential and thereby reduces the risk premium.
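A comparative static makes the point concrete. In this sketch (illustrative numbers once more), raising the flow supply s lifts zero-lower-bound output y = s/\psi_y; the cap at \bar{y} reflects my reading that once s reaches \psi_y \bar{y} the natural safe rate is no longer negative and the central bank can restore potential output:

```python
# Raising the flow supply of safe assets closes the output gap.
# Illustrative parameters; the min() cap encodes the assumption that
# once the natural safe rate is nonnegative, policy restores y = y_bar.
psi_y, y_bar = 0.2, 1.0

outputs = [min(s / psi_y, y_bar) for s in (0.18, 0.19, 0.20)]
print([round(y, 6) for y in outputs])  # [0.9, 0.95, 1.0]
```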

As the authors note, early attempts at quantitative easing in the United States did exactly what the model would prescribe because they swapped risky assets in the market for safe assets. Fiscal stimulus can also help, not through any sort of production by the public sector, but because it increases the supply of safe assets (Treasuries).