What Are Real Business Cycles?

The real business cycle model is often described as the core of modern business cycle research. What this means is that other business cycle models contain the RBC model as a special case (i.e., strip away all of the frictions from your model and it's an RBC model). The idea that the RBC model is the core of modern business cycle research is somewhat tautological since the RBC model is just a neoclassical model without any frictions. Thus, if we start with a model with frictions and take those frictions away, we have a frictionless model.

The purpose of the original RBC models was not necessarily to argue that these models represented an accurate portrayal of the business cycle, but rather to see how much of the business cycle could be explained without appealing to frictions. The basic idea is that there could be shocks to tastes and/or technology and that these changes could cause fluctuations in economic activity. Furthermore, since the RBC model was a frictionless model, any such fluctuations would be efficient. This conclusion was important. We typically think of recessions as being inefficient and costly. If this is true, countercyclical policy could be welfare-increasing. However, if the world can be adequately explained by the RBC model, then economic fluctuations represent efficient responses to unexpected changes in tastes and technology, and there is no role for countercyclical policy.

There were two critical responses to RBC models. The first criticism was that the model was too simple. The crux of this argument is that if one estimated changes in total factor productivity (TFP; technology in the RBC model) using something like the Solow residual and plugged this into the model, one might be misled into thinking the model had greater predictive power than it did in reality. The basic idea is that the Solow residual is, as the name implies, a residual. Thus, this measure of TFP only captures fluctuations in output that are not explained by changes in labor and capital. Since many things other than labor, capital, and technology might affect output, the residual might not be a good measure of TFP and might attribute a greater percentage of fluctuations to TFP than was true of the actual data-generating process.
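As a concrete sketch of the mechanics being criticized, here is how a Solow residual might be computed. The series and the capital share below are made-up numbers purely for illustration, not actual data:

```python
import numpy as np

alpha = 0.33  # assumed capital share (hypothetical)

# Hypothetical index series for output, capital, and labor
Y = np.array([100.0, 103.0, 101.0, 106.0])
K = np.array([300.0, 305.0, 308.0, 312.0])
L = np.array([150.0, 152.0, 149.0, 153.0])

# Solow residual (log TFP): whatever part of output is not explained
# by capital and labor, given the assumed production function Y = A K^a L^(1-a)
log_A = np.log(Y) - alpha * np.log(K) - (1 - alpha) * np.log(L)

# These growth rates are the "technology shocks" fed into an RBC model
tfp_shocks = np.diff(log_A)
```

The point of the critique is visible here: anything that moves output without moving measured capital or labor (variable utilization, demand shifts, mismeasurement) lands in `tfp_shocks` and gets labeled "technology."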

The second critical response was largely to ridicule the model. For example, Franco Modigliani once quipped that RBC-type models were akin to assuming that business cycles were mass outbreaks of laziness. Others criticized the theory by stating that recessions must be periods of time when society collectively forgets how to use technology. And recently, Paul Romer has suggested that technology shocks be relabeled as phlogiston shocks.

These latter criticisms are certainly witty and no doubt a source of laughter in seminar rooms. Unfortunately, they obscure the more important criticisms and, worse, reflect a misunderstanding of what the RBC model is about. As a result, I would like to provide an interpretation of the RBC model and then discuss the more substantive criticisms.

The idea behind the real business cycle model is that fluctuations in aggregate productivity are the cause of economic fluctuations. If all firms are identical, then any decline in aggregate productivity must be a decline in the productivity of all the individual firms. But why would firms become less productive? To me, this seems to be the wrong way to interpret the model. My preferred interpretation is as follows. Suppose that you have a bunch of different firms producing different goods and these firms have different levels of productivity. In this case, an aggregate productivity shock is simply the reallocation of inputs from high-productivity firms to low-productivity firms or vice versa. As long as we think of all markets as being competitive, the RBC model is just a reduced-form version of what I’ve just described. In other words, the RBC model essentially suggests that fluctuations in the economy are driven by the reallocation of inputs between firms with different levels of productivity, but since markets are efficient we don’t need to get into the weeds of this reallocation in the model and can simply focus our attention on a representative firm and aggregate productivity.
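A toy calculation (with entirely hypothetical numbers) illustrates this interpretation: hold every firm's productivity fixed, move input shares between firms, and measured aggregate productivity still fluctuates:

```python
# Two firms with constant firm-level productivities; aggregate TFP is
# the input-share-weighted average of the two.
A_high, A_low = 2.0, 1.0

def aggregate_tfp(share_high):
    """Aggregate productivity given the share of inputs at the high-productivity firm."""
    return share_high * A_high + (1 - share_high) * A_low

boom = aggregate_tfp(0.7)  # inputs concentrated in the productive firm
bust = aggregate_tfp(0.4)  # inputs reallocated toward the less productive firm
```

Neither firm "forgot how to use technology"; the decline in aggregate productivity from `boom` to `bust` is entirely reallocation.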

I think that my interpretation is important for a couple of reasons. First, it suggests that while “forgetting how to use technology” might get chuckles in the seminar room, it is not particularly useful for thinking about productivity shocks. Second, and more importantly, this interpretation allows for further analysis. For example, how often do we see such reallocation between high-productivity firms and low-productivity firms? How well do such reallocations line up with business cycles in the data? What are the sources of reallocation? For example, if the reallocation is due to changes in demographics and/or preferences, then these reallocations could be interpreted as efficient responses to structural changes in the economy. However, if these reallocations are caused by changes in relative prices due to, say, monetary policy, then the welfare and policy implications are much different.

Thus, to me, rather than denigrate RBC theory, what we should do is try to disaggregate productivity, determine what causes reallocation, and try to assess whether this is an efficient reallocation or should really be considered misallocation. The good news is that economists are already doing this (here and here, for example). Unfortunately, you hear more sneering and name-calling in popular discussions than you do about this interesting and important work.

Finally, I should note that I think one of the reasons the real business cycle model has been such a point of controversy is that it implies that recessions are efficient responses to fluctuations in productivity and that countercyclical policy is unnecessary. This notion violates the prior beliefs of a great number of economists. As a result, I think many of these economists are willing to dismiss RBC out of hand. Nonetheless, while I am not myself inclined to think that recessions are simply efficient responses to taste and technology changes, I do think that this starting point is useful as a thought exercise. Using an RBC model as a starting point for thinking about recessions forces one to think about the potential sources of inefficiencies, how to test the magnitude of such effects, and the appropriate policy response. The better we are able to disaggregate productivity, the more we can learn about fluctuations in aggregate productivity and, in turn, about the driving forces of recessions.

Forthcoming Publications

Blogging has been a bit light around here. In lieu of a blog post, here are a few papers of mine that have recently been accepted for publication and might be of interest to regular readers:

1. “Money, Liquidity, and the Structure of Production” (with Alexander Salter), Journal of Economic Dynamics and Control. This paper is a little bit of Hayek, Hirshleifer, Tobin, and Dixit all rolled into one. Here is the abstract:

We use a model in which media of exchange are essential to examine the role of liquidity and monetary policy on production and investment decisions in which time is an important element. Specifically, we consider the effects of monetary policy on the length of production time and entry and exit decisions for firms. We show that higher rates of inflation cause households to substitute away from money balances and increase the allocation of bonds in their portfolio thereby causing a decline in the real interest rate. The decline in the real interest rate causes the period of production to increase and the productivity thresholds for entry and exit to decline. This implies that when the real interest rate declines, prospective firms are more likely to enter the market and existing firms are more likely to stay in the market. Finally, we present reduced form empirical evidence consistent with the predictions of the model.

2. “An Evaluation of Friedman’s Monetary Instability Hypothesis”, Southern Economic Journal. This paper examines two elements of Milton Friedman’s work within the context of a relatively standard structural model. The first element is the idea that deviations between the money supply and money demand are a significant source of business cycle fluctuations. The second element is the idea that shocks to the money supply are much more empirically significant than shocks to money demand. Here is the abstract:

In this paper, I examine what I call Milton Friedman’s Monetary Instability Hypothesis. Drawing on Friedman’s work, I argue that there are two main components to this view. The first component is the idea that deviations between the public’s demand for money and the supply of money are an important source of economic fluctuations. The second component of this view is that these deviations are primarily caused by fluctuations in the supply of money rather than the demand for money. Each of these components can be tested independently. To do so, I estimate an otherwise standard New Keynesian model, amended to include a money demand function consistent with Friedman’s work and a money growth rule, for a period from 1875-1963. This structural model allows me to separately identify shocks to the money supply and shocks to money demand. I then use variance decompositions to assess the relative importance of shocks to the supply and demand for money. I find that shocks to the monetary base can account for up to 28% of the fluctuations in output whereas money demand shocks can account for less than 1% of such fluctuations. This provides support for Friedman’s view.

3. “Interest Rates and Investment Coordination Failures”, Review of Austrian Economics. This paper examines the role of interest rates in influencing both production time and entry decisions of firms. The paper therefore examines coordination problems similar to those emphasized in the Austrian business cycle theory and the business cycle theory of Fischer Black. I show that in low interest rate environments firms are more likely to preempt the entry of their competitors at lower levels of demand than when interest rates are high. When firms enter simultaneously at these levels of demand, it is a coordination failure. Low interest rates also produce changes in the length of production that are consistent with the ABCT. This provides some support for business cycle theories such as the ABCT, which have been criticized as violating the assumption of rational expectations.

The theory of capital developed by Bohm-Bawerk and Wicksell emphasized the roundabout nature of the production process. The basic insight is that production necessarily involves time. One element of the production process is to determine the period of production, or the length of time from the start of production to its completion. Bohm-Bawerk and Wicksell emphasized the role of the interest rate in determining the period of production. In this paper, I develop an option games model of the decision to invest. Two firms have an opportunity to enter a market, but production takes time. Firms face a two-dimensional decision. Along one dimension, they determine the period of production and the prospective profit therefrom. Along another dimension, they determine whether or not they want to enter the market given the amount of time it will take to start generating revenue from production. Within this option games approach, the period of production can be understood as an endogenous time-to-build and I argue that this framework provides a tool for evaluating the claims of Bohm-Bawerk and Wicksell against the backdrop of competition and uncertainty. I evaluate the period of production decision and the option to enter decision when the real interest rate changes. I show that investment coordination failures are more likely to occur at lower levels of profitability when real interest rates are low. I conclude by discussing the implications of low interest rates for boom-bust investment cycles.

The Fed, Populism, and Related Topics

Jon Hilsenrath has quite the article in The Wall Street Journal, the title of which is “Years of Fed Missteps Fueled Disillusion With the Economy and Washington”. The article criticizes Fed policy, suggests these policy failures are at least partially responsible for the rise in populism in the United States, and presents a rather incoherent view of monetary policy. As one should be able to tell, the article is wide-ranging, so I want to do something different than I do in a typical blog post. I am going to go through the article point-by-point and deconstruct the narrative.

Let’s start with the lede:

Once-revered central bank failed to foresee the crisis and has struggled in its aftermath, fostering the rise of populism and distrust of institutions

There is a lot tied up in this lede. First, has the Federal Reserve ever been a revered institution? According to Hilsenrath’s own survey evidence, in 2003 only 53% of the population rated the Fed as “Good” or “Excellent”. In the midst of the Great Moderation, I would hardly call this revered.

Second, I’ve really grown tired of this argument that economists or policymakers or the Fed “failed to foresee the crisis.” The implicit assumption is that if the crisis had been foreseen, steps could have been taken to prevent it or make it less severe. But if we accept this assumption, then we would only ever observe the crises that weren’t foreseen; any crisis that was foreseen and prevented would never show up in the data.

Third, to attribute the rise in populism to Federal Reserve policy presumes that the populism is tied to economic factors that the Fed can influence. Sure, if the Fed could have used policy to make real GDP growth higher, that might have eased economic concerns. But productivity slowdowns and labor market disruptions caused by trade shocks are not things that the Federal Reserve can correct. To the extent that these factors are what is driving populism, the Fed has only a limited ability to ease such concerns.

But that’s enough about the lede…

So the basis of the article is that Fed policy has been a failure. This policy failure undermined the standing of the institution, created a wave of populism, and caused the Fed to re-think its policies. I’d like to discuss each of these points individually using passages from the article.

Let’s begin by discussing the declining public opinion of the Fed. Hilsenrath shows in his article that the public’s assessment of the Federal Reserve has declined significantly since 2003. He also shows that people have a great deal less confidence in Janet Yellen than in Alan Greenspan. What does this tell us? Perhaps the public had an over-inflated view of the Fed to begin with. It is certainly reasonable to think that the public had an over-inflated view of Alan Greenspan. It seems to me that there is a simple correlation between what the public thinks of the Fed and a moving average of real GDP growth. It is unclear whether there are implications beyond this simple correlation.

Regarding the rise in populism, everyone has their grand theory of Donald Trump and (to a lesser extent) Bernie Sanders. Here’s Hilsenrath:

For anyone seeking to explain one of the most unpredictable political seasons in modern history, with the rise of Donald Trump and Bernie Sanders, a prime suspect is public dismay in institutions guiding the economy and government. The Fed in particular is a case study in how the conventional wisdom of the late 1990s on a wide range of economic issues, including trade, technology and central banking, has since slowly unraveled.

Do Trump and Sanders supporters have lower opinions of the Fed than the population as a whole? Who knows? We are not told in the article. Also, has the conventional wisdom been upended? Whose conventional wisdom? Economists’? The public’s?

So the populism and the reduced standing of the Fed appear to be correlated with things that are potentially correlated with Fed policy. This is hardly the smoking gun suggested by the lede. So what about the re-thinking going on at the Fed?

First, officials missed signs that a more complex financial system had become vulnerable to financial bubbles, and bubbles had become a growing threat in a low-interest-rate world.

Secondly, they were blinded to a long-running slowdown in the growth of worker productivity, or output per hour of labor, which has limited how fast the economy could grow since 2004.

Thirdly, inflation hasn’t responded to the ups and downs of the job market in the way the Fed expected.

These are interesting. Let’s take them point-by-point:

1. Could the Fed have prevented the housing bust and the subsequent financial crisis? It is unclear. But even if they completely missed this, could not policy have responded once these effects became apparent?

2. What does this even mean? If there is a productivity slowdown that explains lower growth, then shouldn’t the Federal Reserve get a pass on the low growth of real GDP over the past several years? Shouldn’t we blame low productivity growth?

3. Who believes in the Phillips Curve as a useful guide for policy?

My criticism of Hilsenrath’s article should not be read as a defense of the Fed’s monetary policy. For example, critics might think I’m being a bit hypocritical since I have argued in my own academic work that the maintenance of stable nominal GDP growth likely contributed to the Great Moderation. The collapse of nominal GDP during the most recent recession would therefore seem to indicate a policy failure on the part of the Fed. However, notice how different that argument is from the arguments made by Hilsenrath. The list provided by Hilsenrath suggests that the problems with Fed policy are (1) the Fed isn’t psychic, (2) the Fed didn’t understand that slow growth is not due to its policy, and (3) that the Phillips Curve is dead. Only this third component should factor into a re-think. But for most macroeconomists that re-think began taking place as early as Milton Friedman’s 1968 AEA Presidential Address — if not earlier. More recently, during an informal discussion at a conference, I observed Robert Lucas tell Noah Smith rather directly that “the Phillips Curve is dead” (to no objection) — so the Phillips Curve hardly represents conventional wisdom.

In fact, Hilsenrath’s logic regarding productivity is odd. He writes:

Fed officials, failing to see the persistence of this change [in productivity], have repeatedly overestimated how fast the economy would grow. The Fed has projected faster growth than the economy delivered in 13 of the past 15 years and is on track to do so again this year.

Private economists, too, have been baffled by these developments. But Fed miscalculations have consequences, contributing to start-and-stop policies since the crisis. Officials ended bond-buying programs, thinking the economy was picking up, then restarted them when it didn’t and inflation drifted lower.

There are 3 points that Hilsenrath is making here:

1. Productivity caused growth to slow.

2. The slowdown in productivity caused the Fed to over-forecast real GDP growth.

3. This has resulted in a stop-go policy that has hindered growth.

I’m trying to make sense of how these things fit together. Most economists think of productivity as being completely independent of monetary policy. So if low productivity growth is causing low GDP growth, then this is something that policy cannot correct. However, point 3 suggests that low GDP growth is explained by tight monetary policy. This is somewhat of a contradiction. For example, if the Fed over-forecast GDP growth, then the implication seems to be that if they’d forecast growth perfectly, they would have had more expansionary policy, which could have increased growth. But if growth was low due to low productivity, then a more expansionary monetary policy would have had only a temporary effect on real GDP growth. In fact, during the 1970s, the Federal Reserve consistently over-forecast real GDP. However, in contrast to recent policy, the Fed saw these over-forecasts as a failure of their policies rather than a productivity slowdown and tried to expand monetary policy further. What Athanasios Orphanides’s work has shown is that the big difference between policy in the 1970s and the Volcker-Greenspan era was that policy in the 1970s put much more weight on the output gap. Since the Fed was over-forecasting GDP, it thought it was observing negative output gaps and subsequently conducted expansionary policy. The result was stagflation.
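Orphanides’s mechanism can be sketched with a stylized Taylor-type rule. The coefficients and figures below are illustrative assumptions, not estimates of actual Fed behavior:

```python
def policy_rate(inflation, output, potential_est,
                r_star=2.0, pi_star=2.0, phi_pi=0.5, phi_y=0.5):
    """Stylized Taylor-type rule: the output gap the Fed responds to
    is computed from its real-time estimate of potential output."""
    perceived_gap = output - potential_est
    return r_star + inflation + phi_pi * (inflation - pi_star) + phi_y * perceived_gap

# Same economy under two estimates of potential output
correct = policy_rate(inflation=3.0, output=100.0, potential_est=100.0)
too_high = policy_rate(inflation=3.0, output=100.0, potential_est=103.0)

# Over-forecasting potential by 3 makes the perceived gap -3, so the rate
# is phi_y * 3 = 1.5 points looser than intended; a larger phi_y
# (as in the 1970s) amplifies the error.
```

The design point is that the mistake enters only through `phi_y * perceived_gap`, which is why a 1970s-style rule with a heavy gap weight turns a forecasting error into persistently loose policy.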

So is Hilsenrath saying he’d prefer that policy be more like the 1970s? One cannot simultaneously argue that growth is low because of low productivity and tight monetary policy. (Even if it is some combination of both, then monetary policy is of second-order importance and that violates Hilsenrath’s thesis.)

In some sense, what is most remarkable is how far the pendulum has swung in 7 years. Back in 2009, very few people argued that tight monetary policy was to blame for the financial crisis or the recession — heck, Scott Sumner started a blog primarily because he didn’t see anyone making the case that tight monetary policy was to blame. Now, in 2016, the Wall Street Journal is publishing stories that blame the Federal Reserve for all of society’s ills. There is a case to be made that monetary policy played a role in causing the recession and/or in explaining the slow recovery. Unfortunately, this article in the WSJ isn’t it.

On Adam Smith’s Straw Man

One way to interpret Adam Smith’s Wealth of Nations is as a critique of and rebuttal to what he called the “mercantile system” or today what we would call mercantilism. One critique that Smith made in the book is that mercantilists had an incorrect notion of wealth. In Smith’s view, mercantilists confused money and wealth. According to Smith, this misconception led many mercantilists to see trade surpluses as desirable because it was a way to accumulate gold (money) and therefore make the country richer. As it turns out, this is likely a straw man of Smith’s own construction.

I have recently been reading Mercantilism Reimagined and Carl Wennerlind has an interesting chapter on 17th century views on money in England. Here are some highlights:

  • J.D. Gould’s work in the Journal of Economic History suggests that to understand the literature on money and trade during the 1620s, one needs to understand the circumstances in which the writers were writing. He argues that this writing must be understood in the context of a significant downturn in economic activity that was largely blamed on a shortage of money. It is unclear whether this was due to an undervalued sterling or incorrect mint ratios, but a trade surplus was seen as a way to correct this shortage. In other words, these writers were not advocating trade surpluses for their own sake, but rather to replenish the money stock.
  • Smith’s attacks were on these writers of the 1620s, but he either ignored or was ignorant of a literature that emerged in the 1640s and 1650s associated with a group known as the Hartlib circle.
  • Members of this group thought that the expansion of scientific knowledge would lead to permanent expansions in economic activity. This therefore required an expanding money supply to prevent deflation and other problems with insufficient liquidity.
  • At least two writers within the Hartlib circle denied that the value of money came from the commodity itself (recall that gold and silver were money at this time). Wennerlind quotes Sir Cheney Culpeper, for example, as writing that “Money it self is nothing else but a kind of securitie which men receive upon parting with their commodities, as a ground of hope or assurance that they shall be repayed in some other commoditie.”
  • Culpeper advocated for parliament to create a law that would allow a bill of credit to be transferred from one person to another rather than waiting for repayment.
  • Another Hartlibian, William Potter, had a much more ambitious proposal that called for tradesmen to set up a firm and print bills that could be borrowed against sufficient collateral. The tradesmen would agree to accept these bills in exchange for their production. At any time, a bill holder could request that it be redeemed. At that point, a bond would be issued that had to be paid by the borrower of the bill within 6 months. Since the bills were backed by collateral, the only threat to the ability to redeem a bill was a sudden decline in the value of the collateral — although Potter argued that insurance companies could be used to insure against such outcomes.
  • Wennerlind argues that both the Bank of England and the South Sea Company were outgrowths of Hartlib ideas about money and credit.

The fundamental point here is that there was an influential group of individuals writing in the 1640s and 1650s who were either ignored by Adam Smith or whose existence he simply did not know of. However, the omission is important. One would hardly consider the views of the Hartlibians mercantilist. This group viewed scientific advancement as the key to economic prosperity, not trade surpluses and/or the accumulation of money. Culpeper, as evidenced by his quote, did not confuse money with wealth. His quote is consistent with a Kiyotaki-Wright model of money. Similarly, Potter clearly viewed credit and collateral as important for trade and prosperity (perhaps too much so; he predicted that under his plan the English would be 500,000 times wealthier in less than half a century — that’s quite the multiplier!).
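As a back-of-the-envelope check on just how extravagant Potter’s claim is, take fifty years as the horizon:

```python
# Implied annual growth rate if wealth multiplies 500,000-fold in 50 years
multiplier, years = 500_000.0, 50
annual_growth = multiplier ** (1 / years) - 1  # roughly 30% per year, every year, for 50 years
```

For comparison, sustained modern growth rates are on the order of a few percent per year, so the "multiplier" implies growth an order of magnitude faster than anything ever observed.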

In short, this raises questions about the prevalence of mercantilist views in the time before Adam Smith. The critique by Smith that previous writers confused money and wealth might simply be a straw man.

On a Pascalian Theory of Political Economy

Throughout his career, Earl Thompson often argued that we needed a more Pascalian theory of political economy. His argument was based on the following quote from French mathematician Blaise Pascal: “The heart has its reasons of which reason knows nothing.”

Based on this idea, Thompson developed a theory of what he called “effective democracy.” The central idea behind effective democracy was a sort of “wisdom of crowds” argument. Namely, he argued that the collective decision-making that takes place through the electoral process is very often efficient – even in ways that are not immediately obvious to economists.

Economists who are reading this are likely already rolling their eyes at this idea. Economists tend to think of collective decision-making as difficult. When the social benefits of a particular good exceed the private benefits, the market will tend to under provide the good. When the social costs associated with a good exceed the private costs, markets will tend to overprovide the good. If individuals cannot be excluded from using a particular good or service, the good will tend to be under-provided or over-consumed. Principles of economics textbooks are filled with examples of these sorts of scenarios and the optimal policy response. Yet, when we look at the world, there are many instances in which democracies fail to adopt the appropriate policy responses.

Economists are also likely rolling their eyes because voters often have very different opinions on issues than economists. For example, economists tend to think that free trade is a net benefit to society. The general public is less inclined to believe that statement.

What made Thompson’s work interesting, however, is that he often argued that democracies tend to understand externalities and collective action problems better than economists realize. For example, he noted that we don’t see factories at the end of a neighborhood. Why not? Well, we typically don’t see factories at the end of a neighborhood because of zoning restrictions. But why zoning restrictions? Why not just have Pigouvian taxation to internalize the social costs? In general, economists don’t tend to advocate quantity regulations, so why do they occur?

What Thompson argued is that Pigouvian taxation is insufficient. A factory imposes a social cost beyond the private cost (a negative externality) because it creates pollution (and possibly even because it is not fun to look at). Given this additional social cost, standard economic theory would suggest imposing a welfare-improving Pigouvian tax on the factory. This would force the factory to internalize the cost associated with the pollution, thereby giving society the optimal amount of pollution. What Thompson pointed out is that this tax is inadequate. People in society might not just want to reduce pollution; they might want to limit their proximity to the pollution. A Pigouvian tax doesn’t solve this latter problem. To understand why, consider the following. Suppose there is a neighborhood that is not yet completed. Society imposes a Pigouvian tax to limit pollution. A company decides to open a factory in town and wants to put it in this near-complete neighborhood. The people who live in the neighborhood do not want the unsightly, noisy, smelly, polluting factory next to their homes. However, even if the Pigouvian tax bill would make the factory unprofitable, the company has an incentive to purchase the land in the neighborhood and tell the residents that it intends to build the factory unless they agree to buy the land back. As a result, democratic societies have adopted zoning restrictions to prevent factories from being built in neighborhoods. (As anecdotal evidence, something similar to this happened in my own neighborhood. Then again, in certain parts of Mississippi the word “zoning” is considered profane, so perhaps effective democracy hasn’t yet reached the state.)
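Thompson’s point can be put in (entirely hypothetical) numbers: even when the Pigouvian tax makes actually building the factory unprofitable, the threat to build can still be a profitable play, abstracting from whether the threat is fully credible:

```python
# Hypothetical payoffs for the hold-up game Thompson describes
land_price = 100.0       # what the firm pays for the neighborhood lot
factory_profit = -10.0   # profit from actually building, net of the Pigouvian tax
neighborhood_harm = 50.0 # residents' total willingness to pay to keep the factory out

# The firm can demand up to the land price plus the harm it threatens to impose.
buyback_offer = land_price + neighborhood_harm
threat_payoff = buyback_offer - land_price  # gain from the threat alone

# The Pigouvian tax deters building (factory_profit < 0) but does nothing
# about the threat, which is why a quantity restriction like zoning can
# succeed where the tax fails.
```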

Thompson had many other examples of what he called effective democratic institutions. He argued, for example, that the lives of individuals tend to produce positive externalities for their friends and family and that this can explain why we subsidize health insurance and have costly safety regulations in the workplace; that the Interstate Commerce Act of 1887 was an efficient democratic response to the transaction costs of complex state regulations and corresponding local lawsuits for firms (especially railroads); and that Workmen’s Compensation Laws were democratically efficient responses to the significant transaction costs associated with the slew of private lawsuits brought by workers against firms.

Whether or not one accepts Thompson’s arguments, they are unique in the sense that they provide efficiency-based arguments for policies that, in general, economists see as inefficient. It is easy to follow Thompson’s intellectual development. He began by developing his theory of effective democracy, motivated by the Pascal quote above: democracies tend to produce efficient policies even if the constituents of that democracy have a hard time articulating why the policies are efficient. He then went out in search of empirical evidence that supported his view. In doing so, he would examine policies that economists often considered inefficient and try to understand why an effective democracy would adopt such a policy. In other words, he would ask: what characteristics would have to exist in order for an economist to consider the policy efficient? This is in sharp contrast to the typical way that economists examine policy, which is by starting with a basic model and determining whether the policy is efficient within that model.

I am writing about this because I believe that there is a critical element to Thompson’s analysis that should be incorporated into political economy – regardless of whether one believes that Thompson’s effective democracy theory is correct. The critical element is the presumption that there is some underlying reason that a particular policy emerged and that the policy might be an efficient democratic response. In other words, the working assumption when any policy or institution is analyzed is that the policy or institution was designed as an efficient response to some problem. Note that this doesn’t mean that economists should always conclude that the policies and institutions are efficient. The tools used by economists are the precise tools needed to determine whether something is indeed an efficient response to the problem. Thus, rather than start with a generic standard model and consider whether the policy is efficient in that context, perhaps economists should ask themselves: what would have to be true for this policy to be considered a constrained efficient response? In some cases this will be difficult to do – and that in and of itself might indicate the inefficiency of the policy. Other times, however, certain conditions might emerge that could justify a particular policy. These conditions would then generate testable hypotheses.

A Pascalian approach would hopefully lead to more humility among economists. For example, the minimum wage is a very popular policy despite the standard economic arguments against it. But why does the minimum wage exist? Even if one believes that the disemployment effects are small enough for the benefits to exceed the costs, this still raises the question of why the minimum wage is chosen over other attempts to help low-wage workers, such as the Earned Income Tax Credit. Economists typically explain away the existence of the minimum wage as a way for politicians to signal that they care about low-wage workers without bearing the cost. But this argument is rather weak. If there is a better alternative, wouldn’t the public eventually realize this? At the very least, wouldn’t the signal sent by the politician eventually be seen for exactly what it is? All too often, economists simply conclude that the general public just needs to learn more economics (how convenient a conclusion for economists to reach). My brief sketch of a theory of why the minimum wage exists (here) was an attempt to approach the topic from this Pascalian perspective.

Most recently, a seeming majority of economists (as well as financial and political pundits) expressed absolute shock at the decision of U.K. voters to leave the European Union. As a result, many have concluded that those who voted to leave did so because they don’t understand the costs (again, the argument is that the dullards just need to learn economics). Others have concluded that the decision to leave is just a manifestation of xenophobia. But perhaps economists are wrong about the costs associated with leaving. Or perhaps economists have miscalculated the long-run viability of the European experiment. Or perhaps individuals place values on things that are often left out of standard cost-benefit analysis because they’re hard to measure or hard to identify. Of course it is also possible that those who supported the decision to leave are indeed economically ignorant bigots. But even if this is the case, shouldn’t we fall back on this conclusion only after all other possible explanations have been exhausted?

A Pascalian view of political economy takes as given that we have imperfect knowledge of the complex nature of economic and social interactions. Studying the emergence of policies and institutions under the presumption that they were designed to efficiently deal with a particular problem forces economists to think hard about why the policies and institutions exist. Fortunately, the tools at any economist’s disposal are up to the task.

Rather than seeing ourselves as the wise elders passing down advice and judgment to those who fail to understand price theory, let’s be humble. Let’s take our craft seriously. And let’s realize that we might be somewhat ignorant of the complex nature through which democracies create policies and institutions.

On Revolutions

A paper that I wrote with Alexander Salter, entitled “A Theory of Why the Ruthless Revolt,” is now forthcoming in Economics & Politics. Here is the abstract:

We examine whether ruthless members of society are more likely to revolt against an existing government. The decision of whether to participate can be analyzed in the same way as the decision to exercise an option. We consider this decision when there are two groups in society: the ruthless and average citizens. We assume that the ruthless differ from the average citizens because they invest in fighting technology and therefore face a lower cost of participation. The participation decision then captures two important (and conflicting) incentives. The first is that, since participation is costly, there is value in waiting to participate. The second is that there is value in being the first-mover and capturing a greater share of the “spoils of war” if the revolution is successful. Our model generates the following implications. First, since participation is costly, there is some positive threshold for the net benefit. Second, if the ruthless do not have a significant cost advantage, then one cannot predict, a priori, that the ruthless lead the revolt. Third, when the ruthless have a significant cost advantage, they have a lower threshold and always enter the conflict first. Finally, existing regimes can delay revolution among one or both groups by increasing the cost of participation.
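The threshold logic in the abstract can be illustrated with a toy calculation. This is my own stylized sketch with made-up numbers, not the paper’s actual model: agents participate when the net benefit of revolting exceeds a cost-dependent threshold, so the group with the lower participation cost enters first.

```python
# Toy illustration (assumed numbers, not the paper's model): participation
# is like exercising an option, so the entry threshold must cover both the
# direct cost of fighting and the value of waiting.

def participation_threshold(cost, option_value=1.0):
    """Minimum net benefit at which an agent with this cost chooses to revolt."""
    return cost + option_value

# The ruthless invest in fighting technology, so they face a lower cost.
ruthless_threshold = participation_threshold(cost=2.0)
average_threshold = participation_threshold(cost=5.0)

benefit = 4.0
assert benefit > ruthless_threshold  # the ruthless enter the conflict...
assert benefit < average_threshold   # ...while average citizens still wait
```

With a significant cost advantage, the ruthless threshold sits strictly below that of average citizens, which is why they always enter first in that case; raising the cost of participation shifts both thresholds up and delays revolt.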

On What Econ 101 Actually Is (And Says)

There has been much recent discussion within the econo-blogosphere about the usefulness (or lack thereof) of “Econ 101.” This discussion seems to have started with Noah Smith’s Bloomberg column, in which he suggests that most of what you learn in Econ 101 is wrong. Mark Thoma then took this a bit further and argued that the problem with Econ 101 is ideological. In particular, Thoma argues that Econ 101 has a conservative bias. Both of these arguments rely on either a mischaracterization of Econ 101 or a really poor teaching of the subject.

Noah Smith’s dislike of Econ 101 seems to come from the discussion of the minimum wage. His basic argument is that Econ 101 says that the minimum wage increases unemployment. However, he argues that

That’s theory. Reality, it turns out, is very different. In the last two decades, empirical economists have looked at a large number of minimum wage hikes, and concluded that in most cases, the immediate effect on employment is very small.

This is a bizarre argument in a number of respects. First, Noah seems to move the goal posts. The theory is wrong because the magnitude of these effects is small? The prediction is about direction, not magnitude. Second, David Neumark and William Wascher’s survey of the literature suggests that there are indeed disemployment effects associated with the minimum wage and that these results are strongest when researchers have examined low-skilled workers.

Setting aside the evidence, let’s suppose that Noah’s assertion that the discussion of the minimum wage in Econ 101 is empirically invalid is correct. Even in this case, the idea that Econ 101 is fundamentally flawed is without basis. When I teach students about price controls, I am careful to note the difference between positive and normative statements. Many students tend to see price controls as a “bad” thing, but saying something is “bad” is a normative statement. In other words, “bad” implies that things should be different, and what “should be” is normative. The only positive (“what is”) statement that we can make about price controls is that they reduce efficiency. Whether or not this is a good or a bad thing depends on factors that are beyond an Econ 101 course, and I provide some examples of these factors.

Further, emphasizing the effects on efficiency and the difference between positive and normative statements gives students a more complete picture of both the effects of price controls and why they might exist. In fact, it is precisely this lesson about efficiency and allocation that is an essential part of what students should learn in Econ 101.

For example, it is common for economists to discuss rent control when they discuss price ceilings. When societies put a binding maximum price on rent, this creates excess demand. However, one would not test whether this is a useful description of reality by examining the effects of rent control on homelessness. On the contrary, economists emphasize that in the absence of the price mechanism, other allocation mechanisms must substitute for price. Non-price rationing comes in a variety of forms: quality reduction, nepotism, discrimination, etc.
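To make the excess demand concrete, here is a minimal sketch with assumed linear demand and supply schedules (the functional forms and numbers are illustrative, not from the text):

```python
# Illustrative rent-control example with assumed linear schedules.
# Demand: Qd = 100 - 2p; supply: Qs = 20 + 2p. The market clears at p = 20.

def qd(p):
    """Quantity of housing demanded at rent p."""
    return 100 - 2 * p

def qs(p):
    """Quantity of housing supplied at rent p."""
    return 20 + 2 * p

p_star = 20                      # market-clearing rent: qd(20) == qs(20) == 60
rent_cap = 10                    # binding ceiling set below the market rent
shortage = qd(rent_cap) - qs(rent_cap)
assert shortage == 40            # excess demand: 80 units wanted, 40 supplied
```

The shortage of 40 units is exactly what the price mechanism can no longer ration away, which is why the non-price mechanisms listed above (quality reduction, nepotism, discrimination) take over.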

Similar arguments can be made for the minimum wage. The basic point is that the minimum wage creates a scenario in which the quantity of labor demanded is less than the quantity of labor supplied. The ultimate outcome could come in a variety of forms. It could show up as the standard account of higher unemployment. Alternatively, it could simply reduce hours worked. Finally, in the case in which the firm faces some constraint on adjusting employment (at least in the short to medium term), there is another margin of adjustment. Standard Econ 101, for example, suggests that the nominal wage should equal the marginal revenue product of the worker. If the nominal wage is forced higher, there are two ways to increase the marginal revenue product: reduce labor or increase the price of the product.
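The two adjustment margins can be sketched numerically. This is an illustrative example only: the Cobb-Douglas production function and all parameter values are my own assumptions, chosen to show the textbook condition that the wage equals price times the marginal product of labor.

```python
# Illustrative sketch of w = P * MPL for a competitive firm.
# Assumed production function: Q = L**0.5, so MPL = 0.5 * L**(-0.5).

def labor_demand(w, p):
    """Solve w = p * 0.5 * L**(-0.5) for L: the firm's labor demand."""
    return (p / (2 * w)) ** 2

p = 10.0
l_before = labor_demand(w=5.0, p=p)   # employment at the initial wage
l_after = labor_demand(w=7.5, p=p)    # employment after a binding wage floor

# Margin 1: holding the output price fixed, a higher wage means less labor
# (less labor raises the marginal product until w = P * MPL holds again).
assert l_after < l_before

# Margin 2: the firm can keep employment at l_before only if the output
# price rises enough to restore the equality at the higher wage.
p_needed = 7.5 * 2 * l_before ** 0.5
assert p_needed > p
```

Either margin restores the equality; which one we observe in practice is the empirical question the next paragraph describes.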

The value of Econ 101 is the very process of thinking through these possible effects. What effect we actually observe is an empirical question, but it is of secondary importance to teaching students how to logically think through these sorts of examples.

Noah’s view of Econ 101, however, seems to come from his belief that economists want Econ 101 to be as simple as possible. And his argument is that this is misguided because simple often dispenses with the important. Mark Thoma, on the other hand, makes the argument that Econ 101 has a conservative bias:

The conservative bias in economics begins with the baseline theoretical model, what is often called “Economics 101.” This model of perfect competition describes a world that agrees with Republican ideology. In this model, there is no role for government intervention in the economy beyond setting the institutional structure for free markets to operate. There is nothing government can do to improve the ability of markets to provide the goods and services people desire at the lowest possible price, or to help markets respond to shocks.

I think this is both wrong about Econ 101 and a strange view of conservatism.

First, I am not a conservative. However, it seems to me that many conservatives like government intervention. A number of conservatives think that child tax credits are a good idea and that marriage should be encouraged through subsidization. For these sorts of things to be justified on economic grounds requires that they believe that children and marriage generate positive externalities for society. While it is true that Republicans have been particularly obstructionist, Republican does not equal conservative. In addition, obstructionism might not have as much to do with economic beliefs as it does with political motivations about who gets the credit, the lobbying of special interest groups, the desire to imperil the image of the competing party, etc. — regardless of the rhetoric.

Which brings me to my second point. If you are a student who only learned the perfectly competitive model in Econ 101, then you should politely ask for a refund. Econ 101 routinely includes the discussion of externalities, public goods, monopoly, oligopoly, etc. All of these topics address issues that the competitive market model is ill-equipped to explain. And it is hard to argue that any of these topics have any sort of ideological bias.