Monthly Archives: April 2012

Insiders Versus Outsiders

The Wall Street Journal reports:

So robust is the recovery in the U.S. auto industry that virtually all the union workers who were laid off by Detroit auto makers during the crisis years can have their jobs back, if they want them.

Even General Motors Co.’s Lordstown, Ohio, complex, long known for its money-losing small cars and its bad labor climate, is running 24 hours a day, with more than 4,000 workers churning out hot-selling Chevy Cruze compacts.

But here in Moraine, the GM assembly plant closed for good. Despite being one of GM’s most productive and cooperative factories, Moraine was closed following the company’s 2007 labor pact with the United Auto Workers union. Under a deal struck by the UAW during GM’s bankruptcy two years later, Moraine’s 2,500 laid-off workers were barred from transferring to other plants, locking them out of the industry’s rebound.

The trouble with Moraine: Its workers weren’t in the UAW.

[...]

Moraine’s workers got nothing in the bankruptcy deal. Their plant, which had closed months earlier, was ultimately sold to a developer. The workers were barred from transfers to UAW plants, as were thousands of others who had worked for Delphi Corp., GM’s former parts arm.

How Moraine’s interests diverged from the UAW’s is rooted in the plant’s history. A former appliance factory, it was converted to a pickup truck plant in 1981 after GM sold its Frigidaire brand. At the time, workers there elected to stick with their existing union, the International Union of Electrical Workers, rather than join the UAW. Over time, they generally accepted contracts negotiated by the UAW.

Through much of the 1990s, vehicles built at Moraine—models like the Chevy TrailBlazer and GMC Envoy—were big sellers and contributors to GM’s profits. In 2007, the plant won recognition as the nation’s most efficient midsize-SUV plant from the Harbour Report, a measure of manufacturing productivity.

“Moraine was always a good plant. The IUE was always a good union,” says Art Schwartz, former general director of labor for GM and now president of Michigan-based Labor & Economic Associates. “It’s a shame what happened to them.”

When GM began spiraling downward in 2007, as soaring fuel prices pummeled truck and SUV sales, IUE leaders decided to break ranks with the UAW and offer concessions to keep GM and Moraine afloat.

The IUE agreed to a two-tier wage system—long opposed by the UAW—in which new hires earn half as much as longtime workers. It also agreed to let the company unload its retiree health obligations into a union-run trust fund.

But as the crisis deepened, and GM and the UAW began negotiating a way out, Moraine’s workers had no seat at the table.

In the fall of 2007, GM promised work to dozens of UAW-represented plants in exchange for concessions on wages and health care, including some of the very changes offered by the IUE. By the time GM doled out enough work to satisfy the UAW, there was nothing left for Moraine.

The reference in the title is to this theory of the labor market.

Some Skepticism About Level Targeting

The conventional Market Monetarist view of monetary policy can be summarized by two points:

1. Target the forecast.
2. Target the level.

David Beckworth articulates the latter point as follows:

What these inflation critics miss is that the Fed could actually raise the level of aggregate nominal spending by a meaningful amount without jeopardizing long-run inflation expectations. This is possible if one uses a price level or a NGDP level target that provides a credible nominal anchor.

If this is correct, then it is a puzzle why the FOMC doesn't adopt a level target. However, it is not clear that the statement is true. If the Fed were to adopt a target for the price level or the level of nominal GDP, could it keep inflation expectations stable? The answer likely depends on the aggressiveness of monetary policy. In fact, if the Fed were to adopt the type of policy described by points 1 and 2 above, a more aggressive monetary policy would increase the potential for self-fulfilling expectations. As a result, there is reason to be skeptical of level targeting.

To illustrate this point, consider a simple model. First suppose that the price level is governed by a Wicksellian process:

P_t = \alpha E_t P_{t + 1} + \beta(r - i_t) + e_t

where P_t is the price level, r is the natural rate of interest, i_t is the market rate of interest, \beta and \alpha are parameters, E_t is the expectations operator, e_t is a stochastic shock, and all variables are expressed as logarithms.

In addition, suppose that monetary policy is governed by points 1 and 2 above such that:

i_t = \delta(E_t P_{t + 1} - P^*) + u_t

where P^* is the target for the price level, u_t is a monetary policy disturbance, and the variables are expressed as logarithms. This rule captures the market monetarist objectives of a level target and a forecast target.

To simplify the analysis, it is assumed above that the price level target and the natural rate of interest are constant over time. To simplify this even further, let’s re-write the equations above without these constant terms:

P_t = \alpha E_t P_{t + 1} - \beta i_t + e_t

i_t = \delta E_t P_{t + 1} + u_t

Now, substituting the second equation into the first, we can get a rational expectations difference equation for the price level:

P_t = (\alpha - \beta \delta)E_t P_{t + 1} + \epsilon_t

where \epsilon_t = -\beta u_t + e_t. A necessary condition for a unique rational expectations solution is that |(\alpha - \beta \delta)| < 1. Thus, the more aggressive the response of monetary policy to deviations of the expected price level from its target, the less likely this condition is to hold. In addition, if this condition is not satisfied, then the price level will be subject to self-fulfilling expectations. In other words, if the Fed responds too aggressively to the deviation of the expected price level from target, it is possible that price level expectations could become unanchored thereby generating self-fulfilling fluctuations.
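To see how this condition behaves, here is a minimal numeric check; the parameter values below (\alpha = 0.9, \beta = 0.5) are purely illustrative assumptions, not estimates:

```python
def re_coefficient(alpha, beta, delta):
    """Coefficient on E_t P_{t+1} in the reduced-form equation
    P_t = (alpha - beta*delta) E_t P_{t+1} + epsilon_t."""
    return alpha - beta * delta

# Illustrative (assumed) parameters: alpha = 0.9, beta = 0.5.
# A unique rational expectations solution requires |alpha - beta*delta| < 1.
for delta in (1.0, 3.0, 5.0, 8.0):
    c = re_coefficient(0.9, 0.5, delta)
    print(f"delta = {delta:3.1f}: coefficient = {c:+.2f}, unique = {abs(c) < 1}")
```

For these values, the condition holds for the moderate responses (delta = 1, 3) but fails once the response becomes sufficiently aggressive (delta = 5, 8), consistent with the point above.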

It is important to note that the conclusion above is not unique to an interest rate rule. We can re-cast the model as a modified Cagan model with rational expectations. Thus, re-write the first equation as:

P_t = \alpha E_t P_{t + 1} + \beta m_t + e_t

where m_t is the money supply. Now consider a rule for the money supply that is consistent with points 1 and 2 above (again, suppressing constants for simplicity):

m_t = -\theta E_t P_{t + 1} + u_t

Substituting the monetary policy rule into the first equation yields:

P_t = (\alpha - \beta \theta)E_t P_{t + 1} + \epsilon_t

where \epsilon_t = \beta u_t + e_t. Notice that the greater the responsiveness of monetary policy, the less likely it is that a unique rational expectations equilibrium exists.

So what is the source of this instability? Why is it that price level expectations can become unanchored? The reason is that an aggressive monetary policy is designed to return the price level to target rapidly, and rapid convergence can require large short-run swings in inflation. For example, if inflation expectations have been anchored at 2% for some time and convergence back to the price level target requires the inflation rate to rise to, say, 7%, the public might lose confidence in the central bank's ability to maintain price level stability.

The idea that rapid convergence is behind the results above can be illustrated by modifying our basic framework. Suppose that rather than focusing exclusively on the price level, the Fed were to also place emphasis on the expected rate of inflation in its monetary policy rule. Thus, the rule could be modified such that:

i_t = \delta(E_t P_{t + 1} - P^*) + \lambda (E_t P_{t + 1} - P_t) + u_t

Again, ignoring constants and substituting this rule into our equation for the price level, we arrive at:

P_t = {{(\alpha - \beta \delta - \beta \lambda)}\over{1 + \lambda}} E_t P_{t + 1} + \epsilon_t

Now the condition for a unique equilibrium is that |{{(\alpha - \beta \delta - \beta \lambda)}\over{1 + \lambda}}| < 1. Thus, for a given responsiveness of monetary policy to the expected price level, a greater responsiveness of the central bank to the expected rate of inflation can ensure a unique, rational expectations equilibrium.
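As a quick numeric check of the condition above (evaluating the coefficient expression exactly as written, with illustrative assumed parameter values), a sufficiently strong response to expected inflation restores uniqueness even when the response to the expected price level alone would violate |\alpha - \beta \delta| < 1:

```python
def coefficient(alpha, beta, delta, lam):
    """Coefficient on E_t P_{t+1} in the reduced-form equation above:
    (alpha - beta*delta - beta*lam) / (1 + lam)."""
    return (alpha - beta * delta - beta * lam) / (1.0 + lam)

# Illustrative (assumed) parameters: alpha = 0.9, beta = 0.5, delta = 5.0,
# a case in which |alpha - beta*delta| = 1.6 > 1 on its own.
for lam in (0.0, 1.0, 2.0, 5.0):
    c = coefficient(0.9, 0.5, 5.0, lam)
    print(f"lam = {lam:3.1f}: coefficient = {c:+.3f}, unique = {abs(c) < 1}")
```

With no inflation response (lam = 0) the coefficient is -1.6 and the condition fails; for lam = 2 or larger the coefficient moves inside the unit interval and uniqueness obtains.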

What this very simple model illustrates is that a monetary policy consistent with points 1 and 2 above does not necessarily keep price level expectations anchored. If monetary policy is too aggressive, price level expectations can become unanchored, resulting in the potential for self-fulfilling price level fluctuations. The lesson is that a policy consistent with points 1 and 2 must also be mindful of the rate of change of the variable whose forecast the Fed is targeting. A policy consistent with points 1 and 2 alone might not be sufficient to keep the price level anchored.

Avoiding a Ticket

A UCSD physicist was able to avoid paying the fine after receiving a citation for running a stop sign. How? He wrote this paper.

*Sigh*

Arnold Kling has a new article on modern macroeconomics. You may recall that I was somewhat critical of his previous piece on this topic. I think that this new piece is similarly off the mark, as he continues to attack straw men (or, perhaps, he is attacking certain people, of whom I can think of a few, who fit this description but are not representative of the discipline as a whole).

He begins with what he sees as evidence that there is some “new intuitive model” of the macroeconomy:

In the fall of 2011, the University of Chicago Business School, as part of its Initiative on Global Markets, created an expert panel of mainstream economists. The website states: “Our panel was chosen to include distinguished experts with a keen interest in public policy from the major areas of economics, to be geographically diverse, and to include Democrats, Republicans and Independents as well as older and younger scholars. The panel members are all senior faculty at the most elite research universities in the United States … a group with impeccable qualifications to speak on public policy matters.”

On March 6, 2012, the website reported on the results of asking this panel to agree or disagree with the following statement:

Because the U.S. Treasury bailed out and backstopped banks (by injecting equity into them in late 2008, and later committing to provide public capital to any banks that failed the stress tests and could not raise private capital), the U.S. unemployment rate was lower at the end of 2010 than it would have been without these measures.

The results were that 27 percent strongly agreed that the bailouts helped cushion the medium-term employment effects of the financial crisis, 51 percent agreed, 7 percent were uncertain, and the remaining 15 percent did not answer. Not a single panelist disagreed.

While I would grant that this represents the consensus today, one can imagine how the panel might have responded five years ago to a statement such as the following:

Monetary and fiscal policy tools are not sufficient for dealing with shocks to aggregate demand, such as asset market bubbles and crashes; these tools must instead be supplemented by other measures, such as injecting capital into banks and committing public capital to any banks that fail stress tests.

I would wager that many of the panelists, quite probably the majority, would have disagreed, even though this is basically the same statement as the one to which they voiced consensus agreement in March. For mainstream economists, the financial crisis has produced a new intuitive model of the economy which has yet to be articulated in any formal theory.

Does this mean that there is "a new intuitive model of the economy"? Despite Kling's claim, I see no evidence. There are two very distinct questions being asked here, and Kling seems to think they are exactly the same. They are not. The first asks whether a particular policy helped. The second asks whether the policy was optimal (or at the very least necessary). It is perfectly reasonable to suppose that there are individuals who would agree with the first statement and disagree with the second. Thus, Kling's main thesis is incorrect.

Kling continues:

Thus, the mainstream view is that the financial crisis put the economy in such a deep hole that neither bank bailouts nor sizable fiscal and monetary policy could dig us out. However, this “deep hole” story is simply a way of reconciling a view that stimulus works with the fact that economic performance remains weak, particularly in terms of employment. There is little in the way of analysis that directly explains how the financial crisis put us into the “deep hole.”

Who is in the mainstream? This certainly doesn't sound like the mainstream to me. In fact, Jim Bullard's recent paper, "Death of a Theory," hammers home the point about what was mainstream prior to the crisis and how that consensus is beginning to re-emerge. (I'm not going to re-hash the paper here, but I have discussed it previously.)

In addition, Kling claims that there is little in the way of direct explanation of how the financial crisis put us in a deep hole. Where? There is an abundance of literature on banking and financial intermediation that existed before the crisis and that can help us understand what has happened and is still going on. I don't have time to write a survey in a blog post, but here are a few suggestions. Start with the work that Gary Gorton has done. His recent book summarizes the events of the financial crisis in the context of the work that he and others have been doing over the last couple of decades. The initial work of Townsend and Williamson on costly state verification models, and the extension of business cycle models to include these types of frictions by Carlstrom and Fuerst as well as Bernanke, Gertler, and Gilchrist, emphasizes that endogenous changes in net worth amplify economic fluctuations. Franklin Allen and Douglas Gale wrote a book entitled Understanding Financial Crises prior to the crisis. In addition, see the work of Kiyotaki and Moore on liquidity.

Of course, one can quarrel with any of the work cited above, but one cannot claim that such work doesn’t exist!

Kling continues:

Earlier this year, Bloomberg News had an article on the large number of graduates of the MIT economics department in charge of central banks or holding other important posts in Europe and elsewhere. The article delved into the outlook that MIT economists tend to share on the relationship between theory and policy. The article quotes from an essay by Paul Krugman.

The “MIT style,” according to Nobel laureate Paul Krugman, who received a doctorate from the university in 1977 and who is now a New York Times newspaper columnist, is the “use of small models applied to real problems, blending real-world observation and a little mathematics to cut through to the core of the issue.”

The article also describes the seminal role played by Paul Samuelson in the MIT economics department. Samuelson steered the department, and indeed the whole economics profession, toward using modes of analysis and discourse laden with mathematics, making it appear to resemble physics. Indeed, Samuelson was often accused of suffering from “physics envy.”

As with physics, the goal of many macroeconomists has been to predict and control economic phenomena on the basis of a minimal set of equations, the “small models” referred to by Krugman above. What I call the “modernist” view in economics is the view that small models give economists and policymakers the tools with which to explain, predict, and control economic growth and employment.

There are two points to make here. First, economists in general think that using mathematical tools helps to generate useful insights and explain economic events. They are right, and there is nothing wrong with this practice. Math is a device that keeps our logic intact. Second, who thinks that these mathematical models exist to allow us to "control economic growth and employment"? I don't see such statements in journal articles. Journal articles usually contain statements like the following: "This paper demonstrates that X… Possible policy implications are …" I don't know of any paper that purports to show how to "control economic growth and employment."

Finally, Kling writes:

Now that we are experiencing another major downturn in the economy, the mainstream modernists will be doing another round of patching. (Note, however, that in part two of this series I described the “stubborn Keynesians” and “stubborn monetarists,” who would instead revert to the views they held prior to the round of patching that took place after the 1970s fiasco.)

There are some blogosphere-type Keynesians who think that we should go back to IS-LM. I don't see the profession moving that way. In addition, "stubborn monetarists" don't want to go back to the pre-rational expectations days of policy. Whether they are New Monetarists, Market Monetarists, or some brand of Old Monetarists, one would be hard pressed to find a member of these groups who wants to throw out the last 40 years of macroeconomics, and those who do shouldn't be taken seriously.

In short, Kling’s article seems to be an attack on a straw man. It seems that he is attacking some kind of caricature of what I would call “blogosphere Keynesianism.” But the blogosphere is a non-representative sample of the profession at large. And so are the views that Kling is attacking.

More On Sticky Prices

Micro-level evidence suggests that prices and wages are slow to adjust. This imperfect adjustment is a central feature of the monetary transmission mechanism in New Keynesian models. For example, given that prices are slow to adjust, a reduction in the nominal interest rate leads to a reduction in the real interest rate and thus (in the baseline model) to an increase in consumption and output. Advocates of the NK framework often suggest that it can better explain the data than models with flexible prices. This, coupled with ample evidence of sticky prices at the micro level, is often used to support the use of the NK framework.

But is micro-level evidence sufficient to support this transmission mechanism of monetary policy? No. If the NK transmission mechanism is correct, then we have to be able to explain the stickiness of the price level, not simply the stickiness of individual prices. This position should not be controversial, and yet I think that it is under-appreciated and perhaps not well understood.

In his paper, “The Economics of Information”, George Stigler discussed price dispersion. One of Stigler’s assertions in the paper was that price dispersion results from “ignorance in the market.” In other words, as a consumer, I don’t know the price charged by every seller and therefore this opens the door to price dispersion — even among homogeneous goods. This insight was extended by Burdett and Judd, who showed that if one knows the price of every seller in the market, then the price will converge to the competitive price. If one knows only the price of one seller, then the price charged by each firm will converge to the monopoly price. If, however, the consumer knows the price charged by more than one firm with a positive probability, there is equilibrium price dispersion.

So why does this matter and what does it have to do with sticky prices? Well, as shown in a recent paper by Head, Liu, Menzio, and Wright, a model with equilibrium price dispersion can generate the appearance of sticky prices even though money is neutral. To understand their insight, consider the following example. Suppose that there is a distribution of prices, each of which yields the firm the same maximum profit; firms charging a lower price make up for it with a larger volume of sales. In addition, suppose that this distribution of prices has support [p1, p2]. Since money is neutral, an increase in the money supply causes the distribution of prices to shift. Thus, imagine that an increase in the money supply causes the distribution of prices to shift such that the new distribution has support [p3, p4], where p1 < p3 < p2. In that case, the only firms that have to adjust their price following the change in the money supply are the ones charging a price less than p3.
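A minimal simulation of this logic, assuming a uniform distribution of prices and an illustrative upward shift of 0.3 in the support (both assumptions chosen for the example, not taken from the paper), shows that only the firms left below the new support must reprice even though money is neutral:

```python
import random

def adjustment_fraction(n_firms, low, high, shift, seed=0):
    """Fraction of firms forced to change price when the support of the
    equilibrium price distribution shifts from [low, high] up to
    [low + shift, high + shift]: only firms whose old price falls below
    the new lower bound must adjust."""
    rng = random.Random(seed)
    prices = [rng.uniform(low, high) for _ in range(n_firms)]
    new_low = low + shift
    return sum(1 for p in prices if p < new_low) / n_firms

# With prices uniform on [1.0, 2.0] and a shift of 0.3, roughly 30% of
# firms must adjust; the remaining firms keep their old posted price
# even though the money supply has changed.
frac = adjustment_fraction(100_000, 1.0, 2.0, 0.3)
print(f"fraction of firms that must adjust: {frac:.2f}")
```

A measured adjustment frequency well below one would look like "sticky prices" in micro data, which is exactly the point of the example.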

In the scenario described above, prices appear to be sticky: only a fraction of firms adjust their prices. However, money is neutral. The implication is not necessarily that sticky prices are unimportant, but rather that the observation of "sticky prices" is not sufficient evidence for the transmission mechanism of monetary policy advocated by New Keynesians. This conclusion is also not new. Caplin and Spulber and Golosov and Lucas, for example, obtain similar results.

What then is needed to reconcile the NK transmission mechanism? It would seem that it must be true that the price level rather than individual prices must be sticky. The NK model explicitly assumes this by using a representative firm in the case of Rotemberg pricing or by simply assuming that the price level is slow to adjust because only a fraction of firms can change their prices in the case of Calvo pricing.

Sticky prices are not a distinctly Keynesian idea and are not confined to New Keynesian analysis. The notion of sticky prices was evident in the writings of classical economists. This post is not meant to suggest that sticky prices are unimportant for the monetary transmission mechanism. Rather, the point is to emphasize that micro-level evidence is insufficient to argue that sticky prices have macroeconomic implications.