Monthly Archives: October 2009

Assessing the Stimulus

The current administration has unveiled an entirely new metric for measuring the success of stimulus spending. Rather than claim credit for “creating” jobs, it has focused on jobs that were “created or saved” by the stimulus. This, of course, is a preposterous notion. How do we know that a job was saved? Transparent reporting from the government is welcome, but transparency is only part of the problem. What, precisely, is the definition of a “saved” job? This might seem a bit facetious, but bear with me.

Suppose that a municipality receives money to pave a road. It hires a private firm to do the job. The firm was planning on laying off (we’ll say) 10 workers. However, given the new job, the firm keeps those 10 men on payroll. This seems pretty straightforward. It’s not. These 10 workers might be kept on the payroll until the completion of this job and let go thereafter. Does this still count as a job saved? How long does a person have to remain employed for it to be considered a job “saved”? As near as I can tell, this doesn’t factor into the decision-making.

Consider another example. Suppose that a state or municipality announces that they are going to lay off teachers or police officers. If stimulus funds keep these individuals employed, this is considered a job that was saved. However, how do we know that state and local governments weren’t, at the very least, exaggerating the number of individuals that were going to lose their jobs in a ploy for more stimulus money?

Of course, all of this ignores the financing. The government doesn’t have money of its own; it must borrow and tax in order to spend. Thus, any metric of job creation measures gross job creation, but what we are really concerned with is net job creation.

With that being said, the estimates released for jobs “saved or created” range from 640,329 to 1 million. Dividing the roughly $160 billion in stimulus funds spent to date (the outlay figure implied by the reported range) by those estimates means that the stimulus has cost between $160,000 and $250,000 per job. (Jared Bernstein calls that “calculator abuse.”)
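For anyone who wants to check the division, here is a quick sketch. The $160 billion outlay figure is an assumption backed out of the reported range, not an official number I can verify here:

```python
# Back-of-the-envelope cost per job "saved or created". The ~$160 billion
# outlay figure is implied by the reported range and is an assumption here.
outlays = 160e9

for jobs in (640_329, 1_000_000):
    print(f"{jobs:>9,} jobs -> ${outlays / jobs:,.0f} per job")
```

Running this gives roughly $250,000 per job at the low estimate and $160,000 at the high one, consistent with the figures above.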

White House officials have been quick to mention that these numbers do not include jobs that were saved or created through the temporary tax cuts. As I have mentioned numerous times on the blog, temporary tax cuts don’t work. John Taylor has documented this fact for the last two rebate checks. Thus, it would seem that including the cost of these tax cuts would actually inflate the cost per job.

Ultimately, I am not entirely sure what we are to get from the “jobs saved or created” metric. There doesn’t seem to be any true objective way to quantify such a thing. Regardless, based on the data on jobs and growth up to this point, one can hardly conclude that the stimulus has been successful.

UPDATE: John Taylor breaks down the GDP numbers and concludes that the “stimulus did not fuel GDP growth.”

Casey Mulligan writes that he is “still waiting for mistakes that underestimate the potency of the stimulus.”

The Future of Too Big To Fail

As we emerge from the financial crisis, it is important to develop a framework for dealing with failing institutions. In particular, the nature of the doctrine of “too big to fail” must be addressed and re-examined. Recently, John Taylor and Larry White have spoken out about the need for a rule of law rather than a discretionary authority. Their comments and my thoughts are below the fold.


Prisoner’s Dilemma

The prisoner’s dilemma illustrated via YouTube:

[Embedded video]

HT: John Taylor

UPDATE: Scott Sumner writes, “I’ve never been more proud to be human.”

Taylor

Why didn’t anyone tell me that John Taylor is blogging?

In any event, Taylor does some of the best work in the profession — thoughtful, careful, and persuasive. Definitely check out the blog.

There is No Such Thing as ‘Clutch’

Those who know me personally can attest to the fact that I am a big sports fan. More importantly, I am a sports fan who pays close attention to statistics and what those statistics mean — especially in baseball. One of my biggest pet peeves as a sports fan is when an announcer refers to a player as “clutch”. The label usually comes right after noting that a particular batter is 6-for-7 with the bases loaded (or with runners in scoring position, or . . . ), which ignores the relevance of sample size. It was with great pleasure that I discovered J.C. Bradbury’s recent post on clutch hitting. Here is an overview:

I used probit models to estimate the likelihood that a player would get a hit (1 = hit; 0 = otherwise), or get on base (1 = hit, walk, or hbp; 0 = otherwise) controlling for the player’s seasonal performance in that area (AVG or OBP), RISP 1989–91 performance in that area, whether the platoon advantage was in effect (1 = platoon; 0 = otherwise), and the pitcher’s ability in that area. To test hitting power, I used the count regression negative binomial method to estimate the expected number of total bases during the plate appearance and used his RISP SLG 1989–1991 as a proxy for clutch skill in this area.

[…]

In samples of this size, statistical significance isn’t difficult to achieve; therefore, it isn’t surprising that in all but two instances the variables are significant. The two that are insignificant are the past RISP performance in batting average and slugging average. Thus, clutch ability doesn’t appear to be strong here.

However, the estimate of a clutch effect is statistically significant for getting on base. Is this evidence for clutch ability? Well, let’s interpret the coefficient. Every one-unit increase in RISP OBP is associated with a 0.00018 increase in the likelihood of getting on base; thus, a player increasing his RISP OBP by 0.010 (10 OBP points) increases his on-base probability by 0.0000018. For practical purposes, there is no effect.
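To make Bradbury’s interpretation step concrete, here is a minimal sketch of the same kind of probit exercise on simulated data. Every variable name, distribution, and coefficient below is my own illustrative assumption, not Bradbury’s data or code; the point is only to show how a statistically significant marginal effect can be practically negligible:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000  # simulated plate appearances with runners in scoring position

# Stand-ins for Bradbury's controls (names and distributions are my own
# illustration): seasonal OBP, past RISP OBP (the candidate "clutch"
# skill), platoon advantage, and pitcher quality.
season_obp = rng.normal(0.340, 0.030, n)
risp_obp = rng.normal(0.340, 0.040, n)
platoon = rng.integers(0, 2, n)
pitcher_obp = rng.normal(0.330, 0.025, n)

# Data-generating process with NO clutch effect: the outcome depends on
# overall skill, the platoon advantage, and the pitcher, but not on past
# RISP performance.
latent = -2.9 + 4.0 * season_obp + 0.10 * platoon + 3.0 * pitcher_obp
on_base = (rng.normal(size=n) < latent).astype(int)

X = sm.add_constant(np.column_stack([season_obp, risp_obp, platoon, pitcher_obp]))
fit = sm.Probit(on_base, X).fit(disp=False)
print(fit.get_margeff().summary())

# Bradbury's interpretation step in miniature: even a statistically
# significant coefficient can be practically nil. A 0.00018 marginal
# effect per unit of RISP OBP means a 10-point (0.010) improvement moves
# the on-base probability by only:
print(0.010 * 0.00018)  # 1.8e-06
```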

There is no such thing as “clutch”.

Dow 10,000!

Again.

Measurement Before Theory, Part 3: A Further Reply to Arnold Kling

Arnold Kling writes:

What happened one year ago that caused the economy to tank over the winter?

(a) a credit crunch. Banks would not lend to one another, and they cut back on credit to businesses, which in turn caused the contraction in economic activity.

(b) a recalculation. People found out that their housing wealth was lower, so they spent less. The home construction, real estate brokerage, mortgage lending, and securitization industries found out that their services were in much less demand than they had been, and they cut back. Finally, Ben Bernanke and Henry Paulson shouted “The Great Depression might come back!” in this crowded theater, and everybody ran for the exits. For example, law firms started telling new hires to go do something else for a while.

(c) people woke up to find that the elves and helicopters had left less money lying around.

(d) people woke up to find that the Fed had lowered its de facto inflation target.

The economists I consider to be most sensible are pushing some combination of (a) and (b). I differ from the consensus in that I push (b) exclusively and minimize (a). Lots of folks–defenders of Bernanke in particular–push (a) more than (b). What Scott Sumner and David Beckworth wish to defend is (c) and/or (d).

If given the choice between these four descriptions, I would also choose (a) or (b), but mostly because (c) and (d) are misguided caricatures of what David Beckworth, Scott Sumner, and I have been discussing. Those who believe that monetary policy was tight do not believe that there was “less money lying around.” While David and Scott might disagree on method, I think that this phenomenon is best understood through the quantity theory of money.

First, it is important to understand what the quantity theory is not. Arnold Kling has admitted that he might be taking his “anti-monetarism to extremes” and that his motivation is to free many of us from old habits. For example, he writes:

Another habit I want to try to break is the habit of thinking that nominal income is proportional to money. At any given moment, one can take the ratio of PY/M and say “there’s your proportion for you,” but you can do that if you define M as mackerel as easily as if you define M as the monetary base.

Indeed, I am in agreement that it is meaningless to discuss variables based on their proportion to nominal income unless there is good reason. In fact, Friedman and Schwartz raised this very point in “Money and Business Cycles” (p. 213 in the reprinted version in Friedman, 1969):

The stock of money displays a consistent cyclical behavior which is closely related to the cyclical behavior of the economy at large. This much the factual evidence summarized above puts beyond reasonable doubt.

That evidence alone is much less decisive about the direction of influence . . . It might be, so far as we know, that one could marshal a similar body of evidence demonstrating that the production of dressmakers’ pins has displayed over the past nine decades a regular cyclical pattern; that the pin pattern reaches a peak well before the reference peak and a trough well before the reference trough; that its amplitude is highly correlated with the amplitude of the movements in general business.

[…]

Most economists would be willing to dismiss out of hand the pin theory even on such evidence; most economists would take seriously the monetary theory even on much less evidence, which is not by any means the same as saying that they would be persuaded by the evidence. Whence the difference? Primarily, the difference is that we have other kinds of evidence.

What Kling really seems to be suggesting is that not all changes in nominal income (and perhaps prices) can be explained by changes in the money supply. Again, this is not something that a quantity theorist would argue with. In his New Palgrave article on the quantity theory, Milton Friedman writes:

Changes in prices and nominal income can be produced either by changes in the real balances that people wish to hold or by changes in the nominal balances available for them to hold. Indeed, it is a tautology, summarized in the famous quantity equations, that all changes in nominal income can be attributed to one or the other . . . The quantity theory is not that tautology.

This provides the perfect segue into describing what the quantity theory is and why it is important to understanding the recession. (It is important to note that this is not how the quantity theory has traditionally been described. The quantity theory has come in a variety of forms, and what follows is broadly consistent with the quantity theory.)

Recall the equation of exchange:

MV = Py

where M is money, V is velocity, P is the price level, and y is real output. The money supply itself, however, is a multiple of the monetary base (the currency in circulation plus bank reserves). Thus, we can rewrite the equation of exchange as:

mBV = Py

where m is the money multiplier, B is the monetary base, and V remains velocity as described above. This distinction is important because it stresses the interaction of the money multiplier and the monetary base. The money multiplier is a function of the reserve-to-deposit ratio, r, and the currency-to-deposit ratio, c:

m = m(r, c)

where m_r(.), m_c(.) < 0 (m_i denotes the partial derivative of m with respect to i). Further, it is important to note that given M = mB, a decline in the money multiplier would reduce broader money aggregates through multiple deposit destruction.
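To see the mechanics, consider the textbook multiplier m(r, c) = (1 + c)/(r + c), one standard functional form that satisfies the sign restrictions above. A minimal sketch with illustrative (not historical) numbers:

```python
# Textbook money multiplier m(r, c) = (1 + c) / (r + c), where
# r = reserve-to-deposit ratio and c = currency-to-deposit ratio.
# Both partial derivatives are negative for 0 < r < 1.

def multiplier(r: float, c: float) -> float:
    return (1 + c) / (r + c)

# Illustrative (not historical) values: a rise in either ratio lowers m,
# and with it the broad money stock M = m * B for a given base B.
base = 900e9  # monetary base in dollars (assumed)

for r, c in [(0.05, 0.10), (0.15, 0.10), (0.05, 0.20)]:
    m = multiplier(r, c)
    print(f"r={r:.2f}, c={c:.2f} -> m={m:.2f}, M=${m * base / 1e12:.2f}T")
```

Raising either r or c pulls the multiplier down, which is exactly the mechanism at work in the discussion that follows.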

Given this information, it would now be prudent to discuss the recession in light of this framework. The recession can largely be viewed in two stages. The first stage ran from Dec. 2007 to around the end of August 2008. This first stage was somewhat mild, or at least on par with a typical recession. The second stage, however, began in late August and early September 2008. There are two major events that correspond with this change: the collapse of Lehman Brothers and the Bernanke-Paulson testimony and TARP debacle. (John Taylor's research suggests the latter was more important to understanding the crisis.)

There are two effects that followed. First, the currency component began to rise considerably (note that this is in percentage change from the previous year):

[Figure: currency component, year-over-year percentage change]

Second, reserves increased substantially:

[Figure: total and excess bank reserves]

The sharp increase in excess reserves shown above can be attributed to increased uncertainty and to the Fed’s decision to begin paying interest on excess reserves in October.

These increases in currency and reserves relative to deposits all serve to reduce the money multiplier, m, as well as lead to reductions in velocity as spending falls. David Beckworth has depicted this phenomenon quite well graphically.

This decline in m and V should lead to a sharp decline in nominal spending. Thus, when we talk about tight money, we are not referring to a sudden decline in the amount of money lying around, but rather to the failure of policy to answer the decline in m with a corresponding increase in B (I have noted that the Fed has performed relatively admirably in this case — at least in comparison to history). In fact, this is an insight that can be gathered from Friedman and Schwartz’s A Monetary History of the United States: in their account of the Depression, the money supply fell because the Fed did not increase the monetary base to offset changes in c (and, in some cases, r).
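A back-of-the-envelope sketch of the offset argument, with all numbers assumed purely for illustration: if m and V fall, keeping nominal spending Py = mBV on its prior path requires a proportional increase in B.

```python
# If nominal spending is Py = m * B * V, a fall in the multiplier m (or in
# velocity V) must be met with a rise in the base B to keep spending on
# its prior path. All numbers below are assumptions for illustration.

def nominal_spending(m: float, base: float, velocity: float) -> float:
    return m * base * velocity

m0, B0, V0 = 8.0, 0.9e12, 2.0   # pre-crisis multiplier, base, velocity (assumed)
m1, V1 = 5.0, 1.8               # post-crisis fall in m and V (assumed)

target = nominal_spending(m0, B0, V0)  # spending path to maintain
B_needed = target / (m1 * V1)          # base that restores the target

print(f"target nominal spending: ${target/1e12:.2f}T")
print(f"required base: ${B_needed/1e12:.2f}T (vs ${B0/1e12:.2f}T before)")
```

With these assumed numbers, the base would have to nearly double just to hold nominal spending constant, which is the sense in which a passive central bank is “tight.”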

What’s more, it is not necessarily that the Fed lowered its de facto inflation target, but rather that tight monetary policy created expectations of lower inflation (and lower nominal spending). Given that there is some endogeneity with respect to inflation expectations, there are other ways to measure the stance of monetary policy. David Beckworth has broken out the VARs again to analyze whether monetary policy can explain the decline in nominal spending. He shows that monetary policy explains a quite sizable portion of the decline (the precise share depends on the model specification and the measure used for the stance of monetary policy).
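For readers curious what such an exercise looks like mechanically, here is a minimal VAR sketch on simulated data. This is not Beckworth’s specification; the series, lag choice, and data-generating process are assumptions for illustration only:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(42)
T = 200

# Toy quarterly series: a monetary policy indicator and nominal spending
# growth, with policy shocks feeding into spending one quarter later.
policy = np.zeros(T)
spending = np.zeros(T)
for t in range(1, T):
    policy[t] = 0.7 * policy[t - 1] + rng.normal(scale=0.5)
    spending[t] = 0.5 * spending[t - 1] + 0.4 * policy[t - 1] + rng.normal(scale=0.5)

data = pd.DataFrame({"policy": policy, "spending": spending})
fit = VAR(data).fit(maxlags=4, ic="aic")

# Forecast-error variance decomposition: the share of the variation in
# spending attributable to policy shocks at horizons 1 through 10.
fevd = fit.fevd(10)
print(fevd.summary())
```

The variance decomposition is the kind of output behind claims that policy “explains” some share of the decline in spending; the share reported is, as noted, sensitive to the specification.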

Taken together, I would think that the evidence and theory presented here are at least somewhat compelling, and certainly more so than (c) and (d) as described above.