Monthly Archives: January 2010

The Fed’s Exit Strategy

Allan Meltzer had an op-ed in the Wall Street Journal the other day in which he argued that the Fed’s exit strategy will fail. Here is an excerpt:

Federal Reserve Chairman Ben Bernanke has explained his exit strategy to prevent future inflation. The Fed recently began to pay interest to banks on the reserves they hold in their vaults. Using this new tool, it claims the ability to get banks to keep the money instead of lending it out, thus containing the money supply and inflation.

I don’t believe this will work, and no one else should.

Meltzer and I likely disagree on when the exit strategy needs to begin; however, we agree that the strategy will not work. The banking system is currently flooded with excess reserves — over $1 trillion. Historically, that figure has been around $1 billion or less. Thus far this has not led to inflation because of the declines in velocity and the money multiplier (for more on this see here and here). It is only a matter of time before confidence re-emerges and banks start lending out these excess reserves.
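The mechanics behind that claim can be sketched with the equation of exchange and a standard money-multiplier relation (the notation here is the textbook version, not anything specific to Meltzer's argument):

```latex
MV = PY, \qquad M = mB, \qquad m = \frac{1 + c}{c + r + e}
```

where $B$ is the monetary base, $c$ the currency-deposit ratio, $r$ the required-reserve ratio, and $e$ the excess-reserve ratio. A surge in $e$ pushes the multiplier $m$ down, and a fall in velocity $V$ works the same way, so a large increase in $B$ need not show up in prices. If banks begin lending, $e$ falls, $m$ rises, and the inflationary pressure described above emerges.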

When this happens, monetary policy will have to respond by draining these reserves from the system. It is possible to do so in one of two ways. The first way is to sell bonds through open market operations. The second way is to raise the interest rate that the Fed currently pays on reserves. While I have no doubt that the latter is possible, it is a much trickier assignment and only tackles the problem indirectly. If the problem is with reserves, the Fed should tackle the problem directly.

Hayek v. Keynes Rap

More here.

Stimulus: Worse Than Imagined

I previously highlighted the paper by Cogan, Cwik, Taylor, and Wieland that outlines the differences in Old Keynesian and New Keynesian multipliers. However, it seems that the differences might be worse than imagined.

Harold Uhlig’s presentation from the Atlanta Fed conference on fiscal policy explains the effects of stimulus when one assumes the presence of distortionary labor taxation. (He also examines the implications of rule-of-thumb consumers and the zero lower bound.) Here is what he finds:

In the context of this model, the impact of a government spending stimulus …

  • … is very sensitive to assumptions about taxes.
  • … on output is rarely larger than the government spending increase.
  • … is a comparatively larger output loss later on, due to the increased tax burden.


  • Consumption declines.
  • Rules-of-thumb agents do not change the results much. Consumption may be feebly positive, the increase in output is somewhat larger.
  • Binding zero lower bound: does not change the results much, if temporary, and is extreme and fragile, if longer.

Similar to the paper by Cogan et al., the baseline framework that Uhlig uses is the Smets-Wouters model. I cannot find a copy of the paper online; nevertheless, the link above is to the presentation from the conference and provides substantial information for understanding the framework and assumptions.

Taylor Responds to Bernanke

Bernanke’s speech at the AEA meetings suggested that monetary policy was not to blame for the housing boom, based on his version of the Taylor rule. Today, John Taylor responds to Bernanke on the WSJ opinion page:

In his speech, Mr. Bernanke’s main response to this critique was to propose alternatives to the standard Taylor rule—and then to use the alternatives to rationalize the Fed’s policy in 2002-2005.

In one alternative, which addresses what he describes as his “most significant concern regarding the use of the standard Taylor rule,” he put the Fed’s forecasts of future inflation into the Taylor rule rather than actual measured inflation. Because the Fed’s inflation forecasts were lower than current inflation during this period, this alternative obviously gives a lower target interest rate and seems to justify the Fed’s decisions at the time.

There are several problems with this procedure. First, the Fed’s forecasts of inflation were too low. Inflation increased rather than decreased in 2002-2005. Second, as shown by economists Athanasios Orphanides and Volker Wieland, who previously served on the Federal Reserve Board staff, if one uses the average of private sector inflation forecasts rather than the Fed’s forecasts, the interest rate would still have been judged as too low for too long.

Third, Mr. Bernanke cites no empirical evidence that his alternative to the Taylor rule improves central-bank performance. He mentions that forecasts avoid overreacting to temporary movements in inflation—but so does the simple averaging of broad price indices as in the Taylor rule. Indeed, his alternative is not well defined because one does not know whose forecasts to use. Moreover, the appropriate response to an increase in actual inflation would be different from the appropriate response to an increase in forecast inflation.

The entire piece is a must-read, but I would like to focus attention on Bernanke’s use of the Taylor rule. What is troubling about the recent debate and framing it in terms of the Taylor rule is that it seems that everyone has their own definition. Over time, many economists have statistically fit the parameters of the Taylor rule in order to estimate the Fed’s reaction function. However, we have to be careful about what these estimates actually mean. These types of estimates are certainly useful for policy comparisons and other positive analyses. However, they are not useful for drawing normative conclusions because the fitted parameters incorporate policy mistakes in the estimation period.
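To see what such an estimation involves, here is a minimal sketch of fitting a reaction function by OLS. The data are synthetic and the coefficients illustrative; this is not the Fed's actual reaction function, only the mechanics of the exercise described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quarterly data: inflation and the output gap, in percent.
n = 120
inflation = 2.0 + rng.normal(0.0, 1.0, n)
output_gap = rng.normal(0.0, 2.0, n)

# Generate a funds rate from a known Taylor-type rule plus noise,
# so we can check that the regression recovers the coefficients.
true_const, true_b_pi, true_b_y = 1.0, 1.5, 0.5
funds_rate = (true_const
              + true_b_pi * inflation
              + true_b_y * output_gap
              + rng.normal(0.0, 0.25, n))

# OLS estimate of the reaction function i_t = a + b_pi*pi_t + b_y*y_t
X = np.column_stack([np.ones(n), inflation, output_gap])
coef, *_ = np.linalg.lstsq(X, funds_rate, rcond=None)
a_hat, b_pi_hat, b_y_hat = coef
print(f"constant={a_hat:.2f}, inflation response={b_pi_hat:.2f}, "
      f"output-gap response={b_y_hat:.2f}")
```

The point of the exercise is the caveat in the text: the fitted coefficients summarize whatever the central bank actually did over the sample, mistakes included, so they describe behavior rather than prescribe good policy.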

As Taylor notes, Bernanke commits a related error by plotting the interest rate implied by the Taylor rule using the Fed’s forecast of inflation. Taylor’s rule is not based on the inflation forecast, but rather on the actual inflation rate.
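For reference, Taylor's original (1993) rule sets the funds rate from realized inflation and the output gap; the forecast-based variant Bernanke uses swaps in expected inflation (the second expression is my shorthand for that substitution, not Bernanke's notation):

```latex
i_t = \pi_t + 0.5\,y_t + 0.5\,(\pi_t - 2) + 2
\qquad\text{vs.}\qquad
i_t = E_t[\pi_{t+k}] + 0.5\,y_t + 0.5\,(E_t[\pi_{t+k}] - 2) + 2
```

If the forecast $E_t[\pi_{t+k}]$ runs systematically below realized $\pi_t$, as it did in 2002-2005, the second rule mechanically prescribes a lower funds rate.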

Why is this important?

In a working paper, I make the case that the Federal Reserve’s policy during the Great Inflation was the result of an incorrect doctrine. Specifically, the Federal Reserve was convinced, in Arthur Burns’ words, that the rules of economics had changed and that inflation was driven by cost-push forces. I argue that this caused the Fed to misinterpret positive aggregate demand shocks as negative aggregate supply shocks. What’s more, this view implies that inflation forecasts based on the Phillips curve would result in systematically lower predictions than the actual value. This is, in fact, what the data show about the Fed’s forecasts. As a result, this misplaced view ultimately led the Fed to have a much stronger response to forecasts of inflation than to the actual values observed ex post.

Bernanke’s analysis is similarly misguided. What good is it to use the forecast of inflation in plotting the Taylor rule if that forecast is systematically lower than the actual rate observed ex post?

They Giveth and They Taketh Away

I have been a major cheerleader for the Bloomberg podcasts over the past couple of years. I think Tom Keene is one of the best interviewers around. He is knowledgeable about not only financial markets but also economic theory. Thus, I was disappointed to learn that Keene’s interviews will no longer be available on iTunes without an annual subscription to the new “Tom Keene On Demand”.

The Federal Funds Rate

Michael Belongia and Melvin Hinich have an interesting working paper entitled, “The evolving role and definition of the federal funds rate in the conduct of U.S. monetary policy” (non-gated link). Here is the abstract:

The federal funds rate has become known conventionally as the Federal Reserve’s “instrument” of policy. This fails to recognize that the funds rate is an endogenously determined price that can be influenced by shifts in the demand for reserves or other conditions in credit markets; indeed, recognizing just this possibility, Bernanke and Blinder (1992) chose to label the funds rate as an indicator variable, one that merely signaled the thrust of monetary policy actions. Because the Fed’s ability to control the funds rate and the issue of endogeneity is central to modeling questions in monetary economics, we apply various statistical methods to offer evidence on whether the funds rate is best characterized as an instrument, intermediate target or indicator variable.

In the Mail

A History of the Federal Reserve, Volume 2 by Allan Meltzer

Medicare Reimbursement and Quality of Care

I thought that I would highlight some recent research by a former fellow Ph.D. student at WSU, Chris Brunt, and a current WSU faculty member, Gail Jensen, on the effect of price restrictions enacted by states for Medicare Part B reimbursement. Here is a link (gated) and the abstract:

The maximum amount physicians can charge Medicare patients for Part B services depends on Medicare reimbursement rates and on federal and state restrictions regarding balance billing. This study evaluates whether Part B payment rates, state restrictions on balance billing beyond the federal limit, and physician balance billing influence how beneficiaries rate the quality of their doctor’s care. Using nationally representative data from the 2001 to 2003 Medicare Current Beneficiary Survey, this paper finds strong evidence that Medicare reimbursement rates, and state balance billing restrictions influence a wide range of perceived care quality measures. Lower Medicare reimbursement and restrictions on physicians’ ability to balance bill significantly reduce the perceived quality of care under Part B.

Economic theory clearly predicts that a mandated reduction in price will result in non-price rationing. However, prior to this paper there was little empirical evidence regarding the quality reductions that theory predicts.

The Bernanke Speech

Bernanke gave a speech at the AEA meetings defending the actions of the Federal Reserve in the early part of the decade. Scott Sumner makes a keen observation:

Bernanke’s explanation for the Fed’s actions in 2002 show exactly how monetary policy failed in 2008. In particular, Bernanke made the following three observations regarding 2002:

1. Monetary policy needs to focus on the macroeconomy, not specific sectors.

2. Monetary policy must be forward-looking, must target the forecast.

3. Monetary policy must be especially aggressive when there is risk of liquidity trap (which would render conventional policy ineffective.)

In 2008 the Fed did exactly the opposite. Between September and December 2008 the Fed focused on banking, not the macroeconomy, they adopted a backward-looking Taylor Rule, and they were extremely passive when the threat of a liquidity trap was already obvious.

BTW, the quote of the day comes from Sumner’s post as well:

Unlike [Arnold] Kling, the stock market does believe monetary policy has a near-term impact on the economy.

UPDATE: David Beckworth on why we should doubt the claims put forth in the speech.

Taylor, Models, and Stimulus

Throughout the current recession, John Taylor has exemplified what an economist should be. He continuously provides careful and thoughtful commentary on the financial crisis and the recession — both in scholarly papers and on his blog. Taylor’s recent post on the stimulus package highlights precisely what I am talking about.

In late November the NYT had a piece on the stimulus package showing that certain forecasts of GDP were much higher with the stimulus package than they would have been without it. However, Taylor reminds us of an important point:

It’s been nearly a year since the stimulus package of 2009 was passed. Unfortunately most attempts to answer the question “What was the size of the impact?” are still based on economic models in which the answer is built-in, and was built-in well before the stimulus. Frequently the same economic models that said, a year ago, the impact would be large are now trotted out to show that the impact is large. In other words these assessments are not based on the actual experience with the stimulus. I think this has confused public discourse.

I would take this criticism one step further. As I have mentioned before, there are major fundamental differences between the New Keynesian and Old Keynesian models. What’s more, our priors should be based on the model that we believe to be the best description of reality and subsequently adjusted accordingly. While there is certainly much to quarrel with in New Keynesian models, I find it difficult to believe that we should elevate the Old Keynesian models in light of these potential shortcomings.