Monthly Archives: March 2010

Thinking About Monetary Policy

Nick Rowe and Bill Woolsey bring up some interesting points in their recent posts. These points are often neglected, but are of the utmost importance for monetary policy. Below, I will explore what is meant by a monetary policy tool, target, and goal and why it is important to understand the distinct characteristics of each.

Oftentimes, we are told that monetary policy is tight (or loose) by observing the interest rate. In a recent post, I made the case that the real interest rate isn’t a good predictor of the output gap. In a standard New Keynesian framework, if the real rate doesn’t predict the output gap, it will not help to predict inflation either. Thus, the real interest rate does not appear to be a good indicator of policy. In that same post I argued that growth of the Divisia monetary aggregates does help to predict the output gap. So does this mean that these aggregates are a good indicator of the stance of policy? Potentially, but not necessarily.

To motivate the discussion, consider a simple monetary equilibrium framework captured by the equation of exchange:

mBV = Py

where m is the money multiplier, B is the monetary base, V is the velocity of the monetary aggregate, P is the price level and y is real output. The monetary base, B, is the tool of monetary policy because it is under more or less direct control by the Federal Reserve. The Fed’s job is to adjust the base in order to achieve a particular policy goal.

Other important factors in the equation of exchange are the money multiplier, m, and the velocity of circulation, V. These are important because V will reflect changes in the demand for the monetary aggregate whereas m will reflect changes in the demand for the components of the monetary base.

Now suppose that the Federal Reserve’s goal is to maintain monetary equilibrium. In other words, the Fed wants to ensure that the supply of money is equal to the corresponding demand for money. In the language of the equation of exchange, this would require that mBV is constant. Or, in other words, that changes in m and V are offset by changes in B.

This goal would certainly make sense because an excess supply of money ultimately leads to higher inflation whereas an excess demand for money results in — initially — a reduction in output. Unfortunately, this is a difficult task because shifts in m and V are hard to observe in real time. Nonetheless, there is an alternative way to ensure that monetary equilibrium is maintained. In the equation of exchange, a constant mBV implies a constant Py. Thus, if the central bank wants to maintain monetary equilibrium, it can establish the path of nominal income as its policy goal.

Thus far, the framework we have employed has outlined two aspects of monetary policy. First, the monetary policy tool (or instrument) is the monetary base. This is considered a policy instrument because it is directly controlled by the Fed. Second, the goal of monetary policy is to target a desired path for nominal income. This goal is considered desirable because it maintains monetary equilibrium. Even with the instrument and goal in place, the analysis is not complete. The central bank needs an intermediate target.

The intermediate target of monetary policy can be anything that has a strong statistical relationship with the goal variable. In addition, it must be available at higher frequencies than the goal variable. This intermediate target can be a measure of the money supply, the federal funds rate, or even the forecast of the goal variable. (It is important to note that the federal funds rate is NOT the instrument of monetary policy despite the frequent usage of this term.)

Returning to the equation of exchange, a natural choice for an intermediate variable (so long as there exists a strong statistical relationship) is a monetary aggregate. Re-writing the equation of exchange, we have the more familiar form:

MV = Py

where M is the monetary aggregate used as the intermediate target.

The behavior of monetary policy is characterized as follows. The central bank chooses the monetary base with the intent of guiding the path of nominal expenditures. However, the control of the monetary base in and of itself is not always enough to ensure that the policy goal is met. This is because changes in the demand for the monetary base will result in changes in the money multiplier and, as a result, a different relationship between the monetary base and nominal expenditure. In order to ensure that the monetary base is being adjusted enough to maintain the desired path of nominal expenditures, the central bank uses the monetary aggregate as the intermediate target. In other words, the central bank chooses B so as to ensure that:

mB = M*

where M* is the desired level of the monetary aggregate. What’s more, this desired level of the monetary aggregate is chosen such that it maintains the desired level of nominal expenditure.
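To see the mechanics in one place, here is a minimal numerical sketch of the tool, intermediate target, and goal chain just described. The numbers are hypothetical; the only point is that the base must move inversely with the multiplier and with velocity if the aggregate is to stay on target and nominal expenditure on its desired path.

```python
# Hypothetical illustration of the tool (B), intermediate target (M* = mB),
# and goal ((Py)*) chain implied by the equation of exchange: mBV = MV = Py.

def required_base(py_star, m, v):
    """Base the central bank must supply so that m*B*V equals (Py)*."""
    m_star = py_star / v      # intermediate target: M* such that M*V = (Py)*
    b = m_star / m            # tool setting: B such that mB = M*
    return b, m_star

py_star = 15_000.0            # desired nominal expenditure (goal), hypothetical
for multiplier, velocity in [(8.0, 1.75), (6.0, 1.75), (6.0, 1.60)]:
    b, m_star = required_base(py_star, multiplier, velocity)
    py = multiplier * b * velocity
    print(f"m={multiplier:>4}, V={velocity:>5}: set B={b:8.1f} "
          f"so that M={m_star:8.1f} and Py={py:8.1f}")
```

When the multiplier falls (greater demand for the components of the base), the required base rises; when velocity falls (greater demand for the monetary aggregate), the intermediate target itself must rise.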

The most important question that I want to address concerns the “best” measure of the stance of monetary policy. In our example, the monetary base reflects the actual adjustments made by the monetary authority. However, it does not necessarily reveal the stance of monetary policy (i.e. whether policy is loose or tight). For example, if there is a change in the demand for the components of the monetary base, the money multiplier will change and, depending on the direction of the change, the monetary base might suggest that monetary policy has been expansionary or contractionary even when M and Py remain constant.

The same problem exists for M. The central bank adjusts the monetary base to target the intermediate variable, M. The target of M is meant to generate the desired path for Py. Like the monetary base, however, movements in M will not be sufficient to produce the desired path of nominal expenditure if the demand for M — reflected in V — changes. Relying on M to discover the stance of monetary policy is potentially misleading as well in that higher than expected changes in M might merely reflect declines in V.

So what is the best way to determine the stance of monetary policy?

The answer is quite simple. If the goal is to achieve a certain level of nominal expenditure, (Py)*, then the stance of monetary policy is best determined by the deviation of the goal variable from its target:

Py – (Py)*

If this value is positive, it suggests that policy has been overly expansionary. If this value is negative, it suggests that policy has been overly contractionary. This point seems to be missed by many within the United States, but is widely accepted elsewhere. For example, the Bank of England has an explicit inflation target. If its target is 2% and inflation comes in at 3%, there would be little doubt that policy was overly expansionary regardless of the level of the nominal interest rate or the behavior of monetary aggregates. The problem in the United States seems to center on the fact that the Fed has no explicit goal for monetary policy. Rather, the Fed is charged with promoting full employment and low inflation. As a result, we tend to rely on the behavior of intermediate targets like the federal funds rate and monetary aggregates to gauge the stance of policy when, in fact, these variables can be misleading.
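As a back-of-the-envelope illustration (with made-up numbers), the sign of this deviation, not the behavior of any intermediate variable, is what classifies the stance of policy:

```python
def policy_stance(py, py_star):
    """Classify policy by the deviation of nominal income from its target."""
    gap = py - py_star
    if gap > 0:
        return gap, "overly expansionary"
    if gap < 0:
        return gap, "overly contractionary"
    return gap, "on target"

# Hypothetical numbers: target nominal income 15,000; realized 14,700.
gap, stance = policy_stance(14_700.0, 15_000.0)
print(f"Py - (Py)* = {gap:+.1f}: policy has been {stance}")
```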

Brunner on Fiscal Policy

“The activist argument implicitly assumes that policymakers do possess reliable and detailed knowledge about the dynamic properties of the economy. Such knowledge would certainly allow the pursuit of an effective fiscal intervention. But such knowledge, while necessary, is not a sufficient condition for socially successful fiscal activism. We still need to invoke a goodwill or public-interest theory or benevolent dictator view of government. The case for fiscal activism, at least for purposes of stabilization policy, thus involves two important empirical assumptions bearing on required information and the behavior of man in political contexts . . . We lack the needed detailed and reliable knowledge about the economy’s dynamic structure . . . The consequences of this information problem are reenforced by the fact that self-interested behavior also permeates the political environment. There is little evidence that political agencies operate according to a generally recognized social welfare function. Fiscal activism produces, under the circumstances, more problems.”

— Karl Brunner, “Fiscal Policy in Macro Theory: A Survey and Evaluation”, 1986

Funding The Health Care Bill

The health care bill that Congress is currently trying to pass (or that may have passed by the time this is read) is designed to reduce the number of uninsured. While this is certainly a noble goal, these benefits come with corresponding costs. Advocates of the bill have argued that while the bill comes with a hefty cost, it actually reduces the government’s budget deficit over the Congressional Budget Office’s (CBO) 10-year forecasting horizon. This certainly makes the bill sound appealing as it implies that we can expand coverage while reducing the deficit. Unfortunately, this is somewhat misleading in the sense that it takes for granted some sketchy assumptions built into the funding structure of the bill.

Politicians often point to the fact that the CBO is a non-partisan entity that evaluates the cost of particular legislation. The lack of partisanship is certainly important. This appeal to non-partisanship, however, obfuscates other issues surrounding CBO forecasts. Most notably, cost and revenue estimates of the bill are dependent upon the assumptions included in the bill. This feature is very important to the health care legislation.

Particularly important to the health care legislation are provisions about the funding of the bill. The following are three such important provisions: (1) an excise tax on so-called “Cadillac” health care plans that would begin in 2013; (2) a reduction in physician payment rates for Medicare and other rate cuts for Medicare providers; (3) the implementation of an advisory board that would implement cost-saving measures to reduce Medicare spending, unless rejected by subsequent legislation.

These assumptions are particularly important. First, consider the effects of the excise tax as a potential revenue source. The tax is not implemented until 2013. This suggests that, in the near-term, firms that offer top-of-the-line insurance have an incentive to reduce the coverage extended to their employees to avoid the tax. This shift could potentially reduce health care spending as these individuals would then have to spend more in out-of-pocket costs – effectively raising the price and reducing the quantity demanded. Such a shift, however, would also imply lower tax revenue as these plans are eliminated.

Of course, even the analysis of the excise tax above makes important assumptions. For example, it was assumed that the tax would actually be implemented and not repealed by subsequent legislation. In addition, the most vehement detractors of this provision have been labor unions, as their members tend to have generous benefits that could potentially be subject to the new tax. As a result, there has been discussion about creating an exemption to the excise tax for members of labor unions. Such an exemption, however, would result in lower tax revenue and a smaller reduction in health care spending.

The second assumption baked into the analysis is a reduction in physician payment rates for Medicare. This reduction was actually passed during the 1990s, but has been postponed each year by subsequent legislation. There is no reason to believe that, after the passage of the health care legislation, this reduction will not be postponed once again. Such a postponement would be enough to cause the health care bill to add $59 billion to the budget deficit over ten years.

The final major funding source comes from the creation of an advisory panel that would recommend cost-savings measures for Medicare that would be enacted unless revoked by subsequent legislation. The bill assumes that this panel would be able to identify significant areas for cost reduction. This cost reduction could be in the form of greater efficiency or by reducing the quantity or quality of service.

Overall, the bill assumes that the reductions in payment rates to Medicare providers and the savings identified by the advisory panel would hold the growth of Medicare spending per beneficiary to 2 percent per year (adjusted for inflation). For comparison, this growth rate has been roughly 4 percent per year over the last 20 years.

The cuts to Medicare are an extremely important source of “revenue” for the health care bill. In fact, in the latest CBO projections (March 11, 2010), cuts to Medicare make up $430 billion of the funding over the next decade. This figure represents roughly 50 percent of the estimated total cost of $875 billion.

Overall, the health care bill rests on a number of assumptions about revenue sources that potentially obfuscate the true costs of the bill. The reader should note that I have not taken a position on the health care bill. Rather, the purpose of this column was to highlight the potential costs of the legislation if the assumptions included in the CBO’s analysis are not met. It is left to the reader to decide whether the benefits exceed the actual costs.

A Reappraisal of Money

I have uploaded the paper that I referred to in the previous post that re-examines widely-cited empirical results using Divisia monetary aggregates. Here is the abstract:

The emerging consensus in monetary policy and business cycle analysis is that money aggregates are not useful as an intermediate target for monetary policy or as an information variable. The uselessness of money as an intermediate target is driven by empirical research that suggests that money demand is unstable. In addition, the informational quality of money has been called into question by empirical research that fails to identify a relationship between money growth and inflation, nominal income growth, and the output gap. Nevertheless, this research is potentially flawed by the use of simple sum money aggregates, which are not consistent with economic, aggregation, or index number theory. This paper therefore re-examines previous empirical evidence on money demand and the role of money as an information variable using monetary services indexes as monetary aggregates. These aggregates have the advantage of being derived from microtheoretic foundations as well as being consistent with aggregation and index number theory. The results of the re-evaluation suggest that previous empirical work might be driven by mismeasurement.

Monetary Aggregates and Monetary Policy

In my previous post I attempted to shoot down the idea of the impotence of monetary policy at the zero bound. Given the issues raised in that post, there are two topics that need to be addressed. One issue is whether the interest rate is useful for monetary policy analysis, especially given its limitation at the zero bound. The second issue is whether there is a preferable alternative to the interest rate that should be used. I will answer these questions in reverse order.

David Beckworth asked which monetary aggregate should be used to examine the current crisis. Following Gary Gorton, who suggests using M3, he plots the year-to-year percentage change in M1, M2, and M3. The plot yields different predictions for M3 than for the other aggregates. So, which aggregate should we use? The answer is none of them — or at least not as they are traditionally measured.

Traditional aggregates are computed using a simple sum method. In other words, one simply adds currency to checkable deposits to time deposits, and so on. These aggregates are thus potentially flawed by the fact that they are not consistent with economic theory, aggregation theory, or index number theory. With regard to economic theory, the simple sum aggregation procedure assumes that all assets within a particular aggregate are perfect substitutes — an assumption that we know to be false.

An alternative to the simple sum aggregates are the Divisia aggregates — or the monetary services indexes — initially derived by William Barnett and available through the St. Louis Federal Reserve. In contrast with simple sum aggregates, these aggregates are consistent with economic, aggregation, and index number theory.
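For the curious, here is a stylized two-period sketch of the Tornqvist (discrete Divisia) calculation that underlies such monetary services indexes, set against the simple sum. The component quantities, own rates, and benchmark rate below are made up for illustration; the actual St. Louis Fed indexes are constructed from detailed component data.

```python
import math

# Stylized two-period Tornqvist-Theil (discrete Divisia) growth rate for a
# monetary aggregate, contrasted with simple-sum growth. All inputs are
# hypothetical.

def user_costs(own_rates, benchmark_rate):
    """User cost of each monetary asset: (R - r_i) / (1 + R)."""
    return [(benchmark_rate - r) / (1.0 + benchmark_rate) for r in own_rates]

def shares(prices, quantities):
    """Expenditure share of each component."""
    exp = [p * q for p, q in zip(prices, quantities)]
    total = sum(exp)
    return [e / total for e in exp]

def divisia_growth(q0, q1, pi0, pi1):
    """Tornqvist approximation: sum_i 0.5*(s_i0 + s_i1) * ln(q_i1 / q_i0)."""
    s0, s1 = shares(pi0, q0), shares(pi1, q1)
    return sum(0.5 * (a + b) * math.log(y / x)
               for a, b, x, y in zip(s0, s1, q0, q1))

# Components: currency, checkable deposits, small time deposits (hypothetical).
q0, q1 = [800.0, 1200.0, 1500.0], [820.0, 1230.0, 1800.0]
own0, own1 = [0.00, 0.01, 0.04], [0.00, 0.01, 0.045]
R0, R1 = 0.05, 0.055

simple_sum_growth = math.log(sum(q1) / sum(q0))
div_growth = divisia_growth(q0, q1, user_costs(own0, R0), user_costs(own1, R1))
print(f"simple-sum growth: {simple_sum_growth:.4f}, Divisia growth: {div_growth:.4f}")
```

Because the fast-growing component carries a low user cost, it receives a small weight, so the Divisia growth rate can differ noticeably from the simple sum growth rate.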

Two questions naturally arise. First, do these aggregates provide more information than their simple sum counterparts? Second, are these aggregates preferable to the interest rate?

In a recently completed working paper, I address these questions by re-examining some empirical work that suggests that monetary aggregates are unimportant and that the interest rate is sufficient for predicting movements in the output gap. Using these monetary services indexes rather than the simple sum aggregates, I find that many of the conclusions are reversed. One major conclusion that is reversed regards the IS equation referenced in the earlier post. Specifically, Rudebusch and Svensson published a paper in 2002 that estimates the following backward-looking IS equation:

y(t) = b1*y(t-1) + b2*y(t-2) + b3*r(t-1) + e(t)

where y(t) is the output gap at time t, r(t-1) is the lagged real interest rate, b1, b2, and b3 are parameters, and e(t) is a disturbance term. They find a negative and statistically significant relationship between the real interest rate and the output gap. In addition, they suggest that when money terms are added, they are not statistically significant.

My paper addresses this by adding real money balances measured by the monetary services index counterparts to M1, M2, and MZM. For the entire sample, 1961 – 2005, I find a positive and significant impact of real balances on the output gap. The real interest rate remains negative and significant. However, when estimating the results for the subsample that begins with the Volcker-led Federal Reserve (a benchmark used by those who think monetary aggregates are not useful), I find that the monetary services index counterparts to M2 and MZM have a positive and significant impact on the output gap. What’s more, for these equations, the real interest rate is no longer statistically significantly different from zero. In other words, one cannot reject the null hypothesis that the real interest rate has no effect on the output gap.
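For readers who want to see what such an exercise looks like in practice, here is a minimal sketch of how a backward-looking IS equation augmented with real balances could be estimated. The data below are simulated placeholders, not the series used in the paper, and statsmodels OLS with HAC standard errors is just one plausible way to run the regression.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch of a backward-looking IS regression augmented with real money
# balances. The series are simulated placeholders; in practice the regressors
# would be the output gap, an ex ante real interest rate, and growth of a
# Divisia (monetary services) aggregate deflated by the price level.

rng = np.random.default_rng(0)
n = 180  # quarterly observations
df = pd.DataFrame({
    "gap": rng.normal(size=n).cumsum() * 0.1,              # placeholder output gap
    "real_rate": rng.normal(2.0, 1.0, size=n),             # placeholder real rate
    "real_balances_growth": rng.normal(1.0, 2.0, size=n),  # placeholder Divisia growth
})

df["gap_l1"] = df["gap"].shift(1)
df["gap_l2"] = df["gap"].shift(2)
df["real_rate_l1"] = df["real_rate"].shift(1)
df["real_balances_growth_l1"] = df["real_balances_growth"].shift(1)
df = df.dropna()

X = sm.add_constant(df[["gap_l1", "gap_l2", "real_rate_l1", "real_balances_growth_l1"]])
ols = sm.OLS(df["gap"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(ols.summary())
```

In an exercise like the one described above, the sign and statistical significance of the lagged real rate and the lagged real balances terms are the objects of interest.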

Thus, whereas my last post claimed that models with a single interest rate that measures monetary policy are a weak reed on which to develop one’s theory of the monetary transmission process, this post makes clear that there exists a better alternative to the interest rate. The alternative is a familiar one — monetary aggregates. However, the monetary aggregates are not the simple sum variety, but rather they are ones which are consistent with economic, aggregation, and index number theory.

[I have yet to upload my working paper. If anyone is interested in reading a copy of the paper, feel free to send me an email: josh.hendrickson@wayne.edu]

The Failure of Modern Macroeconomics

Since the financial crisis began, I have been one of the most vehement supporters of modern macroeconomics. While I have my own quarrels with the current research, I have found much (not all) of the criticism wanting. Nevertheless, there is one notable and glaring failure in the macro literature that has come to the forefront during this recession. That failure is regarding the zero lower bound on nominal interest rates. Not only do I believe that a consensus is needed on this topic, but I also believe that the zero lower bound is of little practical importance.

Background

The tool of modern macroeconomics is the dynamic, stochastic, general equilibrium (DSGE) model. Monetary models often consist of the baseline New Keynesian model and extensions thereof. This model is characterized by two equations and an interest rate rule. The first equation is the dynamic IS equation, which is expressed in logarithms as follows:

y(t) = E(t)y(t+1) – (1/a)[R(t) – E(t)P(t+1)]

where y(t) is the output gap at time t, E is the mathematical expectation operator, R is the nominal interest rate, P is inflation, and a is a parameter.

The second equation is the New Keynesian Phillips curve:

P(t) = bE(t)P(t+1) + ky(t)

where b and k are parameters.

The system is then closed by a monetary policy rule. This is typically formulated as a Taylor-type rule in which the monetary authority adjusts the nominal interest rate in response to inflation and the output gap. Together with the assumption of sticky prices, the adjustment of the nominal interest rate leads to a corresponding adjustment in the real interest rate as well.

It is important to note that money is not part of this model. Rather, movements in the interest rate pin down the rate of inflation (so long as the policy rule leads to an increase in the real rate of interest when inflation is above its target). The purported benefit of these types of models is that they can neglect any reference to money demand because the interest rate captures the complete transmission mechanism through which monetary policy and monetary shocks influence the system. As it turns out, however, this framework contributes to what I believe to be the failure of modern macro in light of this recession. [I also have quarrels with the empirical evidence that justifies this approach, but I will leave that for a separate post.]

The Zero Lower Bound

In the model presented above, monetary policy is conducted by a central bank that adjusts ‘the’ interest rate. The change in the interest rate in turn inversely impacts the output gap through the IS relation. Supposing that we begin from a zero-inflation steady state, an increase in the interest rate causes output to fall below the natural rate. As a result, inflation declines in accordance with the New Keynesian Phillips curve.

What this process illustrates is that the interest rate is the sole mechanism through which monetary policy affects the economy. The importance of this point centers on the fact that the nominal interest rate is limited in that it cannot take on a value less than zero. Thus, if we believe this model accurately captures the world in which we live, there exists a precarious position for central banks when the output gap is negative and the nominal interest rate is zero.

The zero bound therefore places a limit on the effects of monetary policy conducted using an interest rate.
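To fix ideas, here is a small sketch of a Taylor-type rule truncated at zero. The coefficients (1.5 on inflation, 0.5 on the output gap) and the 2 percent neutral real rate and inflation target are conventional textbook values, not numbers taken from the model above.

```python
# A stylized Taylor-type rule, truncated at zero, to illustrate the bind
# described above. All parameter values are conventional illustrations.

def taylor_rate(inflation, output_gap, r_neutral=2.0, pi_target=2.0,
                phi_pi=1.5, phi_y=0.5):
    prescribed = (r_neutral + inflation
                  + phi_pi * (inflation - pi_target)
                  + phi_y * output_gap)
    return max(0.0, prescribed)   # the zero lower bound truncates the rule

for pi, gap in [(2.0, 0.0), (0.5, -2.0), (-0.5, -6.0)]:
    print(f"pi={pi:+.1f}, gap={gap:+.1f} -> policy rate {taylor_rate(pi, gap):.2f}%")
```

With inflation near zero and a large negative output gap, the rule prescribes a deeply negative rate that the central bank cannot deliver, which is exactly the bind described above.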

Theoretical Foundations

As with all modern theoretical macro models, the New Keynesian model is derived from microeconomic foundations. In other words, the IS equation is derived from utility maximization in which a representative household maximizes utility subject to a budget constraint. In the basic New Keynesian model illustrated above, there is a consumption good and one asset (bonds). The analysis can be extended to include money, but for typical money demand functions, money is essentially a mirror for changes in the interest rate. Nonetheless, the existence of only two assets — money and bonds — is at the heart of the problem.

If monetary policy is ineffective at the zero bound, this is referred to as a liquidity trap. Put differently, if the interest rate on bonds is zero, money and bonds become perfect substitutes. Whereas open market operations would typically be used to increase reserves and thereby lower the federal funds rate, in a liquidity trap agents simply hold the additional cash balances in place of the bonds. Increases in the money supply do not result in real changes, only alterations to the composition of portfolios.

So what is the problem with this analysis?

Well, the problem surrounds the fact that there are only two assets in the model. Monetary policy is impotent because money and bonds are perfect substitutes. Contrary to this model (and others), there are actually substantially more than two assets in the real world. Thus, a natural question to ask is whether these other assets matter for our analysis. In their 1968 paper, Karl Brunner and Allan Meltzer do precisely that. They extend the analysis beyond the two asset world. In such a case, the condition for a liquidity trap is that the marginal rates of substitution for money and all other assets must be equal to zero. As Karl Brunner would have said, we simply know this isn’t true.

This condition, I believe, represents substantial reason for pause in considering the possibility and policy implications of a liquidity trap. In fact, I would argue that it suggests that liquidity traps don’t exist. Put differently, as Scott Sumner has suggested:

Zero rates don’t really make monetary policy more difficult, they make interest rate-oriented monetary policy more difficult.

Indeed, in the absence of a liquidity trap, the zero lower bound is merely a signal that monetary policy needs to employ other methods.

The Evidence . . . Or, Am I alone?

A subsequent question is (a) whether the evidence suggests that monetary policy is impotent at the zero bound, and (b) whether I am alone in suggesting that “unconventional” monetary policy — defined as non-interest rate policy — is useful at the zero bound.

Regarding point (a), I will be brief. In a fairly recent paper, Allan Meltzer examines the monetary transmission mechanism by comparing the behavior of the real interest rate and real money balances during periods of deflation. The impetus behind this reasoning is that during periods of deflation, the real interest rate and real balances will increase. If the monetary transmission mechanism is solely captured by the real interest rate as implied by the New Keynesian framework, then one would expect output to decline as the real interest rate rises. In contrast, the monetarist proposition has long been that the monetary transmission mechanism is reflected by the behavior of real money balances as individuals re-allocate their portfolios thereby inducing relative price adjustments on financial assets and subsequently on non-financial, or real, assets. Thus, the mechanism implies that as real balances rise, output should be expected to rise. He finds that in each case the behavior of real money balances is a much better predictor of movements in output than the real interest rate. This not only suggests that there is little reason to fear the zero lower bound, but there is also reason to doubt that the interest rate represents the correct mechanism for analysis of the monetary transmission process.

Regarding point (b), consider some recent papers that examine the zero lower bound. First, Marco Del Negro, Gauti Eggertsson, Andrea Ferrero, and Nobuhiro Kiyotaki (HT: David Beckworth):

This paper extends the model in Kiyotaki and Moore (2008) to include nominal wage and price frictions and explicitly incorporates the zero bound on the short-term nominal interest rate. We subject this model to a shock which arguably captures the 2008 US financial crisis. Within this framework we ask: Once interest rate cuts are no longer feasible due to the zero bound, what are the effects of non-standard open market operations in which the government exchanges liquid government liabilities for illiquid private assets? We find that the effect of this non-standard monetary policy can be large at zero nominal interest rates. We show model simulations in which these policy interventions prevented a repeat of the Great Depression in 2008-2009.

Next, Michael Woodford, the founder of the moneyless approach exemplified by the New Keynesian model, and Vasco Curdia:

We extend a standard New Keynesian model both to incorporate heterogeneity in spending opportunities along with two sources of (potentially time-varying) credit spreads and to allow a role for the central bank’s balance sheet in determining equilibrium. We use the model to investigate the implications of imperfect financial intermediation for familiar monetary policy prescriptions and to consider additional dimensions of central bank policy—variations in the size and composition of the central bank’s balance sheet as well as payment of interest on reserves—alongside the traditional question of the proper operating target for an overnight policy rate. We also study the special problems that arise when the zero lower bound for the policy rate is reached. We show that it is possible to provide criteria for the choice of policy along each of these possible dimensions within a single unified framework, and to achieve policy prescriptions that apply equally well regardless of whether financial markets work efficiently or not and regardless of whether the zero bound on nominal interest rates is reached or not

And finally, Paul Krugman:

Even if the economy is in a liquidity trap in the sense that the nominal interest rate is stuck at zero, the monetary expansion would raise the expected future price level P*, and hence reduce the real interest rate. A permanent as opposed to temporary monetary expansion would, in other words, be effective – because it would cause expectations of inflation.

An astute reader will note that I have chosen these authors because they are supporters of, or seem content with, the interest rate view of monetary policy. Nevertheless, in each case, they find that monetary policy can be effective at the zero lower bound.

Taken together with the evidence from Meltzer above, I think that we have sufficient reason to doubt the existence of a liquidity trap.

Brief Conclusion

The zero lower bound represents a key failure of modern macro in that there is little consensus or agreement about the effects of monetary policy in such a circumstance. The issue is of central importance for determining the correct policy prescriptions — both monetary and fiscal. It is my hope that the recent surge in research on the zero lower bound will ultimately reach a consensus. What’s more, I hope that this consensus takes into account that we live in a world with more than two assets and, as a result, that the zero lower bound is nothing more than an intellectual curiosity.

Further Reading: For those interested in the topic, I think that these papers might be of use as well:

Sumner, Scott. 2002. “Some Observations on the Return of the Liquidity Trap.” Cato Journal.

Goodfriend, Marvin. 2000. “Overcoming the Zero Lower Bound on Monetary Policy.” Journal of Money, Credit, and Banking.

Meltzer, Allan. “Monetary Transmission at Low Inflation: Lessons from Japan.” (.doc link here).

The Shape of Things to Come

“Bernanke draws an important policy conclusion from the destructive effect of the debt crisis. Since he views the debt crisis as an exogenous event, he argues for selective bailouts of bankrupt firms. We find this proposal ill-advised and unnecessary. It is ill-advised because it disregards the serious moral hazard associated with such a policy and the incentives it creates in the political process. It is unnecessary, we believe, because the debt crisis, like the banking crisis, is avoidable if the monetary authority prevents the destructive effect of the money-credit decline and the wave of bankruptcies. We conclude that banking crises and debt crises can be prevented with the aid of a suitable choice of monetary arrangements.”

— Karl Brunner and Allan Meltzer, Money and the Economy: Issues in Monetary Analysis (1993, pp. 96–97)

Quote of the Day

“At home and elsewhere in Europe, I have often been called a ‘monetarist’… For me it was an honor; they, however, meant it as an offense.”

— Vaclav Klaus (Quoted in Allan Meltzer’s “Monetarism: Issues and Outcome”)

Filling the Fed Seats

So it looks like President Obama is going to start filling the vacancies left on the Federal Reserve Board. There are three vacancies including the vice chairmanship. Reuters lists several possible candidates. Their list is fine for speculative purposes, but I would like to put my two cents in as well.

There are two people that I think should (and could) be chosen to sit on the Board of Governors. Also, I would choose one of the two to serve as vice chairman. They are:

  • Edward Nelson — Ed Nelson is currently an Assistant Vice President at the Federal Reserve Bank of St. Louis and is, in my opinion, one of the best and most active monetary economists in the field. Nelson’s research displays not only rich, modern insights, but also an appreciation of previous work in monetary economics that seems to have fallen out of the consciousness of most modern monetary theorists. He recognizes that monetary aggregates are useful for policy. In addition, he has written on nominal income targeting (something I like) and has done extensive research on the Great Inflation.
  • Michael Woodford — This pick will likely surprise those who know me well because I absolutely deplore the cashless approach to monetary policy analysis that Woodford pioneered. However, Woodford is the premier monetary theorist of the day, an independent thinker, and, most importantly, he has a forward-looking viewpoint of monetary policy (something that would please Scott Sumner).

Markets At Work

HT: Tyler Cowen