Yellen, Optimal Control, and Dynamic Inconsistency

For much of his career, Milton Friedman advocated a constant rate of money growth — the so-called k-percent rule. According to this rule, the central bank would increase the money supply at a constant rate, k, every year. In this case, there would be no need for an FOMC. A computer could conduct monetary policy.
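
In symbols (with M_t as my shorthand for the money supply in year t, and the particular value of k left open), the rule is simply

M_{t+1} = (1 + k)M_t

which requires no judgment at all, only arithmetic.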

The k-percent rule has often been derided as a sub-optimal policy. Suppose, for example, that there was an increase in money demand. Without a corresponding increase in the money supply, there would be excess money demand that even Friedman believed would cause a reduction in both nominal income and real economic activity. So why would Friedman advocate such a policy?

The reason Friedman advocated the k-percent rule was not that he believed it was the optimal policy in the modern sense of the phrase, but rather that it limited the damage done by activist monetary policy. In Friedman’s view, shaped by his empirical work on monetary history, central banks tended to be a greater source of business cycle fluctuations than a source of stability. Thus, the k-percent rule would eliminate recessions caused by bad monetary policy.

The purpose of this discussion is not to take a position on the k-percent rule, but rather to point out the fundamental nature of discretionary monetary policy. A central bank that uses discretion has the ability to deviate from its traditional approach or pre-announced policy if it believes that doing so would be in the best interest of the economy. In other words, the central bank can respond to unique events with unique policy actions. There are certainly desirable characteristics of this approach. However, Friedman’s point was that there are very undesirable characteristics of discretion. Just because a central bank has discretion doesn’t necessarily mean that the central bank will use it wisely. This is true even of central banks that have the best intentions (more on this point later).

The economic literature on rules versus discretion is now quite extensive. In fact, a substantial amount of research within the New Keynesian paradigm is dedicated to identifying the optimal monetary policy rule and examining the robustness of this rule to different assumptions about the economy. In addition, there has been a substantial amount of work on credible commitment on the part of policymakers.

Much of the modern emphasis on rules versus discretion traces back to the work of Kydland and Prescott and the idea of dynamic inconsistency. The basic idea is that when the central bank cannot perfectly commit to future plans, we end up with suboptimal outcomes. The idea is important because Kydland and Prescott’s work was largely a response to those who viewed optimal control theory as a proper way to determine the stance of monetary policy. The optimal control approach can be summarized as follows:

The Federal Open Market Committee (FOMC) targets an annual inflation rate of 2% over the long run and an unemployment rate of 6% (the latter number an estimate of the economy’s “natural” unemployment rate).

Under the optimal control approach, the central bank would then use a model to calculate the optimal path of short-term interest rates in order to hit these targets.
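
To make this concrete, a stylized version of the problem (my notation, and not necessarily the exact objective or model the FOMC would use) is to choose the path of the policy rate i_t to minimize a discounted quadratic loss in the deviations of inflation and unemployment from their targets,

\min_{\{i_t\}} E_0 \sum_{t=0}^{\infty} \beta^t [(\pi_t - \pi^*)^2 + \lambda(u_t - u^*)^2]

subject to the model’s equations linking i_t to inflation \pi_t and unemployment u_t, with \pi^* = 2\%, u^* = 6\%, and \lambda the relative weight placed on the unemployment objective.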

In short, optimal control theory seems to have a lot of desirable characteristics in that policy is based on the explicit dynamics of a particular economic framework. In addition, it is possible to consider what the path of policy should look like given different paths for the model’s state variables. Given these characteristics, the story describing optimal control linked above is somewhat favorable to this approach and notes that the optimal control approach to monetary policy is favored by incoming Fed chair Janet Yellen. Thus, it is particularly useful to understand the criticisms of optimal control levied by Kydland and Prescott.

As noted above, the basic conclusion that Kydland and Prescott reached was that when the central bank has discretionary power and uses optimal control theory to determine policy, the result will often be suboptimal policy. Their critique of optimal control theory rests on the belief that economic agents form expectations about the future and those expectations influence their current decision-making. In addition, since these expectations are formed based in part on expectations of future policy, this results in a breakdown of the optimal control framework. The reason lies in the way in which optimal control theory is used. In particular, optimal control theory chooses the current policy (or the expected future path of policy, if you prefer) based on the current state variables and the history of policy. If expectations about future policy affect current outcomes, then this violates the assumptions of optimal control theory.

Put differently, optimal control theory generates a path for the policy instrument for the present policy decision and the future path of policy. This expected future path of the monetary policy instrument is calculated taking all information available today as given — including past expectations. However, this means that the value of the policy instrument tomorrow is based, in part, on the decisions made today, which are based, in part, on the expectations about policy tomorrow.

There are two problems here. First, if the central bank could perfectly commit to future actions, then this wouldn’t necessarily be a problem. The central bank could, for example, announce some state-contingent policy and perfectly commit to that policy. If the central bank’s commitment was seen as credible, this would help to anchor expectations thereby reinforcing the policy commitment and allowing the central bank to remain on its stated policy path. However, central banks cannot perfectly commit (this is why Friedman not only wanted a k-percent rule, but also sometimes advocated that it be administered by a computer). Thus, when a central bank has some degree of discretion, using optimal control theory to guide policy will result in suboptimal outcomes.
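
To see the commitment problem in the starkest possible way, consider a textbook illustration (a Barro-Gordon style simplification, not Kydland and Prescott’s exact model). Suppose the central bank dislikes inflation but gains from inflation that exceeds expectations, so that its loss is

{{1}\over{2}}\pi^2 - \theta(\pi - \pi^e)

where \pi^e is the public’s expectation of inflation, formed before policy is chosen, and \theta > 0 is the gain from surprise inflation. Under commitment the bank announces and delivers \pi = 0. Under discretion it takes \pi^e as given and chooses \pi = \theta; the public anticipates this, so \pi^e = \theta, and the economy ends up with higher inflation and none of the hoped-for gain. The zero-inflation plan is optimal ex ante but not ex post, which is precisely the dynamic inconsistency problem.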

In addition, discretion creates additional problems if there is some uncertainty about the structure of the economy. If the central bank has imperfect information about the structure of the macroeconomy or an inability to foresee all possible future states of the world, then optimal control theory will not be a useful guide for policy. (To see an illustration of this, see this post by Marcus Nunes.) But note that while this assertion casts further doubt on the ability of optimal control theory to be a useful guide for policy, it is not a necessary condition for suboptimal policy.

In short, Kydland and Prescott expanded and bolstered Friedman’s argument. Whereas Friedman had argued that rules were necessary to prevent central banks from making errors due to the timing of policy and ignorance of the lags in its effects, Kydland and Prescott showed that even when the central bank knows the model of the economy and tries to maximize an explicit social welfare function known to everyone, using optimal control theory to guide policy can still be suboptimal. This is a remarkable insight and an important factor in Kydland and Prescott receiving the Nobel Prize. Most importantly, it should give one pause about the approach to policy favored by the incoming chair of the Fed.

My Two Cents on QE and Deflation

Steve Williamson has caused quite the controversy in the blogosphere regarding his argument that quantitative easing is reducing inflation. Unfortunately, I think that much of the debate surrounding this claim can be summarized as: “Steve, of course you’re wrong. Haven’t you read an undergraduate macro text?” I think that this is unfair. Steve is a good economist. He is curious about the world and he likes to think about problems within the context of frameworks that he is familiar with. Sometimes this gives him fairly standard conclusions. Sometimes it doesn’t. Nonetheless, this is what we should all do. And we should evaluate claims based on their merit rather than whether they reinforce our prior beliefs. Thus, I would much rather try to figure out what Steve is saying and then evaluate what he has to say based on its merits.

My commentary on this is going to be somewhat short because I have identified what I think is the source of disagreement. If I am wrong, hopefully Steve or someone else will point out the error in my understanding.

The crux of Steve’s argument seems to be that there is a distinct equilibrium relationship between the rate of inflation and the liquidity premium on money. For example, he writes:

Similarly, for money to be held,

(2) 1 - L(t) = B[u'(c(t+1))/u'(c(t))][p(t)/p(t+1)],

where L(t) is the liquidity premium on money. For example, L(t) is associated with a binding cash-in-advance constraint in a cash-in-advance model, or with some inefficiency of exchange in a deeper model of money.

He then explains why QE might cause a reduction in inflation using this equation:

…the effect of QE is to lower the liquidity premium (collateral constraints are relaxed) which … will lower inflation and increase the real interest rate.

Like Steve, I agree that such a relationship between inflation and the liquidity premium exists. However, where I differ with Steve seems to be in the interpretation of causation. Steve seems to be arguing that causation runs from the liquidity premium to inflation. In addition, since the liquidity premium is determined by the relative supplies of alternative transaction assets, monetary policy controls inflation by controlling the liquidity premium. My thinking is distinct from this. I tend to think of the supply of public transaction assets determining the price level (and thereby the rate of inflation) with the liquidity premium determined given the relative supply of assets and the rate of inflation. Thus, we both seem to think that there is this important equilibrium relationship between the rate of inflation and the liquidity premium, but I tend to see causation running in the opposite direction.

But rather than simply conclude here, let me outline what I am saying within the context of a simple model. Consider the equilibrium condition for money in a monetary search model:

E_t{{p_{t+1}}\over{\beta p_t}} = \sigma E_t[{{u'(q_{t+1})}\over{c'(q_{t+1})}} - 1] + 1

where p_t is the price level, \beta is the discount factor, q_t is consumption, and \sigma is the probability that a buyer and a seller are matched. The term in brackets measures the value of spending money balances, and \sigma is the probability that those balances are spent. The product of these two terms we will refer to as the liquidity premium, \ell. Thus, the equation can be written:

E_t{{p_{t+1}}\over{\beta p_t}} = 1 + \ell
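
where, collecting the terms just described, the liquidity premium is

\ell = \sigma E_t[{{u'(q_{t+1})}\over{c'(q_{t+1})}} - 1]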

So here we have the same relationship between the liquidity premium and the inflation rate that we have in Williamson’s framework. In fact, I think that it is through this equation that I can explain our differences on policy.

For example, let’s use our equilibrium expression to illustrate the Friedman rule. The Friedman rule is designed to eliminate a friction, namely the friction that arises because currency pays zero interest. As a result, individuals economize on money balances, and this is inefficient. Milton Friedman recommended maintaining a market interest rate of zero to eliminate the inefficiency. Doing so would also eliminate the liquidity premium on money. In terms of the equation above, it is important to note that the left-hand side can be re-written as:

E_t{{p_{t+1}}\over{\beta p_t}} = (1 + E_t \pi_{t+1})(1 + r) = 1 + i

where \pi is the inflation rate and r is the rate of time preference. Thus, by setting i = 0, it follows from the expression above that \ell = 0 as well.
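
Putting this together with the equilibrium condition above makes the logic explicit:

1 + i = E_t{{p_{t+1}}\over{\beta p_t}} = 1 + \ell

so that \ell = i and a policy of i = 0 eliminates the liquidity premium.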

Steve seems to be thinking about policy within this context. The Fed is pushing the federal funds rate down toward the zero lower bound. In the context of our discussion above, this should result in a reduction in inflation: a nominal interest rate of zero reduces the liquidity premium on money, and from the expression above, if the liquidity premium falls, then the inflation rate must fall to maintain equilibrium.

HOWEVER, there seems to be one thing that is missing. That one thing is how the policy is implemented. Friedman argued that to maintain a zero percent market interest rate the central bank would have to conduct policy such that the inflation rate was negative. In particular, in the context of our basic framework, the central bank would reduce the interest rate to zero by setting

1 + \pi_t = \beta

Since 0 < \beta < 1, this implies deflation. More specifically, Friedman argued that the central bank could produce this deflation by shrinking the money supply. In other words, the way to produce a zero percent interest rate was to reduce the money supply and thereby generate deflation.
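
For a purely illustrative number, suppose \beta = 0.99. Then

1 + \pi_t = 0.99

so prices fall by one percent per period, which approximately matches the rate of time preference, r = 1/\beta - 1 \approx 1\%.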

In practice, the current Federal Reserve policy has been to conduct large scale asset purchases (LSAPs), which have substantially increased the monetary base and more modestly increased broader measures of the money supply.

In Williamson's framework, it doesn't seem to matter how we get to the zero lower bound on nominal interest rates. All that matters is that we are there, which reduces the liquidity premium on money and therefore must reduce inflation to satisfy our equilibrium condition.

In my view, it is the rate of money growth that determines the rate of inflation and the liquidity premium on money then adjusts. Of course, my view requires a bit more explanation of why we are at the zero lower bound despite LSAPs and positive rates of inflation. The lazy answer is that \beta changed. However, if one allows for the non-neutrality of money, then it is possible that the liquidity premium not only adjusts to the relative supplies of different assets, but also to changes in real economic activity (i.e. q_t above). In particular, if LSAPs increase real economic activity, this could reduce the liquidity premium (given standard assumptions about the shape and slope of the functions u and c).
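
To see why, take the liquidity premium from the search model above. Under the standard assumptions alluded to here (u strictly concave, c weakly convex), the ratio inside the expectation is decreasing in q:

{{\partial}\over{\partial q}}{{u'(q)}\over{c'(q)}} = {{u''(q)c'(q) - u'(q)c''(q)}\over{[c'(q)]^2}} < 0

so an increase in real activity q_{t+1} lowers \ell, which is all that the argument above requires.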

This is, I think, the fundamental area of disagreement between Williamson and his critics, whether his critics know it or not. If you tend to think that non-neutralities are important and persistent, then you are likely to think that Williamson is wrong. If you think that non-neutralities are relatively unimportant or not very persistent, then you are likely to think Williamson might be on to something.

In any event, the blogosphere could stand to spend more time trying to identify the source of disagreement and less time bickering over prior beliefs.