Tag Archives: uncertainty

Big Players and Uncertainty, Part 2

Some readers may recall an earlier post in which I explained that many of the failures of attempted government intervention have to do with the fact that the government is exercising discretionary power that changes the rules of the game on a largely ad hoc basis. In particular, I referenced Roger Koppl’s theory of Big Players, in which a large entity that is largely immune from the profit-loss mechanism wields significant discretionary power. Under these circumstances, such discretionary power can be a major source of uncertainty, causing market participants to shift resources away from productive uses and toward predicting the behavior of the Big Player (incidentally, Axel Leijonhufvud makes a similar point in his classic, “Costs and Consequences of Inflation”).

In the earlier post, I referenced the ad hoc behavior of the Treasury in developing the bank bailouts and suggested that, consistent with the theory of Big Players, such behavior only served to generate uncertainty. I have not been alone in this analysis. For example, John Taylor’s new book similarly criticizes such ad hoc behavior on the part of the federal government as exacerbating the crisis.

Thus, I was not at all surprised to read the following story from NPR, in which one bank CEO explains why he decided to give the TARP money back:

[CEO Joseph] DePaolo says Signature returned the money for three reasons: Legislation passed Feb. 17 would limit the compensation for salespeople, make it difficult to recruit bankers and cause uncertainty.

“With the new legislation, they changed the rules in the middle of the game,” he says. “We didn’t know how many more rule changes or legislation would come down, maybe telling banks, ‘This is what you can do with your lending. This is what you can do with your clients.'”

I will reiterate a point that I made in the earlier post:

If the government really wants to help, they can start by setting the rules now and following through on their promises. So long as they continue to change the rules on a daily basis, uncertainty will prevail, the stock market will remain volatile, and the credit markets will remain frozen.

The Government, Housing, and The Crisis

It is time to weigh in on some important topics with respect to the current financial crisis. First, I think it needs to be noted that the crisis has already had many stages, each of which likely needs to be discussed individually. They can loosely be classified as follows:

1.) The housing bubble (or “How we got here…”)

2.) The bursting of the housing bubble, the increased perception of risk, the fall of Bear Stearns, and the expansion of Federal Reserve power (sorry I couldn’t make this pithy).

3.) Financial market mayhem.

4.) The Hank Paulson Variety Show. (My thoughts here and here.)

Over at Cato Unbound the crisis is being debated by the likes of Lawrence White, Brad DeLong, and Casey Mulligan. Each makes particularly intriguing points, but the main point that I would like to address is with regard to the stages of the crisis. We seem to have gotten to the point where everyone is talking past one another because each is talking about a separate stage of the crisis. For example, White’s essay clearly outlines the incentives put forth by the government that contributed to the housing boom. DeLong, however, counters that White is not addressing the important issue and that he even gets the one he is discussing wrong. I think that there are elements of each of their essays that are correct, but I do not agree with DeLong that they are mutually exclusive.

White is largely concerned with stage 1 listed above. His essay helps explain the cause of the housing bubble, but it is quite vague on the impact of the economic shock created by its collapse. DeLong is primarily concerned with stages 2 and 3, or in other words the impact of the economic shock. Further, he asserts that government intervention and monetary policy explain little about the shock.

Let’s take this point by point, first with regard to monetary policy. DeLong asks:

Are we supposed to believe that $200 billion of open-market purchases by the Fed drives private agents into making $8 trillion of privately unprofitable loans?

This is somewhat misleading. As our friend David Beckworth points out,

The absolute dollar size of the [open market purchase], however, is not important. What is important is whether these increases in liquidity were excessive relative to the demand for them. One only needs to look at the negative real federal funds rate that persisted over this period to see that these injections were excessive.

I think that Beckworth hits the nail on the head here. These injections were clearly excessive, as is evident from White’s chart in his Cato policy paper, in which he compares the actual federal funds rate to that which would be predicted by the Taylor Rule. Further, recent research has shown that low interest rates cause banks to lower their lending standards. These findings suggest that monetary policy played an important role in causing the economic shock.
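For readers unfamiliar with the benchmark, here is a minimal sketch of the kind of comparison White’s chart makes, using the coefficients from Taylor’s original (1993) specification. The inflation, output-gap, and funds-rate numbers below are hypothetical, purely for illustration; they are not White’s data.

```python
# A minimal sketch of a Taylor Rule comparison. Coefficients follow
# Taylor (1993); all data points below are hypothetical.

def taylor_rule_rate(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Prescribed nominal federal funds rate (percent) under Taylor (1993)."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Hypothetical early-2000s-style numbers, for illustration only.
inflation = 2.0      # percent, year over year
output_gap = -1.0    # percent of potential GDP
actual_ffr = 1.0     # percent, the actual federal funds rate target

prescribed = taylor_rule_rate(inflation, output_gap)
print(f"Taylor Rule prescribes {prescribed:.2f}%; actual rate was {actual_ffr:.2f}%")
# A persistent gap of this kind -- the actual rate well below the
# prescription -- is the sense in which the liquidity injections were
# "excessive" relative to the demand for them.
```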

This brings us to the second point in this discussion: did government intervention cause the housing bubble? I believe that the answer is both yes and no. I am on record as saying that Fannie and Freddie (see here and here) did not cause the crisis. In fact, if you read Stephen Cecchetti’s excellent discussion of the early part of the crisis, you will notice that private securitization of mortgage debt was growing much faster than that of the GSEs in the early part of this decade. Nonetheless, I believe that government policy did play a minor role in creating the housing boom (as I will discuss below).

As previously mentioned, monetary policy seems to have played a crucial role in the financial crisis. However, the fact that monetary policy stoked the fire says little about why all of this money flowed into housing. I think that there are two main culprits: (A) Securitization, and (B) Government policy; the former being a necessary condition for the latter to have a meaningful impact. Allow me to explain.

In private conversations with our friend Barry Ritholtz about these matters, he has challenged me to explain why the Community Reinvestment Act (CRA) did not create a boom (or crisis) from 1977 to 2002. This is a fair point and one that I think few (if any) have adequately addressed. What changed in recent years is that (i) the CRA received some teeth in 1995, (ii) the Federal Reserve lowered interest rates to historic lows for an extended period of time, and (iii) private securitization came into much wider use. Ultimately, I think that (ii) and (iii) were the most important factors in creating the economic shock, and that (i) played a minor role in that the other two factors facilitated compliance with government policy.

When government regulation is created, there is an immediate incentive to circumvent it. However, the use of securitization essentially made it easier for banks to comply with the CRA (by buying securitized mortgages that complied, or by issuing the mortgages themselves and selling them off as part of an ABS in the future). Thus far all we have are lower-bound estimates of the impact of the CRA on subprime loans, but this lower bound is decidedly not zero. As the link above indicates, a recent Fed study found that only about 8% of subprime loans can be correctly tied to the CRA. Nevertheless, as Lawrence White points out in that post, this ignores potential “demonstration” effects. In other words, once banks that are not required to comply with the CRA discover that other banks are making these loans somewhat successfully, they might be more inclined to enter the market to compete directly with these firms (this might explain why 75% of troubled mortgages originate from firms that are not required to comply with the CRA). In any event, it is unlikely that the percentage of subprime loans originating directly as a result of the CRA exceeds 20%, and the CRA must therefore be deemed a relatively small factor.

To summarize, I believe that monetary policy and the increased use of securitization are to blame for the creation of the economic shock and the subsequent chaos in its aftermath. Nonetheless, I think that government policy does play a minor role in explaining the creation of the shock.

Big Players and Uncertainty

There has been a great deal of discussion lately with regard to the ever-changing role of the Troubled Assets Relief Program (TARP). Initially, the program was designed to purchase the troubled assets of financial institutions in an attempt to cleanse their balance sheets and get them lending again. I have previously come out against this plan as it fails to take into account the limitations and dispersal of knowledge within markets. Perhaps this and other objections were heeded by the Treasury Department, which inexplicably abandoned this stated goal in favor of direct equity injections in troubled financial institutions. In the aftermath of this decision, little has been done to instill confidence in the financial markets, and the capital infusions have done little to increase lending by the recipient institutions. Given that many noted economists preferred capital infusions to the purchase of troubled financial assets, recent events raise the question of why this change in policy has been unsuccessful.

The answer can be found in Fairleigh Dickinson economist Roger Koppl’s theory of Big Players. Koppl defines a Big Player as a market participant that is substantially large, immune to the profit and loss mechanism, and wields ample discretionary power to have an impact on the market as a whole. Central banks are perhaps the clearest example of a Big Player, and this theory indeed might explain much about the artificial boom that preceded the current mess. However, the theory is perhaps more applicable in the aftermath of the boom. Since the onset of the crisis, the Federal Reserve and the Treasury Department have acted as Big Players. They continue to wield significant discretionary power, often taking unprecedented action and at times exercising unexpected restraint. In other words, to use a tired saying, they are flying by the seat of their pants. One need not look beyond the TARP for an understanding of the discretionary power of these entities.

The effect of this discretionary power is to increase uncertainty within the financial markets. Firms that receive capital infusions refuse to increase lending precisely because the rules are changing on a daily basis. The same goes for investors, who must predict not only what the market is going to do, but also the behavior of the Big Players. Of course, predicting what the Treasury and the Fed are going to do next is exceedingly difficult. The result is the herd-like behavior that has been prevalent in the stock market for the last few months. When there is a high level of uncertainty in markets, participants start relying more on what they believe that others believe than on the prospective yield of a particular investment. The empirical evidence presented by Koppl and his colleagues confirms these claims. Uncertainty breeds uncertainty.

Nevertheless, some pundits continue to press on.  The same individuals who advocated using capital infusions and who were surprised to find the institutions unwilling to lend are now advocating forcing the financial companies to lend. Markets function well when the surrounding institutional framework is sound.  If the government really wants to help, they can start by setting the rules now and following through on their promises.  So long as they continue to change the rules on a daily basis, uncertainty will prevail, the stock market will remain volatile, and the credit markets will remain frozen.  

Knowledge, Uncertainty, and the Paulson Plan

Like the crisis itself, the conversation surrounding the Paulson Plan has devolved into clichéd talking points, ideological posturing, and an utter inability to discuss the situation in an intelligent and coherent fashion. Financial and political pundits are either heralding the defeat of the plan or lamenting its demise as the trigger of another depression. Thus there is a great degree of uncertainty in the air, as many are left wondering whether the defeat of the Paulson Plan (or at least the 110-page House version of the plan) spells disaster. The short answer is “no.”

As I have previously mentioned, the problem in the financial markets is not a lack of liquidity, but rather counter-party risk. The essential problem with the financial industry is that firms need to raise capital but are unable to do so because other parties cannot accurately and adequately price the assets that the firms would like to sell. Accordingly, confidence must be restored in the market in order to get things moving again. The Paulson Plan, in its original form, would have allowed the government to purchase these assets at depressed prices, thereby allowing for the re-capitalization of the financials and allowing credit to flow. In addition, the stabilization of the housing and credit markets would allow firms and investors to more accurately price the assets in question. Thus, in theory, the government could buy low and sell high. However, the question remains as to whether the government can successfully execute this strategy. As our friend Thomas Palley explains:

The Paulson plan is subject to three fundamental criticisms. First, the Treasury may over-pay for assets, saddling taxpayers with large losses. If the Treasury sets its acceptable price too low, there is a risk it will buy insufficient assets and banks will not be cleansed. If it sets prices too high, the risk is Treasury overpays. Second, Treasury is taking a big risk as prices could fall further, yet it is not being properly rewarded for this risk-taking. That is tantamount to subsidizing banks which have created the mess. Third, markets may not provide finance even after Treasury’s purchases, in which case banks will remain undercapitalized.

The Paulson Plan therefore succumbs to the knowledge problem. If those within the market are unable to accurately price these assets, it is unlikely that a government agency could succeed in doing so. In an ergodic world in which risk is easily calculable, the Paulson Plan would likely be successful. However, if the events of the last year have been any indication, the world is non-ergodic and the risks of default associated with mortgage-backed securities and collateralized debt obligations do not necessarily follow identifiable (or even existing) probability distributions (roughly, what Nassim Taleb refers to as the Fourth Quadrant). In the absence of easily quantifiable risk, the Treasury leaves itself prone to setting prices too low or too high, thereby either failing to re-capitalize the financials or creating an exorbitant cost to taxpayers.

So while some are decrying the defeat of this bill as the beginning of something terrible, it seems prudent to at least take a step back and evaluate whether the plan could truly have been successful. I am not convinced.

A Final Word on Uncertainty

Gabriel responds in the comments to the previous post (my thoughts are in bold):

So, what are you _practically_ going to do about it? A stylized model that you might think is literally wrong is better than no model at all.

I staunchly disagree with this assertion.  To paraphrase Keynes, “I’d rather be somewhat correct than completely wrong.”  That point, however, is not as important as your first question.  Practically, there are many things that we can do about it.  The work of Roman Frydman and Michael Goldberg in Imperfect Knowledge Economics represents a step in the right direction, as does the work of the complexity theorists who are devising models with realistic expectations.

The world might be non-ergodic (I happen to think that it’s not, properly conceived, but whatever) but even if it isn’t, it might still make sense to model it as if it were.

I disagree here as well.  To quote Frydman and Goldberg (ibid, 4):

“To construct such models, which we refer to as fully predetermined, contemporary economists must fully prespecify how market participants alter their decisions and how resulting aggregate outcomes unfold over time.  By design, contemporary models rule out the importance of individual creativity in coping with inherently imperfect knowledge and unforeseen changes in the social context.”

Even if the world is ergodic, individuals do not possess the perfect knowledge assumed by rational expectations (a non-ergodic world would undoubtedly imply imperfect knowledge).  The a priori assumptions that we use in the current choice-theoretic framework are flawed.  This need not suggest that we abandon such modeling, but rather that we modify it to be more in touch with reality.

There are many economists who are trying to give meaning to uncertainty and imperfect knowledge in contemporary theory.  Ned Phelps, Axel Leijonhufvud, Robert Clower, Arthur Okun, Armen Alchian, and others formed the first wave of such theorists, and the effort has since spread to Roman Frydman, Michael Goldberg, Brian Arthur, Barkley Rosser, and countless others.  I would recommend reading Ned Phelps’s Nobel Prize lecture, specifically the sections on knowledge and uncertainty.

More on Radical Uncertainty

Gabriel Mihalache has criticized my views, and those of others, on radical uncertainty as follows:

Some people wrongly interpreted Caplan’s point as being one about markets, so they jumped at a chance to criticize a set of complete, contingent markets, but a) this is not about markets, but rather about agents; and b) neoclassical economics can be done with incomplete markets or no markets at all!

Contingent claim markets are used in models of representative agents, so I am not quite sure where this criticism fits. The problem that I have with contingent claim markets and the use of representative agents in general equilibrium theory is far too expansive for a blog post. Similarly, I do not want to get bogged down with other elements of GE theory.

First, I would point out that the world is non-ergodic (to use a term of Doug North, Paul Davidson, and others). As the quote from Keynes in my previous post, as well as the work of Schumpeter on creative destruction, indicates, there is no probability distribution that exists for invention, innovation, etc. Similarly, as Doug North points out, economists treat uncertainty (as defined in the Knightian sense of the word) as though it is a rare case, when in fact, “it has been the underlying condition responsible for the evolving structure of human organization throughout history and pre-history” (Understanding the Process of Economic Change, Douglass C. North, p. 14).

Thus, ignoring the misuse of uncertainty in the general equilibrium framework, let’s use the classical example of risk and uncertainty from microeconomics. An actuarially fair insurance premium would be such that:

Premium = p*L

where p is the probability of the event and L is the loss. (We can expand this to include a risk premium, but doing so would not change the analysis.) Of course, in reality, there are cases where both p and L are unknown. Suppose, for example, one wanted to purchase insurance against the risk of the price of a given commodity falling over an extended period of time. What is the likely price of that commodity 5 years hence? 3 years? 1 year? 3 months? What is the probability that the price will fall? As Keynes would say, “About these matters there is no scientific basis on which to form any calculable probability…”
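To make the distinction concrete, here is a minimal sketch of the formula above; the probabilities and loss figures are invented for illustration. Under risk, the fair premium is a single well-defined number; under Knightian uncertainty, even granting a plausible range for p leaves the premium indeterminate, and with no justifiable range at all the calculation cannot even begin.

```python
# A minimal sketch of risk vs. Knightian uncertainty in premium-setting.
# All numbers are hypothetical.

def fair_premium(p, loss):
    """Actuarially fair premium when the loss probability p is known."""
    return p * loss

# Risk: p and L are both known, so the premium is a single number.
print(fair_premium(0.01, 100_000))  # -> 1000.0

# Knightian uncertainty: p itself is unknown. If all we can say is that
# p lies somewhere in a (hypothetical) admissible range, the "fair"
# premium is a wide interval rather than a number.
admissible_p = (0.001, 0.20)
low, high = (fair_premium(p, 100_000) for p in admissible_p)
print(f"premium anywhere from {low:.0f} to {high:.0f}")  # -> 100 to 20000
```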

I am in no way trying to argue that models of risk and uncertainty should be abandoned. They are clearly useful in cases in which the probabilities and potential losses are explicitly known. However, we would do well to recognize that the world is not ergodic, and that modeling it as though it were, always and everywhere, is an impediment to our understanding of complex human interaction.

Radical Uncertainty

Bryan Caplan has issued a challenge:

Austrian economists often attack the mainstream for ignoring something they call “radical uncertainty,” “sheer ignorance,” or sometimes “Knightian uncertainty.” A common Austrian slogan is that “Neoclassical economists study only cases where people know that they don’t know; we study cases where people don’t know that they don’t know.”

All of this sounds plausible until you press the Austrian to do one of two things:

1. Explain his point using standard probability language. What probability does “don’t know that you don’t know” correspond to? Zero? But if people really assigned p=0 to an event, then the arrival of counter-evidence should make them think that they are delusional, not that a p=0 event has occurred.

2. Give a good concrete example.

Austrians (as well as Post Keynesians), I believe, are correct to criticize neoclassical theory in this manner. Neoclassical theory assumes a complete set of contingent-contract markets with an assigned probability for each anticipated state. This undoubtedly does not reflect reality, as there exist states for which no contract is traded. As Keynes explained in “The General Theory of Employment” in the QJE in 1937:

But at any given time facts and expectations were assumed [by the classical economists] to be given in a definite and calculable form; and risks, of which, though admitted, not much notice was taken, were supposed to be capable of an exact actuarial computation. The calculus of probability, though mention of it was kept in the background, was supposed to be capable of reducing uncertainty to the same calculable status as that of certainty itself.

Actually, however, we have, as a rule, only the vaguest idea of any but the most direct consequences of our acts … Thus the fact that our knowledge of the future is fluctuating, vague and uncertain, renders wealth a peculiarly unsuitable subject for the methods of the classical economic theory.

By uncertain knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is merely probable … The sense in which I am using the term is that in which the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention are uncertain. About these matters there is no scientific basis on which to form any calculable probability whatever. [Emphasis added.]

The infamous beauty contest described in the General Theory is also a particularly useful analogy for stock market activity and speculation. Of course, Keynes was overly pessimistic, in my view, about our ability to form meaningful expectations. Roger Koppl, for example, bridges the gap between Keynes and reality in Big Players and the Economic Theory of Expectations by discussing the emergence of planning horizons, in which each point in the future grows ever more uncertain; the more distant the period, the more open-ended one’s expectations must become. Nevertheless, Keynes’ views on probability theory and economics are much more grounded in reality than the Arrow-Debreu markets for contingent claims.
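One stylized way to see the planning-horizon intuition (my illustration, not Koppl’s own formalism) is the forecast variance of a random walk, which grows linearly with the horizon, so that forecast intervals fan out the further ahead one looks:

```python
# A stylized illustration (not Koppl's formalism): for a random walk
# x_t = x_{t-1} + e_t with e_t ~ N(0, sigma^2), the k-step-ahead forecast
# standard deviation is sigma * sqrt(k). Uncertainty widens with the
# planning horizon, so expectations must grow correspondingly open-ended.
import math

sigma = 1.0  # hypothetical one-period shock standard deviation
for horizon in (1, 4, 16, 64):
    print(f"{horizon:3d} periods ahead: forecast std = {sigma * math.sqrt(horizon):.1f}")
```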

Perhaps ironically, Keynes’ views on uncertainty are greatly complemented by the work of F.A. Hayek. Whereas Keynes explicitly laid out a vision of why things go wrong, Hayek countered (although not directly) by explaining how things could go right. Hayek’s work on economics and knowledge (here and here, for example) details how, even in the presence of uncertainty and dispersed knowledge, markets serve to coordinate behavior and produce efficient outcomes. Similarly, Hayek’s writing on expectations details how an individual’s views evolve over time and adjust in response to confirmation (or lack thereof) of expectations. Overall, the market provides signals through prices as well as through the profit and loss mechanism, and individuals are therefore able to evaluate their expectations and evolve accordingly. Thus, Keynes provides the outline of the radical uncertainty that individuals face, and Hayek explains how individuals are able to cope with and overcome said uncertainty. As I have stated previously, this is a much better description of reality than Arrow-Debreu contingent claims.

As to Bryan’s questions: in assigning probabilities (p = x, for example) for events that people don’t know that they don’t know, it is irrelevant what value x takes on, so long as expectations are proven grossly incorrect ex post or the probability of such an event precludes the existence of a contingent contract for that event. Had one posed a question on September 10, 2001 regarding the probability of a terrorist attack the following day, the mean probability would undoubtedly not have been equal to 1 (it would likely have been less than 0.01), and I would venture to guess that one would not have received a single response of 100%. The same goes for Tyler Cowen’s example of the arrival of the Spaniards.
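On Bryan’s first question specifically, the p = 0 case can be seen mechanically in Bayes’ rule: a prior of exactly zero is absorbing, and no amount of evidence can revive it, which is precisely why an agent who truly held p = 0 would have to conclude delusion rather than occurrence. A minimal sketch, with hypothetical likelihood values:

```python
# A minimal sketch of the p = 0 point in Caplan's challenge, via Bayes'
# rule. The likelihood values are hypothetical, for illustration only.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(event | evidence) by Bayes' rule."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Evidence strongly favoring the event (hypothetical likelihoods).
like_true, like_false = 0.99, 0.01

print(posterior(0.0, like_true, like_false))   # 0.0 -- a zero prior never updates
print(posterior(1e-9, like_true, like_false))  # ~1e-7 -- a tiny positive prior does
# An agent who truly assigned p = 0 would thus have to conclude they were
# delusional rather than that the event had occurred.
```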