I am reading a very useful primer, “Cognitive biases potentially affecting judgment of global risks,” by Eliezer Yudkowsky, one of the contributors to the blog Overcoming Bias (we made use of one of his posts yesterday). He focuses on existential risk, meaning risks to human existence. Since many people would regard an economic collapse as The End of Life as We Know It, his area of expertise has elements in common with the study of financial risk and market failure.
His article does not claim to be exhaustive, but it gives a good layman’s description of some major types of cognitive biases. While all the topics are useful knowledge (and it’s sobering to realize how poorly we humans integrate logic into our decision processes), a couple of sections jumped out as being particularly relevant to the events of the last few weeks.
One was on the “availability heuristic,” which says that people tend to judge the likelihood of an event by the ease with which they can bring it to mind. Thus, subjects will almost without exception say that homicides are more frequent than suicides, when the reverse is true, because murders are reported obsessively in the press and are also a plot driver in novels and movies.
Availability skews the assessment of large-scale risk:
People refuse to buy flood insurance even when it is heavily subsidized and priced far below an actuarially fair value. Kunreuther et al. (1993) suggest underreaction to threats of flooding may arise from “the inability of individuals to conceptualize floods that have never occurred… Men on flood plains appear to be very much prisoners of their experience… Recently experienced floods appear to set an upward bound to the size of loss with which managers believe they ought to be concerned.” Burton et al. (1978) report that when dams and levees are built, they reduce the frequency of floods, and thus apparently create a false sense of security, leading to reduced precautions. While building dams decreases the frequency of floods, damage per flood is so much greater afterward that the average yearly damage increases.

It seems that people do not extrapolate from experienced small hazards to a possibility of large risks; rather, the past experience of small hazards sets a perceived upper bound on risks. A society well-protected against minor hazards will take no action against major risks (building on flood plains once the regular minor floods are eliminated). A society subject to regular minor hazards will treat those minor hazards as an upper bound on the size of the risks (guarding against regular minor floods but not occasional major floods).
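The arithmetic behind that last point is worth making explicit. Here is a stylized sketch (the flood frequencies and damage figures are invented for illustration, not drawn from Burton et al.):

```python
# Stylized, invented numbers: dams make floods rarer, but the floods
# that do overtop them are far more damaging.
floods_per_decade_before, damage_per_flood_before = 10, 1.0  # $ millions
floods_per_decade_after, damage_per_flood_after = 1, 15.0

avg_yearly_before = floods_per_decade_before * damage_per_flood_before / 10
avg_yearly_after = floods_per_decade_after * damage_per_flood_after / 10

print(avg_yearly_before)  # 1.0 ($ millions per year)
print(avg_yearly_after)   # 1.5 ($ millions per year): rarer floods, higher expected loss
```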
In a similar vein, Nobel Prize winner and LTCM alum Robert Merton made this observation about the supposed advantages of risk dispersion:
[I]f you invent an advanced braking system for a car, it can reduce road accidents – but it only works if drivers do not react by driving faster.
Another important type of bias is overconfidence:
Suppose I ask you for your best guess as to an uncertain quantity, such as the number of “Physicians and Surgeons” listed in the Yellow Pages of the Boston phone directory, or total U.S. egg production in millions. You will generate some value, which surely will not be exactly correct; the true value will be more or less than your guess. Next I ask you to name a lower bound such that you are 99% confident that the true value lies above this bound, and an upper bound such that you are 99% confident the true value lies beneath this bound. These two bounds form your 98% confidence interval. If you are well-calibrated, then on a test with one hundred such questions, around 2 questions will have answers that fall outside your 98% confidence interval.
Alpert and Raiffa (1982) asked subjects a collective total of 1000 general-knowledge questions like those described above; 426 of the true values lay outside the subjects’ 98% confidence intervals. If the subjects were properly calibrated there would have been approximately 20 surprises. Put another way: Events to which subjects assigned a probability of 2% happened 42.6% of the time.
Humans seem hard-wired to underestimate tail risk. Even experts make the same mistake of setting confidence intervals too tight on questions within their own area of expertise.
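To see how quickly modest overconfidence blows up the surprise rate, here is a minimal simulation (assumptions: normally distributed errors, and a forecaster whose stated 98% intervals are narrower than they should be; the shrinkage factors below are invented for illustration):

```python
import random

# z-score putting 99% of a standard normal below it, so +/- Z99
# brackets a true 98% confidence interval.
Z99 = 2.326

def surprise_rate(n_questions=100_000, shrinkage=1.0):
    """Fraction of true values falling outside stated 98% intervals.

    shrinkage=1.0 is a perfectly calibrated forecaster; smaller values
    mean the stated intervals are too tight (overconfidence).
    """
    half_width = Z99 * shrinkage
    surprises = sum(
        1 for _ in range(n_questions) if abs(random.gauss(0, 1)) > half_width
    )
    return surprises / n_questions

print(f"calibrated:            {surprise_rate(shrinkage=1.0):.1%}")   # ~2%
print(f"intervals 35% as wide: {surprise_rate(shrinkage=0.35):.1%}")  # ~42%, Alpert-Raiffa territory
```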
And then there is the simple danger of not knowing what you don’t know:
[S]omeone… should also know how terribly dangerous it is to have an answer in your mind before you finish asking the question… [R]emember the reply of Enrico Fermi to Leo Szilard’s proposal that a fission chain reaction could be used to build nuclear weapons. (The reply was “Nuts!” – Fermi considered the possibility so remote as to not be worth investigating.)… [R]emember the history of errors in physics calculations: the Castle Bravo nuclear test that produced a 15-megaton explosion, instead of 4 to 8, because of an unconsidered reaction in lithium-7: They correctly solved the wrong equation, failed to think of all the terms that needed to be included, and at least one person in the expanded fallout radius died… [R]emember Lord Kelvin’s careful proof, using multiple, independent quantitative calculations from well-established theories, that the Earth could not possibly have existed for so much as forty million years…
[W]hen an expert says the probability is “a million to one” without using actuarial data or calculations from a precise, precisely confirmed model, the calibration is probably more like twenty to one (though this is not an exact conversion).
More good stuff here, along with a bibliography and recommended reading.
On cognitive bias: why would one institution value an instrument differently than another (and thus produce an active market) because it finances itself through short-term paper and/or does not need to carry the instrument on its balance sheet?
Let me address the first part of your question: why would one institution value a security differently than another? Off the top of my head, I can come up with a few reasons. One is difference in objectives. Defined benefit pension funds are supposed to manage their assets to produce returns that meet expected retirement payouts. Thus they are long-term investors with fairly modest return objectives (their recent behavior notwithstanding). They place a bigger premium than other investors on Not Screwing Up, and many are very dependent on fund consultants (many of which use methodologies that I regard with some skepticism, the biggest being style diversification). So conformity to conventional investment thinking is also important to them.
By contrast, hedge funds have short-term objectives and typically use leverage. They have an incentive to take big risks: they get big upside fees if they are right, and in the vast majority of cases, no downside if they screw up (very few funds have clawbacks in the case of losses).
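That payoff asymmetry can be made concrete with a toy calculation (the fee rate and P&L figures are invented; this is a sketch of the incentive, not of any actual fund’s terms):

```python
# Hypothetical 20% performance fee with no clawback: the manager keeps a
# share of gains but bears none of the losses, a convex payoff that
# rewards variance.
def expected_fee(outcomes, fee_rate=0.20):
    """Average fee across equally likely P&L outcomes ($ millions)."""
    return sum(max(0.0, pnl) * fee_rate for pnl in outcomes) / len(outcomes)

safe = [5, 5]      # +5 either way: same expected P&L...
risky = [40, -30]  # ...but far more variance

print(expected_fee(safe))   # 1.0
print(expected_fee(risky))  # 4.0: the riskier book pays the manager 4x
```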
Similarly, pension funds and banks, for different regulatory reasons, like AAA paper. There is more demand for AAA paper than supply. Hence the creation of supposed AAA paper by securitization and tranching.
Basically, investors have different objectives and constraints. And they may also have different beliefs. Studies of propaganda in different countries have shown how quickly opinion can change based on media coverage. My perception is that coverage of the financial markets in the UK is much more jaundiced than here. There are doubtless other subtle and perhaps gross differences in coverage across countries (as well as further differences in regulations) that would lead to differences in investor behavior and hence how they might value the same security.
I think this means you are indicating the pricing differences are mostly (say 98%) driven by regulation, objectives, incentives, and financial structuring, and only 2% by cognitive issues.
Are these guys searching for risk and valuation anomalies in the wrong places? The “market,” to the extent it can be a “thinking unit,” may suffer from episodes of cognitive dissonance, so to speak. Markets can underestimate risks due to irrationality, but there is no reason they should do this differently at different points in time. Rather, they take on risk because it is in their rational interest to do so.
When the market turns and their bets are wrong, they have no excuse, nor should they be excused.