All About Alpha, which is a fine site for all things hedge fund related, has an excellent piece today by Dr. William Shadwick of Omega Analysis. Shadwick has the unusual distinction of being a serious mathematician (he established the Fields Institute for Research in Mathematical Sciences) who writes well. He also entered the finance industry relatively late in his professional life, which enables him to take a more detached view.
The piece does a nice job of cataloguing some of the major failings in models that show up as embedded assumptions. What is perhaps particularly interesting to the layperson is that some of the recognized problems (such as models that presuppose that distributions are normal) often go uncorrected. In addition, some of the modifications (allowing for skewness, or asymmetrical distributions, and kurtosis, or fat tails) are unreliable in practice. Practitioners typically have too few data points to make solid assessments, and the model outputs are highly sensitive to how the distributions are tweaked.
From AllAboutAlpha:
The title of this piece comes from a joke about a “highly qualified” financial engineer’s reaction to a well-proven trading strategy. It illustrates the tension between theory and practice that we have all seen as quantitative methods become ever more common at trading desks and in investment management.
In general, the rise of quantitative tools in finance has been highly beneficial but the widespread use of models has been a decidedly mixed blessing. In science, the constant development of theories expressed as mathematical models to be tested, rejected, confirmed or refined through observation and experiment is the main source of progress in our understanding of the physical world. This process is also crucial for engineering and technology where it is the key to predicting future events and controlling them to our advantage.
It was inevitable that this paradigm would eventually be adopted in economics and finance too. In the half century since Markowitz put portfolio construction on a quantitative footing there has been a steady growth in the use of increasingly sophisticated and complex models and statistical techniques in investment management.
The nature of the financial markets is such that this growth in models has not been accompanied by the sort of testing that science demands. Finance academics simply cannot perform experiments like those upon which the sciences rely, and they are also severely constrained in the type of observations they can make. Data and information about what goes on in reality (as opposed to theory) are, and are likely to remain, in very short supply in comparison to the sciences.
The end users of the models in finance are not intent on understanding how markets may be explained. That is the goal of academic research. Instead, they want to employ theory and models to produce profits. Like physical engineering, which has had its share of collapsing bridges, financial engineering has therefore led to many accidents. For example, the current mess in the credit markets would not have been possible without the extensive and inappropriate use of “sophisticated” models. The results of mis-priced risk have now been cascading through the financial system for several months and show no sign of abating.
Assumptions = “Hidden Models”
A mathematical model can be thought of as a process that states:
“If assumption A is satisfied, then input of B is guaranteed to be followed by output of C.”
A “robust” model is one in which:
“If A is close enough to being satisfied and the input is close enough to B then a result close to C is guaranteed.”
The most basic requirement for the use of mathematical models – in any context – is that they are appropriate for the job. The model tells you nothing about what happens if assumption A is not even close to being satisfied. In this case, no matter how diligently one applies the model with the expectation of getting from B to C, the process cannot be trusted. If the model is not robust, the difference in output may even be very dangerous.
Hidden Model: Return distributions are “independent and identically distributed”
The increased use of quantitative methods means that models are now almost ubiquitous and are often present but hidden. For example, almost every hedge fund investor or manager has used the “square root of 12 rule” to produce an annualized volatility figure from a sample of monthly returns. But how many people remember to check the assumption upon which the rule is based? The returns must be independent draws from the same distribution (i.i.d., or “independent and identically distributed”) for this rule to be justified.
This is an example of a “hidden model”. It is not the returns of a hedge fund that we are talking about but the model of returns of a hedge fund. Does anybody really believe that hedge fund returns have no autocorrelation? Does anybody really believe that there is an unchanging distribution from which the returns are drawn?
Returns on hedge fund investments or stock market prices are not random variables. However the extent of apparent randomness in their behavior means that statistical tools are most appropriate for describing them and for making predictions of the future. In the case of the “square root of 12 rule”, the prediction is the annual volatility expected over many years. While nobody would feel that a sample of 3 annual returns would merit the calculation of an annual volatility, we’re happy to use a sample of 36 monthly returns and the model of returns as i.i.d. random variables to make the prediction.
The danger in such an assumption is that it can easily underestimate the true annual volatility of returns. This may produce serious strains on an investment program because the path by which the NAV goes from its initial value to its value 5 or 10 years later often matters a great deal. It matters to a manager who may spend significant time without receiving a performance fee after a large loss or a series of smaller ones. It matters to the investor who requires some of the proceeds of his investment for income during the period. This is, of course, the reason for wanting an estimate of annual volatility in the first place. (It is difficult to find an example of an investor who only needs to know that his investment NAV will rise “in the long term” while being unable to count on using the proceeds at any intervening time before the long term – when, as Keynes said, “we’re all dead.”)
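The failure mode described above can be sketched in a few lines of Python. This is an illustrative simulation only (all parameters are hypothetical), showing that the square-root-of-12 rule is accurate when monthly returns really are i.i.d., but underestimates realized annual volatility once returns have even modest positive autocorrelation:

```python
import math
import random
import statistics

def sqrt12_rule(monthly_returns):
    """The 'square root of 12 rule': monthly sample vol scaled up to annual."""
    return statistics.stdev(monthly_returns) * math.sqrt(12)

def simulate(phi, n_years=20000, sigma=0.02, seed=42):
    """AR(1) monthly returns r_t = phi * r_{t-1} + eps_t.
    phi = 0 gives i.i.d. returns; phi > 0 adds positive autocorrelation."""
    rng = random.Random(seed)
    r, months = 0.0, []
    for _ in range(12 * n_years):
        r = phi * r + rng.gauss(0.0, sigma)
        months.append(r)
    # Realized annual volatility: std dev of non-overlapping 12-month sums.
    annual = [sum(months[i:i + 12]) for i in range(0, len(months), 12)]
    return sqrt12_rule(months), statistics.stdev(annual)

rule_iid, true_iid = simulate(phi=0.0)  # rule and realized vol agree
rule_ar, true_ar = simulate(phi=0.3)    # rule underestimates realized vol
print(f"i.i.d.: rule {rule_iid:.4f} vs realized {true_iid:.4f}")
print(f"AR(1):  rule {rule_ar:.4f} vs realized {true_ar:.4f}")
```

With phi = 0.3 the rule understates realized annual volatility by roughly a quarter, which is exactly the sort of hidden-model error the author is describing.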
Hidden Model: Returns are normally distributed
There are more dangerous assumptions than returns being independent and identically distributed. One might also assume that they were normally distributed. Probably everyone has heard the “black swans” argument about the importance of extreme events in markets and Mandelbrot and Taleb’s attacks on the reliance on normal distributions in finance theory.
I think they have greatly overestimated the number of academics who haven’t yet noticed that market returns aren’t normal. However there is no doubt that the persistence of press and industry descriptions of large market losses in terms of standard deviations (and ascribing an extremely low probability to such an event in consequence) indicates a widespread hidden assumption of normality.
This is dangerous for the obvious reason that it can lead to a feeling of safety where none exists. If you know that there is a 1 in 10 chance of a catastrophic loss instead of believing the chance to be 1 in 1000, the expected return you require for taking such a risk will be very different. There is no doubt that many of the estimates of loss responsible for the sub-prime debacle required exactly this sort of mis-pricing.
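To make the mis-pricing concrete, here is a small stdlib-only Python sketch (a Student-t with 4 degrees of freedom stands in for a generic fat-tailed distribution; the choice is illustrative, not from the article). It compares the probability that each model assigns to the same five-standard-deviation loss:

```python
import math

def t_pdf(x, nu):
    """Student-t density with nu degrees of freedom."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1.0 + x * x / nu) ** (-(nu + 1) / 2)

def t_tail(threshold, nu, upper=200.0, n=200_000):
    """P(T > threshold) by trapezoidal integration of the density."""
    h = (upper - threshold) / n
    s = 0.5 * (t_pdf(threshold, nu) + t_pdf(upper, nu))
    for i in range(1, n):
        s += t_pdf(threshold + i * h, nu)
    return s * h

# A 5-standard-deviation move under each model.  The t(4) has variance
# nu / (nu - 2) = 2, so 5 of *its* standard deviations is 5 * sqrt(2) raw units.
p_normal = 0.5 * math.erfc(5 / math.sqrt(2))
p_fat = t_tail(5 * math.sqrt(2), nu=4)
print(f"normal model:     {p_normal:.2e}")
print(f"fat-tailed model: {p_fat:.2e}  ({p_fat / p_normal:.0f}x more likely)")
```

The fat-tailed model rates the same “five-sigma” loss thousands of times more likely than the normal model does, which is why describing crashes in standard deviations under an assumed normal is so misleading.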
Hidden Model: Standard deviation is a proxy for risk
In great part, these dangers are a consequence of another hidden model, namely the use of standard deviation of returns as a proxy for risk. The realization that this model of risk is especially dangerous when applied to hedge funds has led both academics and finance practitioners to make use of skewness and kurtosis in an attempt at more “sophisticated” modeling of risk.
Skewness is intended to model asymmetry – the mismatch of upside and downside risk. Kurtosis is intended to model the likelihood of extreme events or “fat tails”. Of course, certain assumptions must be satisfied for these models to perform as intended. Dangers introduced by relying on these metrics are compounded by the great sensitivity of skewness and kurtosis to (even moderate) outliers. These are not statistics meant for small samples.
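A short Python sketch of that sensitivity, using simulated data with hypothetical parameters: a single bad month dropped into a typical 36-month track record moves both statistics dramatically.

```python
import random

def sample_skew_kurt(xs):
    """Sample skewness and (non-excess) kurtosis from central moments."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

# A 36-month track record of unremarkable returns (mean 1%, vol 2%)...
random.seed(7)
months = [random.gauss(0.01, 0.02) for _ in range(36)]
skew_a, kurt_a = sample_skew_kurt(months)

# ...and the same record with a single -10% month swapped in.
skew_b, kurt_b = sample_skew_kurt(months[:-1] + [-0.10])
print(f"without outlier:  skew {skew_a:+.2f}, kurtosis {kurt_a:.2f}")
print(f"with one outlier: skew {skew_b:+.2f}, kurtosis {kurt_b:.2f}")
```

One observation out of thirty-six swings the skewness strongly negative and multiplies the kurtosis, which is the sense in which these are not statistics meant for small samples.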
Hidden assumptions in hedge fund replication
The extent to which distributional replicators will succeed in reproducing hedge fund returns will depend on the extent to which the “noisiness” of skewness and kurtosis can be managed. An even more critical assumption (underpinning distributional replication) is that distributions with the same mean, variance, skewness and kurtosis must be very similar. This is not true in general. So replicators must depend on this assumption being satisfied – at least approximately – for the distributions that matter to them.
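The point that matched moments do not imply similar distributions can be made with a toy construction (hypothetical, not any replicator’s actual method): a three-point distribution whose first four moments match the standard normal’s exactly, yet which is bounded and so can never produce a large loss at all.

```python
import math

# X takes values {-sqrt(3), 0, +sqrt(3)} with probabilities (1/6, 2/3, 1/6).
points = [(-math.sqrt(3), 1 / 6), (0.0, 2 / 3), (math.sqrt(3), 1 / 6)]

def moment(k):
    """k-th raw moment of the discrete distribution."""
    return sum(p * x ** k for x, p in points)

mean = moment(1)                   # 0 -- matches N(0, 1)
variance = moment(2) - mean ** 2   # 1 -- matches N(0, 1)
skewness = moment(3)               # 0 -- matches N(0, 1)
kurtosis = moment(4)               # 3 -- matches N(0, 1)
print(mean, variance, skewness, kurtosis)

# Yet X never exceeds sqrt(3) (about 1.73 "sigmas"), while the normal's tails
# extend forever: the same four moments, utterly different tail risk.
```

Mean, variance, skewness and kurtosis all agree with the standard normal, but the two distributions could hardly behave more differently in the tails.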
The use of standard deviation to describe risk is also an essential part of the risk-factor approach to hedge fund replication. In fact, the term risk-factor itself equates risk with standard deviation of returns. In this case, the (linear regression) model could be said to be “hidden in plain sight”, but it is no more easily remembered for that.
Does everyone who uses the term “alpha” really mean it to be interchangeable with an artifact of a particular model of returns? For that matter, does everyone accept the hidden model in the statistician’s use of the word “explain” when he says that “certain risk-factors explain some percentage of hedge fund returns?”
It sounds rather different if he instead says that he has a model which (while nothing can be known about its actual similarity to a particular investment strategy or indeed how likely its assumptions are to be satisfied), manages to approximately reproduce the strategy’s mean, standard deviation, and correlation with a number of financial indices.
Bottom Line: Hidden assumptions should give rise to “health warnings” on quantitative models
Models are everywhere in quantitative finance but it is almost impossible to find any attendant statements regarding the assumptions upon which they are based. Their purveyors should issue “health warnings” that tell the user that hidden assumptions are present and that failing to check that the assumptions are valid may be dangerous to investment health. It is essential that we recognize the difference between finance and science. In science, increasingly sophisticated mathematical techniques always produce better results over time. But this need not be the case in finance. Nevertheless finance can and should aspire to the status of an engineering discipline.
While you are unlikely to find health warnings on financial models any time soon, there are a few simple principles which can reduce the danger they present:
It is far more important to look to simplicity (and common sense) than it is to look to increasing complexity as a means to better control investment outcome.
A model whose robustness is unknown or unknowable should never be employed.
Sophisticated tools should only be used if it is possible to verify that all required assumptions are satisfied (at least to a good approximation). When this condition can be met, a simple application of a sophisticated technique is preferable to a complicated one.
Keeping these in mind will reduce the risk that financial models may pose to your investment health!
And how many people heed the health warnings on just about anything else?
One of the main problems is that the models can and do drive the market, disengaging it from reality.
And, before anyone says arbitrage, I’d like to point out that the arbitrage would then work only if you had unlimited funds. Which is why bubbles are hard to arbitrage away.
Government Wealth Warning: Investing May Seriously Damage Your Wealth.
Biggest assumption:
heads: get a fat bonus,
tails: get a new job.
Model that!
I do this cr@p for a living, and unlike the good professor, I don’t get high on my own supply.
Not being in the financial world myself, I’m curious to what extent finance types have backgrounds in statistics. While you hear about plenty of physics and math PhDs entering finance, I’m not sure their training includes much statistics, to say nothing of the undergrad econ major or newly graduated MBA.
It seems that many of the mistakes being made in the models are actually pretty elementary rules in statistics. It’s a little frightening that people making billion dollar models don’t understand the appropriate use of normal distributions. Anyone in the industry willing to offer any insight?
In a way, it is a result of the scientific background. In physics, at least, the number of analytically solvable models is limited. A physicist would prefer to use such a model, even if it’s just a rough approximation. The analytical treatment provides, in many cases, insight into the problem.
The problem arises when science must become engineering. Then a nice model is not enough; it must also be close to reality.
But what can you do when your tools can only be applied to an inaccurate model? You try to patch it and hope for the best.
I expect that the mathematical models function in the same way as clothes for the salesman, to make a presentable case. In the old days you could make a sale by saying “the great JP Morgan likes this stock”; these days, since we are more sophisticated, we need to be sold on the Ito calculus and Wiener processes. But, as I think Mr Taleb implied, there isn’t a whole lot more to financial modelling than the random walk, which one can handle with the binomial theorem. So we have a situation where the mathematicians provide cover and intimidation for the industry with incomprehensible stuff while the other guys clean up. Add to that the evident corruption and cynicism among financial operators and you have stoked up a huge ponzi scheme.
Ivan