A post over at the New York Times argues that one of the main causes of the financial crisis was inadequate quantitative models – models that tended to understate risk because they failed to represent the way the world actually works, incorporating neither risks such as a failure of liquidity nor the complexities of human behaviour.
I certainly agree that the current stable of models in widespread use is inadequate: the competitive market has made the spreads on trades so tight that there is no longer any buffer to cover the models' many shortfalls. Back when vanilla options were an exotic trade, traders would build plenty of fat into their option prices. Intense competition, a market that has steadily grown over the past 20 years (notwithstanding small glitches), and increased familiarity with the trades have served to camouflage the risks traders were running in their options books.
Quantitative Finance is a very young science. I remember that back in the early 90s, the Black-Scholes model was the height of financial sophistication. I did not start working with finite difference methods for valuing American equity options until 1994. It was only after Barings collapsed in 1995 that regulators started forcing banks to calculate VaR figures and allocate capital based on their calculated risk numbers. How far we have come in the last 20 years!
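To give a sense of just how simple that "height of sophistication" was, here is a minimal sketch of the standard Black-Scholes closed-form price for a European call; the function name and parameters are my own illustrative choices, not anything from the post:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualised volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf  # standard normal CDF
    return S * N(d1) - K * exp(-r * T) * N(d2)
```

A handful of lines and a normal CDF – a far cry from the finite-difference grids needed for American-style exercise, let alone jump or stochastic-volatility models.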
Of course, nearly all the models created in the early days of finance were based on the assumption that returns are normally distributed. Unfortunately, this assumption does not correspond to observed reality. Models that do try to reflect the observed distribution end up incorporating jumps or stochastic volatility. And once we drop the assumption of normally distributed returns, we also lose correlation as an adequate measure of the relationship between market factors. Suddenly we are in a whole world of quantitative pain.
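The gap between the normal assumption and fat-tailed reality is easy to demonstrate. The toy simulation below (my own sketch; the 1% jump frequency and volatilities are arbitrary placeholders) mixes a rare jump into an otherwise Gaussian return series and counts 4-sigma days, which a true normal distribution says should almost never happen:

```python
import random
import statistics

random.seed(42)
N = 100_000

# Purely Gaussian daily returns with 1% volatility
normal_returns = [random.gauss(0, 0.01) for _ in range(N)]

# Same diffusion, plus a rare large jump on ~1% of days
jump_returns = [
    random.gauss(0, 0.01) + (random.gauss(0, 0.05) if random.random() < 0.01 else 0.0)
    for _ in range(N)
]

def frac_beyond(returns, k):
    """Fraction of returns further than k sample standard deviations from zero."""
    sd = statistics.pstdev(returns)
    return sum(abs(r) > k * sd for r in returns) / len(returns)

# Under a true Gaussian, the 4-sigma exceedance rate is about 6e-5;
# the jump series produces far more such days despite similar overall variance.
gauss_tail = frac_beyond(normal_returns, 4)
fat_tail = frac_beyond(jump_returns, 4)
```

A risk model calibrated to the Gaussian series would badly understate how often the jump series blows through its limits – exactly the kind of understated risk the post is complaining about.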
There are also practical obstacles: we have neither the sheer computational power nor the software to implement the latest financial models. Curiously, the models that end up in widespread use are not the most sophisticated ones available, but whichever models sit right at the limits of computation, software and trader/quant understanding.
It’s interesting to contemplate how much harder finance will become with far more complex models. We are already farming our risk calculations out onto large grids of computers. Advances in leveraging graphics cards to create cheap supercomputers are arriving just in time, and are a hot topic in finance. Traders are demanding real-time risk figures (and so they should), yet it is already hard to reliably provide these numbers with the simple models in vogue. It will be much, much harder as models increase in complexity.
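The farm-out pattern itself is simple enough to sketch. The toy example below uses threads in place of a real compute grid, and a flat Gaussian P&L model in place of a real portfolio revaluation – both are placeholder assumptions of mine, chosen only to show the shape of the workflow: split the Monte Carlo paths into chunks, run the chunks in parallel, then pool the results and read off a quantile:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def simulate_pnl_chunk(n_paths, seed, vol=0.02):
    """Simulate one chunk of one-day portfolio P&L outcomes.

    A real grid job would revalue the whole portfolio per path; here the
    P&L is a stand-in Gaussian draw with 2% daily volatility.
    """
    rng = random.Random(seed)
    return [rng.gauss(0, vol) for _ in range(n_paths)]

def parallel_var(total_paths=100_000, workers=4, confidence=0.99):
    """Farm Monte Carlo chunks out to workers, then take the loss quantile.

    Threads stand in for grid nodes; a production system would distribute
    the chunks across machines and aggregate the results the same way.
    """
    chunk = total_paths // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(simulate_pnl_chunk, [chunk] * workers, range(workers))
    pnl = sorted(p for chunk_pnl in results for p in chunk_pnl)
    # VaR at 99%: the loss exceeded on only 1% of simulated paths
    return -pnl[int((1 - confidence) * len(pnl))]

var99 = parallel_var()
```

Even this embarrassingly parallel case hints at the real-time problem: a full portfolio revaluation per path, per risk factor bump, multiplies the work by orders of magnitude, which is exactly why grids and GPU supercomputers are in such demand.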