I hate beating up on Gillian Tett, because even a writer as clever as she is is ultimately no better than her sources, and she seems to be spending too much time with the wrong sort of technocrats.
Her latest piece correctly decries the fact that no one has the foggiest idea of what might have happened if Greece had defaulted (note that we are likely to revisit this issue in the not-too-distant future). But she makes the mistake of assuming the problem could have been solved (in the mathematical sense, that the outcome could have been predicted with some precision) by having better data. That is a considerable and unwarranted logical leap:
Today banks and other financial institutions are filing far more detailed reports on those repo and credit derivatives trades, and regulators are exchanging that information between themselves. Meanwhile, in Washington a new body – the Office of Financial Research (OFR) – has been established to monitor those data flows and in July US regulators will take another important step forward when they start receiving detailed, timely trading data from hedge funds, for the first time.
But there is a catch: although these reports are now flooding in, what is still critically unclear is whether the regulators – or anybody else – has the resources and incentives to use that data properly. The bitter irony is that this information tsunami is hitting just as institutions such as the Securities and Exchange Commission are seeing their resources squeezed; getting the all-important brain power – or the computers – to crunch those numbers is getting harder by the day.
That means that important data – on Greece, or anything else – could end up languishing in dark corners of cyberspace. That is a profound pity in every sense. After all, if the data could be properly deployed, it might do wonders to show how the modern global financial system really works (or not, in the eurozone.) Conversely, if data ends up partly unused, that not just creates a pointless cost for banks and asset managers – but could also expose government agencies to future political and legal risk, if it ever emerges in a future crisis that data had been ignored.
Since important information about the last crisis has been given short shrift, it’s a given that more data won’t necessarily yield a commensurate increase in understanding. We’ve lamented how, for instance, a critically important BIS paper debunking the role of the savings glut in the crisis and the use of the “natural” rate of interest in economic models has largely been ignored. Similarly, from what we can tell, there is perilously little understanding of how heavily synthetic and purely synthetic CDOs turned a US housing bubble that would have died a natural death in 2005 into a global financial crisis.
And Tett’s focus on “data”, no doubt reflecting the preoccupation of the officialdom, is a big tell. Economists routinely exhibit “drunk under the streetlight” syndrome: they prize analyzing big datasets, aren’t good at developing them (this was a huge beef of Nobel Prize winner Wassily Leontief), and are pretty bad at doing qualitative research (they’d rather do thought experiments, and even when they undertake survey research, the resulting studies have strong hallmarks of a failure to do proper development and validation of the survey instrument).
Now, to the prospects for performing diagnostics and preventing the next crisis. On the one hand, it is a disgrace that the authorities didn’t have a good grip on who was on the wrong side of Greek credit default swaps. The US banks were thought to be reasonably exposed; that’s one reason Treasury Secretary Geithner was unduly interested in this situation (recall how Geithner intervened, in what was seen as decisive fashion, against an Irish effort to haircut €30 billion of unguaranteed bonds?). This is inexcusable, particularly in the wake of the financial crisis. We’ve harped on the fact that the likely reason Bear was bailed out was its credit default swap exposures. At the time of the Bear failure, Lehman, UBS, and Merrill were seen as next. The authorities went into Mission Accomplished mode rather than putting on a full-bore, international effort to get to the bottom of CDS exposures. And the Greek affair suggests they’ve continued to sit on their hands.
This matters because, as Lisa Pollack illustrated in a neat little post, supposedly hedged positions across counterparties can quickly become unhedged if one counterparty fails. So a basic data gathering exercise would at least help in identifying who is particularly active and has high exposures to specific counterparties and products.
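To make the arithmetic concrete, here is a minimal sketch (counterparty names and notionals are invented) of how a book that nets to zero on paper becomes a large open position the moment one counterparty stops performing:

```python
# A minimal sketch (hypothetical numbers) of a "hedged" CDS book: the dealer has
# bought protection from counterparty A and sold an offsetting amount to counterparty B.
positions = {
    "A": +100_000_000,   # protection bought from A
    "B": -100_000_000,   # protection sold to B
}

def net_exposure(positions, failed=None):
    """Net position after excluding contracts with a failed counterparty."""
    return sum(v for cpty, v in positions.items() if cpty != failed)

print(net_exposure(positions))              # 0: looks flat while everyone performs
print(net_exposure(positions, failed="A"))  # -100,000,000: the "hedge" has vanished
```

Once someone fails, it is the gross exposures, not the netted ones, that determine the damage.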
But this is of less help with big financial firms than you might think. While Lehman was correctly seen as being undercapitalized well in advance, pretty much no one foresaw Bear’s failure. It went down in a mere ten days. Confidence is a fragile thing. Similarly, while some positions are not very liquid or all that easy to hedge (think of our favorite bete noire, second liens on US homes), in general big financial firms have dynamic balance sheets. With more extensive reporting, could regulators have seen and intervened in MF Global’s betting the farm on short-dated Italian government debt? Even if they had perceived the risk, Corzine would have argued that the trade would work out (and it did, even though the firm levered it so heavily that it failed anyway).
And think what would have happened if the regulators had gone in. In our current overly permissive regime, intervening to shut down MF Global would have been seen as the impairment or destruction of a profitable business. No one would know the counterfactual, that the firm would not only die but also lose customers boatloads of money. Or a swarm by regulators could have precipitated a customer run, again taking the firm down. In the current environment where executives have good access to the media and highly paid PR professionals to present their aggrieved messaging, it’s going to take some pretty tenacious and articulate regulators to swat back their arguments.
While it is hard to object to having better data, and we desperately need better information in some key policy areas (the lack of good information in the housing/mortgage arena and in student debt is appalling), more data is unlikely to get us as far in the financial markets sphere as Tett hopes. The problem, as we and others have discussed before, is that the financial system is tightly coupled. That means that processes progress rapidly from one step to another, faster than people can intervene. The flash crash is a recent example.
There are many reasons why tightly coupled systems are really difficult to model. They tend to undergo state changes rapidly, from ordered to chaotic, and you can’t see the thresholds in advance. And financial systems also have the nasty tendency for products that were uncorrelated or not strongly correlated to move together as investors dump risky positions and flee for the safest havens. So exposures that might not seem all that problematic can become so when the system comes under stress (who in fall 2007 would have thought that auction rate securities would blow up, for instance, or, more important, that in the heat of the crisis even Treasuries would not be accepted as repo collateral?).
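A toy simulation (parameters are purely illustrative, not estimated from any market) shows how quickly that correlation behavior erodes the protection that diversification appears to offer in calm times:

```python
# Toy two-asset portfolio: nearly uncorrelated in the calm regime, highly correlated
# (and more volatile) in the stress regime. Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(0)

def portfolio_vol(corr, vol, n=50_000):
    cov = np.array([[vol**2, corr * vol**2],
                    [corr * vol**2, vol**2]])
    returns = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return returns.mean(axis=1).std()        # equal-weight portfolio volatility

calm   = portfolio_vol(corr=0.1, vol=0.01)
stress = portfolio_vol(corr=0.9, vol=0.03)

print(f"calm portfolio vol:   {calm:.4f}")
print(f"stress portfolio vol: {stress:.4f}")   # several times larger once correlations spike
```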
And this problem is made worse by the fact that economists have long been allergic to the sort of mathematics and modeling approaches best suited to this type of analysis, namely systems dynamics and chaos theory. I discussed both these aesthetic biases at length in ECONNED, but the very short version is that following Paul Samuelson, economists have wanted to put the discipline on a “scientific” footing, and that meant embracing the “ergodic” axiom. Warning: a lot of natural systems aren’t ergodic. The ergodic assumption means no path dependence and no tendencies to instability. If you get a good enough sample of past behavior, you can predict future behavior. If you think these are good foundations for modeling financial markets, I have a bridge I’d like to sell you.
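For readers who prefer to see the point rather than take it on faith, here is a small toy example (mine, not from ECONNED) of what the ergodic axiom rules out: for a path-dependent process like a random walk, even a very long sample of one path’s history does not pin down behavior across paths.

```python
# Random walks are non-ergodic: the time average along each path wanders rather than
# converging to a common ensemble value, so "a good sample of past behavior" from one
# path says little about the others. Toy illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 200, 5_000

shocks = rng.normal(size=(n_paths, n_steps))
paths = shocks.cumsum(axis=1)             # unit root: fully path dependent
time_averages = paths.mean(axis=1)        # one time average per path

print("spread of per-path time averages:", time_averages.std())   # large, and grows with n_steps
print("cross-path mean at the final step:", paths[:, -1].mean())  # near zero
```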
If we want to reduce the frequency and severity of financial crises, it isn’t a data problem. It’s a systems design problem. As Richard Bookstaber wrote in his book A Demon of Our Own Design, published before the crisis, the most important thing to do to reduce risk in a tightly coupled system is to reduce the tight coupling. Measures to reduce risk in a tightly coupled system often wind up increasing it, because the tight coupling means intervention is likely to be destabilizing. And we all know what the big culprits are. It does not take better data capture to figure this out. Tett flagged two in her piece. The obvious one is credit default swaps. They serve no social value and are inherently underpriced insurance (adequate CDS premiums for jump to default risk would render the product uneconomic to buyers). Underpriced insurance, given enough time, blows up guarantors who take on too much exposure (AIG, the monolines, and the Eurobanks like UBS that had near-death experiences as de facto guarantors via their holdings of synthetic/hybrid AAA CDO tranches are all proof). But has anyone in the officialdom taken the remotest interest in addressing a blindingly obvious problem? No.
Similarly, Tett mentions a Fitch study that found that banks were back to their old habit of using structured credit products as repo collateral. We’ve also discussed how problematic that is; the BIS flagged it more than a decade ago. And despite the fact that it should be bloomin’ obvious that using anything other than pristine collateral for repo is a source of systemic risk, since it was a cause of trouble before, the officialdom is loath to intervene. They’ve bought the “scarcity of good collateral” meme pushed by the banks. While narrowly that is correct, they haven’t sought to question why so much collateral is really necessary. The big driver pre-crisis, as we pointed out, was the explosion in derivatives (as values fluctuate, counterparties have to post collateral or have their positions closed out). The growth in those dubious CDS was a major contributor. Moreover (and this comes from someone who has worked with derivatives firms), many, if not most, over-the-counter derivatives (and certainly the most profitable) are used to manipulate accounting and for regulatory arbitrage. The overwhelming majority of socially valuable uses of derivatives can be accomplished via products that can be traded on exchanges, but regulators have been unwilling to push back on the industry’s imperial right to profit, no matter how much it might wind up creating for the rest of us. (We admittedly have additional drivers post crisis, such as QE eating up Treasuries, but there is perilously little critical examination of the demand side of the equation.)
So the answer does not lie in better data. It lies in the willingness of the authorities to stare down the financial services industry. And the next financial crisis is likely to be a necessary, but perhaps even then not sufficient, condition for that change in attitude.
Yup. The solution is always found in a good observation of human nature, plus experiment. The very best government actions in the economic realm were done with no theory or models at all: PM Walpole’s response to the South Sea Bubble in the 1720s, and FDR’s response to Wall Street excesses in the 1930s.
Both were following principles of human nature understood for many centuries (see Book of Proverbs) and both were experimentalists. Their only theory was “Try it. If it works, keep it. If it doesn’t work, discard it.”
Amazingly, their actions worked.
When you do what works, it’s going to work.
Really shouldn’t be surprising.
No modern government understands this. Instead they use equations designed by speculators, which gives results that enrich speculators.
Also shouldn’t be surprising.
no surprise: the answer to whistleblowers and transparency in corporate-lobbied legislation is to substantially reduce oversight. Cut departments, claim government is inept, and prove it so by not allowing the resources necessary to do the job. Smaller govt, to drown in the bathtub for Grover..
Yves, “banning CDSes” or “regulating CDSes under the insurance laws” is, in a certain sense, “having better data”, and I think you agree that either of those things WOULD have improved the situation, right?
I guess those things are a bit stronger than “having better data” though.
Yves,
You observed that “While it is hard to object to having better data, and we desperately need better information in some key policy areas (the lack of good information in the housing/mortgage arena and in student debt is appalling), more data is unlikely to get us as far in the financial markets sphere as Tett hopes. The problem, as we and others have discussed before, is that the financial system is tightly coupled.”
It is the tight coupling that is exactly the reason that better data is needed.
The way our financial system is structured all market participants are responsible for the gains/losses on their exposures.
As a result, market participants have an incentive to use this better data to independently assess the risk to each of their exposures and adjust the amount and price according to this risk assessment. This is what reduces the tight coupling risk.
Without this data, this risk assessment cannot be done.
The classic example of this is our ‘black box’ banks. Nobody, including the banks themselves, can figure out how risky any particular bank is.
“You can’t buff a turd,” as they say in the Navy. Why not eliminate the tight coupling?
Exactly. Tightly coupled systems, especially nonlinear systems with a fair amount of noise, are inherently unstable. Going down the “we can control it better with better data” path is superficially true, but essentially false. And in practice, such an approach will tend to make the eventual breakdown much worse, even if less frequent.
I was at the IAFE meeting in May 2007 when Bookstaber spoke, just after releasing his book. Some panelists took the technocratic position that “we can just run more scenarios” and “buy more computing hardware” to solve a problem which, from a computational-complexity standpoint, is a hard problem (I mean in the precise, technical sense of “hard”, i.e. exponential in scaling).
The reason that the financial system is so tightly coupled is the lack of data in the first place.
Take AIG as an example. Because of opacity, AIG was able to sell, in the way of insurance on subprime securities, multiples of what it could afford to pay without going bankrupt.
If market participants could have seen how much insurance AIG was selling, they would have stopped buying from AIG because a simple analysis would have shown that AIG could not cover its bets and that the insurance they were buying was worthless.
Finally, if you look across the areas of the financial system that froze/failed during the financial crisis, all of them are characterized by opacity (banks, structured finance securities…).
The areas of the financial system that continued to perform are characterized by transparency.
This suggests that disclosure of data does have a positive effect by reducing tight coupling and its related contagion.
…let alone Greenberg re-insurance scams…remaining $$$ to be divided up at end of year among elites…
No, you again have that wrong. That is a remarkably inaccurate statement. Hindsight bias, big time.
Go look at haircuts on AAA CDOs over time. The “market” gave them low haircuts well into 2007 and would not have seen AIG’s exposures as problematic. And no financial firm is going to let counterparties see and pick off exposures in real time, so your solution is ridiculous.
Causality is tricky here. It appears that people who are creating tight coupling are *also* trying to hide data.
I believe that tight coupling does *not* cause data-hiding; I believe that data-hiding does *not* cause tight coupling.
I submit that both data-hiding and tight coupling are symptoms of the “criminogenic environment” which William Black described. Fraudsters like to hide data *and* they like to get everyone tied in and dependent on them. Get it?
No, even with better “data” they won’t understand their risks because their risk models hopelessly understate risk. I’ve discussed this at length in ECONNED. It’s due to the use of Gaussian distributions when tail risk is greater than that, plus the use of the continuous markets assumption (allows for dynamic hedging to be seen as a legitimate way to manage risks) and the assumption that price correlation is pretty stable over time.
And on top of that, the statistical approaches that banks use are inherently flawed. They aren’t even suitable in situations where risks are not normally distributed (financial markets) and you have complex payoffs (which banks create with opaque, complex, levered products). See http://www.nakedcapitalism.com/2009/04/taleb-presentation-on-fourth-quadrant.html for a longer discussion.
And regulators getting better data does not mean market participants will get that info. Think anyone is going to tolerate position information being given to a competitor? So in OTC markets, there will ALSO be concerns about counterparty risk.
“Better” data does not make a defective model better. In fact, it could make matters worse because you have more confidence in your bad model.
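To put rough numbers on the Gaussian point: a quick sketch (purely illustrative; the Student-t here is just a stand-in for a fat-tailed distribution with the same variance) of how badly a normal assumption understates the odds of large moves.

```python
# Compare tail probabilities of a standard normal with a variance-matched, fat-tailed
# Student-t (df=3). Illustrative only; not calibrated to any market.
from scipy import stats

df = 3
t_scale = (df / (df - 2)) ** -0.5         # rescale so the t has unit variance, like the normal

for k in (3, 5, 8):                       # "k-sigma" loss thresholds
    p_norm = stats.norm.cdf(-k)
    p_t = stats.t.cdf(-k, df, scale=t_scale)
    print(f"{k}-sigma loss: normal={p_norm:.2e}  fat-tailed={p_t:.2e}  ratio={p_t / p_norm:.1e}")
```

The further out in the tail you look, the larger the understatement, which is exactly where crises live.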
“..because their risk models hopelessly understate risk.”
Thank you! Bivariate Gaussian copula models, binomial expansion technique variations (AIG), etc., ad nauseam, do not a valid model make!
But, “I hate beating up on Gillian Tett,…”
I stridently disagree, Ms. Smith, and I ardently love to beat up on the official JPMorgan Chase apologist — when they are the top dog, and among the top three, for controlling that global financial virus known as credit derivatives and credit default swaps, etc.
The question to the Group of 30 on the use of credit derivatives came from JP Morgan. (G30 reply: Must remove legal risk, and thus the Private Securities Litigation Reform Act was passed.)
That report, Glass-Steagall: Overdue for Repeal, came from JP Morgan.
And, should there still be anyone who doesn’t yet realize the concentrated ownership: JPMorgan Chase (Morgan-Rockefeller), Morgan Stanley (Morgan), Citigroup (Rockefeller and some foreign interests), Bank of America (Mellon-Rockefeller-Morgan), and Goldman Sachs is anybody’s guess!
Yves,
There are two different issues going on here. One is reporting better data. The second is the use of this data.
Let me start with the second issue and concede all the points you make about how we currently model risk. I am familiar with these points as I helped to invent some of these modeling techniques.
Now, let us move to reporting better data.
One of my favorite quotes from you is ‘no one on Wall Street was ever compensated for creating low margin, transparent products’.
CDOs are the very definition of an opaque product designed so that market participants could not properly assess the risks. This is not hindsight. It was knowable at the time — see your posts on Magnetar.
To claim that the haircuts on opaque AAA rated CDOs show that the market would not have seen AIG’s exposures as problematic is simply wrong.
Knowing that they are responsible for 100% of the losses on their exposures, every market participant has an incentive to use the data that better disclosure would provide. This would apply to CDOs or AIG or to individual banks.
Your primary objection to better data is that the banks won’t let this occur. Is the fact that banks don’t want this to happen a feature or a bug? I think it is a feature, and based on your comment on Wall Street’s compensation, I used to think you thought it was a feature too.
I really don’t care about having “real-time” data. For purposes of assessing the risk of each bank, banks reporting their current asset, liability and off-balance sheet exposure details as of the end of the business day will do. This is something that would be easy for the banks to do because this is how their information systems are designed to track the exposure details.
Finally, I refer to the Office of Financial Research as the place that transparency goes to die. The reason is that the regulators, through OFR, are supposed to collect better data but not share this data with market participants.
This is fundamentally flawed. But, given that Wall Street’s Opacity Protection Team drafted the Dodd-Frank Act that created OFR, not surprising.
By definition, if regulators have a monopoly on the better data, and they already do, there is a source of instability in the financial system. This source of instability is the regulators if they fail to a) properly assess the risk of each financial institution providing data; b) properly convey this risk to all market participants and c) act on the risk in their regulatory capacity.
The ongoing financial crisis is a testament to the fact that it is unreasonable to expect regulators to perform perfectly on these three tasks 100% of the time.
The only way that better data is useful is if all market participants have access to it!
I debunked this short form on this thread and long form in ECONNED.
If people are collectively using bad models, no amount of “better” data will allow for better decisions. And separately, there are very real competitive reasons no firm will ever allow that level of disclosure of their positions to competitors.
You are arguing from an ideological position and refusing to deal with facts.
This is not a question of ideology, but rather the proposal of a solution for a known problem.
As far as I am concerned, we are looking at a 2×2 matrix. On one axis is bad data versus good data. On the other axis is bad models versus good models.
If you look at the four possible outcomes, 3 of them are bad – bad data/bad models; bad data/good models; and good data/bad models.
One of the four possible outcomes is good – good data/good models.
Right now, we have bad data and bad models (in ECONNED, you thoroughly documented why they are bad). No surprise that we got a bad outcome.
So long as all that is available is bad data, it is impossible to develop good models. How would anyone know if they had a good model given that it uses bad data – garbage in, garbage out.
However, that changes if good data is made available. Suddenly, market participants have an incentive to create good models because by doing so they stand to make a lot of money.
We aren’t going to get good models unless good data is available and I do not think I hear you objecting to our having good data and good models.
Next, I know that the financial industry will resist disclosing their detailed information (they also resist this disclosure for structured finance securities where the investors own the loans).
The fact that banks will resist disclosure was well known in the 1930s. It is why the SEC was created and given the responsibility for making sure that market participants had access to all useful, relevant information in an appropriate, timely manner.
One of my favorite arguments the banks make against disclosure is that it will undermine their competitive position.
These guys are in the business of lending money. There is no special sauce and if there is, it is not something that the banks are being asked to disclose. Disclosure doesn’t occur until after they have booked the loan.
Since the loan has already been underwritten, what disclosure is about is monitoring the performance of the loan. Monitoring loan performance does not undermine a firm’s competitive position. Monitoring loan performance does however tell a lot about how good a job a bank does in underwriting its loans.
Now I can understand the banks’ argument for why disclosure undermines their competitive position when it comes to proprietary trading. However, the intent of the Volcker Rule is to stop banks from proprietary trading. Hence, it appears that disclosure is consistent with enforcement of the Volcker Rule.
Prior to the advent of deposit insurance, it was the sign of a bank that could stand on its own two feet that it disclosed its asset level detail. Apparently, back then bankers saw it as improving their competitive position.
Yves, you called on the regulators to stand up to the bankers. The short form of what I have added is that the regulators should stand up to the bankers and do their job of ensuring that market participants have access to all the useful, relevant information in an appropriate, timely manner.
Richard,
If you read ECONNED, and the discussion of Levy distributions, you’d see that you cannot model the risk of financial systems (more accurately, you can measure the risks that are unimportant pretty well). And worse, per Andrew Haldane, the financial system GENERATES risk.
Your good model/bad model matrix is built on a bogus assumption: that you can measure tail risk in financial systems well. You can’t. There is no “good model” for this problem. There are no good models for plenty of things, like telling the future. There are limits to knowledge, but when I tell you that, you stick your fingers in your ears and yell “Lalalalala”. You refuse to hear it. I already pointed to what Taleb wrote, that you can’t use statistical approaches for non-Gaussian distributions and complex payoffs. The number of possible outcomes is so large that you can’t reach any conclusion. THAT MEANS YOU CAN’T MODEL IT.
The more you write, the more you reveal that you don’t know what you are talking about and you continue to want to have the last word. That is not a sign of intelligence.
well i learned one thing from this exchange. Yves is kind of an a&&hole.
Ah, but Yves, better data would have allowed sane people to *detangle* themselves from the mess being made by the people with bad models and opaque products.
As I said above, I think the criminogenic environment is the key, as it created both the bad data and the entanglements.
I don’t believe in quant models either. Thinking back to 2008, there was contagion and also fear of contagion.
Then, fear is contagious. So I don’t believe in multi-year accuracy of risk models in finance.
Like the finance “industry”, electric utilities are both data-intensive and prone to black swans. We analyze electric transmission outage data.
The premise that one can mine data to find the straw (or straws) that will break the camel’s back is almost always a ridiculous notion. You know it can happen, but the how, when and where is a mystery. Sure there are obvious weaknesses in a system that a bright analyst can identify and sometimes even initiate a fix. But for the most part this is the rare exception to the rule.
With respect to electric transmission, the electric reliability organization (NERC) and the regulator (FERC) have mandated all sorts of data collection after the 2003 blackout. And committees and working groups come up with all sorts of metrics. But this data infrastructure has been in place for several years now and they are doing very little with the data they collect. Yet that doesn’t stop them from mandating the submission of even more data.
They are swimming in data but do very little productive with that information.
And even if they “did something”, it is very, very unlikely it will identify the straw (or straws) that will break the camel’s back. The analytical resources simply are unavailable.
oh, they’re “doing something”…attempting to PRIVATIZE public, taxpayer-provided infrastructure, which always costs MORE…that’s the plan, after all (Naomi Klein’s The Shock Doctrine: The Rise of Disaster Capitalism)…
You don’t need data to understand the disaster potential of credit default swaps. Because every bank is overexposed to every imaginable counterparty, the only thing that matters is the gross amount of exposure. The idea that default risk can be hedged is absurd. One important counterparty goes balls up and you are back to September 2008. The reason Greece will not “fail” is that the major banks will collapse if it does fail. Few things are more certain than that there will ultimately be some kind of “voluntary” restructuring which will leave CDS buyers sucking eggs.
…not “sucking eggs”, if banks succeed in shifting costs and blame to others…which government is endorsing, media is propagandizing…
The ergodic assumption means no path dependence and no tendencies to instability. If you get a good enough sample of past behavior, you can predict future behavior.
False. For example, a doubly ergodic dynamical system exhibits measurable sensitivity.
And no, even if you are using the word “ergodic” strictly in the sense of stochastic process, the definition of ergodic is that no sample helps meaningfully to predict values that are very far away in time from that sample.
Your definition of ergodicity is incorrect and your comment does not refute what I wrote.
Huh? I’m pretty sure the following are correct.
Definitions: Let (X,E,P) be a probability space for simplicity. T:X->X a measurable transformation.
(1) T is ergodic if T is probability-preserving and every measurable invariant set (i.e. a set A with T^{-1}(A)=A) has probability 0 or 1.
(2) T is doubly ergodic if, given A,B measurable non-null, there exists a positive integer n such that both T^{-n}(A)/\A and T^{-n}(A)/\B are non-null.
(3) T is weakly mixing if, given any pair of measurable sets A,B, there is a density-one increasing sequence of positive integers n_i such that P(T^{-n_i}(A)/\B) tends to P(A)P(B) as i tends to infinity.
(4) T exhibits measurable sensitivity if, whenever (Y,F,Q,S) is measurably isomorphic to (X,E,P,T) and d is a Q-compatible metric on Y, there exists delta>0 such that for every y in Y and every epsilon>0 there is a positive integer n such that
Q(\{y' in Y : d(y,y') < epsilon and d(S^n(y),S^n(y')) > delta\}) > 0.
Theorems: Let (X,E,P) be a probability space.
(5) doubly ergodic is equivalent to weakly mixing (for a proof, see e.g. Invitation to ergodic theory by César Ernesto Silva, Proposition 6.4.1)
(6) If (X,E,P,T) is a nonsingular doubly ergodic dynamical system, then T exhibits measurable sensitivity. In particular, weakly mixing measure-preserving transformations exhibit measurable sensitivity. (This is Proposition 2.1 in Jennifer James et al, Measurable Sensitivity, Proc AMS Vol 136, No 10 (Oct 2008), pp 3549–3559)
Now it is obvious that measurable sensitivity is the measure-theoretic version of the sensitive dependence on initial conditions clause in the usual characterisation of a chaotic dynamical system, no?
“And no, even if you are using the word “ergodic” strictly in the sense of stochastic process, the definition of ergodic is that no sample helps meaningfully to predict values that are very far away in time from that sample.”
Yves has the definition of ergodic correct with regard to long-run time averages vs. ensemble averages of a stochastic process. FWIW, I think your statement above is also correct. For example, for a wide-sense stationary process to be ergodic in the mean it’s sufficient that the autocorrelation goes to zero with increasing lag (i.e. future states are increasingly uncorrelated with past states). That doesn’t preclude using the model to make predictions for a given set of initial conditions and inputs. However, if the assumptions are wrong (i.e. a normally distributed, ergodic process), then I think it’s time and energy wasted on a fantasy.
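A quick numerical check of that sufficient condition, using a toy AR(1) process (my example, not the commenter’s): the autocorrelation decays geometrically, and the time average along one long path lands on the ensemble mean.

```python
# AR(1): x_t = phi * x_{t-1} + e_t, wide-sense stationary for |phi| < 1.
import numpy as np

rng = np.random.default_rng(2)
phi, n = 0.9, 200_000

x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

print("time average of one path:", x.mean())       # close to the ensemble mean of 0
for lag in (1, 10, 50):                             # sample autocorrelation decays roughly like phi**lag
    c = np.corrcoef(x[:-lag], x[lag:])[0, 1]
    print(f"autocorrelation at lag {lag}: {c:.3f}")
```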
As I indicated in my comment on Gillian Tett’s site, it is not reasonable to expect regulators to gather the requisite information when they are biased in favor of the industry they are regulating — see Guardians of Finance: Making Regulators Work for Us, http://www.amazon.com/Guardians-Finance-Making-Regulators-Work/dp/0262017393/ref=sr_1_1?s=books&ie=UTF8&qid=1331151222&sr=1-1. While it is true that some aspects of the crisis were hard to see, many were not — the housing price bubbles, the rapid asset growth of the banks, the mismatches on their balance sheets, the skyrocketing compensation, the fact that so many institutions, especially in Europe, were buying the very highest paying AAA rated securities. Why were those securities paying so much more than others — just one of the questions that regulators should have been asking, but didn’t. The view that the crisis was too complex to be understood doesn’t work in Iceland, Ireland, or the UK, and really doesn’t work in the US either. But it is an explanation that gets regulators off the hook.
Hear, hear…
Regulators didn’t see the things they were paid not to see…
…the criminogenic environment is the root of the problem here.
The solution, of course, is to make it impossible to make a megafortune in finance. This eliminates the incentive for the criminals to get *into* the field.
The ultimate way to do this is very high top-level income tax rates (so that *nobody* can make a megafortune overnight), but simply banning most of the “major gambling” finance activities (high-speed trading, proprietary trading with insured businesses, insurance-like contracts without insurable interest, insurance-like contracts without suitable backing) helps a lot too.
I hope that Yves’ “drunk under the streetlight” meme introduces readers to a most useful way of coping with economic theories!
As I proceed reading the article, I understand that (regulatory) brakes designed for a slow-moving vehicle will fail at high speeds.
Finally, in regard to models: “A butterfly can stampede a herd” (pace mixed metaphor purists).
The moral: When possible, keep butterflies away from cattle or sheep or even lemmings.
Yawn. We’ve been trying for 317+ years to make the current money system based on usury for counterfeit money (“credit”) work.
And it has worked! We now have the technology to kill most of humanity in several different ways.
The delicious irony is that the bankers and usury class are trapped on the same planet they have destabilized. I suppose they are digging their bunkers now – the good it will do them.
Madam
Great, great post. Thx a lot. You are one of the few who really refute, head on, the so-called “official data” that are reported basically after a dinner with strong wine where they throw dice as to what numbers to publish.
As to Ms. Tett, your professional attack makes sense, but I’d say that you and everybody must understand that she has to keep the job and maintain her income. Eventually she has to do what her bosses tell her to do, unless… she establishes a blog like this one.
“There are many reasons why tightly coupled systems are really difficult to model. They tend to undergo state changes rapidly, from ordered to chaotic, and you can’t see the thresholds in advance.”
Which indicates that they are actually chaotic (in the technical sense), as you later point out. Chaotic systems can have stable states for a long time (like the solar system). We need to admit the chaotic nature of our systems, and give up the myth that they are self-correcting. But to do so, of course, is to be fatalistic or to admit the necessity for proper regulation. The regulatory reforms of the Great Depression worked for half a century. Now let’s shoot for regulations that will work for a whole century.
“The regulatory reforms of the Great Depression worked for half a century.” Min
Since our money system is not fundamentally ethical, regulation will simply trade off performance for stability, I would bet. And I would also bet that a low performance system is ultimately not stable either since it won’t be tolerated for long.
So what’s wrong with a universal bailout plus fundamental reform? If we wish to have a “Good Society” how dare we neglect ethics when it comes to money?
Actually, the post-Depression New Deal era was high-performance *and* high-stability.
Of course, it was arguably an entirely different financial model, what with the 92% income tax rates at the top level (at levels equivalent to $2 million a year in today’s dollars).
Eliminating the ability to make a mega-fortune overnight really changes the situation.
My own personal experience with respect to consumer tools and gadgets was that the 1950s to 1980s were mostly stagnant. Then they exploded. And why not? High taxes on real (as opposed to financial) innovators CANNOT be good, can they?
Our current system is crooked. Punishing the rich indiscriminately is not the solution. Instead, every one should be bailed out of the current system and we should start again with an inherently ethical one.
Yes, high taxes on real innovators CAN be good.
What you may not realize is that pretty nearly all the “gadgets” you first saw during the 1980s were actually invented during the 1970s, or earlier.
Computers? Personal computers? Yep, 1970s. Almost all the major tech developments related to them (transistors, semiconductors, integrated circuits)? 1940s – 1970s.
The Internet? 1970s. Government funded. With lead backing from Congressman Al Gore.
The charge-coupled device, the basis of all digital cameras? 1970s.
I actually can’t think of anything significant invented since the end of the 1970s; I merely see things where it was figured out how to manufacture them a bit cheaper or a bit smaller.
Basically, my philosophy on taxation is that real great innovators should be able to make enough money to be set for life, and then to engage in some mild philanthropy (so, maybe a few times what it takes to be set for life), and after that, they just shouldn’t be making any more money.
If they’re real innovators, they will get other people to fund further projects.
$2 million is enough to be set for life. The Eisenhower top rates didn’t kick in until an amount per year which was equivalent to collecting $2 million per year today.
Collecting an entire lifetime’s fortune in one year can pretty much never be done entirely honestly, and deserves high tax rates even if it is.
F. Beard: “Since our money system is not fundamentally ethical, regulation will simply trade off performance for stability, I would bet. And I would also bet that a low performance system is ultimately not stable either since it won’t be tolerated for long.”
Isn’t that what happened with the regulations from the Great Depression? They were hindering performance, so Congress repealed them.
Before this Bremner, Bird & Fortune show from late 2008 / early 2009, the BIS paper was considered common sense, but since then the false savings glut meme has taken over common discourse.
http://www.youtube.com/watch?v=H9-X4pg7n-w
Unregulated financial markets are an experiment. We saw how the experiment turned out. Now Tett and others want to continue the experiment, but with an eye in the sky to ‘monitor’ things — and do what, exactly? Propose new regulations that will be fought tooth and nail so long as money is being made? Greenspan and others had all of the evidence they could need that a stock bubble, followed by a housing bubble, were underway; but they opposed government interference on principle. That viewpoint is alive and well, because money keeps it alive.
We saw how that experiment worked out in the 19th century. How many times do we have to repeat it?
Heterodox economists question how college economics is taught: http://remappingdebate.org/
I think you are a little hard on Gillian Tett. Yes, she does deserve criticism for proposing that analysis of the data can be predictive because the models are insufficient, as you point out. However, her criticism of the lack of analysis of data available because of resource constraints is a very important observation. Review of the data can tell us things about what has happened and can do it on a timeline. All this can sharpen models and we can learn.
We must not allow the capability to analyze what has happened to be dissipated. This is of course what the laissez-faire element wants (no analysis) because it might inform us what is related to undesired outcomes and potentially put regulatory thorns under their saddles.
Ignorance of cause and effect is bliss to those who can steal the effects.
“The overwhelming majority of socially valuable uses of derivatives can be accomplished via products that can be traded on exchanges, but regulators have been unwilling to push back on the industry’s imperial right to profit, no matter how much it might wind up creating for the rest of us.”
What is the missing word? Pain?
The problem is not the data; the problem is the models. Additionally, we generally need models in order to generate the data in the first place, because it is very unlikely we will have a complete set of data that describes each agent with all its attributes in total. This is the purpose of the survey. OK, let’s assume the sample design and survey are good; then we take that survey and synthetically generate the entire population of data, perhaps using model-based regression methods such as multiple imputation or multinomial logistic regressions. Then we are at the starting point. We haven’t even run the model yet; all we have done is establish the starting point for running the model.
The problem is that economics is still stuck in the past. There is an entire new paradigm for understanding and predicting complexity and its emergent phenomena. Amazingly, most of the computers we have are taxed just running models capable of something as simple as describing the agents within a model at the starting point in time. Now let’s take into account all of the possible decisions. Do you know the PowerMac and RAM we would need to run such a model … and for how many days?
We will get there; humans are animals, and our behavior is predictable within certain likelihoods. It will take time to get there, especially when we are attempting to model something as complex as a macroeconomic system, with all of its iterative meso- and micro-scale interactions. Simply saying we shouldn’t believe in data, models and science isn’t the answer; the answer is to continue our belief that the modern majesty consists in work.
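For what it’s worth, a bare-bones sketch of the population-synthesis step described above (variables and data are hypothetical; it only shows the mechanics of fitting a multinomial logistic regression on a survey sample and imputing the modeled attribute onto a full agent population before any behavioral model is run):

```python
# Toy population synthesis: fit a multinomial logistic regression on a survey sample,
# then draw the categorical attribute for every agent in a synthetic population.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Pretend survey: two standardized attributes observed (say income and age), plus a
# categorical attribute (say employment sector) we want to impute onto the population.
n_survey = 1_000
survey_X = rng.normal(size=(n_survey, 2))
survey_sector = rng.integers(0, 3, size=n_survey)        # 3 toy sectors

model = LogisticRegression(max_iter=1_000)               # default solver handles the multinomial case
model.fit(survey_X, survey_sector)

# Full synthetic population: continuous attributes known, sector to be imputed.
n_population = 100_000
population_X = rng.normal(size=(n_population, 2))
probs = model.predict_proba(population_X)

# Draw each agent's sector from its predicted distribution (not the argmax),
# which preserves heterogeneity in the synthetic population.
cum = probs.cumsum(axis=1)
draws = rng.random(n_population)[:, None]
population_sector = (draws > cum).sum(axis=1)

print(np.bincount(population_sector) / n_population)     # imputed sector shares
```

Only after this starting point exists does the actual agent-based model, with all its decision rules, even begin to run.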