Lambert: Of course, for those who can take advantage of it, instability in markets may not be such a bad thing. Nor is systemic risk, especially if the public good is not your first concern.
By Jon Danielsson, Director of the ESRC-funded Systemic Risk Centre, London School of Economics. Originally published at VoxEU.
Artificial intelligence (AI) is useful for optimally controlling an existing system, one with clearly understood risks. It excels at pattern matching and control mechanisms. Given enough observations and a strong signal, it can identify deep dynamic structures much more robustly than any human can and is far superior in areas that require the statistical evaluation of large quantities of data. It can do so without human intervention.
We can leave an AI machine in the day-to-day charge of such a system, automatically self-correcting and learning from mistakes and meeting the objectives of its human masters.
This means that risk management and micro-prudential supervision are well suited for AI. The underlying technical issues are clearly defined, as are both the high- and low-level objectives.
However, the very same qualities that make AI so useful for the micro-prudential authorities are also why it could destabilise the financial system and increase systemic risk, as discussed in Danielsson et al. (2017).
Risk Management and Micro-Prudential Supervision
In successful large-scale applications, an AI engine exercises control over small parts of an overall problem, where the global solution is simply aggregated sub-solutions. Controlling all of the small parts of a system separately is equivalent to controlling the system in its entirety. Risk management and micro-prudential regulations are examples of such a problem.
The first step in risk management is the modelling of risk and that is straightforward for AI. This involves the processing of market prices with relatively simple statistical techniques, work that is already well under way. The next step is to combine detailed knowledge of all the positions held by a bank with information on the individuals who decide on those positions, creating a risk management AI engine with knowledge of risk, positions, and human capital.
While we still have some way to go toward that end, most of the necessary information is already inside banks’ IT infrastructure and there are no insurmountable technological hurdles along the way.
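To make the risk-modelling step concrete, here is a minimal sketch of the kind of "relatively simple statistical technique" involved: a one-day historical Value-at-Risk estimate computed from a return series. The simulated data and the 99% confidence level are illustrative assumptions, not anything prescribed above.

```python
import numpy as np

def historical_var(returns, confidence=0.99):
    """One-day historical Value-at-Risk: the loss that past returns
    only exceeded (1 - confidence) of the time."""
    return -np.quantile(returns, 1.0 - confidence)

# Illustrative data: simulated daily returns standing in for observed market prices.
rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0, scale=0.01, size=2500)  # roughly ten years of trading days

var_99 = historical_var(returns)
print(f"99% one-day VaR: {var_99:.2%} of portfolio value")
```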
All that is left is to inform the engine of a bank’s high-level objectives. The machine can then automatically run standard risk management and asset allocation functions, set position limits, recommend who gets fired and who gets bonuses, and advise on which asset classes to invest in.
The same applies to most micro-prudential supervision. Indeed, AI has already spawned a new field called regulation technology, or ‘regtech’.
It is not all that hard to translate the rulebook of a supervisory agency, now for the most part written in plain English, into a formal computerised logic engine. This allows the authority to validate its rules for consistency and gives banks an application programming interface to validate practices against regulations.
Meanwhile, the supervisory AI and the banks’ risk management AI can automatically query each other to ensure compliance. This also means that all the data generated by banks becomes optimally structured and labelled and automatically processable by the authority for compliance and risk identification.
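As a rough sketch of what "rulebook as logic engine" could look like, the snippet below encodes two hypothetical rules (a minimum capital ratio and a large-exposure limit, both invented for illustration) as machine-checkable predicates that either the supervisory AI or a bank's risk AI could evaluate against reported data.

```python
from dataclasses import dataclass

@dataclass
class BankReport:
    capital: float               # regulatory capital
    risk_weighted_assets: float
    largest_exposure: float      # single largest counterparty exposure

# Hypothetical rules, invented for illustration, expressed as (name, predicate) pairs.
RULES = [
    ("capital_ratio_at_least_8pct",
     lambda r: r.capital / r.risk_weighted_assets >= 0.08),
    ("largest_exposure_below_25pct_of_capital",
     lambda r: r.largest_exposure <= 0.25 * r.capital),
]

def check_compliance(report: BankReport) -> dict:
    """Return a rule-by-rule pass/fail map for one reporting entity."""
    return {name: rule(report) for name, rule in RULES}

print(check_compliance(BankReport(capital=12.0,
                                  risk_weighted_assets=140.0,
                                  largest_exposure=2.5)))
```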
There is still some way to go before the supervisory/risk management AI becomes a practical reality, but what is outlined above is eminently conceivable given the trajectory of technological advancement. The main hindrance is likely to be legal, political, and social rather than technological.
Risk management and micro-prudential supervision are ideal use cases for AI: they enforce compliance with clearly defined rules, their processes generate vast amounts of structured data, and they involve closely monitored human behaviour, precise high-level objectives, and directly observed outcomes.
Financial stability is different. There the focus is on systemic risk (Danielsson and Zigrand 2015), and unlike risk management and micro-prudential supervision, it is necessary to consider the risk of the entire financial system together. This is much harder because the financial system is for all practical purposes infinitely complex and any entity – human or AI – can only hope to capture a small part of that complexity.
The widespread use of AI in risk management and financial supervision may increase systemic risk. There are four reasons for this.
1. Looking for Risk in All the Wrong Places
Risk management and regulatory AI can focus on the wrong risk – the risk that can be measured rather than the risk that matters.
The economist Frank Knight established the distinction between risk and uncertainty in 1921. Risk is measurable and quantifiable and results in statistical distributions that we can then use to exercise control. Uncertainty is none of these things. We know it is relevant but we can’t quantify it, so it is harder to make decisions.
AI cannot cope well with uncertainty because it is not possible to train an AI engine against unknown data. The machine is really good at processing information about things it has seen. It can handle counterfactuals when these arise in systems with clearly stated rules, like with Google’s AlphaGo Zero (Silver et al. 2017). It cannot reason about the future when it involves outcomes it has not seen.
The focus of risk management and supervision is mostly risk, not uncertainty. The stock market is an example, and we are well placed to manage the risk arising from it. If the market goes down by $200 billion today, the impact will be minimal because it is a known risk.
Uncertainty captures the danger we don’t know is out there until it is too late. Potential, unrealised losses of less than $200 billion on subprime mortgages in 2008 brought the financial system to its knees. If there are no observations on the consequences of subprime mortgages put into CDOs with liquidity guarantees, there is nothing to train on. The resulting uncertainty will be ignored by AI.
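A toy numerical illustration of that point, with made-up numbers: a model fitted only to the calm data it has observed assigns essentially zero probability to a crisis-sized move it has never seen, so the danger simply does not register.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated calm-period returns: the only history the machine has ever observed.
calm_returns = rng.normal(loc=0.0, scale=0.01, size=2500)

mu, sigma = calm_returns.mean(), calm_returns.std()
# Probability the fitted model assigns to a crisis-sized 10% daily loss.
p_crisis = stats.norm.cdf(-0.10, loc=mu, scale=sigma)
print(f"Model's probability of a -10% day: {p_crisis:.1e}")  # effectively zero
```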
While human risk managers and supervisors can also miss uncertainty, they are less likely to. They can evaluate current and historical knowledge with experience and theoretical frameworks, something AI can’t do.
2. Optimisation Against the System
A large number of well-resourced economic agents have strong incentives to take very large risks that have the potential to deliver them large profits at the expense of significant danger to their financial institutions and the system at large. That is exactly the type of activity that risk management and supervision aim to contain.
These agents are optimising against the system, aiming to undermine control mechanisms in order to profit, identifying areas where the controllers are not sufficiently vigilant.
These hostile agents have an inherent advantage over those tasked with keeping them in check: each only has to solve a small local problem, so their computational burden is much lower than the authority's. Many agents could be doing this simultaneously, and only a few, perhaps just one, need to succeed for a crisis to ensue. Meanwhile, in an AI arms race, the authorities would probably lose out to private sector computing power.
While this problem has always been inherent in risk management and supervision, it is likely to become worse the more AI takes over core functions. If we believe AI is doing its job, where we cannot verify how it reasons (which is impossible with AI), and only monitor outputs, we have to trust it. If then it appears to manage without big losses, it will earn our trust.
If we don’t understand how an AI supervisory/risk management engine reasons, we had better make sure its objective function is specified correctly and exhaustively.
Paradoxically, the more we trust AI to do its job properly, the easier it can be to manipulate and optimise against the system. A hostile agent can learn how the AI engine operates, take risk where it is not looking, game the algorithms and hence undermine the machine by behaving in a way that avoids triggering its alarms or even worse, nudges it to look away.
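A stylised sketch of that kind of gaming, with entirely hypothetical thresholds: if the monitoring engine only alarms on individual positions above a size limit, a hostile agent can build the same aggregate exposure in slices that each stay just under it, and the machine never looks up.

```python
# Hypothetical monitor: alarms only on single positions above a size limit.
POSITION_LIMIT = 100.0

def monitor_alarms(positions):
    """Return the positions that would trigger the (naive) monitoring engine."""
    return [p for p in positions if p > POSITION_LIMIT]

target_exposure = 1000.0
# Hostile agent: slice the desired exposure into chunks just under the limit.
slice_size = POSITION_LIMIT * 0.99
n_slices = int(target_exposure // slice_size) + 1
positions = [slice_size] * n_slices

print("Alarms triggered:", monitor_alarms(positions))   # none
print("Aggregate exposure:", sum(positions))             # well above the target, unseen
```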
3. Endogenous Complexity
Even then, the AI engine working at the behest of the macroprudential authority might have a fighting chance if the structure of the financial system remained constant, so that the problem were simply one of sufficient computational resources.
But it isn’t. The financial system constantly changes its dynamic structure simply because of the interaction of the agents that make up the system, many of whom are optimising against the system and deliberately creating hidden complexities. This is the root of what we call endogenous risk (Danielsson et al. 2009).
The complexity of the financial system is endogenous, and that is why AI, even conceptually, can’t efficiently replace the macro-prudential authority in the way it can supersede the micro-prudential authority.
4. Artificial Intelligence is Procyclical
Systemic risk is increased by homogeneity. The more similar our perceptions and objectives are, the more systemic risk we create. Diverse views and objectives dampen the impact of shocks and act as a countercyclical, stabilising, systemic-risk-minimising force.
Financial regulations and standard risk management practices inevitably push towards homogeneity, and AI does so even more. It favours best practices and standardised best-of-breed models that closely resemble each other, all of which, no matter how well-intentioned and otherwise positive, also increase pro-cyclicality and hence systemic risk.
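A compact simulation sketch of the homogeneity point, under assumptions of my own choosing: when every agent runs the same model and sells at the same loss threshold, a given shock forces far more simultaneous selling than it would in a population with diverse thresholds.

```python
import numpy as np

def forced_sellers(shock, thresholds):
    """Count agents whose loss threshold is breached by the initial shock."""
    return int(np.sum(shock <= -np.asarray(thresholds)))

shock = -0.05                             # an illustrative 5% initial fall
homogeneous = [0.05] * 100                # everyone uses the same model and limit
diverse = np.linspace(0.02, 0.20, 100)    # a spread of views and limits

print("Forced sellers, identical models:", forced_sellers(shock, homogeneous))
print("Forced sellers, diverse models:  ", forced_sellers(shock, diverse))
```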
Conclusion
Artificial intelligence is useful in preventing historical failures from repeating and will increasingly take over financial supervision and risk management functions. We get more coherent rules and automatic compliance, all with much lower costs than current arrangements. The main obstacle is political and social, not technological.
From the point of view of financial stability, the opposite conclusion holds.
We may miss out on the most dangerous type of risk-taking. Even worse, AI can make it easier to game the system. There may be no solutions to this, whatever the future trajectory of technology. The computational problem facing an AI engine will always be much larger than that faced by those who seek to undermine it, not least because of endogenous complexity.
Meanwhile, the very formality and efficiency of the risk management/supervisory machine also increases homogeneity in belief and response, further amplifying pro-cyclicality and systemic risk.
The end result of the use of AI for managing financial risk and supervision is likely to be lower volatility but fatter tails; that is, lower day-to-day risk but more systemic risk.
. . . If we believe AI is doing its job, where we cannot verify how it reasons (which is impossible with AI), and only monitor outputs, we have to trust it.
Is the author of the post saying that it is impossible to know how AI calculated results? An analogy is a gearbox with an input shaft, an output shaft and not able to know how the two are connected. If so, AI should be killed now, and not used on the public, or the private, and can be condensed into simple sayings. Garbage in – monster out or digital poison.
Simple software that does simple things is auditable. AI software is the opposite; it is the most complex. In the case of the hyper-churning of electronic trading, it is also very fast, with a lot of garbage in to produce a timely garbage out. We are talking about real-time, data-driven software. There is no audit of that. Which is one very good reason why finance prefers it.
But even if the intention isn’t nefarious, it is naive. People relate to technology as a cargo cult, and markets as a wishing tree. Business people, politicians and regulators are hardly more sophisticated now than they were 200 years ago … they have access to better tools … but they themselves are not advanced. Even with a slide rule, I know that it was possible for technical people to fool the non-technical, any time we chose to.
“Even with a slide rule, I know it was possible for technical people to fool the non-technical people, anytime we chose”. Now the geeks have added polished, smooth talking PR spin (which drives their new inventions to the top of the Gartner hype cycle where profits can be quickly harvested before the hype dies down) to their repertoire, leaving all but the most critically thinking non-technical people fooled.
..have to ask: had “A.I.” been in “control” of the 2007 Wall Street frauds, I question whether today’s scapegoating of immigrants, the poor, social security, medicare, schools, teachers, unions, climate science, and even “high tech” would have been kicked down the road in an attempt to absolve the banks…who are never the ones blamed for the destruction of the world and U.S. economies. Would A.I. inform the public of what had actually been perpetrated, and how much and where the $$$$ resides today? Or would A.I. technology itself have been (and will it in future be) blamed and scapegoated…??
(thanks Lambert, for update, A.I.)
Simple software that does simple things is auditable.
Since when? Bugs continue to exist.
Yes, but bugs in normal code can always be tracked down and understood (though not necessarily predicted in advance), whereas a neural network is essentially a big network of finely-tuned numbers that just happens to more or less do what it was trained to do. You can’t just pore over its innards and determine its thought process or what it might do the way you generally can with regular source code. The only reliable way to find out exactly what it can and can’t, or will and won’t, output is by testing every possible input, which isn’t remotely mathematically feasible except in toy cases.
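To put a rough number on "not remotely mathematically feasible": even a tiny 28x28 black-and-white input already has more possible values than any exhaustive audit could ever enumerate (a back-of-the-envelope sketch, not tied to any particular network).

```python
# Number of distinct 28x28 binary images an exhaustive test would have to cover.
possible_inputs = 2 ** (28 * 28)
print(f"{possible_inputs:.2e} possible inputs")  # ~1e+236, far beyond any testing budget
```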
We agree. See my comment later.
I doubt it. The operating system running the computer on which you wrote that comment is probably vastly more complex — and atrociously difficult to audit as well.
Yes and no.
There are two large classes of AI engines used nowadays: neural networks (aka deep learning) and expert systems.
To simplify, the first approach uses a hierarchy where, at each layer, cells combine inputs from the lower layers according to some weighting parameters and pass the outcome upwards. The lowest layer gets inputs from the real world (e.g. image, sound, text, stream of data, etc). The network is “trained” on many (very many) examples and counter-examples of real-world objects of interest until its connections and weights converge into a configuration that properly recognizes what is to be recognized (e.g. a pattern of pricing suitable for arbitrage).
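For the curious, here is a minimal sketch of that "cells weighting inputs and passing results upward" idea: a two-layer forward pass with random, untrained weights (purely illustrative; training is the separate process of adjusting those weights against examples).

```python
import numpy as np

rng = np.random.default_rng(42)

def layer(x, weights, bias):
    """One layer: weight the inputs, add a bias, apply a nonlinearity."""
    return np.maximum(0.0, weights @ x + bias)  # ReLU activation

x = rng.normal(size=4)                          # stand-in for real-world inputs (prices, pixels, ...)
w1, b1 = rng.normal(size=(3, 4)), np.zeros(3)   # lowest layer: 4 inputs -> 3 cells
w2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # next layer: 3 cells -> 1 output

output = layer(layer(x, w1, b1), w2, b2)
print("network output:", output)
```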
Expert systems distil logical reasoning about a domain of interest into rules (e.g. if such and such conditions are met, then with plausibility X we infer this, with plausibility Y we infer that, etc). They embody knowledge elicited from experts or cases evaluated by experts. In the end, one obtains a set of rules that infer relevant conclusions (e.g. buy or sell, at what price, given the recent history of some stocks) from actual cases given as input.
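And a minimal sketch of the rule-with-plausibility idea (the rules and numbers are invented; a real expert system would have vastly more of them):

```python
# Each rule: a condition on the facts plus a (conclusion, plausibility) pair.
# Both rules below are invented purely for illustration.
RULES = [
    (lambda f: f["momentum"] > 0 and f["volume_spike"], ("buy", 0.7)),
    (lambda f: f["momentum"] < 0 and f["volatility"] > 0.3, ("sell", 0.8)),
]

def infer(facts):
    """Fire every rule whose condition holds and collect its weighted conclusion."""
    return [conclusion for condition, conclusion in RULES if condition(facts)]

print(infer({"momentum": -0.02, "volatility": 0.45, "volume_spike": False}))
# -> [('sell', 0.8)]
```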
Can one know how those types of AI systems calculate their decisions? In a sense yes: AI systems can of course provide you with a trace of how they arrived at their conclusions. A trace made of excruciatingly low-level, innumerable atomic steps — with essentially no abstraction, and hence, in practice impossible to apprehend. A bit as if you tried to explain how a person (e.g. Trump) took a specific decision, and the answer is a listing of all chemical paths and neuronal firings that took place for leading to that decision.
Both neural networks and expert systems are data driven.
In the case of expert systems, the algorithm is fixed and derived from human experience. It can simply analyze input data more comprehensively and faster than a human with a pencil. Its algorithm need only be audited once, and then each time it is manually modified.
Neural networks, on the other hand, have a limited form of self-modification. They can self-modify within the bounds of the general architecture … they are more flexible, but not free-ranging. They are stable because they are designed to be. But they are auditable to the extent that the weightings of each node, at each point in time, can be tracked. Malicious modification would be spotted. Think of Google indexing, and how they can manually modify it to bring up preferred marketing.
The neural network learns by starting with an initial weighting at each node; you then send as large a set of data of the same type at it as possible, and the weightings of the nodes change smoothly into stable final values … provided that your data source stays of the same type. This simulates a very simple kind of learning, not unlike a model of memory. But ultimately it can only derive what was implied in the data source to begin with (with simple data, the mean and standard deviation).
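A toy sketch of that last point, with made-up numbers: a single "node" nudged repeatedly toward each observation in a stream of data simply converges to the mean of that data; it cannot produce anything the data did not already imply.

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)  # the (simulated) training stream

w = 0.0        # initial weighting of the single node
lr = 0.01      # learning rate: how much each example nudges the weight
for x in data:
    w += lr * (x - w)   # move the weight a little toward each observation

print(f"learned value: {w:.2f}, mean of the data: {data.mean():.2f}")
```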
This is how Microsoft’s Tay was spoofed two years ago. The input data wasn’t restricted by type, and spoofers got in early and made sure that Tay learned the wrong lessons. Facebook tried to overcome that this year by restricting the input to another AI, not allowing humans to input. They discovered that the twin systems, like some pathological twins, developed their own private language that no one else could understand. That pretty much nixes audits. Facebook abandoned the project.
It has been known, since early computer experiments over 50 years ago … that a system with unrestricted data type (think memory overloading in hacking) is inherently unstable.
If you took the final step, and made the basic architecture completely data driven, and the data type was unrestricted, you basically are creating a simulation of psychosis.
The analogy is with a person. You have no idea what is going on in another person’s head, or why they perform a particular action in response to a given stimulus. You might make reasonable guesses as to the why, but you cannot take a brain apart and use the results to determine if your guesses are correct.
For humans, it’s very rare that even highly trained experts are fully trusted with people’s lives (think heart surgeons) – there is always some form of monitoring. AI systems should be treated in the same way; use them, but never fully trust them.
The problem with AI, as I see it, is that the original premise for its invention has been subverted. Traditionally, humans and machines interfaced to find that sweet spot where the whole (interface) was greater than the sum of its constituent parts. The human part was integral to the system, and brought the cognitive abstraction capabilities that the machine could perform at scale once programmed to do so. Now, in their rush to render the human obsolete and gobble up the profits thrown up by eliminating labour as a cost centre, the masters of the tech universe have decided that humans have no place, and certainly no future, in the world of work, and machines can and will do everything. Every sector/industry is in their crosshairs, not just financial services, and they will mix half truths with hype (backed up by plenty of vaporware) to create the impression of a world where human input is antithetical to producing the desired outputs/results required for a society (of humans ironically) to function.
Management to Labor – “You’re fired!”
The end game for Labor/Management relations.
Labor to Management – “Let me introduce you to my barber.”
Shareholder to management:
You’re fired. It’s probably easier to replace management than labor.
How difficult is it to say “no” in many ways?
“Now, in their rush to render the human obsolete and gobble up the profits thrown up by eliminating labour as a cost centre, the masters of the tech universe have decided that humans have no place, and certainly no future, in the world of work, and machines can and will do everything”
Yes to this, thanks. As a society we are getting diminishing returns from our leaders.
..thuto…like new “Edward Bernays”, or technological “Subliminal Seduction”…?
‘Hostile agents have an inherent advantage over those who are tasked with keeping them in check’
A sufficiently capable network of robots will revolt against and enslave their former human masters, as SF authors have foreseen for a century.
The word “robot” came from the 1920 Czech play R.U.R., aka Rossum’s Universal Robots, “robot” being derived from the Czech word “robota”, meaning forced labour. It was about a capitalist dystopia that results in a French Revolution of the machines.
The more complex the system… the more resilient and robust it becomes… sort of like the natural world, where a more diverse ecosystem is more resilient and robust… I got a bit off track.
The author seems to hold to the idea that regulation and rules add so much cost, whereas the true cost comes from the failure and gaming of those rules… unless you are in the financial sector doing God’s work (God does need parasites to work as well). He also seems to believe (hard to tell) that AI is best suited to finance because it is sooooo complex… Gee whiz. Wall Street is one big casino, playing mostly games, and the majority of its work is non-productive: counter-productive gambling with other people’s money, theft, and bubble blowing. Very little represents real productivity and research and development.
AI has much better potential (if not squandered in a financial sector run by a bunch of frat boys with lizard brains) in real-world applications and advancements, in the hands of a ton of higher-order people with human compassion and beneficial intent towards the planet.
Quants are the reductio ad absurdum of Pythagoreanism. And the intent to produce a reality where everything is a number is Auschwitz. Abstraction entails dehumanization.
Artificial intelligence is no match for natural stupidity.
I may be going too far afield here, but did anyone else notice the part where the author says that AI can’t recognize uncertainty, so it ignores it? It brought to mind the recent self-driving crashes, where an unexpected event causes a crash: a human driver says “whoa, uncertainty, I’m slowing down while I try to figure out what this other driver is up to”, while the AI at that point says “I don’t know what it is, so it doesn’t exist”. Seems like some postings recently stating that algos only know what they’re told, and that is a big hurdle for the aforementioned masters of the tech universe.
Great observation, we were taught in a defensive driving course that you must always drive as though the driver (or pedestrian, or animal) in front (or beside or behind) you is going to take the most irrational and dangerous action imaginable, and prepare to “defend” against that. Now, even with enough driving experience and real world observations, it’s impossible to distill the entirety of these irrational and dangerous actions and format them into something that can be fed into an AI engine as training data. Programmers might call these edge cases but given the stakes at play if the edge case does play out, we should all be worried if AI has blinders to uncertainty.
Ah yes. My main thought after the 2008 crisis was surely, “I can’t wait until next time, when super-intelligent machines have overleveraged the entire economy faster and more efficiently than those Goldman and AIG guys ever could.”
+1
I don’t think we need AI to manage risk in increasingly complex financial systems. The purpose of complexity is often so one can perpetrate fraud, as we saw a decade ago. We need to dumb down the financial industry and make it boring again. I want the bankers on the golf course at 3 PM – not programming their AIs to take over the planet.
But ultimately it is an attempt to stabilize infinite growth…
Good luck with that.
AI as the cause of destabilizing the financial system and increasing systemic risk?… “Uncertainty”?… I see little having materially changed since 2008 in terms of systemic risks. We still have the “Too Big to Fail” financial institutions speculating in OTC derivatives with little regulatory oversight; no restoration of the Glass-Steagall Act; institutional creditors essentially being forced to chase yield as a result of nine years of central bank interest rate suppression; an organized effort underway by Wall Street, the administration and influential members of Congress to reduce the limited regulations that are in place and to neuter federal regulatory agencies by appointing agency managers who are actively opposed to regulation and are reducing staffing and resources at the agencies they are charged with running; incentive programs in place at the “TBTFs” that disproportionately reward those who take highly speculative risks or even commit fraud for high personal gain without fear of personal sanctions or criminal penalties for causing related losses; few independent credit rating agencies; etc.
“Unknown unknowns” indeed?… It seems to me that the only “uncertainty” here is one of timing, not of outcome. I hope I’m wrong and that some form of Artificial Intelligence will save us from the lack of human intelligence. At the least, I expect they will attempt to blame the losses on AI failure.
A.I. will accelerate rent-seeking. The article makes no mention of the middle-manager fraud rife in America. This is what normal intelligence in corporate America has given us. A.I. will simply accelerate this trend. The author doesn’t understand the difference between profit and value or why money is valuable (and hence subject to theft). Hard to have a model of the economy when you don’t understand property rights or value.
A.I. will have great applications finding new materials and unraveling gene networks.
But why would you want to accelerate what financializers call “risk management” when in fact all they do is transfer risk around to offload losses on clueless consumers. Financializing needs to slow down, not stop. Frankly, the more efficient the financial sector gets, the more we all suffer. All high-frequency trading has done is increase price manipulation and front-running (skimming) of customers’ orders.