Artificial Intelligence and Financial Stability

Yves here. This post gives a clean, elegant explanation as to why AI looks almost destined to be deployed to more and more important decisions in financial institutions, particularly on the trading side, so as to virtually insure dislocations. Recall that the 1987 crash was the result of portfolio insurance, which was an early implementation of algo-driven trading. Hedge funds that rely on black box trading have been A Thing for easily a decade and a half. More generally, a lot of people in finance like to be on the bleeding edge due to perceived competitive advantage…even if only in marketing!

Note that the risks are not just on the investment decision/trade execution side, but also on the risk management side, as in what limits counterparties and protection-writers put on their exposures. Author Jon Danielsson correctly points to the inherent paucity of tail risk data, which creates a dangerous blind spot for models generally, and for likely-to-be-overly-trusted AI in particular.

Unlike many articles of this genre, this one includes a “to do” list for regulators.

By Jon Danielsson, Director, Systemic Risk Centre, London School of Economics and Political Science. Originally published at VoxEU

Financial institutions are rapidly embracing AI – but at what cost to financial stability? This column argues that AI introduces novel stability risks that the financial authorities may be unprepared for, raising the spectre of faster, more vicious financial crises. The authorities need to (1) establish internal AI expertise and AI systems, (2) make AI a core function of the financial stability divisions, (3) acquire AI systems that can interface directly with the AI engines of financial institutions, (4) set up automatically triggered liquidity facilities, and (5) outsource critical AI functions to third-party vendors.

Private-sector financial institutions are rapidly adopting artificial intelligence (AI), motivated by promises of significant efficiency improvements. While these developments are broadly positive, AI also poses threats – which are poorly understood – to the stability of the financial system.

The implications of AI for financial stability are controversial. Some commentators are sanguine, maintaining that AI is just one in a long line of technological innovations that are reshaping financial services without fundamentally altering the system. According to this view, AI does not pose new or unique threats to stability, so it is business as usual for the financial authorities. An authority taking this view will likely delegate AI impact analysis to the IT or data sections of the organisation.

I disagree with this view. The fundamental difference between AI and previous technological changes is that AI makes autonomous decisions rather than merely informing human decision-makers. It is a rational maximising agent that executes the tasks assigned to it – one of Russell and Norvig’s (2021) classifications of AI. Compared with the technological changes that came before, this autonomy raises new and complex issues for financial stability. It implies that central banks and other authorities should make AI impact analysis a core area of their financial stability divisions, rather than merely housing it with IT or data.

AI and Stability

The risks AI poses to financial stability emerge at the intersection of AI technology and traditional theories of financial system fragility.

AI excels at detecting and exploiting patterns in large datasets quickly, reliably, and cheaply. However, its performance depends heavily on being trained with relevant data, arguably even more so than for humans. AI’s ability to respond swiftly and decisively – combined with its opaque decision-making, its capacity to collude with other engines, and its propensity for hallucination – is at the core of the stability risks it creates.

AI gets embedded in financial institutions by building trust through performing very simple tasks extremely well. As it gets promoted to increasingly sophisticated tasks, we may end up with the AI version of the Peter principle.

AI will become essential, no matter what the senior decision-makers wish. As long as AI delivers significant cost savings and increases efficiency, it is not credible to say, ‘We would never use AI for this function’ or ‘We will always have humans in the loop’.

It is particularly hard to ensure that AI does what it is supposed to do in high-level tasks, as it requires more precise instructions than humans do. Simply telling it to ‘keep the system safe’ is too broad. Humans can fill those gaps with intuition, broad education, and collective judgement. Current AI cannot.

A striking example of what can happen when AI makes important financial decisions comes from Scheurer et al. (2024), where a language model was explicitly instructed to both comply with securities laws and to maximise profits. When given a private tip, it immediately engaged in illegal insider trading while lying about it to its human overseers.

Financial decision-makers must often explain their choices, perhaps for legal or regulatory reasons. Before hiring someone for a senior job, we demand that the person explain how they would react in hypothetical cases. We cannot do that with AI, as current engines offer limited explainability – the ability to help humans understand how a model arrives at its conclusions – especially at high levels of decision-making.

AI is prone to hallucination, meaning it may confidently give nonsense answers. This is particularly common when the relevant data is not in its training dataset. That is one reason why we should be wary of using AI to generate stress-testing scenarios.

AI facilitates the work of those who wish to use technology for harmful purposes, whether to find legal and regulatory loopholes, commit a crime, engage in terrorism, or carry out nation-state attacks. These people will not follow ethical guidelines or regulations.

Regulation serves to align private incentives with societal interests (Dewatripont and Tirole 1994). However, traditional regulatory tools – the carrots and sticks – do not work with AI. It does not care about bonuses or punishment. That is why regulation will have to change fundamentally.

Because of the way AI learns, each engine observes the decisions of all other AI engines in the private and public sectors. Engines therefore optimise to influence one another: AI trains other AI, for good and for bad, producing feedback loops that reinforce undesirable behaviour (see Calvano et al. 2019). These hidden AI-to-AI channels, which humans can neither observe nor understand in real time, may lead to runs, liquidity evaporation, and crises.

A key reason why it is so difficult to prevent crises is how the system reacts to attempts at control. Financial institutions do not placidly accept what the authorities tell them; they react strategically. Worse, we do not know how they will react to future stress, and I suspect they do not know themselves. The reaction function of both public- and private-sector participants to extreme stress is largely unknown.

That is one reason we have so little data about extreme events. Another is that crises are all unique in their details. They are also inevitable, since acting on ‘lessons learned’ means we change the way we operate the system after each crisis. It is axiomatic that the forces of instability emerge where we are not looking.

AI depends on data. While the financial system generates vast volumes of data daily – exabytes’ worth – the problem is that most of it comes from the middle of the distribution of system outcomes rather than from the tails. Crises are all about the tails.

This lack of data drives hallucination and creates wrong-way risk. Because we have so little data on extreme financial-system outcomes, and because each crisis is unique, AI cannot learn much from past stress; it also knows little about the most important causal relationships. Such a problem is the opposite of what AI is good at: when AI is needed the most, it knows the least. That is wrong-way risk.
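
As a back-of-the-envelope illustration of the data-scarcity point (a sketch of my own, not from the column), the snippet below simulates roughly 40 years of daily returns from a heavy-tailed distribution and counts how few land at or below a crisis-sized fall. The distribution, the 1% scale, and the 5% threshold are all arbitrary assumptions; the only point is that the tail contributes a tiny fraction of the observations a model could learn from.

```python
# Toy illustration: even a long return history contains very little tail data.
import numpy as np

rng = np.random.default_rng(seed=1)

years = 40
days = 252 * years                                # ~40 years of trading days
returns = 0.01 * rng.standard_t(df=4, size=days)  # heavy-tailed daily returns, ~1% scale (illustrative)

crisis_move = -0.05                               # a 5% down day, roughly crisis territory
tail_obs = returns[returns <= crisis_move]

print(f"daily observations: {days}")
print(f"days at or below {crisis_move:.0%}: {tail_obs.size}")
print(f"share of sample in that tail: {tail_obs.size / days:.4%}")
```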

The threats AI poses to stability are further affected by risk monoculture, which is always a key driver of booms and busts. AI technology has significant economies of scale, driven by complementarities in human capital, data, and compute. Three vendors are set to dominate the AI financial analytics space, each with a near-monopoly in its specific area. The threat to financial stability arises when most people in the private and public sectors have no choice but to get their understanding of the financial landscape from a single vendor. The consequence is risk monoculture: we inflate the same bubbles and miss the same systemic vulnerabilities. Humans are more heterogeneous, and so can be more of a stabilising influence when faced with serious unforeseen events.

AI Speed and Financial Crises

When faced with shocks, financial institutions have two options: run (i.e. destabilise) or stay (i.e. stabilise). Here, the strength of AI works to the system’s detriment, not least because AI across the industry will rapidly and collectively make the same decision.

When a shock is not too serious, it is optimal to absorb and even trade against it. As AI engines rapidly converge on a ‘stay’ equilibrium, they become a force for stability by putting a floor under the market before a crisis gets too serious.

Conversely, if avoiding bankruptcy demands swift, decisive action, such as selling into a falling market and consequently destabilising the financial system, AI engines collectively will do exactly that. Every engine will want to minimise losses by being the first to run. The last to act faces bankruptcy. The engines will sell as quickly as possible, call in loans, and trigger runs. This will make a crisis worse in a vicious cycle.

The very speed and efficiency of AI means AI crises will be fast and vicious (Danielsson and Uthemann 2024). What used to take days and weeks before might take minutes or hours.
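
The following toy simulation is my own illustration of this run-versus-stay dynamic, not a model from the column or from Danielsson and Uthemann (2024). Agents sell once the cumulative price fall breaches their trigger, and every sale deepens the fall. When every institution runs the same engine, the triggers coincide and the run is instantaneous and deep; when triggers are dispersed, the shock is largely absorbed. The shock size, price impact, and trigger ranges are arbitrary assumptions.

```python
# Toy cascade: identical decision rules synchronise a run; dispersed rules damp it.
import numpy as np

def simulate(thresholds, shock=-0.03, impact=0.001, steps=50):
    """Each agent sells once the cumulative price move breaches its trigger.
    Every sale pushes the price down further by `impact` (a crude feedback)."""
    price_move = shock
    sold = np.zeros(len(thresholds), dtype=bool)
    for _ in range(steps):
        selling = (~sold) & (price_move <= thresholds)
        if not selling.any():
            break
        sold |= selling
        price_move -= impact * selling.sum()   # sales deepen the fall
    return price_move, int(sold.sum())

n = 100
rng = np.random.default_rng(seed=2)

# Identical AI engines: the same model implies the same trigger for everyone.
identical = np.full(n, -0.03)
# Heterogeneous desks: triggers are dispersed, many are never breached.
dispersed = rng.uniform(-0.20, -0.02, size=n)

for name, triggers in [("identical triggers", identical),
                       ("dispersed triggers", dispersed)]:
    move, sellers = simulate(triggers)
    print(f"{name}: final move {move:.1%}, {sellers} of {n} ran")
```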

Policy Options

Conventional mechanisms for preventing and mitigating financial crises may not work in a world of AI-driven markets. Moreover, if the authorities appear unprepared to respond to AI-induced shocks, that in itself could make crises more likely.

The authorities need five key capabilities to effectively respond to AI:

  1. Establish internal AI expertise and build or acquire their own AI systems. This is crucial for understanding AI, detecting emerging risks, and responding swiftly to market disruptions.
  2. Make AI a core function of the financial stability divisions, rather than placing AI impact analysis in statistical or IT divisions.
  3. Acquire AI systems that can interface directly with the AI engines of financial institutions. Much of private-sector finance is now automated. These AI-to-AI API links allow benchmarking of micro-regulations, faster detection of stress, and more transparent insight into automated decisions.
  4. Set up automatically triggered liquidity facilities. Because the next crisis will be so fast, a bank’s AI might already have acted before the bank CEO has a chance to pick up the phone to respond to the central bank governor’s call. Existing conventional liquidity facilities might be too slow, making automatically triggered facilities necessary (a minimal sketch of such a trigger rule follows this list).
  5. Outsource critical AI functions to third-party vendors. This will bridge the gap caused by authorities not being able to develop the necessary technical capabilities in-house. However, outsourcing creates jurisdictional and concentration risks and can hamper the necessary build-up of AI skills by authority staff.
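
To make point 4 concrete, here is a minimal sketch of what an automatically triggered facility’s decision rule could look like. The column does not specify a design, so the stress indicators (a funding spread and an intraday drawdown), the thresholds, the haircut, and the per-bank cap are all hypothetical choices for illustration.

```python
# Hypothetical sketch of an automatically triggered liquidity facility rule.
from dataclasses import dataclass

@dataclass
class FacilityTerms:
    spread_trigger_bps: float = 150.0   # funding spread breach that arms the facility
    drawdown_trigger: float = 0.08      # intraday market drawdown that arms it
    haircut: float = 0.15               # haircut on pre-positioned collateral
    max_draw_per_bank: float = 5e9      # cap per institution, in currency units

def automatic_draw(terms: FacilityTerms, funding_spread_bps: float,
                   intraday_drawdown: float, prepositioned_collateral: float) -> float:
    """Return the liquidity released immediately, with no human in the loop.

    The facility fires when either stress indicator breaches its pre-committed
    threshold; the amount is capped by collateral value after the haircut and
    by the per-bank limit."""
    triggered = (funding_spread_bps >= terms.spread_trigger_bps
                 or intraday_drawdown >= terms.drawdown_trigger)
    if not triggered:
        return 0.0
    collateral_value = prepositioned_collateral * (1.0 - terms.haircut)
    return min(collateral_value, terms.max_draw_per_bank)

# Example: spreads blow out to 180bp while the market is down 5% intraday.
terms = FacilityTerms()
print(automatic_draw(terms, funding_spread_bps=180, intraday_drawdown=0.05,
                     prepositioned_collateral=4e9))   # releases 3.4e9 at once
```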

Conclusion

AI will bring substantial benefits to the financial system – greater efficiency, improved risk assessment, and lower costs for consumers. But it also introduces new stability risks that should not be ignored. Regulatory frameworks need rethinking, risk management tools have to be adapted, and the authorities must be ready to act at the pace AI dictates.

How the authorities choose to respond will have a significant impact on the likelihood and severity of the next AI crisis.

See original post for references

4 comments

  1. Watt4Bob

    Yves here. This post gives a clean, elegant explanation as to why AI looks almost destined to be deployed to more and more important decisions in financial institutions, particularly on the trading side, so as to virtually insure dislocations.

    ‘Virtually insure’ means guaranteed, right?

    Last I heard, 60-70% of trading was programmed, so yeah, let’s mate AI to that mess and let ’er rip.

    Regulatory frameworks need rethinking, risk management tools have to be adapted, and the authorities must be ready to act at the pace AI dictates.

    Musk is sure to recommend some dandy regs, and Larry Summers is sure to warn us about the risks, just in time.

    “… authorities must be ready to act…”

    Maybe Hank Paulson will tell us when?

  2. Cervantes

    Why is the answer “get smarter AI designed better with more connections to outside systems/databases/AIs ASAP” instead of “kill it and dismember the body”?

    1. Adam1

      Why? Well, in my opinion it’s partly in its name – Artificial Intelligence. Too many people actually believe it is intelligent and that any weaknesses can be fixed. But as Rev Kev said in a comment yesterday, it’s not sentient, which means it can never get that gut feeling that something is wrong, or that maybe its output is incorrect even though its processing says this is the best (which must mean correct) response.

  3. Adam1

    “…where a language model was explicitly instructed to both comply with securities laws and to maximise profits. When given a private tip, it immediately engaged in illegal insider trading while lying about it to its human overseers.”

    Are we sure this wasn’t the desired outcome?!?! It behaved just like any sociopathic Wall Street trader/financier would have.
