How the Financial Authorities Can Respond to AI Threats to Financial Stability

Lambert here: Is a bullshit generator really a “rational maximising agent”?

By Jon Danielsson, Director, Systemic Risk Centre, London School of Economics and Political Science, and Andreas Uthemann, Principal Researcher, Bank of Canada; Research Associate at the Systemic Risk Centre, London School of Economics and Political Science. Originally published at VoxEU.

Artificial intelligence can act to either stabilise the financial system or to increase the frequency and severity of financial crises. This second column in a two-part series argues that the way things turn out may depend on how the financial authorities choose to engage with AI. The authorities are at a considerable disadvantage because private-sector financial institutions have access to expertise, superior computational resources, and, increasingly, better data. The best way for the authorities to respond to AI is to develop their own AI engines, set up AI-to-AI links, implement automatic standing facilities, and make use of public-private partnerships.

Artificial intelligence (AI) has considerable potential to increase the frequency and severity of financial crises. We discussed this last week on VoxEU in a column titled “AI financial crises” (Danielsson and Uthemann 2024a). But AI can also stabilise the financial system. It just depends on how the authorities engage with it.

In Russell and Norvig’s (2021) classification, we see AI as a “rational maximising agent”. This definition resonates with typical economic analyses of financial stability. What distinguishes AI from purely statistical modelling is that it not only uses quantitative data to provide numerical advice; it also applies goal-driven learning to train itself with qualitative and quantitative data, providing advice and even making decisions.

One of the most important tasks – and not an easy one – for the financial authorities, and central banks in particular, is to prevent and contain financial crises. Systemic financial crises are very damaging and cost the large economies trillions of dollars. The macroprudential authorities have an increasingly difficult job because the complexity of the financial system keeps increasing.

If the authorities choose to use AI, they will find it of considerable help because it excels at processing vast amounts of data and handling complexity. AI could unambiguously aid the authorities at the micro level but struggle in the macro domain.

The authorities find engaging with AI difficult. They have to monitor and regulate private AI while identifying systemic risk and managing crises that could develop more quickly, and end up more intense, than those we have seen before. If they are to remain relevant overseers of the financial system, the authorities must not only regulate private-sector AI but also harness it for their own mission.

Not surprisingly, many authorities have studied AI. These include the IMF (Comunale and Manera 2024), the Bank for International Settlements (Aldasoro et al. 2024, Kiarelly et al. 2024), and the ECB (Moufakkir 2023, Leitner et al. 2024). However, most published work from the authorities focuses on conduct and microprudential concerns rather than financial stability and crises.

Compared to the private sector, the authorities are at a considerable disadvantage, and this is exacerbated by AI. Private-sector financial institutions have access to more expertise, superior computational resources, and, increasingly, better data. AI engines are protected by intellectual property and fed with proprietary data – both often out of reach of the authorities.

This disparity makes it difficult for the authorities to monitor, understand, and counteract the threat posed by AI. In a worst-case scenario, it could embolden market participants to pursue increasingly aggressive tactics, knowing that the likelihood of regulatory intervention is low.

Responding to AI: Four Options

Fortunately, the authorities have several good options for responding to AI, as we discussed in Danielsson and Uthemann (2024b). They could use triggered standing facilities, implement their own financial system AI, set up AI-to-AI links, and develop public-private partnerships.

1. Standing Facilities

Because of how quickly AI reacts, the discretionary intervention facilities that are preferred by central banks might be too slow in a crisis.

Instead, central banks might have to implement standing facilities with predetermined rules that allow for an immediate triggered response to stress. Such facilities could have the side benefit of ruling out some crises caused by the private sector coordinating on run equilibria. If AI knows central banks will intervene when prices drop by a certain amount, the engines will not coordinate on strategies that are only profitable if prices drop further. An example is how short-term interest rate announcements are credible because market participants know central banks can and will intervene: the announced rate becomes a self-fulfilling prophecy, even without the central bank actually intervening in the money markets.
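To make the trigger mechanism concrete, the sketch below shows one way such a rule could be expressed. It is a minimal illustration, not a proposal for actual facility design: the reference price, the 10% trigger threshold, and the function name are all assumptions made for the example.

```python
# A minimal sketch of a triggered standing facility rule (all parameters
# are illustrative assumptions). The central bank commits in advance to
# buying whenever the market price falls more than `trigger_drop` below
# a reference level, putting a known floor under prices.
def facility_bid(reference_price: float, market_price: float,
                 trigger_drop: float = 0.10) -> float | None:
    """Return the price at which the facility stands ready to buy,
    or None if the trigger has not been hit."""
    floor = reference_price * (1 - trigger_drop)
    return floor if market_price <= floor else None

# Because the floor is known in advance, strategies that are only
# profitable if prices fall below it are never worth coordinating on.
print(facility_bid(100.0, 93.0))  # None: price is above the floor
print(facility_bid(100.0, 89.0))  # 90.0: facility bids at the floor
```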

Would such an automatic programmed response to stress need to be non-transparent to prevent gaming and, hence, moral hazard? Not necessarily. Transparency can help prevent undesirable behaviour; we already have many examples of well-designed transparent facilities promoting stability. If one can eliminate the worst-case scenarios by preventing private-sector AI from coordinating on them, strategic complementarities will be reduced. Also, if the intervention rule prevents bad equilibria, market participants will not need to call on the facility in the first place, keeping moral hazard low. The downside is that, if poorly designed, such pre-announced facilities will allow gaming and hence increase moral hazard.

2. Financial System AI Engines

The financial authorities can develop their own AI engines to monitor the financial system directly. If they can overcome the legal and political difficulties of data sharing, they can leverage the considerable amount of confidential data they already have access to and so obtain a comprehensive view of the financial system.

3. AI-to-AI Links

One way to take advantage of the authorities’ AI engines is to develop AI-to-AI communication frameworks. These would allow the authorities’ engines to communicate directly with those of other authorities and of the private sector. The technological requirement would be a communication standard – an application programming interface, or API: a set of rules and standards that allow computer systems built on different technologies to communicate with one another securely.
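To illustrate, here is a minimal sketch of what messages over such an API might look like. All field names, values, and the schema itself are hypothetical illustrations, not an actual or proposed regulatory standard.

```python
# A hypothetical AI-to-AI message schema (all names are illustrative).
import json
from dataclasses import dataclass, asdict

@dataclass
class ScenarioQuery:
    """A supervisory AI asks a private-sector engine how it would respond."""
    scenario_id: str
    shock: dict        # e.g. {"asset": "10y_gov_bond", "price_change_pct": -5.0}
    horizon_days: int

@dataclass
class ScenarioResponse:
    scenario_id: str
    intended_action: str      # e.g. "sell", "hold", "provide_liquidity"
    size_pct_of_book: float

query = ScenarioQuery(
    scenario_id="stress-2024-001",
    shock={"asset": "10y_gov_bond", "price_change_pct": -5.0},
    horizon_days=5,
)

# In practice, the JSON payload would travel over a secure, authenticated
# channel to the counterparty engine's endpoint; here we just print it.
print(json.dumps(asdict(query)))
```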

Such a set-up would bring several benefits. It would facilitate the regulation of private-sector AI by helping the authorities monitor and benchmark it directly against predefined regulatory standards and best practices. AI-to-AI communication links would also be valuable for financial stability applications such as stress testing.

When a crisis happens, the overseers of the resolution process could task the authorities’ AI with using the AI-to-AI links to run simulations of alternative crisis responses, such as liquidity injections, forbearance, or bailouts, allowing regulators to make more informed decisions.
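As a stylised illustration, the sketch below compares candidate policies by the aggregate selling pressure the queried engines report. The engines, policy names, stubbed numbers, and the query function are all assumptions made for the example.

```python
# A hypothetical sketch of comparing crisis responses over AI-to-AI links.
def query_engine(engine: str, policy: str) -> float:
    """Ask one engine for its planned net selling (% of book) under a policy.
    Stubbed with fixed numbers; in practice this would be an API call."""
    responses = {
        "liquidity_injection": {"bank_a": 1.0, "bank_b": 0.5, "fund_c": 2.0},
        "forbearance":         {"bank_a": 4.0, "bank_b": 3.5, "fund_c": 6.0},
        "bailout":             {"bank_a": 0.5, "bank_b": 0.0, "fund_c": 1.5},
    }
    return responses[policy][engine]

engines = ["bank_a", "bank_b", "fund_c"]
policies = ["liquidity_injection", "forbearance", "bailout"]

# Aggregate the selling pressure each policy would generate and rank them.
pressure = {p: sum(query_engine(e, p) for e in engines) for p in policies}
least_destabilising = min(pressure, key=pressure.get)
print(pressure, "->", least_destabilising)
```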

If perceived as competent and credible, the mere presence of such an arrangement might act as a stabilising force in a crisis.

The authorities need to have this response in place before the next stress event occurs. That means making the necessary investments in computing, data, and human capital, and resolving the legal and sovereignty issues that will arise.

4. Public-Private Partnerships

The authorities need access to AI engines that match the speed and complexity of private-sector AI. It seems unlikely they will end up with their own in-house engines, as that would require considerable public investment and a reorganisation of the way the authorities operate. A more likely outcome is the type of public-private partnership that has already become common in financial regulation, as in credit risk analytics, fraud detection, anti-money laundering, and risk management.

Such partnerships come with downsides. The problem of risk monoculture due to an oligopolistic AI market structure would be a real concern. Furthermore, they might prevent the authorities from collecting information about decision-making processes. Private-sector firms also prefer to keep technology proprietary and not disclose it, even to the authorities. However, that might not be as big a drawback as it appears: evaluating engines with AI-to-AI benchmarking might not require access to the underlying technology, only to how an engine responds in particular cases, which can be implemented via the AI-to-AI API links.
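A black-box benchmark of this kind could be as simple as the sketch below: probe the engine with a standard battery of scenarios and check its responses against regulatory reference answers, without ever opening the engine itself. The scenario names, tolerances, and stubbed responses are illustrative assumptions.

```python
# A minimal sketch of black-box benchmarking over AI-to-AI links
# (scenario names, benchmark values, and tolerances are assumptions).
def within_benchmark(response: float, reference: float, tolerance: float) -> bool:
    """Pass if the engine's response is within tolerance of the benchmark."""
    return abs(response - reference) <= tolerance

# Standard battery: scenario -> (reference response, allowed deviation),
# e.g. planned portfolio change in % of book under each shock.
battery = {
    "rates_up_200bp":   (-3.0, 1.5),
    "equity_down_20pc": (-8.0, 3.0),
    "fx_shock_10pc":    (-2.0, 1.0),
}

# Responses collected from one engine over the AI-to-AI link (stubbed).
engine_responses = {
    "rates_up_200bp": -2.5,
    "equity_down_20pc": -12.0,
    "fx_shock_10pc": -1.8,
}

results = {s: within_benchmark(engine_responses[s], ref, tol)
           for s, (ref, tol) in battery.items()}
print(results)  # flags equity_down_20pc as outside the benchmark band
```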

Dealing with the Challenges

Although no technological barrier prevents the authorities from setting up their own AI engines and implementing AI-to-AI links with current AI technology, they face several practical challenges in implementing the options above.

The first is data and sovereignty. The authorities already struggle with data access, and this seems to be getting worse because technology firms own the data and protect their measurement processes with intellectual property. The authorities are also reluctant to share confidential data with one another.

The second issue for the authorities is how to deal with AI that causes excessive risk. A policy response that has been suggested is to suspend such AI, using a ‘kill switch’ akin to trading suspensions in flash crashes. We suspect that might not be as viable as the authorities think because it might not be clear how the system will function if a key engine is turned off.

Conclusion

If the use of AI in the financial system grows rapidly, it should increase the robustness and efficiency of financial services delivery at a much lower cost than is currently the case. However, it could also bring new threats to financial stability.

The financial authorities are at a crossroads. If they are too conservative in reacting to AI, there is considerable potential for AI to become embedded in the private financial system without adequate oversight. The consequence might be an increase in the frequency and severity of financial crises.

However, the increased use of AI might instead stabilise the system, reducing the likelihood of damaging financial crises. This is most likely to happen if the authorities take a proactive stance and engage with AI: they can develop their own AI engines to assess the system, leveraging public-private partnerships, and use those engines to establish AI-to-AI communication links for benchmarking private-sector AI, running stress tests, and simulating crisis responses. Finally, the speed of AI-driven crises underlines the importance of triggered standing facilities.

Authors’ note: Any opinions and conclusions expressed here are those of the authors and do not necessarily represent the views of the Bank of Canada.

References available at the original.


This entry was posted in Ridiculously obvious scams.


9 comments

  1. rick shapiro

    The fact that macroeconomics is highly non-linear, and arises from self-referential actions, means that there will always be unanticipatable crises, with or without AI, that more AI will not suppress. Just think of the 2010 flash crash, which, while computer-driven, did not involve actual AI. Also, the 1987 crash was driven, not by AI, but by a craze to semi-automate individual protection against market corrections.

  2. Acacia

    Um, why should we care about attempts to “stabilize the financial system” which do nothing to change the status quo?

    So we can “fight for change” later?

  3. GramSci

    «In Russell and Norvig’s (2021) classification, we see AI as a “rational maximising agent”.»

    It having been some 20 years since I last read Norvig, I thought I’d take a peek. In this 2020 interview he says pretty much off the bat that ‘We can optimize anything, but we scientists can’t determine what to optimize. How much will you pay us?’

    OK, he didn’t state that last proposition out loud, but there will always be some “Good German” scientists who will rise to the bait.

  4. MFB

    This is basically a call to start a run on the Bank of Canada. Otherwise, the category in which Mr Strether filed it says all that needs to be said about the piece.

  5. Skip Intro

    This is a weird piece, mostly vague about what actual role the authors envision for ‘AI’. I think the conclusion (drawn from thin air) is revealing:

    If the use of AI in the financial system grows rapidly, it should increase the robustness and efficiency of financial services delivery at a much lower cost than is currently the case. However, it could also bring new threats to financial stability.

    A pitch for AI funding for regulators, which swallows the AI hype whole. But one can’t expect too much from an economist talking about AI, as BS is nonfungible.
    Lambert’s opening question however, is much more interesting. Machine learning algorithms can be made to ape human trading patterns, and if there were lots of training data of actual rational maximizers’ trades, those could be aped too. But given AI implementations’ propensity for plausible errors, it may be a while before they will be broadly trusted to do completely autonomous trading. If AI systems do start to dominate trading, then they will increasingly be forced to rely only on historical data. Otherwise they risk training on their own results, which can lead to ‘model collapse’.
    On the other hand, we can be sure that AI algorithms have been influencing and automating some trading for years already, and either AI does so well that it is a tightly kept secret of invisible billionaires, or it apes humans so well that it provides mediocre results.
    Regulators should be concerned with precisely this ability of AIs to mimic training data, as the creative accounting required to convincingly cook the books for money launderers, for example, would be trivial for an AI, especially if some losses along the way were acceptable. The systemic risk posed by AI is an acceleration of the escape of plutocratic wealth from public accounting.

  6. LilD

    Public-private partnerships…

    Typically socializing the downside and rent extraction of the upside…

  7. Luis Garcia de la Fuente

    Funny, I always thought the “financial authorities” were the main threat to financial stability. Now they will convince the plebs that besides evil man Putin we have to be careful of Terminator. The show must go on.
