Banks fear Fed crackdown on AI models

Dealers say the agencies’ request for info could prompt new rules that stifle model innovation.

US regulators are pushing banks to disclose more information about their use of machine learning (ML) – a move that some fear could stymie the development of promising approaches to modeling everything from credit decisioning to regulatory stress scenarios, where techniques used to power models can defy easy explainability.

Led by the Federal Reserve, US prudential regulators issued a request for information (RFI) in April on the uses of artificial intelligence (AI) and ML. The deadline for comment was extended by a month to July 1 to give commenters more time to respond, in acknowledgment of the technical nature of the information requested.

While some view the regulatory interest as a healthy sign, others are worried that nascent development efforts will be derailed before they have a chance to prove themselves.

“Guidance, yes. Rules, no. This field continues to develop, and innovation is important, and there can be unintended consequences from closing off innovation. At the same time, it is important to acknowledge that there are risks,” says a model risk management executive at a large US bank.

Explainability has long been a barrier to the adoption of certain ML approaches, but the past year has seen an acceleration of regulatory activity that banks say has led them to pre-emptively determine which applications are best suited to AI, and to be prepared to explain how those models work.

Several banks speaking to Risk.net for this article say they are already asked during supervisory reviews to justify that the models they use are appropriate for their applications. Much of the existing regulatory apparatus is geared towards asking questions about traditional, transparent forms of modeling.


The stated objective of the RFI is to provide regulators with insight into how banks are using new modeling techniques. A draft response by the Bank Policy Institute industry group notes that AI-based models are already well established in fraud detection, anti-money laundering, credit underwriting and other applications, and that regulatory actions should support, not undermine, those initiatives.

A large US regional bank acknowledges in its draft response to the RFI that a stricter degree of explainability and model transparency may be required in certain applications, such as fair lending, where an explanation of the reason for credit denial may be desired.

It notes that explainability carries multiple types of risk. First is compliance risk: the bank must be able to justify why it denied credit. Second is adoption risk: it must be able to show that the model is providing intuitive, reliable outputs. Third is model performance risk: a lack of explainability makes it more difficult to identify areas where the model may be weak.

However, the draft response also notes that AI models fit squarely within the existing model risk guidance from regulators, known as SR 11-7, and that banks do not need more regulations that may stifle innovation or put banks at a greater competitive disadvantage.

Regulators should avoid imposing specific technical requirements that can quickly become outdated; a principles-based approach is more appropriate, according to the regional bank’s draft RFI response. They should also avoid holding non-human processes to higher explainability and transparency standards than human processes merely because some degree of AI is involved, it adds, noting that humans can be discriminatory, too.

Others are hopeful of a proportionate outcome, acknowledging that there are trade-offs in the use of AI, in the form of potential discrimination that could lead to penalties and lawsuits.

“The requirements for fair lending do not need to be imposed on all uses of AI or ML, but there are risks that do need to be acknowledged, and the RFI seems to be tilting at those,” says a model executive at the large US regional bank.

The prudential agencies’ move follows a period of engagement with industry participants on artificial intelligence. In December 2020, the Fed hosted an ‘Ask the Regulators’ workshop on the uses of AI, and in January 2021, it hosted a two-day symposium addressing AI interpretability and explainability.

Also in January, the Fed’s Annual Model Risk Forum provided a supervisory update on the use of artificial intelligence. It stressed the importance of board members and senior management considering the risks from AI/ML and how to manage them, and identified key challenges, including explainability, fairness, unlawful discrimination, consumer compliance, and scarcity of talent and expertise.

The European Commission, meanwhile, has issued a draft regulation on artificial intelligence, which specifically categorizes as ‘high-risk’ AI systems used to make credit decisions – and, therefore, subject to stricter standards of explainability. A separate report by the European Insurance and Occupational Pensions Authority last week also highlighted the requirement for strict explainability provisions for “high-impact” AI use cases, such as underwriting and pricing.
