US regulators are pushing banks to disclose more information about their use of machine learning (ML) – a move that some fear could stymie the development of promising approaches to modeling everything from credit decisioning to regulatory stress scenarios, where techniques used to power models can defy easy explainability.
Led by the Federal Reserve, US prudential regulators issued a request for information (RFI) in April on the uses of artificial intelligence (AI) and ML. The deadline for comment was extended by a month to July 1 to give commenters more time to respond, in acknowledgment of the technical nature of the information requested.
While some view the regulatory interest as a healthy sign, others are worried that nascent development efforts will be derailed before they have a chance to prove themselves.
“Guidance, yes. Rules, no. This field continues to develop, and innovation is important, and there can be unintended consequences from closing off innovation. At the same time, it is important to acknowledge that there are risks,” says a model risk management executive at a large US bank.
While explainability has long been a barrier to the adoption of certain ML approaches, the past year has seen an acceleration of regulatory activity that banks say has led them to pre-emptively determine which applications are best suited to AI, and to prepare to explain how those models work.
Several banks speaking to Risk.net for this article say they are already asked during supervisory reviews to justify that the models they use are appropriate for their applications. Much of the existing regulatory apparatus is geared towards asking questions about traditional, transparent forms of modeling.
The stated objective of the RFI is to provide regulators with insight into how banks are using new modeling techniques. A draft response by the Bank Policy Institute industry group notes that AI-based models are already well established in fraud detection, anti-money laundering, credit underwriting and other applications, and that regulatory actions should support, not undermine, those initiatives.
A large US regional bank acknowledges in its draft response to the RFI that a stricter degree of explainability and model transparency may be required in certain applications, such as fair lending, where an explanation of the reason for credit denial may be desired.
It notes that a lack of explainability carries multiple types of risk. First is compliance risk: the bank must be able to justify why it denied credit. Second is adoption risk: it must be able to show that the model is producing intuitive, reliable outputs. Third is model performance risk: without explainability, it is harder to identify areas where the model may be weak.
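The compliance point is easiest to see in code. Below is a minimal, hypothetical sketch, assuming the open-source shap library, synthetic data and invented feature names, of how per-applicant feature attributions from an otherwise opaque model might be ranked into the kind of reason codes a lender could cite when denying credit. It illustrates the general technique only, not any bank's actual process.

```python
# Hypothetical sketch: turning feature attributions into denial "reason codes".
# Feature names and data are synthetic, invented for illustration.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "debt_to_income": rng.uniform(0.0, 1.0, 500),
    "utilization": rng.uniform(0.0, 1.0, 500),
    "months_since_delinquency": rng.integers(0, 120, 500),
})
# Synthetic target: 1 = default, driven by leverage in this toy setup
y = ((X["debt_to_income"] + X["utilization"]) > 1.1).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# SHAP attributes each prediction to the input features, giving a
# per-applicant ranking of what pushed the score toward denial
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]                      # one applicant's features
contributions = explainer.shap_values(applicant)[0]

reasons = sorted(zip(X.columns, contributions), key=lambda pair: -pair[1])
print("Top factors pushing toward denial:", reasons[:2])
```

Post-hoc attributions of this sort speak most directly to the first, compliance-related risk; whether they also satisfy supervisors' broader transparency expectations is part of what the RFI probes.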
However, the draft response also notes that AI models fit squarely within the existing model risk guidance from regulators, known as SR 11-7, and that banks do not need more regulations that may stifle innovation or put banks at a greater competitive disadvantage.
Regulators should avoid imposing specific, technical requirements that can quickly become outdated – a principles-based approach is more appropriate, according to the regional bank’s RFI draft response. They should also avoid imposing stricter explainability and transparency requirements on non-human processes than on human processes merely because some degree of AI is involved, it adds, noting that humans can be discriminatory, too.
Others are hopeful of a proportionate outcome, acknowledging that the use of AI carries trade-offs in the form of potential discrimination that could lead to penalties and lawsuits.
“The requirements for fair lending do not need to be imposed on all uses of AI or ML, but there are risks that do need to be acknowledged, and the RFI seems to be tilting at those,” says a model executive at the large US regional bank.
The prudential agencies’ move follows a period of engagement with industry participants on artificial intelligence. In December 2020, the Fed hosted an ‘Ask the Regulators’ workshop on the uses of AI, and in January 2021, it hosted a two-day symposium addressing AI interpretability and explainability.
Also in January, the Fed’s Annual Model Risk Forum provided a supervisory update on the use of artificial intelligence, stressing the importance of board members and senior management considering the risks from AI/ML and how to manage them. The update also identified key challenges of AI and ML, including explainability, fairness, unlawful discrimination, consumer compliance and a scarcity of talent and expertise.
The European Commission, meanwhile, has issued a draft regulation on artificial intelligence that specifically categorizes AI systems used to make credit decisions as ‘high-risk’, making them subject to stricter standards of explainability. A separate report by the European Insurance and Occupational Pensions Authority last week also highlighted the requirement for strict explainability provisions for “high-impact” AI use cases, such as underwriting and pricing.