Fighting FAIRR: Inside the bill aiming to keep AI and algos honest

The Financial Artificial Intelligence Risk Reduction Act seeks to close a market abuse loophole that exists because AI algorithms do not have brains, and therefore cannot form criminal intent.

Regulators face a dilemma when approaching new technologies in financial services: How can a technology be regulated tightly enough that nobody gets ripped off, but not so tightly that innovation is smothered in its cradle? It requires a delicate balancing act, especially when emerging technologies like generative AI take hold so quickly, are so easily misunderstood, and present both great opportunities and new threats.

Last year was generative AI’s breakout year, with almost everyone across the financial services space taking an interest in ChatGPT and beginning to understand the potential time savings of this new technology. 

But it was not without its detractors. Generative AI models may be committing plagiarism and copyright infringement simultaneously, and even if a model avoids both, there’s still a chance that the results it spits out—whether because of incorrect input data, or a lack of data leading the model to “hallucinate” and make up answers—could simply be wrong.

While other jurisdictions, such as the European Union, can enact sweeping AI rules across multiple countries, the unique structure of the US government, coupled with the legislative sluggishness that accompanies every presidential election year, makes swift, comprehensive action on AI regulation all but impossible.

The Brennan Center for Justice, a progressive nonprofit and public policy institute at New York University School of Law, maintains a live artificial intelligence legislation tracker on its website that lists all the AI-related bills that have been introduced but not yet resolved. The oldest bill on the list was introduced in early February of last year.

For the moment, it seems like the best way to stay on top of the regulation of AI is through short, concise bills with clear aims. Enter the Financial Artificial Intelligence Risk Reduction Act. 

FAIRR, as it is known, was first introduced in mid-December 2023, and it was heard in the Senate Committee on Banking, Housing and Urban Affairs in early February of this year. 

It's basically putting developers on notice to say, ‘If you say this works, or if you say this is safe, you better have checked it’
Paul Cottee, Nice Actimize

It aims to close a specific loophole, based on an assumption in market abuse law that seemed logical at the time. In a court of law, proving the defendant had the intention of wrongdoing is key to securing a guilty verdict. However, AI algorithms that contravene market regulations don’t have intentions. While generative AI’s development has been significant, AI models do not yet have brains and cannot, therefore, think consciously about breaking the law. 

This may seem niche, but the loophole has backfired before. In the 2010 Flash Crash, which wiped roughly $1 trillion off the stock market in the space of 36 minutes, London-based trader Navinder Sarao used an algorithm to generate large sell orders that spoofed the market, pushing prices down, only to cancel his orders and buy at the lower price.

Sarao’s plea agreement outlines the nature of the algorithms he used, as well as how frequently he used them to game the market for profit. And while Sarao took the fall for the crash, the developer he instructed to optimize his spoofing tools was not charged. The FAIRR Act intends to remedy this. 
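The mechanics Sarao exploited leave a recognizable footprint in order-flow data, which is what surveillance systems of the kind Cottee works on are built to spot. Below is a deliberately minimal sketch, in Python, of one such heuristic: flag large resting orders that are canceled shortly before an opposite-side execution. The Order structure, size threshold, and time window here are illustrative assumptions, not any regulator’s or vendor’s actual detection logic.

```python
# Minimal illustration of flagging the spoofing pattern described above:
# a large order that is canceled, followed shortly by an execution on the
# opposite side. All fields and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Order:
    side: str         # "buy" or "sell"
    qty: int          # order size
    placed_at: float  # timestamp, in seconds
    ended_at: float   # cancel or fill timestamp
    filled: bool      # True if executed, False if canceled

def flag_spoofing(orders, size_threshold=1000, window=5.0):
    """Flag (canceled large order, opposite-side fill) pairs where the
    fill was placed within `window` seconds of the cancellation."""
    flags = []
    for spoof in orders:
        # Only large orders that were canceled can be the "spoof" leg.
        if spoof.filled or spoof.qty < size_threshold:
            continue
        for trade in orders:
            if (trade.filled
                    and trade.side != spoof.side
                    and 0 <= trade.placed_at - spoof.ended_at <= window):
                flags.append((spoof, trade))
    return flags

# Toy example: a large sell order canceled just before a buy executes,
# mirroring the push-prices-down-then-buy pattern described above.
orders = [
    Order("sell", 5000, placed_at=0.0, ended_at=2.0, filled=False),
    Order("buy",   200, placed_at=2.5, ended_at=2.6, filled=True),
]
print(flag_spoofing(orders))  # -> one flagged (spoof, trade) pair
```

A production system would of course work over millions of messages and weigh order-book context, but the core pattern, large canceled orders paired with opposite-side fills, is as simple as the sketch suggests.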

Despite containing references to AI and market abuse, as well as provisions meant to future-proof against further algorithmic misconduct, the bill itself is a mere five pages. Paul Cottee, director of regulatory compliance at surveillance and compliance specialist Nice Actimize, said he was “amazed” by its brevity.

Cottee, who worked on Nasdaq’s Smarts Trade Surveillance system before joining Nice Actimize in 2020, thinks that FAIRR will be a warning shot for algo developers. He compares a rogue algorithm to a self-driving car getting into an accident: just as some countries have already introduced liability for the latter, the FAIRR Act would do so for the former.

The bill opens the possibility that in the event of a rogue algorithm committing market abuse, “any person who, directly or indirectly, deploys or causes to be deployed, an artificial intelligence … shall be liable to the same extent as if such person had committed such acts … unless such person took reasonable steps to prevent such acts.”

“This act amends certain pieces of legislation, including the Securities Exchange Act, to introduce a strict liability,” Cottee says. “It’s basically putting developers on notice to say, ‘If you say this works, or if you say this is safe, you better have checked it. Because if you haven’t and you bake in a design defect, or you say it’s okay when it’s not, or you’re reckless to the outcome, then you’re potentially liable there.’”

Bills on parade

The FAIRR Act is still in its infancy, and there is no guarantee that it will make it any further in the legislative process, but it does have some potentially favorable winds behind it. The bill has bipartisan backing, from both John Kennedy, the junior Republican senator from Louisiana, and Mark Warner, the senior Democratic senator from Virginia.

Support from both sides of the aisle is useful, but by no means a guarantee of further progress, says Mike Nonaka, a partner at law firm Covington & Burling and co-chair of its financial institutions group. Nonaka says that while both a Republican and a Democrat have backed the bill, this does not mean both parties have wholly embraced it; many more people need to get behind it than just two senators.

“I don’t know if it’s necessarily easy to pass,” Nonaka says. “We’re in an election year, so legislation isn’t easy right now. I think what you really need is the chairs of the actual Senate Banking and House Financial Services Committees to take an interest in it and to actually put it on the agenda.”

Nonaka says the bill is aided by the fact that debates on AI regulation do not currently attract the same fervor as those on more politicized corners of financial services, such as cryptocurrency.

“AI isn’t particularly politicized at the moment,” he says. “It’s interesting, because, for example, crypto, Bitcoin, and digital assets have become very politicized. I think there’s interest [from] both sides in fostering the development of [AI] and exploring it, so I don’t think it’s the case that you could see one side just trying to submarine legislation on AI because they see it’s politically advantageous to do so.”

Last year, the Biden administration issued an executive order on “safe, secure and trustworthy artificial intelligence.” Donald Trump, Biden’s predecessor in the Oval Office, has indicated that if he wins the presidential election later this year, he too will seek to regulate AI, which he told Fox Business might be “the most dangerous thing out there.”

The FAIRR Act polices not only the creators of rogue algorithms but also those responsible for deepfake generation. Deepfakes—realistic-seeming media depicting events that never happened—have caused alarm for their potential to confuse and slander individuals. One deepfake robocall that mimicked Biden encouraging people not to vote in the January primary elections caused confusion among voters, and in the same Fox Business interview, Trump recalled hearing his voice advertising a product he did not promote.

The bill proposes an addition to the Financial Stability Act of 2010, which would let the Financial Stability Oversight Council (FSOC) coordinate with member agencies on potential risks caused by “the generation and use of false representations of events or the likeness, speech or actions of persons by malign actors to manipulate financial markets, institutions or instruments or to cause disruption in financial markets.” 

Despite its coverage of many relevant topics, Nice Actimize’s Cottee says the bill is sometimes too broad in its scope. 

“I think representatives and senators are trying to look like they’ve got fingers on the pulse,” he says. “But there are still questions as to who’s going to enforce it. Who’s going to assess the risks? I mean, they’re talking about setting up standards bodies, and so on, within the various market bodies, and doing surveillance, but who’s actually going to do that? The devil really is in the detail.”

Cottee is not alone in this assessment. Jack Solowey, a financial technology policy analyst at the Cato Institute, a Washington-based libertarian think tank, says the broad wording of the bill in its current form could make many developers—from open-source coders to third-party model providers such as OpenAI—liable for criminal activity carried out using their programs.

“I think there’s an argument that they overlook the nature of third-party model provision,” Solowey says. “If a securities professional uses ChatGPT to craft a marketing email that later is used by an individual for fraud or manipulation, I think under this proposal, OpenAI [the creator of ChatGPT] would be found liable. Maybe that was intentional. Maybe it was an oversight. But I think that would be a dramatic expansion of liability.”  

Solowey says that while the bill will likely be marked up and changed as it progresses through the legislative system, its present wording is too vague to avoid implicating open-source developers. He also notes that FSOC’s regulatory powers are limited: the council can recommend that other financial regulators create or strengthen rules, but those recommendations are not binding.

The bill also does not account for the potentially unfair scenario in which a developer publishes AI models they have coded to open-source code-sharing platforms like GitHub, only for a third party to use them to game the market. In a post about the act on the social network X last year, Solowey wrote: “To call this bill ‘FAIRR’ is Orwellian.”

Fingers crossed

Success for AI legislation in the Senate would, the hope among regulators goes, beget further successful regulatory efforts. It’s a gambit supported by Carlo di Florio, global advisory leader at ACA Group, who says FAIRR is a positive development and, depending on the bill’s success, could augur future change.

“I think it’s the first stalking horse AI regulation because most of that will happen through AI,” he says. “The regulators are limited with their existing authority to be able to regulate their industries as proactively as they would like. So that really calls on Congress to step in and solve that problem.”

Di Florio expects that a large, Dodd-Frank-scale piece of AI legislation could come into force later this year.

“We expect it because generative AI came on the scene so fast, it scared everybody, and we’re seeing incidents of inappropriate use,” he explains. “We need to take proactive responses as a country.”

Nice Actimize’s Cottee says there is a reasonable chance that FAIRR will be passed before the current Congress ends in January. Like Covington & Burling’s Nonaka, he thinks the bill’s progress will be helped by the fact that AI is not currently among the most controversial topics in the US.

“This has nothing to do with the border, and it definitely isn’t a budget deficit or anything like that,” Cottee says. “I think a lot of people in Congress in both houses would want to be seen doing something that furthers investor protection. It’s a loophole that we’ve never traditionally had to worry about, because if you’re placing an order it’s always been a person. Here, it’s just closing a loophole.”
