EU's AI Regulations Could Lay Blame With CTO
Jo wonders if the EC's approach to regulating AI could adapt existing liability laws—with implications for individuals.
Need to know
What we need to be attentive to is where in the chain of liability are individuals affected. Think, for example, of the CTO: if there is a flaw in the design of … any other form of AI product or service, does that mean that, from either a liability or a regulatory perspective, the human being overseeing the process … now has an increased risk of liability in one form or another?
In ancient Mesopotamia, King Hammurabi turned Babylon into a city-state to be reckoned with. His famous code is one of the earliest legal frameworks that modern humans know of. Its 282 laws include some regulation of the construction industry: if a house collapsed upon its owner and killed him, the builder could be put to death. If it killed the owner’s son, the builder’s son could be killed in restitution.
Fortunately, the EU’s modern legal system doesn’t put anyone to death for harmful products or services. But the idea of protecting the consumer and holding the producer liable for harm or loss persists, 4,000-odd years after Hammurabi.
The European Commission is currently considering how to apply these consumer-protection principles to the regulation of artificial intelligence, a technology that, by its nature, makes it difficult to apportion blame along its production chain.
Developments over the past two years suggest that the Commission will publish proposals for AI regulation, probably early next year. In 2018, it announced investment in the technology and subsequently published ethical guidelines. In late 2019, a high-level expert group of independent advisors published recommendations on how liability and product safety law could be extended to AI products. And in February 2020, the EC opened a consultation on a whitepaper, an early indication of what its approach to a legislative framework for emerging technologies will look like.
A major aspect of this approach, as set out in the whitepaper, is adapting existing EU liability concepts to AI. In this, it draws heavily on the 2019 expert group report, which says that while the existing liability regime offers protections for users of AI, adjustments need to be made to make it suitable for emerging technologies.
First, the report says, "strict liability" must lie with the person who is in control of the risk associated with the operation of the AI. Strict liability means that the producer of an AI product is liable for harm resulting from its use, even if they were unaware of any fault in the product.
These operators also have duties of care, the report adds, including an obligation to monitor the system.
Could a chief technology officer, then, be held responsible for a defective artificial intelligence product, or for ruinous decisions made by an algorithm?
John Ahern, a partner in the financial services group at law firm Covington in London, says this is a question that lawmakers and regulated entities will have to ponder as the EC's approach to a legal framework solidifies. While AI regulation is desirable, he says, it could come at the cost of innovation: increased liability could dampen interest in top tech jobs at financial firms, or make chief technology officers overly cautious.
“What we need to be attentive to is where in the chain of liability are individuals affected. Think, for example, of the CTO: if there is a flaw in the design of a database product, or an algo, or any other form of AI product or service, does that mean that, from either a liability or a regulatory perspective, the human being overseeing the process or the product design or the development now has an increased risk of liability in one form or another?” Ahern says.
Ahern says that financial services already has a regulatory framework around product safety, in the sense that regulators can intervene if a product is detrimental to the market or to consumers. The UK's Financial Conduct Authority, for example, recently imposed a permanent ban on the mass-marketing of speculative securities to retail investors.
But no stipulations in financial services law or regulation target the products themselves.
“What is not in the regulatory framework in a really explicit way right now is in the product design, where the product has an intrinsic flaw, and somebody suffers loss having invested in the product—that specific issue is not legislated for,” Ahern says.
Once the whitepaper leads to law, however, "you may see a regulatory framework grow up to address issues, along with other sectors, in product design," he says.
This would be a pivotal decision in the production of AI tools, Ahern adds. Who would get the blame if a proprietary algorithm caused a software glitch that lost investors' money? There might be liability attached to the firm, and legal consequences for the individuals who oversaw the creation of the algo.
“And where individuals have regulatory or legal liability, there is a risk/reward quotient that comes into taking on that role, and the risk increases,” Ahern says.
Legal experts globally have long worried that individual liability for software products could make developers skittish and stifle innovation. Similarly, tech firms responding to the EU’s whitepaper consultation say an ill-considered legal and regulatory framework could hobble the development of these technologies.
This is not just food for thought in the EU. Whatever the Commission's legislative framework for AI ends up looking like, its implications will extend beyond the bloc. The EU has been scrambling to catch up with the US and China in AI development, as both have, to borrow Facebook's old motto, moved fast and broken things. But it has led the world in technology law.
The world took its cue from the General Data Protection Regulation, which has spawned imitators elsewhere, including in the US. It may do so again when it comes to regulating AI.