EU's AI Regulations Could Lay Blame With CTO

Jo wonders if the EC's approach to regulating AI could adapt existing liability laws—with implications for individuals.

In ancient Mesopotamia, King Hammurabi turned Babylon into a city-state to be reckoned with. His famous code is one of the earliest legal frameworks known to history. Its 282 laws include some regulation of the construction industry: if a house collapsed upon its owner and killed him, the builder could be put to death. If it killed the owner’s son, the builder’s son could be killed in restitution.

Fortunately, the EU’s modern legal system doesn’t put anyone to death for harmful products or services. But the idea of protecting the consumer and holding the producer liable for harm or loss persists, 4,000-odd years after Hammurabi.

The European Commission is currently considering how to apply these consumer-protection principles to the regulation of artificial intelligence, a technology that, by its nature, makes it difficult to apportion blame along its production chain.

Developments over the past two years suggest that the Commission will publish proposals for AI regulation, probably early next year. In 2018, it announced investment in the technology and subsequently published ethical guidelines. In late 2019, a high-level expert group of independent advisors published a report recommending how liability and product safety law could be extended to AI products. And in February 2020, the EC put out a consultation on a whitepaper, an early indication of what its approach to a legislative framework for emerging technologies will look like.

A major aspect of this approach, as set out in the whitepaper, is adapting existing EU liability concepts to AI. In this, it draws heavily on the 2019 expert group report, which says that while the existing liability regime offers protections for users of AI, adjustments need to be made to make it suitable for emerging technologies.

What we need to be attentive to is where in the chain of liability are individuals affected. Think, for example, of the CTO: if there is a flaw in the design of … any other form of AI product or service, does that mean that, from either a liability or a regulatory perspective, the human being overseeing the process … now has an increased risk of liability in one form or another?
John Ahern

Firstly, the report says, “strict liability” must lie with the operator: the person in control of the risk associated with the operation of the AI. Strict liability means that the producer of an AI product is liable for harm resulting from its use, even if they were ignorant of the fault in the product.

These operators also have duties of care, including monitoring the system, the report says.

Could the chief technology officer be held responsible for a defective artificial intelligence product, or for ruinous decisions made by an algorithm, then?

John Ahern, a partner in the financial services group at law firm Covington in London, says this is a question that lawmakers and regulated entities have to ponder as the EC’s approach to a legal framework solidifies. While AI regulation is desirable, he says, it could come at the cost of innovation, as increased liability would decrease interest in top tech jobs at financial firms, or make chief technology officers overly cautious.

“What we need to be attentive to is where in the chain of liability are individuals affected. Think, for example, of the CTO: if there is a flaw in the design of a database product, or an algo, or any other form of AI product or service, does that mean that, from either a liability or a regulatory perspective, the human being overseeing the process or the product design or the development now has an increased risk of liability in one form or another?” Ahern says.

Ahern says that financial services already has a regulatory framework around product safety, in the sense that regulators can intervene if a product is detrimental to the market or to consumers. The UK’s Financial Conduct Authority, for example, recently permanently banned the mass-marketing of speculative securities to retail investors.

But no stipulations in financial services law or regulation target the products themselves.

“What is not in the regulatory framework in a really explicit way right now is in the product design, where the product has an intrinsic flaw, and somebody suffers loss having invested in the product—that specific issue is not legislated for,” Ahern says.

After the whitepaper leads to law, “you may see a regulatory framework grow up to address issues, along with other sectors, in product design.”

This would be a pivotal decision in the production of AI tools, Ahern adds. Who would get the blame for a proprietary algorithm that causes a software glitch that loses investors’ money? There might be liability attached to the firm, and legal consequences for the individuals who oversaw the creation of the algo.

“And where individuals have regulatory or legal liability, there is a risk/reward quotient that comes into taking on that role, and the risk increases,” Ahern says.

Legal experts globally have long worried that individual liability for software products could make developers skittish and stifle innovation. Similarly, tech firms responding to the EU’s whitepaper consultation say an ill-considered legal and regulatory framework could hobble the development of these technologies.

This is not just food for thought in the EU. Whatever the Commission’s legislative framework for AI ends up looking like, its implications will extend beyond the bloc. While the EU has been scrambling to catch up with the US and China in AI development (both have, to borrow Facebook’s old motto, moved fast and broken things), it has led the world in technology law.

The world took its cue from the General Data Protection Regulation, which has spawned imitators elsewhere, including in the US. It may do so again when it comes to regulating AI.
