Regulatory Uncertainty Hinders AI Innovation

Firms are reluctant to move ahead with AI technologies as there is little regulatory guidance on their ethical implications, exchange providers say.


A lack of regulatory certainty around the ethical implications of black box artificial intelligence (AI) is holding back innovation at financial firms and market infrastructure providers, exchange providers said yesterday.

Firms are nervous about implementing any technology or strategies that could clash with social policy, said John Pigott, founder and CEO of ABE Global, a security token exchange that was launched this year.

There are, perhaps, few ethical concerns surrounding the use of algorithmic trading strategies, but “there is a very big qualitative gap when you start using AI technologies in terms of determining access to capital,” Pigott said.

Pigott was speaking on a panel at the Extent conference, hosted by Exactpro in London on September 17.

“You suddenly have social policies getting in there—who gets a credit card, who gets a stock transfer? And if you don’t have some ability to come in and explain how that engine is working, then there is an inherent clash with social policies,” Pigott said.

Public concern is growing around the ethical use of AI as these technologies proliferate. In the capital markets context, governments and financial firms are beginning to grapple with the ethical and systemic risk implications of AI, with organizations in both the public and private sectors setting up committees and working groups to consider future policy and implement ethical AI programs.

These debates often hinge on how much transparency there is into an algorithm’s workings, particularly when it comes to deep learning neural networks, which are widely used across industries and are especially opaque.

“Any time you are going to go black box … that probably works fine for an individual trade, but when it comes to capital access, when there is any public interest involved, there is a bright line that a number of firms are not willing to cross because regulatory guidance is lacking,” Pigott said.

Black box algorithms, however, are also the most complex and capable, and are therefore where users will find the greatest advantage in the capital markets.

“Black box has superior capabilities,” said Mario Quonils, CTO of the London Metal Exchange.

The LME has actively trialed AI in a market surveillance context.

Quonils said he has been involved in discussions with academics, who have concluded that a deep learning structure would need nine layers to be truly useful to market participants. Deep learning models are built from stacked layers of neural networks.

“Nine different layers! That is nine angles that any one individual would need to understand, that they would need to be able to compute. That makes it maybe 1.05% of humanity that could have that vision. … To what degree is it then ethical? Is this prescribed behavior?” Quonils said.
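To make the “nine layers” concrete, the sketch below builds a nine-layer feed-forward network in PyTorch. The choice of PyTorch, the layer sizes, and the dummy input are illustrative assumptions rather than details from the panel; the point is simply that each stacked layer adds another transformation a reviewer would have to reason about.

```python
# Minimal sketch (not from the article): a nine-layer feed-forward network,
# with hypothetical layer sizes, to illustrate what "nine layers" means in practice.
import torch
import torch.nn as nn

# Each nn.Linear is one learned layer; stacking nine of them yields a "deep" model
# whose intermediate representations are hard for any one person to interpret.
layer_sizes = [64, 128, 128, 96, 96, 64, 32, 16, 8, 1]  # 64 input features ... 1 output score

layers = []
for i in range(len(layer_sizes) - 1):
    layers.append(nn.Linear(layer_sizes[i], layer_sizes[i + 1]))
    if i < len(layer_sizes) - 2:          # no activation after the final output layer
        layers.append(nn.ReLU())

model = nn.Sequential(*layers)            # nine stacked linear layers in total

# A single forward pass on dummy data (batch of 4 samples, 64 features each).
scores = model(torch.randn(4, 64))
print(scores.shape)                        # torch.Size([4, 1])
```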

The opacity and complexity of these networks are therefore a risk, he added, and regulators do not like risk. “As such, that is where we will have some issues to push this [technology] forward.”
