Can algos collude? Quants are finding out

Oxford-Man Institute is among those asking: could algorithms gang up and squeeze customers?

  • Regulators including the Bank of England and the Dutch AFM have raised concerns about the possibility that machine learning trading algorithms could ‘learn’ to collude.
  • Quants at the Oxford-Man Institute are using game theory to investigate how such collusion might happen.
  • Learning algos need not talk to each other to start working together in non-competitive ways, say quants.
  • No empirical evidence exists yet of such behavior in markets.
  • But in simulations, researchers found that algorithms did work together to make more money from customers.

Pricing algorithms—for anything from airline tickets to cinema seats—are a part of daily life. And daily life produces some of the most striking instances of what can happen when those algos go awry.

Take the bizarre story of a book about fruit fly genetics that wound up with a $24 million listing on Amazon in 2011. Two book dealers that relied on Amazon’s pricing algos to maximize profit saw the algorithms become locked in a kind of inverse bidding war on Peter Lawrence’s The Making of a Fly, resulting in the gargantuan price tag.
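The mechanics of that spiral are easy to reproduce. A minimal sketch in Python, assuming two relative-pricing rules with multipliers close to those reported in accounts of the incident (the exact figures here are illustrative assumptions):

```python
# Two sellers reprice daily relative to each other's listing. The
# multipliers are close to those reported in coverage of the 2011
# incident, but treat them here as illustrative assumptions.
A_FACTOR = 0.9983   # seller A slightly undercuts seller B
B_FACTOR = 1.2706   # seller B prices well above seller A

def simulate(days, price_a=20.0, price_b=20.0):
    """Return the (price_a, price_b) pair after each repricing cycle."""
    history = []
    for _ in range(days):
        price_a = A_FACTOR * price_b   # undercut the rival's latest price
        price_b = B_FACTOR * price_a   # mark up over the rival's latest price
        history.append((price_a, price_b))
    return history

history = simulate(40)
```

Because the combined factor per cycle is roughly 1.27, prices grow exponentially with every pass, and no human ever decides to raise them.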

An outlandish asking price for a little-known book makes for an amusing headline. But could less pronounced—and more problematic—effects occur in financial markets that transact billions? Regulators and quants suggest they could.

The AFM believes machine learning-driven, algorithmic trading has the potential to fundamentally change the market microstructure
Dutch regulator AFM

In October, the Bank of England published a discussion paper on the adoption of artificial intelligence and machine learning in financial markets that cited risks to competition if algorithms were to start to collude.

The paper quoted academic research showing that “by detecting price changes from rivals and enabling a rapid or automatic response, AI systems could potentially facilitate collusive strategies between sellers and punish deviation from a collusive strategy”.

The Dutch regulator AFM will publish a report about the effects of algorithmic trading, and machine learning in particular, in the coming months. “The AFM believes machine learning-driven, algorithmic trading has the potential to fundamentally change the market microstructure. As such, it has our full attention,” says a spokesperson.

Meanwhile, the Oxford-Man Institute of Quantitative Finance, an academic research partnership between the $138 billion hedge fund Man Group and the University of Oxford, is trying out game theory as a tool to help explain what happens when algorithms make decisions in financial markets, something that has not yet been explored, according to director and professor of mathematical finance Álvaro Cartea.

The Institute is looking for evidence of algos purposefully or inadvertently working together to inflate prices. “We are writing mathematical models of the interaction between algorithms, using a combination of game theory, mathematics and machine learning to see how the interaction in markets can take you to an equilibrium that might be positive for the market—or not,” Cartea says.

Accidental collusion

Long-only buy-side traders transact more than half their volume using algorithms, according to a survey by industry publication The Trade. And learning algos are gaining in popularity. BarclayHedge research found more than a quarter of hedge funds used artificial intelligence in trade execution as far back as 2018.

Should these packets of computer code start to collaborate and cause spreads to widen, the losers would be the pension funds, insurers, hedge funds, sovereign wealth funds, and so on, that would pay higher trading costs as a result. It could arguably dent market efficiency, too, leading to the misallocation of capital.

The winners, of course, would be market-makers. Nobody is saying these firms actively wish for their algorithms to join forces, though; the collusion being considered would arise by accident, not by design.

No empirical investigations into algorithmic collusion in financial markets have yet been published. Broader research by economists, though, points to algos bunching together to inflate prices in theoretical models and in other types of real-world situations.

Technology and artificial intelligence is creating serious enforcement issues
Emilio Calvano, University of Rome

Emilio Calvano, an economics and finance professor at the University of Rome Tor Vergata, specializes in the field. In a 2020 paper in American Economic Review on ‘Artificial Intelligence, Algorithmic Pricing, and Collusion’, Calvano and his co-authors studied the behavior of self-learning pricing algorithms in a simulated marketplace through a series of repeated interactions.

The algorithms consistently learned to charge elevated, “supra-competitive” prices, without communicating with one another. Critically, they learned to push prices up, but not so far as to crash their own market.
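The setup can be caricatured with two Q-learning agents repeatedly setting prices in a toy duopoly. This is a heavily simplified sketch, not the paper’s actual environment: the winner-take-all demand model, five-point price grid, and learning parameters below are all assumptions chosen for illustration.

```python
import random

PRICES = [1, 2, 3, 4, 5]            # discrete price grid (assumption)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def profits(p1, p2):
    """Toy Bertrand demand: 10 buyers all go to the cheaper seller."""
    if p1 < p2:
        return p1 * 10, 0
    if p2 < p1:
        return 0, p2 * 10
    return p1 * 5, p2 * 5           # split the market on a tie

def choose(q, state):
    """Epsilon-greedy choice over the price grid, given the last price pair."""
    if random.random() < EPS:
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: q.get((state, p), 0.0))

def run(rounds=20_000, seed=0):
    random.seed(seed)
    q1, q2 = {}, {}
    state = (PRICES[0], PRICES[0])  # state = last round's price pair
    for _ in range(rounds):
        p1, p2 = choose(q1, state), choose(q2, state)
        r1, r2 = profits(p1, p2)
        nxt = (p1, p2)
        # Standard tabular Q-learning update for each agent
        for q, p, r in ((q1, p1, r1), (q2, p2, r2)):
            best_next = max(q.get((nxt, a), 0.0) for a in PRICES)
            old = q.get((state, p), 0.0)
            q[(state, p)] = old + ALPHA * (r + GAMMA * best_next - old)
        state = nxt
    return q1, q2, state

q1, q2, last_prices = run()
```

The key structural feature matches the paper: each agent conditions only on the publicly observable price pair from the previous round, with no communication channel between them. Whether such agents converge on supra-competitive prices in a short run like this is not guaranteed; Calvano et al. report that outcome after long training in a richer logit-demand setting.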

Calvano believes algorithmic collusion is damaging to society, and that learning algorithms are set to create grave problems for regulators.

“Prices have important allocative properties,” he says. “High prices cause extensive economic harm, destroying more value than the value that the undertakings [setting] those high prices capture. In other words, ramping up prices damages consumers much more than it benefits firms.”

US antitrust laws were created in 1890 to keep prices in check, he points out. Today, “technology and artificial intelligence is creating serious enforcement issues”, Calvano says.

From cartels to computers

At the University of Pennsylvania’s Wharton School, professor Joseph Harrington’s research started with collusion in real-life cartels, but his field of interest has recently broadened to the realm of artificial intelligence.

“One of the things I’ve come to appreciate is that this desire to restrain competition can occur in many forms, and one of the forms that has become a recent concern is algorithms,” Harrington says.

Collusion could occur if the executives at competitor firms agreed to use specific pricing algorithms. But with self-learning machines, such explicit collaboration may not even come into play.

“Competitors could adopt these learning algorithms, and the learning algorithms might produce higher profits by finding better pricing rules. But, unbeknownst to the firms’ managers, the reason for higher profits might be that the learning algorithms co-ordinated on collusive pricing rules,” he says.

Research looking at retail gasoline prices found that in isolated areas with just two gas stations, nothing changed when only one station adopted algorithmic pricing. When both stations adopted it, however, average prices crept higher.

“That evidence is quite convincing,” says Harrington. “There are many reasons to adopt algorithmic pricing that would not harm competition—including the reduction of optimization error and responding more quickly to information through automated pricing. One might expect to see an effect on a firm’s prices even if it were the only one to adopt [algorithms] in a market.

“However, if the effect only appears when two or more firms adopt these pricing algorithms, it is likely due to having reduced competition,” he says.

In finance, market-makers might use algorithms that learn how to optimize the quotes they send to the limit order book, with a similar outcome.

Unfair competition

At the Oxford-Man Institute, quants have been revisiting a whole literature on game theory that started in earnest in the 1950s but is now being revised with new techniques, Cartea says.

“At a higher level, an important question from regulators and society is: are these machines really competing in a fair way? Game theory—the repeated game aspect—can help to understand roughly how that works,” he adds.

For algorithmic collusion to take hold, experts say, the algos must learn to punish each other for breaking the collaboration.

In game theory, that equates to players learning to co-operate in the so-called Prisoner’s Dilemma, a thought experiment postulated by US mathematicians in 1950 in which two captured prisoners face a choice to confess or remain silent.

Confessing and pinning blame on the accomplice is each prisoner’s individually selfish choice. But if both confess, each incurs a longer sentence than if both had stayed silent. The prisoners cannot communicate to co-ordinate their decisions, nor can they be sure whether to trust one another.

If the game is played through multiple iterations, though, players can learn to co-operate: they start to punish fellow players that failed to collaborate in past turns.

The same can happen with algos, Cartea says: “If there is a deviation from co-operation… [one black box] gets punished by the other black box.” Eventually each algorithm learns to take a path to collusive prices, rather than competitive prices.
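That reward-and-punishment dynamic is the textbook mechanism of the iterated Prisoner’s Dilemma. A minimal sketch with standard payoffs (not the Institute’s models): a tit-for-tat player sustains co-operation by echoing a rival’s defection in the next round, which makes persistent defection unprofitable.

```python
# Standard Prisoner's Dilemma payoffs: (my_points, their_points).
# C = co-operate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(rival_history):
    """Co-operate first, then copy the rival's previous move."""
    return rival_history[-1] if rival_history else 'C'

def always_defect(rival_history):
    return 'D'

def play(s1, s2, rounds=20):
    """Run a repeated game and return each player's total score."""
    h1, h2, t1, t2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)   # each strategy sees the other's past moves
        p1, p2 = PAYOFF[(m1, m2)]
        t1, t2 = t1 + p1, t2 + p2
        h1.append(m1)
        h2.append(m2)
    return t1, t2
```

Over 20 rounds, two tit-for-tat players each earn 60 points, while a persistent defector facing tit-for-tat earns only 24: the credible threat of punishment is what makes co-operation, or collusion, stable.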

Detective work

So far, no-one has proven the existence of algorithmic collusion in financial markets by looking at data. If learning algos were teaming up, though, bid-ask spreads would be wider than a competitive market would produce. So, the Oxford-Man Institute is looking at different algorithms to see how they act in practice.

The group has confirmed that ‘players’ in such simulations do learn how to collude without talking to each other.

“For a particular framework of repeated games, our research shows that through repeated interactions, decentralized learning algorithms can learn to both define the terms of a collusive arrangement and to implement that arrangement,” Cartea says.

The team’s further research shows that algorithms can learn a reward-punishment mechanism to sustain collusion, he adds. And “under certain assumptions, in market-making in electronic markets” the group found, “the algorithms tacitly collude to extract rents”.

All this poses a challenge to regulators, who have usually pursued anti-competitive practices by relying on communications between participants as evidence of conspiracy and collusion.

Harrington became interested in algorithmic collusion, he says, because of concerns about how society could prevent it. In the US and UK, for example, competition regulation is rooted in the idea of conspiracy, where humans co-ordinate and commit to some sort of agreement to not compete.

No part of that scenario makes sense for self-learning algos, though. “They don’t have an understanding,” Harrington says, “much less a mutual understanding”.
