The Research Renaissance: An Examination of How Quantitative Analysis is Evolving

Although it still needs more time in the oven, technology is powering a quiet revolution in research and analytics, and the fusion of the two may transform the investment process.


Obviously, Mifid II and its unbundling component will be a major reason for that. But for the purposes of this article, let’s move past regulatory concerns and focus on some of the trends permeating the research space, and quantitative analysis as a whole.

First, processing power has exploded. According to online tech community Experts Exchange, computing performance has increased a trillion-fold over the last six decades. Add to that, the cost of storing data has dropped in a similarly precipitous manner: cloud storage provider Backblaze says that the cost to store one gigabyte of data has fallen from $500,000 in 1981 to less than $0.03 today. And the amount of data being created is eye-popping: IBM says that 2.5 quintillion bytes of data are created every day, a number that will balloon with the advent of the Internet of Things.

Those three factors combined have helped push capital markets firms toward public cloud providers, most notably Amazon Web Services, followed by Microsoft Azure and, increasingly, Google Cloud Platform and IBM Cloud. That idea was anathema to almost any investment bank, much less hedge funds, just five years ago.

Increasingly, the evolution of machine learning, deep learning, natural language processing and other forms of artificial intelligence (AI) is also allowing firms to free up staff to actually examine the data rather than waste manpower on time-consuming manual processes. Again, greater processing power, lower infrastructure costs, and the sheer volume of data are helping to drive evolution in the AI space.

To this list you can also add field-programmable gate arrays (FPGAs), which are helping firms move data around at higher velocities: they allow for streaming dataflow computations while using less power than CPUs or GPUs, although they are generally suited to standardized, repetitive calculations rather than more esoteric crunching of unstructured information.

And, finally, let’s not forget the industry’s adoption of open-source tools. Everyone from Goldman Sachs to AQR to Schroders to Deutsche Bank and beyond is experimenting with open-source applications, and even opening up their own code. These vibrant communities are creating innovative solutions in the big data space, such as Hadoop.

Perhaps most importantly, because of these developments, alternative data sources are becoming less costly to drill into. Over the past few years, satellite and drone imagery, credit card usage, cellphone location data, environmental, social and governance (ESG) metrics, and information from social media outlets, news organizations and blogs have become far easier to pull into the investment decision-making process, alongside market and reference data.

The walls to accessing and consuming research are not coming down; far from it. But for those with the resources, there are possibilities today that were simply not practical even a few years ago.

Research in Motion

This then brings us to programming languages, libraries, and packages, since these are what underpin quantitative analysis. One such tool slowly making an entrance into the capital markets is the Julia programming language.

The language is still in its infancy, but it was built to be faster than the likes of R, Python, and Matlab. Many data scientists are already experimenting with it, including Thomas Sargent, who won the 2011 Nobel Prize in Economics and has been an advocate of Julia.

Viral Shah, co-inventor of the language and CEO of consultancy Julia Computing, says the team is inching closer to releasing version 1.0 of Julia. While the group had hoped to release 1.0 at JuliaCon this past summer, he now expects v0.7 in late December or early January, and says that “0.7 and 1.0 will be almost simultaneous releases.”

During a recent Waters webinar, Predrag Cvetkovski, senior vice president of operations and technology, governance and analytics at Citi, noted that while Julia is “a little bit more of a high-performing programming language” than R and Python, and is being used for “prototyping and experimenting” at the bank, his unit remains “still an R and Python shop,” largely.

During that same webinar, Mark Ainsworth, head of data insights at Schroders, noted that when he arrived at the asset manager three years ago, almost everyone doing quantitative analysis was using Matlab, though quite a few knew R to some degree. After discussion, the team decided to adopt R as the data insights team’s home language.

He said this was partly because R was cheaper, but mostly because of the thriving community and ecosystem built around the language and the momentum behind the tool. “Most of the people whom I’ve recruited, who tend to come from outside asset management, it’s absolutely R or Python that they know,” he said. “R is good for the quick-turnaround, ad-hoc pieces that we do.”

The quant unit uses Python for a lot of its data manipulation, fetching and moving around, with SQL mixed in and Tableau for visualization.
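As a rough illustration of the Python-plus-SQL workflow described above (the table, tickers, and column names here are invented for the example; Schroders’ actual setup is not public), a quant team might pull holdings out of a SQL store into pandas for reshaping before handing the result to a visualization layer:

```python
import sqlite3

import pandas as pd

# Build a small in-memory SQL store with hypothetical holdings data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE holdings (ticker TEXT, sector TEXT, weight REAL)")
conn.executemany(
    "INSERT INTO holdings VALUES (?, ?, ?)",
    [("AAA", "Autos", 0.04), ("BBB", "Autos", 0.03), ("CCC", "Energy", 0.05)],
)

# SQL handles the fetch; pandas handles the manipulation.
df = pd.read_sql("SELECT * FROM holdings", conn)
by_sector = df.groupby("sector")["weight"].sum().reset_index()

# The aggregated frame is now ready to hand off to a charting tool.
print(by_sector)
```

The division of labor is the point: the database does the heavy retrieval, while pandas handles the ad-hoc reshaping that changes from one question to the next.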

“We’re very focused on using machine-learning data analysis of alternative data sources to provide insights as to why certain things are happening and understand the patterns of those things to inform those individuals who then make a human judgement, decision,” Ainsworth said. “So the ability to easily build tools and to very easily build good data visualizations in order to influence and communicate with those people is important.”

New Insights

Bringing it all together, Ainsworth gave one example of how the firm is now able to find value in dense, complex datasets: Schroders is using NLP to make sense of large bodies of text, specifically patents, to inform long-term decisions.

“One piece we did recently was looking at patents and, in particular, looking at car companies and the various patents that they were registering with the United States patent office, which amounts to thousands of these things,” he said. “Those documents are very dense, very technical, full of lots of detailed information, and we used some text-clustering algorithms to identify clusters of sorts of patents. That then lets us look at trends—here’s a clump of patents and having looked at a few of them you can tell that, yeah, all those are about battery-charging technology. Then we can present to the investment researchers trends in the sorts of technologies of companies that we’re investing in. Suddenly that sheds a light on the research agenda of these car companies, which is something that was otherwise entirely hidden.”
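A minimal sketch of the text-clustering idea Ainsworth describes, using TF-IDF features and k-means from scikit-learn. The toy patent snippets and the cluster count are invented for illustration; the actual Schroders pipeline is not public and would operate on thousands of full filings:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for dense patent abstracts (illustrative only).
patents = [
    "lithium-ion battery charging circuit with thermal management",
    "fast-charging method for electric car battery packs",
    "lidar sensor array for autonomous vehicle navigation",
    "camera system for autonomous vehicle lane detection",
]

# Turn each document into a weighted bag-of-words vector.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(patents)

# Group documents with similar vocabulary into clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Patents about the same technology should share a cluster label,
# surfacing themes (e.g. battery charging) a reader can then inspect.
for doc, label in zip(patents, labels):
    print(label, doc)
```

The analyst’s job then shifts from reading every filing to sampling a few documents per cluster and naming the theme, which is where the “here’s a clump of patents about battery-charging technology” insight comes from.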

The Bottom Line

I acknowledge that this article meanders a bit, but hopefully it also shows just how much there is to be excited about when it comes to research. Most of the talk in 2018 will revolve around Mifid II and unbundling and whatnot, but we’re also entering an interesting new era where information is more attainable and more easily analyzed, again, relatively speaking. Not everyone will benefit equally; there will be winners and losers in this new gold rush. But for those with the resources and a cogent plan, there are opportunities to be found that didn’t exist before.

As with many things in the capital markets at present, be it blockchain, emerging technologies, post-trade reform, or other areas, it’s possible to see how technology is facilitating a widespread transformation in how the investment process operates when these items aren’t just looked at individually, but as part of a whole.
