In pursuit of API-ness: Rise of API interfaces hints at data platforms of the future

Nasdaq’s new Data Fabric managed data infrastructure service presents an opportunity for firms to outsource many elements of their market data platforms to the exchange. But making it possible—and also driving innovations at other data providers designed to simplify data access—is the humble API.

Application Programming Interfaces (APIs) aren’t new. They aren’t sexy. They’re pretty boring, actually. They’re simple gateways that allow applications to talk to each other. So, while market data becomes ever-more complex and valuable for areas such as data science, and the options for delivery of that market data turn to new technologies like the cloud, why are providers getting so excited about the humble API?

Think price, performance, and stability. As the data world erupts in an explosion of new datasets, alternative data, and new use cases for traditional data—in many cases, all in combination with each other—the industry needs increasingly sophisticated tools to make sense of that data and turn it into insight that is relevant, actionable, and timely. As the cost of the technical resources required to achieve that—and the personnel required to engineer those solutions—goes up, firms are seeking mechanisms that give them greater control over how they access data. Hence, the API’s renaissance.

“The use of APIs to access market data is exploding across the industry, and it’s primarily being driven by the cost savings and performance improvements they allow for,” says Patrick Flannery, CEO of data provider MayStreet.

Beyond those factors, there’s also the benefit of flexibility, which becomes more important when dealing with non-traditional datasets and demanding new use cases.


“With the newest APIs, firms can pull only the market data they require, not the entire day’s order book like you have to with most APIs. That is key, since it improves response time and reduces cost for things like bandwidth, storage and compute resources that are associated with downloading massive sets of data,” Flannery says.
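To make the contrast concrete, the sketch below shows the difference between downloading a whole day's book and requesting only a narrow slice. It is a minimal illustration: the host, path, and parameter names are invented for this example and do not correspond to any specific vendor's API.

```python
# Hypothetical illustration only: the host, path, and parameter names are
# invented for this sketch and do not correspond to any specific vendor API.
import requests

BASE_URL = "https://api.example-vendor.com/v1/marketdata"

# Rather than pulling the entire day's order book, request only the slice
# the application actually needs: a five-minute window, top five levels.
resp = requests.get(
    f"{BASE_URL}/orderbook",
    params={
        "symbol": "AAPL",
        "start": "2021-11-05T14:30:00Z",
        "end": "2021-11-05T14:35:00Z",
        "depth": 5,
    },
    timeout=30,
)
resp.raise_for_status()
levels = resp.json()  # only the requested window crosses the wire
```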

MayStreet itself introduced its High-Performance Query API (HPQ API) at the start of November, providing an additional method for accessing data in its cloud-based Market Data Lake of exchange data, allowing clients to stream “near real-time and historical full-depth data” at their preferred time intervals and frequency, along with preconfigured calculations such as Vwap, Twap, and order completion time.

“HPQ API allows clients to use a single API to stream both intraday and historical data at time intervals of their choosing, which also greatly increases the potential use cases,” Flannery adds. “For a bank or large asset manager, this means that different desks and groups within the organization can share access to the API and use it according to their respective needs, which represents further opportunity for savings.”
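The preconfigured calculations Flannery mentions are simple to express. As a vendor-neutral point of reference, here is a minimal sketch of Vwap and Twap computed over a handful of trades; the trade records are invented sample data.

```python
# Vendor-neutral sketch of the two benchmark calculations named above.
# The trade records are invented sample data for illustration.
trades = [
    {"price": 150.10, "size": 200},
    {"price": 150.12, "size": 500},
    {"price": 150.08, "size": 300},
]

# Vwap: volume-weighted average price across the trades.
vwap = sum(t["price"] * t["size"] for t in trades) / sum(t["size"] for t in trades)

# Twap: time-weighted average price; with evenly spaced observations
# it reduces to a simple mean of the prices.
twap = sum(t["price"] for t in trades) / len(trades)

print(f"Vwap: {vwap:.4f}, Twap: {twap:.4f}")
```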

API acceleration

San Mateo, Calif.-based web services data vendor Xignite, which has used APIs to supply data to its clients for more than a decade, has also seen “accelerated interest” in its API offering over the past couple of years, says SVP of sales and business development Ryan Burdick. He says this is being driven by a combination of different factors, including emerging data providers seeing APIs as a well-established and understood mechanism to reach potential clients, and user firms trying to move projects that were seen as “nice ideas” prior to the Covid-19 pandemic to the front of the development queue, leveraging the simplicity of APIs.

“These consumers need access and can’t spend time working through siloes. They want data on-demand. And APIs make it easy to integrate and roll out products quickly… [whereas] big, bulk file delivery and datafeeds are expensive to maintain and are not so easy for users to consume,” Burdick says.

Burdick also stresses the importance of flexibility in a complex market environment.

“The content that drives financial markets isn’t standard or vanilla—there are different latency requirements, for example—so to be able to architect a scalable API platform for all these requirements, that’s where we can help,” he says, noting that just because the data is complex and non-standard, that doesn’t mean the mechanisms for delivering that data can’t offer a sense of standardization that makes APIs easy to work with.

“We have a Data-as-a-Service business with standard APIs, where the look and feel of equities and fixed income data, or real-time and historical data, are the same,” Burdick says, adding that Xignite is seeing even greater interest in its other main business line—a Platform-as-a-Service offering where the vendor works with banks and asset managers to “overhaul their infrastructure to be API-forward, so they can distribute data to internal consumers. We see APIs as building blocks. Our advice is that APIs should be a part of any infrastructure proposition.”
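What that uniform “look and feel” means in practice is easiest to see in code. The sketch below is hypothetical: the host, paths, and response fields are invented for illustration and are not Xignite's actual API, but they show how only the asset-class segment of a request needs to change.

```python
# Hypothetical sketch of a "same look and feel" API surface: the asset class
# changes, but the request shape and response fields stay identical. These
# endpoints are invented for illustration and are not Xignite's actual API.
import requests

def get_quote(asset_class: str, identifier: str) -> dict:
    """Fetch a quote; only the asset-class segment of the path varies."""
    url = f"https://api.example-daas.com/v1/{asset_class}/quote"
    resp = requests.get(url, params={"identifier": identifier}, timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g. {"identifier": ..., "bid": ..., "ask": ..., "time": ...}

equity_quote = get_quote("equities", "MSFT")
bond_quote = get_quote("fixedincome", "US912828YK04")  # sample identifier
```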


One company that would echo that sentiment is data vendor Iress, which this week announced the full integration of its acquisition of French data and connectivity provider QuantHouse to form its new API Data and Trading Solutions business. Though known best for its low-latency data technology and strategy-building tools aimed at quantitative traders, QuantHouse was also a longstanding proponent of using APIs, and this approach was even a factor in Iress’ decision to buy the vendor, says Arthur Tricoire, general manager, commercial, for the API Data and Trading Solutions business.

“At Iress, definitely at a group level, there is a drive to invest in using APIs for everything they do. When we first saw that Iress was interested [in QuantHouse], some of that was related to content acquisition… but there is also a broader conversation that can be had with clients about platforms and APIs,” Tricoire says.

That conversation revolves largely around how client firms that may have used Iress in one area and QuantHouse in another can now benefit from the full set of capabilities of the combined company, and how different business areas, rather than using separate solutions and buying and storing data in a siloed manner, can gain access to a broader range of data sources via a common platform and APIs spanning its data services, from its QuantFeed low-latency datafeed to its historical and on-demand data and its managed infrastructure and connectivity solution.

“For clients, and even for us as a vendor, the ideal position would be to have a single API endpoint to the full spectrum of the data you have in-house,” he says. “For internal development, that’s the approach we’re taking. So for client endpoints, we want to be able to streamline data access.”
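A rough sketch of the single-endpoint pattern Tricoire describes might look like the following. The class and backend names are invented for illustration, not Iress code: one client-facing entry point routes each request to whichever internal service holds the data, so consumers never need to know where it lives.

```python
# Invented sketch of a single client-facing entry point routing to multiple
# internal data services; the class and backend names are hypothetical.
class UnifiedDataClient:
    """One API for live, historical, and on-demand data access."""

    def __init__(self, feed_service, history_service):
        self._feed = feed_service        # e.g., a low-latency streaming backend
        self._history = history_service  # e.g., a historical/on-demand store

    def get(self, symbol, start=None, end=None):
        # A bounded time window implies a historical query...
        if start is not None and end is not None:
            return self._history.query(symbol, start, end)
        # ...otherwise serve the latest value from the live feed.
        return self._feed.latest(symbol)
```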

Stitching a ‘Fabric’

In the second—and potentially broader and more impactful—major API-related announcement from this week, Nasdaq unveiled its new Data Fabric, a managed service that provides access to its Data Link marketplace comprising hundreds of datasets—including Nasdaq proprietary market data, public datasets that it scrapes from the web, and alternative datasets delivered via the former Quandl platform, which Nasdaq acquired at the end of 2018—via a single API.

Bill Dague, head of alternative data at Nasdaq, says users will find the ease of browsing and accessing these datasets “very reminiscent of the Quandl experience.” After acquiring Quandl, Nasdaq kept the vendor’s ethos of making market and economic data easier to use, allowing clients to find and display data within minutes, and to access it easily via a Rest API.

“That informed our thinking when we did the acquisition,” Dague says. “Part of the acquisition plan from the beginning was to take that experience and technology to the Nasdaq level. With the launch of Data Link, we’ve elevated that… and over time, will make all Nasdaq data available through that Rest API.”
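For orientation, a Rest API of this kind can be called with nothing more than an HTTP request. The sketch below follows the time-series endpoint convention of the former Quandl v3 API; treat the exact URL, dataset code, and field names as illustrative assumptions rather than documentation.

```python
# Illustrative Rest call following the former Quandl v3 time-series pattern
# (datasets/{database}/{dataset}/data.json). Treat the exact URL, dataset
# code, and parameter names as assumptions for this sketch.
import os
import requests

api_key = os.environ.get("NASDAQ_DATA_LINK_API_KEY", "demo")

resp = requests.get(
    "https://data.nasdaq.com/api/v3/datasets/WIKI/AAPL/data.json",
    params={"start_date": "2018-01-01", "end_date": "2018-03-31", "api_key": api_key},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()["dataset_data"]
print(data["column_names"])  # column headers, followed by rows under "data"
```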

But somewhere during the development of Data Fabric, its scope expanded beyond being a delivery mechanism for data just from Nasdaq and its data partners, and became a more central infrastructure component for firms’ entire data delivery environment.

“We started getting feedback from users, saying ‘I wish we could do that for my data.’ The process of deploying a new dataset is something we do really well, but it’s something that others in the industry struggle with,” Dague says. “For us, with Data Fabric, there’s only a marginal cost for us to add new datasets to Data Link. Clients can bring their data to the platform and have us manage it for them.”


In short, Data Fabric potentially becomes the conduit for all of a firm’s data from a multitude of sources, delivered via dedicated, isolated, and secure channels, from where the firm can consume the data in applications built using the Python programming language, or in Microsoft Excel spreadsheets.

“For example, if someone subscribes to Bloomberg Fundamentals, we go and pick it up. If a second client needs that, we go and get it again—we’re not trying to collect it once and federate it… and we don’t want to become a redistributor,” he says. “We think of our platform more like a Snowflake, Databricks, or even some of Amazon Web Services—as an extension of the client’s infrastructure.”

In addition, firms can use Data Fabric for compliance and governance tasks, such as to centrally track and manage purchases and usage, entitlements, and reporting.

He says the sweet spot for Data Fabric is hedge funds and asset managers in the $1 billion to $20 billion range—“Those who have enough scale to realize it’s a problem they need to solve, but don’t have enough resources or reliance on data to have already solved it.”

But what gets Dague especially excited is the potential for firms to also use Data Fabric to distribute their own proprietary data throughout their organizations. “That’s the real coup de grâce of the platform—firms bringing their own data: research data, risk outputs, and potentially sensitive data. Having the ability to distribute that to the people who need it is very powerful,” he says.

Though this ability isn’t available in Data Fabric yet, Nasdaq already has a prototype. “That, I think, is the higher-value proposition we’re leaning towards—shortening the distribution cycle and improving firms’ investment in data and research,” Dague says.

Commercial opportunities and challenges

For Nasdaq, Data Fabric potentially opens up a wealth of new opportunities. “With Data Link, it’s about a better customer experience, it makes us more sticky, and it makes it easier for clients to find our data, so they’ll buy more of it,” Dague says. “We have all this great Nasdaq content on the shelf that people can now unlock. There are also a lot of data partners, and there’s a lot of free data, which is especially popular among academics. So there are a lot of opportunities for cross-selling and up-selling, and more choice for clients.”

For Iress, integrating QuantHouse was key from a business perspective to being able to provide a more complete offering of data and other services, to reduce duplication at client sites, and to be able to reach new use cases within organizations—but APIs are what turn that business goal into a technically achievable reality.

“When you think about the adoption of APIs and non-display usage, the part that’s been growing and accelerating over the last couple of years has been around compliance, risk monitoring, and post-trade position monitoring. Back-office applications need more data points, and APIs help to scale those,” Tricoire says. “We’ve been able to look at a broader range of opportunities at clients and realize there are more areas that we can be serving… and how we can bring together a proposition to respond to those.”


But beyond the ease of data access that APIs afford, there are other inevitable considerations: once data delivery becomes more flexible, surely the commercial terms under which the data is licensed must also adapt to changing use cases.

“Commercial models will have to evolve also,” says Xignite’s Burdick. “If data originators and exchanges don’t evolve to embrace data on-demand, and commercial models don’t evolve to react to that, then someone else will come along and solve that.”

Certainly, those embracing APIs are getting a head start along that path. Now, data originators and exchanges need to embrace licenses and fees for data that are as flexible as the technologies they use to distribute it.
