By Neena Dholani, Global Marketing Programme Director, FIX Trading Community
“Age is no guarantee of efficiency—and youth is no guarantee of innovation.” So goes an exchange in Skyfall, the recent James Bond film, which finds Bond contemplating whether he is still up to his job while meeting his new, much younger and more technologically inclined quartermaster. It is a pivotal moment, after which the pair deploy a mix of old and new to vanquish an antagonist intimately familiar with MI6.
In the post-crisis era, it has often been asked what would come after the period of big regulation; frequently, disruptive technologies are posited as the answer. But as speakers pointed out at this year’s FIX Trading Community Americas Trading Briefing, adopting these technologies—whether virtual currencies’ institutionalization, artificial intelligence (AI) applications, more sophisticated algo wheels, or wider post-trade automation—isn’t as easy as plug-and-play. Far from it. Despite trendy topics dominating the day, the overriding watchword was not “revolution” but rather “experimentation”. And as the audience heard at this year’s event, hosted by State Street in Boston, there is lots of room—indeed, even necessity—for data wisdom to pair with innovation.
The year 2018 was certainly one for cryptocurrencies’ rise—one which saw the price of Bitcoin spike, fall and roil markets; greater adoption of tokenization, for better and worse; and major investment banks and institutions taking acute notice. Panelists, including two from Boston-based buy-side giants, agreed that interest in digital assets is no longer driven by internal curiosity about the future, but directly by client demand bubbling up today.
As they engineer new crypto platforms and infrastructure, many questions remain. For starters, colorful disagreement arose around how far tokenization can—and should—go. Some panelists saw it ultimately extending not only to traditional market activities like initial public offerings (IPOs) and bond issuance, but to trading in esoteric, real assets as well. Others argued that would be a bridge too far, creating additional operational risks—when the intent is the reverse—and over-occupying regulatory attention, as fraudulent initial coin offerings (ICOs) already have.
Meanwhile, it was pointed out that institutional trading in existing crypto “bearer assets” like bitcoin remains mostly off-exchange, negotiated over the phone or chat. Panelists agreed this is down to avoiding the “whales” hunting in the space, a lack of portability across crypto venues, and a conservative approach to a market whose oversight remains nascent.
That was demonstrated in a FIX study of the landscape this year, which analyzed 84 different crypto venues’ application programming interfaces (APIs). “The functionality in FIX has 98 percent of what you need for order flow, execution, and settlement of cryptocurrencies. The missing thing is the symbology, really facilitating the payment channel,” explained one panelist, noting that choosing the right industry entities to manage and maintain those processes is still in its early days, but “standardizing them now is easier than herding traders towards adoption later.”
Another pointed out that this should be achievable in practice, arguing that “with new tools for identifying assets on a blockchain, you can truly create a 32-bit identifier” and build in a sufficient security envelope for messaging around it. But any effective symbology framework will also need to be flexible to handle another unique aspect of this new world—forks, or permanent divergences in the blockchain.
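How such a 32-bit identifier might be derived is not specified in the discussion, but the idea can be sketched. The following is a hypothetical illustration only—the function name, the hashing scheme, and the use of a fork height to distinguish divergent chains are all assumptions, not an actual FIX or industry symbology standard; a real registry would assign identifiers centrally rather than by hashing.

```python
import hashlib

def crypto_asset_id(ticker: str, chain_genesis_hash: str, fork_height: int = 0) -> int:
    """Derive an illustrative 32-bit identifier for a crypto asset.

    Hashes the ticker together with the chain's genesis block hash and,
    for forked chains, the block height at which the fork diverged, then
    truncates the digest to 32 bits. Collisions are possible; this is a
    sketch, not a production symbology scheme.
    """
    key = f"{ticker}|{chain_genesis_hash}|{fork_height}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big")  # first 4 bytes -> 32-bit integer

# A fork at a given height yields a distinct identifier from the parent chain
# (genesis hash shown truncated for readability):
btc = crypto_asset_id("BTC", "000000000019d6689c085ae165831e93")
bch = crypto_asset_id("BCH", "000000000019d6689c085ae165831e93", fork_height=478559)
assert btc != bch
```

The fork-height parameter addresses the point raised on the panel: two assets sharing a transaction history up to a divergence still resolve to different identifiers.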
Post-Trade: Towards ‘Transaction Finality’
Though far less of a greenfield, a second discussion highlighted experimentation in the post-trade space, particularly for fixed income and derivatives.
Clearing, posting margin, collateral movements and related payments have become more crucial because of regulation, and many collaborative data and technology efforts—even among natural competitors, like exchange operators—have pushed those activities towards greater efficiency. Crucially, FIX data is flowing from front offices more freely, and deeper downstream. “[The next step] involves bringing more of what’s happening [with FIX] at broker-dealers over to the custodian,” one speaker, with experience on both sides, told the audience. “The protocol is there. You don’t need new technology [like distributed ledgers] to do it; those are still several years away. Just extend the protocol and compel greater adoption.”
One tactical objective lies in finding the “low-hanging fruit”, as another panelist described it: areas whose value-add can be easily justified within the layers of a large financial services organization, and which demonstrate potential. Several projects were mentioned as illustrations, including one bank’s automation of interest-rate swap periodic payments across four systems, another tackling repurchase (repo) agreement pair-offs, and collective support for progress around collateral eligibility and other asset servicing activities. Junior analysts, one speaker offered, “have joined our firm and honestly asked me ‘what is a fax?’—and yet so much of what we do is still handled that way, manually by fax or email. Changing that in those areas would be incredibly powerful.”
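The swap periodic-payment case is a good example of why this kind of automation is tractable: the underlying calculation is simple, and the manual burden lies in coordinating it across systems. A minimal sketch of the fixed-leg payment, assuming an ACT/360 day-count convention (conventions vary by contract, and the function name is illustrative):

```python
def fixed_leg_payment(notional: float, fixed_rate: float,
                      days_in_period: int, basis: int = 360) -> float:
    """Periodic fixed-leg payment on an interest-rate swap.

    Simplified: payment = notional * rate * (days in period / basis).
    Assumes an ACT/360 day count; real contracts specify their own
    conventions, holiday calendars, and payment-date adjustments.
    """
    return notional * fixed_rate * days_in_period / basis

# e.g. $10m notional, 3% fixed, 90-day quarter, ACT/360:
# 10,000,000 * 0.03 * 90 / 360 = 75,000 (up to float rounding)
payment = fixed_leg_payment(10_000_000, 0.03, 90)
```

The arithmetic is trivial; the panel’s point was that replicating even this consistently across four systems, without faxes, is where the value lies.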
The strategic goal, then, is to push a common data language through to custodians’ systems that is interoperable and takes advantage of the “collective intelligence already threaded through FIX.” As one participant from a leading money manager put it, the goal is enabling “transaction finality”: being able to initiate settlement across multiple parts of a complex transaction—be it a security being traded, a loan associated with the trade, or a corporate action involved—all at once, and have the data generated and status on those processes updated as close to real-time as possible.
“Until that capability is reached,” he said, “we haven’t really guaranteed what has happened in a client’s portfolio. And that is our job.”
Algo Evaluation, Explainability: Grounding AI’s Future
Finally, a pair of sessions touched upon spaces ripe for development in finance—though one, algorithmic trading and analysis, is squarely in the present, while the other, deeper AI applications, retains the most future potential.
The former focused on algo wheels, which, while neither new nor unique, have refocused industry attention in recent years on algo providers’ ability to rank, select, set, deploy, evaluate and re-adjust algos. Speakers from a number of institutions and independent algo providers cited the increasing need for transaction cost analysis (TCA) data in the era of MiFID II, and the challenges of wrangling that data and accurately comparing it “apples to apples” as a TCA engine is built out.
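The mechanics behind an algo wheel can be sketched briefly. The class below is a hypothetical illustration, not any vendor’s implementation: it routes a fraction of orders at random so every provider receives an unbiased sample of flow (the “apples to apples” comparison), scores each fill by slippage versus arrival price as a simple TCA proxy, and steers the remaining flow toward the best performer. The provider names, the exploration fraction, and the slippage metric are all assumptions.

```python
import random
from collections import defaultdict

class AlgoWheel:
    """Minimal algo-wheel sketch: randomized routing plus TCA-based re-ranking."""

    def __init__(self, providers, explore=0.2):
        self.providers = list(providers)
        self.explore = explore              # fraction of flow kept randomized
        self.slippage = defaultdict(list)   # provider -> list of slippage (bps)

    def route(self):
        # Exploration gives every provider an unbiased sample of order flow,
        # which keeps the later comparison fair.
        if random.random() < self.explore or not any(self.slippage.values()):
            return random.choice(self.providers)
        # Otherwise exploit: route to the provider with the lowest average cost.
        return min(self.providers, key=self._avg_slippage)

    def record(self, provider, arrival_px, exec_px, side):
        # Slippage in basis points versus arrival price; positive = cost.
        sign = 1 if side == "buy" else -1
        bps = sign * (exec_px - arrival_px) / arrival_px * 1e4
        self.slippage[provider].append(bps)

    def _avg_slippage(self, provider):
        xs = self.slippage[provider]
        return sum(xs) / len(xs) if xs else float("inf")
```

Real wheels layer on venue analysis, order-size bucketing, and statistical significance tests before re-weighting; the sketch only shows the core feedback loop the panel described.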
But perhaps more notable were the areas tabbed for future study and expansion—developing a “simpler” algo offering that can adapt performance to changing market conditions and evolving execution objectives, rather than simply tracking VWAP or breaking up an iceberg order. “The future of trading is dynamic, so it’s really about having smarter algos to use, not more, which are more complex behind the scenes—effectively using AI,” said one speaker. “We’re already seeing clients overloaded with different algo products today; they don’t know how they all work, and they are rejecting that. So simplify the experience, instead of using 20 algos with different behaviors—and in their design, it’s about that experimentation, rather than being prescriptive.”
That was something of a neat segue for the final discussion on other AI applications already active today—including sentiment analysis and portfolio optimization for wealth management client relationships, machine learning for trade support, and even exercises to predict future regulatory regimes and load-balance energy use within firms’ infrastructure. Interestingly, however, most of the more sophisticated AI techniques—reinforcement and other deep learning, for instance—are not left to make decisions without human oversight… yet.
As with algos, a major part of the question, AI specialists said, involves documenting the “explainability” of implementations and decisions the AI made, and the way firms’ risk management and controls reflect the potential consequences at hand. One speaker described this new area of inquiry as the “biggest barrier” to greater AI adoption going forward, while another summarized it more philosophically.
“You must always ask ‘What experimentation is happening as a result of the AI being used?’”, he explained. “If it’s not that, then you’re just replacing human thinking—and humans—and there I think you run into other, much bigger problems.”