Arun Arunachalam, Vice President and Head, Product Management
Data management and strategy are a topic of continual debate across securities firms. Given the scale of volumes, with data doubling faster than ever before, it looks like the topic will remain contested for a long time to come.
A leading industry analyst proposed the 3Vs of Data – Velocity, Volume and Variety – and described how they shape the growth of data in a typical transaction-processing world. With advances in analytics and AI, the scale of data becomes a self-perpetuating feedback loop: as the number of use cases leveraging data increases, so do its volume and variety; and as those use cases harness the data, the velocity of data generation increases too. The cycle gains momentum as more data is used and generated, further complicating the data management architectures that organizations are trying to put in place. This applies to both structured and unstructured data, although the former is ahead in terms of scale.
In the securities back-office world, beyond its use in transaction processing, data matters in three areas: ensuring regulatory compliance, extracting operational efficiency (cost reduction), and enhancing client servicing (revenue enhancement). The back office has been able to manage the data value chain for regulatory needs, albeit with a patchwork of data architectures. The cost and revenue aspects are tougher to realize and validate holistically as solid use cases, owing to challenges in data management.
On the transaction processing side, despite standardization initiatives in securities messaging, significant pre-processing effort continues in data capture, cleansing, enrichment, and so on. The data fed into our enterprise architectures carries global, market-specific and client-specific nuances, and it will keep evolving to handle market regulations and every client's need to differentiate. Every enterprise architecture will have to evolve and plan for these variants and nuances. This is the very basis on which the ISO 20022 standards for corporate actions are laid out, with the concept of extension blocks – allowing local agents to align to the global standard while keeping local nuances alive within the same standard.
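To make the extension-block idea concrete, here is a minimal sketch of how a consumer might read both the standardized and the local parts of such a message. The XML fragment is simplified and hypothetical – it borrows ISO 20022 element names like SplmtryData and Envlp for flavour, but it is not a conformant corporate-actions message.

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical fragment modelled on the ISO 20022 notion of a
# supplementary-data (extension) block -- NOT a conformant message schema.
notification = """\
<CorpActnNtfctn>
  <EvtTp>DVCA</EvtTp>
  <SplmtryData>
    <PlcAndNm>LocalAgentExtension</PlcAndNm>
    <Envlp>
      <LclTaxCd>XYZ-42</LclTaxCd>
    </Envlp>
  </SplmtryData>
</CorpActnNtfctn>
"""

root = ET.fromstring(notification)
# The globally standardized field every consumer understands.
event_type = root.findtext("EvtTp")
# A market-specific nuance carried inside the extension block, ignored by
# consumers that do not know this local agent's extension.
local_tax = root.findtext("SplmtryData/Envlp/LclTaxCd")
```

The point of the design is visible here: a consumer that knows nothing about the local extension can still process the standardized fields and simply skip the SplmtryData block.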
Amidst these scale and standardization dynamics, would it be fair to assume that the data problem will always be a step ahead of any solution that emerges?
It is also worth looking closely at how other industries manage data. Large e-commerce retailers, and peers in retail banking focused on a digital-first, mobile-first world, may not have the perfect data management strategy, but they have found ways to exploit data across their supply chains and client transaction lifecycles from both cost-optimization and revenue-enhancement perspectives. They have also leveraged cloud and microservices architectures far better than any other industry. Any cloud architecture comes with its own range of tools to handle data – both structured and unstructured – spanning database options, Big Data/data lakes, and analytics and AI engines in a single plug-and-play ecosystem. This has helped streamline the aggregation of data from multiple sources into data layers, probably the toughest step in any enterprise data management strategy.
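That aggregation step can be pictured with a toy sketch: records arriving from two hypothetical sources in different shapes are normalized into one common schema before landing in the data layer. The source names and fields below are invented purely for illustration.

```python
# Toy sketch of the "aggregate into a data layer" step: records from two
# hypothetical upstream systems are mapped onto one common shape. All
# source names and field names here are illustrative assumptions.
def normalize(source: str, record: dict) -> dict:
    if source == "custody":
        return {"account": record["acctId"],
                "isin": record["ISIN"],
                "quantity": record["qty"]}
    if source == "trading":
        return {"account": record["account_no"],
                "isin": record["instrument"],
                "quantity": record["filled_qty"]}
    raise ValueError(f"unknown source: {source}")

# Two differently shaped upstream records converge on one schema.
data_layer = [
    normalize("custody", {"acctId": "A1", "ISIN": "US0378331005",
                          "qty": 100}),
    normalize("trading", {"account_no": "A1", "instrument": "US0378331005",
                          "filled_qty": 50}),
]
```

The hard part in practice is not the mapping itself but agreeing on the common schema and keeping it stable as each upstream system evolves, which is exactly where cloud data-lake tooling has helped other industries.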
Of course, we cannot assume that massive legacy infrastructures can move to the cloud overnight. But these industries have focused on exploiting data incrementally, working backwards from the end use case, rather than on the traditional Big Bang pursuit of so-called ideal data management technology – and they have done it better than anyone else.
Can the data needs of analytics, AI, reporting, digital client servicing and similar use cases be addressed in a more cohesive manner, rather than as silos, to allow securities firms to exploit data better?
Despite all the challenges, there are three big areas in which securities firms have started working to leverage data on an incremental basis.
Assisted AI for Operations: Traditional operations were structured around exception- or priority-driven processing – based on breaks in a transaction, market cut-off windows, or preferred-client processing – and by default used only existing transaction data to complete the trade processing lifecycle. Increasingly, firms have started moving up the analytics maturity curve from descriptive to predictive to prescriptive analytics, suggesting possible actions before an event happens. This depends heavily on the available datasets and the data enrichment possibilities within a business context or scenario. Of course, AI in this form still leaves the final decision to human intervention, given the operational risks in any securities back-office transaction and the regulatory guidance on white-box, explainable AI. The process will continually evolve towards selective self-healing, applied only where historical data can predict the resolution of an exception with confidence levels close to 100%.
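The "assisted, with selective self-healing" posture described above amounts to confidence-based routing around a model's prediction. The sketch below is a minimal illustration of that routing logic only; the thresholds, exception labels and the idea of a single confidence score are all assumptions, and the predictive model itself is out of scope.

```python
# Minimal sketch of confidence-based routing for predicted settlement
# exceptions. Thresholds and labels are illustrative assumptions, not a
# production policy; the upstream prediction model is not shown.
from dataclasses import dataclass

AUTO_HEAL_THRESHOLD = 0.99   # assumed: near-100% confidence required
SUGGEST_THRESHOLD = 0.80     # assumed: below this, no suggestion is made

@dataclass
class Prediction:
    trade_id: str
    likely_exception: str    # e.g. a hypothetical "SSI mismatch" label
    confidence: float        # model's confidence in the suggested fix

def route(pred: Prediction) -> str:
    """Decide how a predicted exception is handled."""
    if pred.confidence >= AUTO_HEAL_THRESHOLD:
        return "auto-heal"            # selective self-healing
    if pred.confidence >= SUGGEST_THRESHOLD:
        return "suggest-to-operator"  # assisted AI: human takes final call
    return "standard-queue"           # falls back to exception-driven flow
```

The design choice worth noting is that the human stays in the loop for the entire middle band of confidence; only the narrow, near-certain band is ever self-healed, which is consistent with the operational-risk and explainability constraints of the back office.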
API Marketplace/Catalog: The retail banking industry has stolen a march on securities firms in the adoption of APIs to make data accessible to larger internal or external ecosystems, facilitated by regulatory initiatives like PSD2 and by market bodies for Open Banking. In 2019, SWIFT announced the first results of API pilots for securities settlement status, positions, NAV distribution and more in the post-trade securities world. This should give the industry at large impetus to consider APIs as a means by which data can be accessed or distributed across internal and external ecosystems, subject to common technical preconditions such as security, authorization and consent. It will also help organizations co-opt fintechs into their value chains for operations and client servicing.
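To ground the idea, here is a sketch of how a consumer might model and parse the response of a settlement-status API. The field names and JSON shape are illustrative assumptions, not the actual schema of the SWIFT pilots or any published API.

```python
# Hypothetical response shape for a settlement-status API, sketched to show
# how post-trade data could be consumed from an API catalogue. Field names
# are invented for illustration; this is not the SWIFT pilots' schema.
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class SettlementStatus:
    trade_ref: str
    status: str                       # e.g. matched vs settled codes
    reason_code: Optional[str] = None # populated only when status is pending

def parse_status(payload: str) -> SettlementStatus:
    """Parse a JSON payload from the (assumed) status endpoint."""
    data = json.loads(payload)
    return SettlementStatus(
        trade_ref=data["tradeRef"],
        status=data["status"],
        reason_code=data.get("reasonCode"),
    )

example = parse_status('{"tradeRef": "TR-001", "status": "MTCH"}')
```

A thin, typed contract like this is what makes an API catalogue useful to both internal teams and fintech partners: the consumer codes against the schema, not against whichever back-office system happens to hold the data.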
Digital Client Servicing: The securities back office has also jumped onto the digital channels bandwagon, albeit a bit late, leveraging devices in various form factors, with the availability of APIs a big driver. There is almost always an ongoing, iterative discovery of which transaction data from the massive back-office repository could be useful to end clients in real or near-real time, helping them make decisions with a positive monetary impact.
Ideal-state data management may be tough to achieve, but let's begin by exploiting the potential of data on an incremental basis, applying analytics and AI in ways that have an impact on cost optimization as well as client servicing.
Disclaimer: Views or opinions represented in this blog are based on the author's own research and do not represent TCS BaNCS.