Developing a data strategy to reduce manual processes and leverage actionable insight


The views and opinions expressed in this article are those of the thought leader as an individual, and are not attributed to CeFPro or any particular organization.
Rajesh Kaveti, Director, BNY Mellon

How can building out a data platform reduce manual processes?

Data has been one of the most critical assets of any bank. Governance, accuracy, integrity, comprehensiveness, clarity and usefulness are some of the key principles that regulators have asked banks to adhere to in order to comply with regulations like BCBS 239.

Many companies have data warehouses built for specific lines of business (LOBs), populated through a mix of automated and manual uploads. To meet these data requirements, the data is moved around multiple times to build enterprise-level reports. Each time the data moves, complexity and latency increase, and in the process the quality of the data suffers.

Having a single data platform consolidates all data assets and standardizes them into a single consistent data model. The data platform can then be the authoritative single source of truth for the entire enterprise, and a centralized model results in better data governance. These datasets can be integrated into enterprise systems to handle the majority of data needs, including machine learning and streaming workloads. The platform serves as a single central distribution hub for all data consumers, significantly reducing the costs and time associated with manual data preparation.

Being on a data platform means that all data personas can be accommodated, including data engineers, data scientists and data analysts. Having a centralized platform means that:

  • There is a single source of truth.
  • Any data aggregation is centralized and visible.
  • Data silos are prevented. Silos leave data accessible to only a single line of business, leading to inefficiencies, wasted resources and obstacles.
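As a minimal sketch of the standardization idea above, per-LOB extracts can be mapped into one consistent data model before distribution. All field and LOB names here are illustrative assumptions, not any bank's actual schema:

```python
# Hypothetical sketch: normalize per-LOB extracts into one canonical schema
# so every consumer reads the same fields from a single source of truth.
# Field names are illustrative assumptions only.
CANONICAL_FIELDS = ("trade_id", "lob", "notional_usd", "as_of_date")

def from_lending(rec: dict) -> dict:
    """Map a lending-LOB record to the canonical model."""
    return {
        "trade_id": rec["loan_ref"],
        "lob": "lending",
        "notional_usd": float(rec["principal"]),
        "as_of_date": rec["report_date"],
    }

def from_markets(rec: dict) -> dict:
    """Map a markets-LOB record to the canonical model."""
    return {
        "trade_id": rec["deal_id"],
        "lob": "markets",
        "notional_usd": float(rec["mtm_usd"]),
        "as_of_date": rec["cob_date"],
    }

def consolidate(lending: list, markets: list) -> list:
    """Merge all LOB feeds into the single standardized dataset."""
    rows = [from_lending(r) for r in lending] + [from_markets(r) for r in markets]
    # Every row conforms to the same model, so enterprise reports
    # no longer need bespoke per-LOB transformations.
    assert all(set(r) == set(CANONICAL_FIELDS) for r in rows)
    return rows
```

Once every feed lands in the canonical model, downstream reporting reads one shape of data regardless of which LOB produced it.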

What are the benefits of implementing controls that document end-to-end flows?

One of the important aspects of controls is to ensure that the enterprise conforms to applicable financial regulations. Banks need to comply with regulations such as BCBS 239, GDPR and CCPA, and as part of that, data flows need to be documented through systems from source to destination.

Some of the key controls are:

  • Ensure the validity and timeliness of transactional data
  • Powerful reporting capabilities that cut across organizational and platform silos
  • Metrics for measuring compliance

Specifically, banks are required to report thousands of metrics to regulators, and often these metrics have to be generated by cobbling together data and information from diverse systems. Implementing controls helps banks deliver on these requests by linking together various systems and processes and giving regulators a complete picture of how their data flows across the enterprise — at the conceptual, logical and physical layer.
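The end-to-end documentation described above can be sketched as a simple lineage record: each hop from one system to the next is captured so a metric can be traced from source to destination. System and metric names below are hypothetical:

```python
# Hypothetical sketch: recording end-to-end data flows so an auditor or
# regulator can trace a reported metric through every system it crossed.
from dataclasses import dataclass, field

@dataclass
class FlowStep:
    source: str          # upstream system
    destination: str     # downstream system
    transformation: str  # what happens on this hop

@dataclass
class DataFlow:
    metric: str
    steps: list = field(default_factory=list)

    def add_hop(self, source: str, destination: str, transformation: str) -> None:
        """Document one hop in the metric's journey across systems."""
        self.steps.append(FlowStep(source, destination, transformation))

    def lineage(self) -> str:
        """Return the full source-to-destination path for audit reporting."""
        if not self.steps:
            return self.metric
        path = [self.steps[0].source] + [s.destination for s in self.steps]
        return " -> ".join(path)
```

For example, a liquidity metric might be documented as flowing from a loan system into an LOB warehouse and on to a regulatory mart, giving reviewers the complete picture in one place.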

How have the data requirements for liquidity risk and funding changed over the last five years?

One of the areas that has significantly changed is that reporting is more granular, resulting in larger datasets. The reporting is much more detailed and the SLAs are tighter. Auditors monitor the overall process, and all controls need to be documented and reconciled. Any manual uploads now result in audit findings, with the expectation that they will be rectified quickly. Regulators expect banks to use newly developed enterprise tools, methodologies and control strategies to provide increased transparency across the entire data supply chain.

In what ways can the growth in granular data and new technologies help banks overcome legacy systems? 

As stated earlier, granular data is resulting in larger datasets, as regulators expect data with variety, velocity and veracity. These properties require that enterprises adopt newer technologies compatible with big data, which is leading companies to move to big-data systems that can scale out horizontally. Legacy systems mostly run on vertical servers and are not designed to scale easily. Many of the newer technologies, like Hadoop and Spark, are built to address this kind of exponential growth in ways legacy systems cannot.
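The horizontal scale-out model behind systems like Hadoop and Spark can be illustrated in plain Python: the dataset is partitioned, each partition is aggregated independently (as it would be on a separate node), and the partial results are merged. This is a conceptual stand-in, not the actual Spark API:

```python
# Illustrative sketch of the map/reduce pattern that lets big-data systems
# scale horizontally: per-partition work is independent, so adding nodes
# adds throughput. (Plain-Python stand-in for the cluster behavior.)
from collections import Counter

def partition(records: list, n_parts: int) -> list:
    """Split the dataset as a cluster would distribute it across nodes."""
    return [records[i::n_parts] for i in range(n_parts)]

def map_partition(part: list) -> Counter:
    """Aggregate one partition locally, e.g. record count per LOB."""
    return Counter(rec["lob"] for rec in part)

def reduce_partials(partials) -> dict:
    """Merge the per-node partial aggregates into the final result."""
    total = Counter()
    for p in partials:
        total.update(p)
    return dict(total)

def aggregate(records: list, n_parts: int = 4) -> dict:
    return reduce_partials(map_partition(p) for p in partition(records, n_parts))
```

Because each partition is processed independently, the same answer is produced regardless of how many "nodes" the data is spread across, which is precisely why such systems scale by adding machines rather than buying bigger ones.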

Why should banks look to build out enterprise level data systems? 

One of the requirements is a single source of truth, which increases transparency and results in a controlled data framework. These datasets need a central data repository with a strong data governance framework. This can only be achieved by building an enterprise data system that provides rules around data quality, data lineage and the data dictionary. These controls can be instituted centrally by building a centralized data lake, which can be regarded as the single authoritative data source, thereby improving data accuracy for any enterprise-wide consumer, not just regulatory reporting.
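The data-quality rules mentioned above can be sketched as gate checks applied before a dataset is published from the central data lake. The rules and field names here are hypothetical examples:

```python
# Hypothetical sketch: data-quality rules enforced at the central data lake
# before a dataset is published as the authoritative source.
REQUIRED_FIELDS = ("trade_id", "notional_usd", "as_of_date")  # illustrative

def quality_check(row: dict) -> list:
    """Return the list of rule violations for one row (empty = clean)."""
    issues = []
    for f in REQUIRED_FIELDS:
        if row.get(f) in (None, ""):
            issues.append(f"missing {f}")
    notional = row.get("notional_usd")
    if isinstance(notional, (int, float)) and notional < 0:
        issues.append("negative notional")
    return issues

def publish(dataset: list) -> tuple:
    """Admit only rows that pass every rule; quarantine the rest."""
    clean = [r for r in dataset if not quality_check(r)]
    quarantined = [r for r in dataset if quality_check(r)]
    return clean, quarantined
```

Centralizing checks like these means every consumer, regulatory or otherwise, draws from data that has passed the same governance gate.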

Rajesh will be speaking at our upcoming Treasury and ALM USA Congress
