Full oversight for risk reporting: Aggregating reporting across multiple systems and jurisdictions

By David Stomski, Director, Operational Risk Management, Credit Suisse

Can you please tell the Risk Insights readers a little bit about yourself, your experiences and what your current professional focus is?
My background is primarily Operations and I've led a number of line, project, and risk functions across multiple business areas. I started my career in Japan, then worked in Europe before returning to the States about 12 years ago. I currently work in Operational Risk Management with oversight of third party suppliers and outsourcing, intra-group outsourcing, and financial market infrastructure. One of my primary areas of focus right now is connecting multiple internal and external data sources related to third party risk management, then extending those connections to other parts of Operational Risk. We are looking to create composite views of the operational risk associated with a given function or business, of which third party risk is just a part. I have been working heavily with R, the open-source language, to develop prototypes of data transformation and reporting solutions, and then using those prototypes to develop specifications for our IT department.
What, for you, are the benefits of attending a conference like Vendor & Third Party Risk USA 2019 and what can attendees expect to learn from your session?
I find conferences such as this a good opportunity to hear how other firms are tackling a common set of issues. Sometimes it is not just another firm's solution or approach that is enlightening, but how they interpret the problem differently. As with all of the sessions, you may pick up tidbits that suggest how to approach your problem differently; you may already be taking a similar approach and have it reinforced; or you may totally disagree, which reinforces for you that yours is the optimal approach.
In your opinion, how can we look to effectively monitor multiple systems across jurisdictions whilst also complying locally?
The key thing, in my opinion, is your data strategy. If you are working with multiple operational systems, or even just one workflow system, it is best to capture the data you need for reporting and monitoring in a data hub. Data that supports workflow, especially across multiple systems, may have different standards applied or be structured differently from the data you need for reporting. The data hub will have hooks into the operational systems and should perform all of the data transformations and standardizations needed to support your reporting. Ideally there are SLAs in place with the owners of the operational systems so that they do not make changes to the data without timely notification and testing. The data hub should protect your reporting from upstream changes in the workflow systems by catching anything that does not fit the data specifications. To support local compliance, there should be attributes on your records so that you can isolate the records and the specific values you may need. Once you have a solid data management solution in place, it should be feasible to isolate the records and attributes you need, and reporting should be easy to layer on.
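The intake checks and jurisdiction filtering described above can be sketched as follows. This is a minimal illustration, not any firm's actual system; the field names, allowed values, and country codes are all invented for the example.

```python
# Hypothetical data-hub intake: reject records that do not fit the spec,
# then isolate the records a local regulatory report needs.

SPEC = {
    "supplier_id": str,
    "country": str,
    "criticality": str,
}
ALLOWED_CRITICALITY = {"Low", "Medium", "High"}

def validate(record):
    """Return a list of spec violations (empty list means the record passes)."""
    errors = []
    for field, ftype in SPEC.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"bad type for {field}")
    if record.get("criticality") not in ALLOWED_CRITICALITY:
        errors.append("unknown criticality value")
    return errors

def for_jurisdiction(records, country):
    """Isolate the records needed for a local report via a record attribute."""
    return [r for r in records if r.get("country") == country]

records = [
    {"supplier_id": "S1", "country": "US", "criticality": "High"},
    {"supplier_id": "S2", "country": "DE", "criticality": "Critical"},  # fails spec
]
clean = [r for r in records if not validate(r)]
print(len(clean), for_jurisdiction(clean, "US")[0]["supplier_id"])
```

The point of the `validate` step is exactly the protection mentioned above: an upstream system that starts sending an unexpected value (here, "Critical") is caught at the hub rather than silently distorting reporting.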
What are the key considerations that need to be made when reporting minimum requirements based on global regulations?
It is important that you have the drivers well defined and have attributes on the records to support filtering and aggregation. There may be cases where there have to be multiple flags for the same thing, such as criticality or outsourcing status, due to specific impacts on different legal entities or differing legal definitions across jurisdictions. In the background, the data attributes and their usage have to be well documented. People who come along later need to understand both the regulation and how the data meets its specifications, so that they do not change the data inadvertently or use it for something for which it was not purposed.
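The "multiple flags for the same thing" idea can be made concrete with a small sketch. The entity names, jurisdictions, and flag values below are invented; the structure simply shows one arrangement carrying distinct flags per regulatory regime.

```python
# Hypothetical per-jurisdiction flags on a single outsourcing arrangement.
# The same service can be "critical" under one regime but not another.

arrangement = {
    "supplier_id": "S42",
    "service": "data hosting",
    "flags": {
        # jurisdiction -> flags derived from that regime's legal definitions
        "EU": {"outsourcing": True, "critical": True},
        "US": {"outsourcing": True, "critical": False},
        "SG": {"outsourcing": False, "critical": False},
    },
}

def critical_under(arr, jurisdiction):
    """Filter helper: is this arrangement critical under a given regime?"""
    return arr["flags"].get(jurisdiction, {}).get("critical", False)

print(critical_under(arrangement, "EU"), critical_under(arrangement, "US"))
```

Keeping one flag per jurisdiction on the same record, rather than one global flag, is what lets a single data set drive several regulators' minimum reporting requirements.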
What challenges and opportunities could be faced when aggregating reporting across multiple systems and jurisdictions?
There are a number of pitfalls associated with aggregating data. Anytime you aggregate, you may lose the ability to identify data quality issues, or averaging may smooth over records that flag a risk. It is always good to be familiar with summary statistics such as max, min, and standard deviation, and to have investigated significant outliers. If your data hasn't been standardized, you may fail to capture some of the target records, for example, if some records say "USA" and others "United States of America" when you are using country name as an aggregation key. These issues should all be addressed by the standards employed in building out your data hub.
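A short sketch shows both pitfalls at once: un-standardized keys splitting one group into several, and the summary statistics worth checking before trusting an aggregate. The country spellings, scores, and mapping table are invented; the mapping stands in for the standardization a data hub would apply.

```python
import statistics

# Without standardization, one country becomes three aggregation groups.
CANONICAL = {"USA": "US", "United States of America": "US", "US": "US"}

rows = [
    ("USA", 3.0),
    ("United States of America", 7.0),
    ("US", 2.0),
]

raw_groups = {}   # grouped on the raw key
std_groups = {}   # grouped on the standardized key
for country, score in rows:
    raw_groups.setdefault(country, []).append(score)
    std_groups.setdefault(CANONICAL[country], []).append(score)

us = std_groups["US"]
print(len(raw_groups), len(std_groups))  # 3 fragmented groups vs 1 correct one
# Summary stats to check before trusting the average: the max of 7.0
# is an outlier the mean alone would smooth over.
print(max(us), min(us), round(statistics.stdev(us), 2))
```

Grouping on the raw key silently fragments the population, so any per-country average or maximum computed on it understates the true exposure.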
To take full advantage of aggregated data, any categorical fields that are also ordinal, such as risk ratings like "Low", "Medium" and "High", should be mapped to numeric values to allow for averaging, mathematical comparisons, and efficient sorting. Often, to fully understand the data, we need to be able to compare it to something. If I said the risk of x associated with a new supplier is 4 and asked if you thought that was high, you wouldn't be able to answer without context. If I said the average score across the aggregated vendor population is 3.5 and the maximum is 7, you have a bit more perspective: it is a bit higher than average but not very high. If I then said the average score for this type of service is 2.5 and the maximum is 4, you can now see that the relative risk of x for this engagement is high compared to other providers of similar services. The ability to aggregate data and cut it multiple ways helps us to perform comparisons and put risk assessments into perspective. Further, connecting data sets across multiple systems gives us the ability to summarize data elements in one data set based on aggregation keys available only in a second data set. This could lead to richer reporting and, ideally, improved intuitions regarding risk exposures.
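Both ideas above, mapping ordinal ratings to numbers and summarizing one data set by a key held in another, can be sketched briefly. The supplier IDs, scores, and service types are invented for illustration.

```python
# Ordinal ratings mapped to numbers so they support comparison and averaging.
RATING = {"Low": 1, "Medium": 2, "High": 3}

# Data set 1: assessment scores per supplier.
scores = {"S1": 4.0, "S2": 2.0, "S3": 3.0, "S4": 1.0}

# Data set 2: service type per supplier -- the aggregation key lives here.
service_type = {"S1": "cloud", "S2": "cloud", "S3": "payroll", "S4": "payroll"}

# Join the two sets, then summarize data set 1 by a key from data set 2.
by_type = {}
for supplier, score in scores.items():
    by_type.setdefault(service_type[supplier], []).append(score)

cloud = by_type["cloud"]
avg_cloud = sum(cloud) / len(cloud)
print(avg_cloud, max(cloud))              # context for judging whether 4.0 is high
print(RATING["High"] > RATING["Medium"])  # ordinal comparison after mapping
```

With the join in place, S1's score of 4.0 can be judged against the average for its own service type rather than against the whole vendor population, which is exactly the extra perspective described above.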
How do you see the impact of vendor & third party risk evolving over the next 6-12 months?
I think the new EBA guidelines on outsourcing, with their focus on integrating risk management frameworks, may hasten other regulators around the globe to do more of the same. Views of nth party exposures and the full supply chain, particularly for Cloud activities, are already topics of focus. There is likely to be a renewed emphasis in this region on intra-group or inter-affiliate sourcing, with requirements to demonstrate transparency through your affiliate service relationships, to your affiliates' suppliers, and to your affiliates' suppliers' sub-contractors, and so on.