By Stevan Maglic, SVP, Head of Quantitative Risk Analytics, Regions Bank
Can you please tell the Risk Insights readers a little bit about yourself, your experiences and what your current professional focus is?
I’ve got over 20 years of experience as a practitioner in risk management and line-of-business functions at Merrill Lynch, BMO, ABN AMRO, and most recently at Regions Bank. At this point in my career, I have worked on just about every model that a financial institution uses. My experience includes stress testing and economic capital methodologies, risk rating models, CECL methodologies, derivative valuation, transfer pricing, portfolio construction, securitization analytics, and counterparty credit risk analytics. Most recently my focus has expanded to include how these methodologies need to be integrated into a comprehensive risk architecture for the firm.
What, for you, are the benefits of attending a conference like Risk Americas 2019 and what can attendees expect to learn from your session?
Risk Americas is a great place to hear directly from industry leaders and get up to speed on current topics and challenges in the field. Because of the high-quality content, Risk Americas is a very well-attended conference and offers some of the best networking opportunities in the industry.
Relating to my talk in particular, we will be taking a step back to see where we are going as an industry and what needs to happen to improve analytics and analytical processes. This discussion is especially relevant as the industry continues to face increasing pressure from competitors that include non-traditional lenders, Fintech, and Big Tech, while at the same time grappling with regulatory constraints. Efficiency, effectiveness, and adaptability are going to be of paramount importance.
With regard to analytics, the industry has made enormous investments in building stress testing models, developing CECL methodologies, and now implementing AI and other machine learning techniques. This comes on top of an already sizable model infrastructure that banks use to manage themselves. During my talk, we will discuss how analytics can be used much more efficiently, how models can be leveraged for multiple purposes, how models can be developed and deployed more effectively, and what kind of environment is needed to support these activities.
What are the benefits of integrating methodologies and using models for multiple purposes?
Mostly, it comes down to efficiency. Many models were built for good reason with a specific use in mind, but that has created overlap, with different models seemingly doing related things. If fewer models can serve multiple purposes, the overhead to develop, validate, and maintain them is reduced. With fewer models and fewer teams running them, you have fewer handoffs and greater transparency, simplicity, and efficiency, as well as better use of quantitative staff. The result is a smaller set of more sophisticated, multipurpose models and faster turnaround times to produce results.
It is also worth noting that model integration improves the quality of the results that are produced. With fewer, more connected models, assumptions become better aligned and results become more consistent. Furthermore, with an integrated analytics architecture, you are more likely to understand the risks that you are taking. As an example, it becomes easier to understand how credit, interest rate, and liquidity risk work together in your firm’s exposures.
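To make the idea of aligned assumptions concrete, here is a purely illustrative Python sketch. The scenario fields, model functions, and coefficients (MacroScenario, credit_loss_rate, net_interest_margin) are hypothetical and not any firm's actual models; the point is simply that when several models consume one shared scenario object, their assumptions cannot drift apart.

```python
from dataclasses import dataclass

@dataclass
class MacroScenario:
    """A single, shared set of macro assumptions (hypothetical fields)."""
    unemployment_rate: float   # e.g. 0.065 for 6.5%
    fed_funds_rate: float      # e.g. 0.025 for 2.5%
    hpi_growth: float          # home price index growth, e.g. -0.10

def credit_loss_rate(s: MacroScenario) -> float:
    """Toy credit loss model: losses rise with unemployment and falling home prices."""
    return 0.01 + 0.3 * s.unemployment_rate - 0.05 * s.hpi_growth

def net_interest_margin(s: MacroScenario) -> float:
    """Toy interest rate model: margin widens modestly with higher short rates."""
    return 0.025 + 0.2 * s.fed_funds_rate

# Both models consume the *same* scenario, so credit and interest rate views
# of a severe environment are built on identical assumptions.
severe = MacroScenario(unemployment_rate=0.10, fed_funds_rate=0.005, hpi_growth=-0.15)
print(f"Loss rate: {credit_loss_rate(severe):.2%}, NIM: {net_interest_margin(severe):.2%}")
```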
What are the key factors when considering design, computational, and environmental requirements?
The discussion begins with the need to have a centralized and standardized data source for the firm. Many banks have strived for “one source of the truth” and have made serious efforts towards a centralized data repository. In many cases, firms have created a Chief Data Officer role as a testament to their commitment to this concept. Just as data sources need to be consolidated, the same holds for analytics as well. Furthermore, for analytical processes to run efficiently, these activities must be close to where the data resides.
To say a few more words about analytics: with so much bespoke model development having taken place at each institution, there is a real need to rethink how to standardize the process and make it all much more effective. For instance, how can we automate model development and scale it? The same holds true for the model lifecycle, which includes model development, validation, ongoing monitoring, and deployment. How can we improve the entire model process, moving seamlessly from development to validation to deployment without ever leaving one computational environment?
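One way to picture a single environment carrying a model through its lifecycle is a shared registry that records each model's stage and enforces the allowed transitions. The sketch below is a minimal, hypothetical illustration (the Stage names, ALLOWED transitions, and ModelRecord class are assumptions, not a reference implementation).

```python
from enum import Enum, auto

class Stage(Enum):
    DEVELOPMENT = auto()
    VALIDATION = auto()
    PRODUCTION = auto()
    RETIRED = auto()

# Allowed transitions in a simple lifecycle; validation can send a model back.
ALLOWED = {
    Stage.DEVELOPMENT: {Stage.VALIDATION},
    Stage.VALIDATION: {Stage.DEVELOPMENT, Stage.PRODUCTION},
    Stage.PRODUCTION: {Stage.RETIRED},
    Stage.RETIRED: set(),
}

class ModelRecord:
    """Tracks one model's stage so developers, validators, and users share one view."""
    def __init__(self, name: str):
        self.name = name
        self.stage = Stage.DEVELOPMENT
        self.history = [Stage.DEVELOPMENT]

    def promote(self, to: Stage) -> None:
        if to not in ALLOWED[self.stage]:
            raise ValueError(f"{self.name}: cannot move {self.stage.name} -> {to.name}")
        self.stage = to
        self.history.append(to)

# Example: a model moves through validation to production without leaving the registry.
m = ModelRecord("prepayment_model_v2")
m.promote(Stage.VALIDATION)
m.promote(Stage.PRODUCTION)
print(m.name, "->", [s.name for s in m.history])
```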
Other than CECL and CCAR, what else is being integrated in the analytics architecture of firms?
The CECL implementation has been a real success story of repurposing CCAR loss forecasting models for setting reserves. Although each firm has a unique analytics infrastructure, similar integration opportunities exist within each firm. To see this for yourself, a good starting point is to review the inventory of models and processes within your firm and identify related or redundant activities.
At your firm, one integration opportunity might be the economic capital and stress testing processes, which both estimate tail losses but in many cases rely on differing methodologies. How can these processes be integrated? Similar questions can be asked elsewhere: how many cash flow engines does your firm have, and can the processes be rationalized in some way? Prepayment models and assumptions are embedded in MSR valuation, CCAR processes, asset liability management analytics, balance sheet valuation activities, and elsewhere – how can we reduce redundancy and improve consistency?
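As a simple illustration of one loss engine serving both purposes, the sketch below draws a single simulated loss distribution and reads off both a stress-style severe outcome and an economic-capital-style tail metric. The lognormal losses, quantile levels, and capital definition are purely illustrative assumptions, not a recommended methodology.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# One hypothetical portfolio loss simulation feeds multiple uses.
losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

expected_loss = losses.mean()                  # baseline / reserve-style view
stressed_loss = np.quantile(losses, 0.95)      # stress-test style severe outcome
tail_loss = np.quantile(losses, 0.999)         # economic-capital style tail level
economic_capital = tail_loss - expected_loss   # unexpected loss to be capitalized

print(f"EL={expected_loss:.2f}  stress={stressed_loss:.2f}  EC={economic_capital:.2f}")
```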
What could a multiple year plan to integrate models and redesign analytics architecture look like?
What we are effectively talking about here is redesigning the risk architecture of the firm, which is without question an involved endeavour. Since this cannot happen overnight, it will take a multiyear plan to realize the vision, and for that reason it requires buy-in and ongoing commitment at the highest levels of the firm. Careful planning is needed to ensure a steady stream of deliverables and milestones that maintain the confidence of senior executives.
Many things need to come together in order to realize this end state. It is helpful to think backwards from what the end state looks like and determine what we need to start doing now for it all to come together. Data sources need to be reconciled, integrated, and migrated to the new environment. The new environment needs to house not only the data but also meet the computational requirements of all of the models in development and production. Models that need to talk to each other may be written in different languages and may have to be modified accordingly. Highly involved or bespoke models and processes need to be generalized and modularized, where possible, to work alongside other models. Given the number and complexity of models, the integration can only take place piecemeal over time.

Another parallel stream of development needs to focus on the model lifecycle, giving model developers, validation staff, and end users access to different areas of the environment. Finally, with so many models, processes, and users, a robust governance structure is critical to ensure all components work as designed and interdependencies are clearly understood.
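One way to modularize heterogeneous models so they can be composed is to put them behind a common interface. The following Python sketch is hypothetical (ComponentModel, PDModel, and LossModel are illustrative names and toy formulas); in practice, models written in other languages would be wrapped so that they honor the same contract.

```python
from abc import ABC, abstractmethod

class ComponentModel(ABC):
    """A common contract so heterogeneous models can be chained in one architecture."""

    @abstractmethod
    def run(self, inputs: dict) -> dict:
        """Consume a dictionary of inputs and return an enriched dictionary of outputs."""

class PDModel(ComponentModel):
    """Toy probability-of-default model."""
    def run(self, inputs: dict) -> dict:
        pd = min(1.0, 0.02 + 0.3 * inputs["unemployment_rate"])
        return {**inputs, "pd": pd}

class LossModel(ComponentModel):
    """Toy expected-loss model that consumes the PD model's output."""
    def run(self, inputs: dict) -> dict:
        return {**inputs, "expected_loss": inputs["pd"] * inputs["lgd"] * inputs["ead"]}

# Because both models honor the same interface, they can be composed into a pipeline.
pipeline = [PDModel(), LossModel()]
state = {"unemployment_rate": 0.08, "lgd": 0.4, "ead": 1_000_000}
for model in pipeline:
    state = model.run(state)
print(f"Expected loss: {state['expected_loss']:,.0f}")
```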
Although this may all sound a bit overwhelming, the good news is that there is plenty of help available from consultants, vendors, and software packages to support your risk architecture redesign.