By Chris Smigielski, Model Risk Director, Arvest Bank
What, for you, are the benefits of attending the ‘Risk Americas Convention’ and what can attendees expect to learn from your session?
The Risk Americas Convention covers various streams of risk, including model risk, and the topics are always contemporary and relevant. Presentations and panel discussions offer insightful, informative content that can help practitioners advance the maturity of their own risk programs. Within model risk alone, the broad array of topics ranges from MRM governance best practices to emerging issues, such as the validation challenges that artificial intelligence (AI) and machine learning (ML) approaches pose as they enter the model inventory.
Risk Americas is also a great opportunity to meet with industry thought leaders and practitioners to learn and share information about risk practices. Attendees can expect great topics, great dialogue and networking with fellow risk professionals, so they can return to their organizations equipped and ready to take their risk programs to the next level. I always benefit from hearing about best practices and learning from experts about some of the challenges they have addressed. I usually have a topic or two that I am exploring for my own program and look to get insights from other professionals who have already tackled similar challenges. I consider the Risk Americas Convention an important conference for any risk professional to attend.
Why is it important to identify where models and data come from?
Identifying models and their data sources goes to the "why" of what the model risk function is charged with. The simple answer is that we cannot fundamentally assess model risk unless we understand a model's purpose and its data foundation. This is critical for all models in the inventory, and it has become even more evident with the growth of AI and ML modelling approaches. For example, a machine learning approach can improve model effectiveness if the data is abundant, relevant and available; simply using ML does not guarantee a better result if data is sparse or limited. This underscores the importance of MRM's independent validation and governance within the risk function.
Model risk guidance SR 11-7 tells us that banks should maintain a comprehensive set of information for models in use, in development or retired. Model Risk Management is charged with maintaining a firm-wide inventory of all models so that individual and aggregate model risk can be assessed. Further, the guidance suggests that the type and source of inputs used by a given model and its underlying components (which may include other models) should be inventoried to support the assessment of model risk. Data used in model building should be well understood and rigorously tested. A simple example from a former institution explains why: a qualitative model was built to estimate the behaviour of an acquired loan and lease portfolio. Data was somewhat limited, and payment amount was an available data point. The developer assumed the payment amount represented the contractual payment, which was then used to calculate amortization, driving average-life and duration metrics. The "payment amount" was later found to represent something other than the contractual payment, which forced a re-evaluation of the data and of the model itself. That is model risk. Model calculations, model interdependencies and data (quality, relevance and completeness) are some of the critical components of assessing model risk.
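The amortization example above can be made concrete with a short sketch. The figures, rate and loan terms below are hypothetical (they are not the portfolio from the interview); the point is only to show how a misread "payment amount" field, say one that actually included escrow, shifts an average-life metric.

```python
# Sketch: how a misread "payment amount" field distorts average-life metrics.
# All figures are hypothetical illustrations, not data from the interview.

def average_life(balance, annual_rate, monthly_payment):
    """Weighted-average life (years) of a fixed-rate amortizing loan:
    sum of (principal repaid * time) / total principal repaid."""
    r = annual_rate / 12.0
    month, weighted, total = 0, 0.0, 0.0
    while balance > 1e-6 and month < 1200:  # 100-year guard rail
        month += 1
        interest = balance * r
        principal = min(monthly_payment - interest, balance)
        if principal <= 0:
            raise ValueError("payment does not cover interest")
        balance -= principal
        weighted += principal * (month / 12.0)
        total += principal
    return weighted / total

# Contractual payment vs. a data field that silently included escrow:
contractual = average_life(200_000, 0.05, 1_200)
misread = average_life(200_000, 0.05, 1_500)  # inflated "payment amount"
print(contractual, misread)
```

Because the inflated payment retires principal faster, the misread field understates average life, and every duration metric built on it inherits the error, which is exactly the kind of data-driven model risk the guidance asks MRM to test for.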
How can a financial institution increase efficiency and automation of model risk management?
A high-performing Model Risk Management program must be adaptive and flexible to maximize program efficacy and minimize its "cost of governance" to the organization. Increasing the efficiency and automation of model risk activities is central to that theme, especially for validation activities. For example, model turnover (redeveloped models) or models developed for initiatives like stress testing, CECL, or marketing analytics may thrust batches of new models into the inventory. Emerging AI- and ML-based financial applications are giving us new ways to look at digitizing the customer experience and are challenging (read: expanding) our definition of a model. The point is that the model inventory is not static; as the program matures, it constantly changes. Routine validation and governance activities must be automated and approaches optimized so that the size and talent of the MRM staff are fully leveraged and able to pivot to new models or validate model changes in addition to their assigned validations. Automating model risk activities like model testing or report writing is a smart way to ensure the validation analyst's talent is put to its highest and best use.
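As a minimal sketch of what automating routine model testing can look like: the loop below backtests each inventoried model's predictions against actuals and flags breaches for analyst review. The model names, figures and the 10% tolerance are illustrative assumptions, not any institution's actual thresholds.

```python
# Sketch: automate a routine backtest across the model inventory and
# flag only the breaching models for an analyst's attention.
# Model names, data and the tolerance are hypothetical assumptions.

def backtest(predicted, actual, tolerance=0.10):
    """Return mean absolute percentage error and a pass/fail flag."""
    errors = [abs(p - a) / abs(a) for p, a in zip(predicted, actual)]
    mape = sum(errors) / len(errors)
    return mape, mape <= tolerance

# Hypothetical quarterly predictions vs. actual outcomes:
inventory = {
    "deposit_runoff": ([98, 103, 110], [100, 105, 108]),
    "cecl_loss_rate": ([1.2, 1.5, 1.9], [1.0, 1.1, 1.4]),
}

report = {}
for name, (pred, act) in inventory.items():
    mape, passed = backtest(pred, act)
    report[name] = {"mape": round(mape, 3), "pass": passed}

# Analysts review only the flagged models instead of re-testing everything.
flagged = [name for name, result in report.items() if not result["pass"]]
print(report, flagged)
```

Running a check like this on a schedule, and auto-populating the exception list into a validation report template, is one way the routine work gets automated so analyst time goes to the flagged models.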
What control and governance processes need to be considered when ensuring model inventory is accurate?
Ensuring a complete and up-to-date model inventory is a requirement under SR 11-7. What is surprising about this requirement is how challenging it can be to capture new models or changes to the inventory. Indeed, most model owners waste no time notifying MRM that a model has been retired, because they know the governance load will soon be lifted. The reality is that self-identification of a new model does not routinely happen, for the opposite reason.
To ensure an accurate model inventory, model risk considers control activities like inventory attestations that verify the status of current models in the inventory, query for new models, and ask whether existing models have been materially changed. Other control processes can include creative ways to uncover models, such as reverse engineering financial statements or committee reports to ask how the information is derived. Incorporating model identification questions within the purchasing or third-party risk management areas also catches potential models at key acquisition points. These activities help ensure that all models are inventoried and receive appropriate model governance.
In your opinion, how do you think model risk governance will develop over the next twelve months?
Model Risk Management has been on a steep trajectory since the financial crisis, yet regulatory guidance SR 11-7 has not materially changed from its original form almost a decade ago. There is quite a lot of discussion right now about the model risk impact of artificial intelligence (AI) and machine learning (ML) models and banking applications. The largest banking organizations are already validating machine learning models, and some have even invested in AI/ML centers of excellence. It may seem likely that over the next twelve months there could be a regulatory update to model risk management guidance that specifically addresses the unique elements of AI and ML models – but that may not happen. Authors of the guidance have been open about the fact that it is guidance and not law; thus, understanding the spirit of the guidance and applying its principles to an evolving model risk landscape can be instructive.
In my opinion, the guidance will not change, but Model Risk's purview will expand to include any application or approach (not under IT) where the risk of failure can be material to the risk appetite, either directly or indirectly: where failure could trigger a compliance breach, reputational harm, or operational or financial losses. Model Risk Management professionals will need to adapt to this and explain how the model risk control framework provides assurance that models and applications, including AI/ML approaches, are working as intended and avoiding errors of commission and omission.