By Jon Hill, former Managing Director, Global Head of Model Risk Governance, Credit Suisse
Jon Hill, Ph.D., is a former Managing Director with over twenty years of experience in various areas of quantitative finance, specializing most recently in model risk management, governance and validation. At Credit Suisse he was the Global Head of Model Risk Governance, leading a team of 14 model risk managers in New York, London, Zurich, Mumbai and Singapore. Jon's team at CS was responsible for the ongoing identification, measurement, risk rating, inventory and monitoring of CS corporate model risk across all business units, regions and legal entities, and for validation of medium-risk models.
Prior to joining Credit Suisse in January of 2017, he was the founder and global head of Morgan Stanley's market and operational risk validation team; his team of 7 Ph.D.- and Masters-level quants in New York and Budapest was responsible for the second-line-of-defense validation of Morgan Stanley's global market risk models, including Value at Risk (VaR), Stressed VaR, Incremental Risk Charge and Comprehensive Risk Measure, as well as all firm-wide Operational Risk models.
Prior to Morgan Stanley, Jon was a member of the model validation group at Citigroup for six years, concentrating on equity, fixed income, foreign exchange, credit and market risk models. Before joining the Citigroup model validation team, he worked for eight years on model development and general quantitative risk analytics as a member of the Quantitative Analysis Group at Salomon Smith Barney, which later merged with Citibank to form Citigroup.
Jon's current professional focus is on developing better methodologies for automating verification of the completeness and accuracy of a firm's model inventory, and especially on innovative ways to track model usage that scale to a firm's entire global inventory. A second area of current interest is identifying the most important challenges model risk managers in finance currently face and proposing ways to address them, the topic he will speak on at the Stress Testing USA Congress, to be held on Nov. 6-7 in New York City.
Jon is a frequent lecturer on model risk management and governance at professional conferences and masterclasses and is a guest lecturer at Baruch College and Columbia Business School in New York and Claremont College in southern California. He is also the author of a paper to be published in the Fall edition of the Journal of Structured Finance entitled "Shouldn't a Model 'Know' Its Own ID?"
What, for you, are the benefits of attending a Congress like the ‘Stress Testing USA Congress’?
For me one of the greatest benefits is hearing from leading practitioners at other firms about the initiatives and innovations they are introducing into their model risk management practices. I first became aware of early applications of Machine Learning to model validation at a few leading-edge firms several years back at CFP's Risk Americas Conference. I also find the session breaks and luncheons to be opportune times to meet professional colleagues working in my field and share views on current and future trends in stress testing and model risk in general. Opportunities to talk with regulators in an environment more amenable to open discussion than a bank exam can be invaluable in gaining their insights and clarifications on regulatory expectations.
Why should model risk governance be developed with a holistic approach rather than an atomistic one?
An effective model risk governance practice will impact every phase of the model life cycle, from initial identification through retirement, and is therefore responsible for identifying and mitigating model risks that arise during each of those phases. Not all model risks come from individual models themselves; some arise from the whole fabric of a firm's model ecosystem. Model validation has traditionally focused on the risks inside individual models, such as invalid assumptions, inappropriate modeling choices, errors in implementation, stability under stressed input conditions, etc. Focusing solely on these types of individual model risks would be the atomistic approach to model risk management.
There are other model risks that arise between and among models due to interdependencies. For example, model A produces output that serves as input to model B, but the two models employ conflicting assumptions that neither model owner is aware of. Version incompatibility is another example: different business units might use different versions of the same risk model and compare or aggregate their results without realizing that different assumptions may be involved. Data integrity, model suitability, jurisdictional risk and model governance are several other forms of model ecosystem risk. To my knowledge, the first paper to discuss ecosystem model risks appeared in the Spring 2017 issue of the Journal of Structured Finance ("Much of Model Risk Does Not Come From Any Model", Martin Goldberg, JSF, Spring 2017). Inventory risk is another such risk that does not arise from any particular model but is endemic to the overall model ecosystem.
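The interdependency risk described above can, in principle, be surfaced by an automated check over the model inventory. Here is a minimal illustrative sketch, assuming a hypothetical registry in which each model declares its key assumptions and its downstream consumers; all model names, assumption keys and the registry structure are invented for illustration, not drawn from any real system:

```python
# Hypothetical model registry: each model lists the models it feeds
# and the assumptions it embeds. Names and fields are illustrative.
models = {
    "model_A": {"feeds": ["model_B"],
                "assumptions": {"rate_curve": "flat", "day_count": "ACT/360"}},
    "model_B": {"feeds": [],
                "assumptions": {"rate_curve": "bootstrapped", "day_count": "ACT/360"}},
}

def assumption_conflicts(models):
    """Return (upstream, downstream, key, upstream_val, downstream_val)
    for every assumption that differs across a feeds() edge."""
    conflicts = []
    for name, meta in models.items():
        for downstream in meta["feeds"]:
            down = models[downstream]["assumptions"]
            for key, val in meta["assumptions"].items():
                if key in down and down[key] != val:
                    conflicts.append((name, downstream, key, val, down[key]))
    return conflicts

print(assumption_conflicts(models))
# [('model_A', 'model_B', 'rate_curve', 'flat', 'bootstrapped')]
```

The check flags that model A and model B disagree on the rate-curve assumption even though A's output feeds B, which is precisely the kind of between-model risk an atomistic, one-model-at-a-time validation would miss.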
Identifying and mitigating the many risks both outside and inside of models requires a holistic approach to model ecosystem risk management that can address these risks over the entire model life cycle. Such a holistic approach is a key mandate of any sound and effective model risk governance practice.
How can Machine Learning and Big Data impact management of model risk in both a negative and a positive way?
Model validation and review has been an essentially stagnant discipline for most of the last 18 years. Since the release of OCC 2000-16 (the predecessor to SR 11-7) in the year 2000, validators have been performing model reviews in basically the same way at every firm, with incremental improvements in documentation and testing along the way, to be sure, but it is still very much the same manual process it was when I first started performing model validations back in 2003.
I anticipate that this process will begin to experience a dramatic shift over the next several years, as the maturation of Machine Learning (ML) and Big Data has a major disruptive but constructive effect on how model validation is performed, freeing knowledgeable validators from many of the more tedious and/or time-consuming aspects of the validation process, such as generation of test suites, creation of benchmark and challenger models, and assessment and remediation of input data quality. Big Data applications will allow model developers to incorporate vastly larger amounts of econometric data describing, for example, how changing demographics might affect borrowing by home buyers and their creditworthiness. Incorporating large volumes of demographic data into credit models may require more sophisticated applications of machine learning than we have seen to date. On the downside, validating ML-based models will present serious challenges as more advanced forms of ML, such as Deep Learning (DL), are applied to modeling applications. Validating a DL model could be akin to validating a black-box vendor model, in that the actual operation of the DL model may not be known, or even knowable, presenting validators with the very difficult task of demonstrating that the DL model is correctly implemented and operating as intended.
Model developers, validators and regulators will do well to take note of this pending disruption that will result from the application of ML and Big Data, for it will offer many opportunities for those who stay abreast of these emerging disciplines.
Why is minimising inventory risk such a priority?
First, let's be clear about what I mean by the phrase 'inventory risk', as it is not yet a familiar component of the risk management taxonomy. I define inventory risk as "the risk resulting from incomplete or inaccurate quantitative model inventories, the use of models that have previously been retired or remain unvalidated, or the use of models that have never been entered into inventory."
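Each of the failure modes in this definition can be checked by reconciling the models observed in production against the registered inventory. A minimal sketch, assuming a hypothetical inventory mapping model IDs to validation status; the IDs and statuses are invented for illustration:

```python
# Hypothetical inventory reconciliation: flag each of the three
# inventory-risk failure modes from the definition above.
# Model IDs and statuses are illustrative, not from any real firm.
inventory = {
    "PRC-101": "validated",
    "VAR-001": "retired",
    "IRC-007": "unvalidated",
}
observed_in_production = ["PRC-101", "VAR-001", "IRC-007", "EUC-042"]

def inventory_exceptions(inventory, observed):
    """Return (model_id, issue) for every model in production use
    that is retired, unvalidated, or missing from inventory."""
    issues = []
    for model_id in observed:
        status = inventory.get(model_id)
        if status is None:
            issues.append((model_id, "not in inventory"))
        elif status in ("retired", "unvalidated"):
            issues.append((model_id, "in use while " + status))
    return issues

for issue in inventory_exceptions(inventory, observed_in_production):
    print(issue)
# ('VAR-001', 'in use while retired')
# ('IRC-007', 'in use while unvalidated')
# ('EUC-042', 'not in inventory')
```

The hard part in practice is not this comparison but obtaining the "observed in production" side reliably, which is exactly why scalable usage tracking is the focus described earlier.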
It is safe to say that model validation practices at almost all firms subject to regulatory review have matured significantly since the release of SR11-7 nearly seven years ago. As a result, the focus of model exams performed by Federal regulators has been expanding each year to include other aspects of model risk management. Of these, model inventory creation and management, which has lagged behind much of the progress in validation, seems to be at the forefront, particularly in recent CCAR exams.
It is an unfortunate reality that many leading banks still do not have complete knowledge of all the models being used within every area of the firm, at home and abroad. There are many reasons for this state of affairs, including the fact that new models are being created frequently, not just in centralized model development groups working with the firm's IT (Information Technology) organization, but also on individual desktops across the globe, creating a large number of what are called EUC (End User Controlled) models. At many firms the majority of models that fall through the cracks in inventory management are of the EUC variety, very often because the developers/owners of the models are not aware of the risk rating, validation and inventory requirements under the firm's global model risk governance framework.
Ensuring completeness and accuracy of a large model inventory is not as straightforward as it might seem to those not involved in the process. As a result, it is still not uncommon for a model risk manager to discover unvalidated models that have nevertheless been put into production. It is not supposed to happen, but controls are not perfect in this area. There are a number of vexing questions regarding model usage that most firms have great difficulty answering accurately, such as "How many times was this model actually executed over the last year?", "Are there any models with active status in your inventory that were not executed a single time over the last year?" or "Which models are used in each of your Legal Entities and geographic regions?" And yet questions of this nature are arising more frequently than ever before in regulatory exams, as the focus shifts from model validation to a more holistic view of a firm's overall model discipline.
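Questions of this kind become answerable once every model execution leaves an audit record. A minimal sketch, assuming a hypothetical usage log of (model ID, timestamp, legal entity) records; in practice this would be a database fed by instrumented model runtimes, and all names here are invented for illustration:

```python
# Hypothetical execution-level usage tracking. Each model run appends
# a (model_id, timestamp, legal_entity) record; the regulatory usage
# questions then reduce to simple queries. Names are illustrative.
from datetime import datetime, timedelta

usage_log = []  # in practice: a database or tamper-evident audit log

def record_run(model_id, legal_entity):
    usage_log.append((model_id, datetime.now(), legal_entity))

def runs_last_year(model_id):
    """How many times was this model executed over the last year?"""
    cutoff = datetime.now() - timedelta(days=365)
    return sum(1 for mid, ts, _ in usage_log if mid == model_id and ts >= cutoff)

def dormant_models(active_inventory):
    """Active-status models with zero executions in the last year."""
    cutoff = datetime.now() - timedelta(days=365)
    used = {mid for mid, ts, _ in usage_log if ts >= cutoff}
    return [m for m in active_inventory if m not in used]

record_run("VAR-001", "CS-NY")
record_run("VAR-001", "CS-LDN")
print(runs_last_year("VAR-001"))               # 2
print(dormant_models(["VAR-001", "IRC-007"]))  # ['IRC-007']
```

The design choice that matters is logging at execution time rather than relying on periodic owner attestations: self-reported usage is exactly what leaves the "vexing questions" above unanswerable.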
How do you see the risk landscape evolving over the next several years?
Regulators seem to be raising the bar on requirements for model development, validation and governance each year. (Model governance is a framework of policies and procedures that oversees the entire life-cycle of a model, including development and validation.) Input data quality and model inventory are two particular areas of increasing focus in just the last few years. Model Risk Managers at many large firms still do not know about all of the models in use at their firms for a variety of reasons. There will be continual pressure to achieve full compliance with SR11-7/OCC 2011-12, a goal that I fear many banks simply will not have the resources to achieve.
As described in my answer to #4, the intrusion of Machine Learning and Big Data into the once staid space of model development and validation is likely to result in a major disruption in the ways both are performed. I use the word 'disruption' in a positive sense, as it will relieve model owners and validators of much of the tedious manual effort that traditional model development and validation currently require. This is nothing to fear for those who welcome change and can think outside the box, for the disruption I anticipate will offer many opportunities to those who stay abreast of these emerging disciplines.