By Jon Hill, Former MD, Global Head of Model Governance, Credit Suisse
Jon Hill, Ph.D., is a former Managing Director at Credit Suisse with over twenty years of experience in various areas of quantitative finance, specializing most recently in model risk management, governance and validation. He is currently the Global Head of Model Risk Governance Standards at Credit Suisse, leading a team of 14 model risk managers in New York, London, Zurich, Mumbai and Singapore. Jon’s team is responsible for the ongoing identification, measurement, risk rating, inventory and monitoring of CS corporate model risk across all business units, regions and legal entities, and for validation of medium-risk models.
Prior to joining Credit Suisse in January of 2017, he was the founder and global head of Morgan Stanley’s global market and operational risk validation team; his team of seven Ph.D.- and Masters-level quants in New York and Budapest was responsible for the second-line-of-defense validation of Morgan Stanley’s global market risk models, including Value at Risk (VaR), Stressed VaR, Incremental Risk Charge, Comprehensive Risk Measure, and all firm-wide Operational Risk models.
Prior to Morgan Stanley, Jon was a member of the model validation group at Citigroup for six years, concentrating on equity, fixed income, foreign exchange, credit and market risk models. Before joining the Citigroup model validation team, he worked for eight years on model development and general quantitative risk analytic methodologies as a member of the Quantitative Analysis Group at Salomon Smith Barney, which later merged with Citibank to form Citigroup.
Jon holds a Ph.D. in Biophysics from the University of Utah. He is a frequent speaker and chairperson at professional model risk management conferences.
You will be presenting at the upcoming Model Risk Management Course to discuss whether a financial model should know its own ID. Why is this a topic that institutions should be considering?
It is an uncomfortable truth that today most financial firms cannot claim to have a complete and accurate inventory of all models used by the firm, even though this is a regulatory requirement under increasing scrutiny in bank exams. It is even more uncomfortable that these firms cannot answer with much accuracy such questions as “How many times was this model actually used during the last year?”, “Which models exhibit significant seasonality?”, “In what geographic regions, or Legal Entities, is this model used?”, or “Were any unvalidated models used during the last year?”
I will argue in my presentation that this lack of transparency about model inventory and usage can be traced to a single fundamental industrywide shortcoming in model risk management: quantitative models at almost all leading financial firms today simply do not ‘know who they are’. By this I mean they don’t ‘know’ their own model ID. This is an area of model risk management – let’s call it inventory risk – that is rarely addressed within institutions or at professional conferences. At least I’ve never heard it discussed and I go to a lot of model risk conferences. Because I have an eccentric fondness for talking about relatively obscure topics that no one else seems to want to address, I offer this presentation.
I find it ironic that my iPhone, washing machine and automobile all ‘know’ their own serial numbers: today these are embedded in permanent memory (such as ROM) in the device’s electronics, and even before electronics they were physically stamped on the device. Yet at every financial institution I am aware of, it is still not common practice for quantitative model IDs to be embedded in the model’s actual source code; instead, the model IDs are associated with the corresponding executable by reference, managed by the execution platform. Why is the financial model development world so far behind manufacturers? I think the answer is probably that financial models were originally identified only by name, some of which could be very convoluted and obscure. It wasn’t until firms began developing centralized database repositories for model documentation (and in some cases source code) in the early 2000s, motivated by regulatory pressure, that it became necessary to assign numerical identifiers to models as an aid to indexing. And yet, if firms would take the simple expedient of requiring model developers and owners to embed their model’s assigned ID by adding a single line to the source code, and if this were done firm-wide, the groundwork would be laid for answering the types of usage questions posed above, and much more.
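The “single line” really is that simple. A minimal sketch in Python illustrates the idea; the model ID, the inventory system that assigns it, and the placeholder pricing routine are all hypothetical:

```python
# Hypothetical example: the ID assigned by the firm's central model
# inventory database is embedded directly in the model's source code,
# so the running code always "knows who it is".
MODEL_ID = "EQ-04217"  # illustrative inventory ID, assigned at approval


def model_id() -> str:
    """Return the firm-assigned inventory ID embedded in this source file."""
    return MODEL_ID


def price(spot: float, strike: float) -> float:
    """Placeholder routine standing in for the actual model logic."""
    return max(spot - strike, 0.0)


if __name__ == "__main__":
    # Any caller, audit script, or log line can now ask the model its ID.
    print(model_id())
```

Because the ID lives in the source itself, it travels with the model through every release, and a validator can confirm its presence as part of the standard review.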
How can detailed tracking of model usage impact the model risk department?
The following question was addressed to one of my model risk colleagues by a US regulator in a recent model exam: “For this model, chosen at random from the inventory you have provided, can you tell us how often the model was executed over the last year and what entities within your firm executed it?” My colleague was embarrassed that he was unable to answer with any reliable accuracy, because his firm had no automated way of tracking model usage globally.
As anyone involved in recent CCAR or horizontal model exams at major banks knows, model inventory and attestation of completeness and accuracy are receiving increasing scrutiny from US and European regulators. And yet, in this age of automation, it is still a very manual process at every firm I have worked for or been involved with as a consultant. The fact is that most large firms still do not know how many models are being used, how often, and where, or even whether any models have slipped through the governance control framework and gone into production without first being validated, a strict regulatory requirement in the US.
The ability to track model use globally across a firm will require some initial startup effort and support from senior management, but if accomplished the advantages for model risk management will far exceed the cost of development. Complete transparency for global model usage is an attainable goal, but few if any firms have undertaken the effort necessary to accomplish this through automation. One of the benefits would be the elimination of what I call inventory risk.
Could it be possible to automate this process? How could banks and financial institutions benefit from automating this process?
Automation of model usage tracking can be implemented in two steps. The first step I have already described above: embed model IDs in the model source code as soon as the model is approved for production, and retrofit models already in production as part of the next scheduled release cycle. This is the easy part and requires very little effort on the part of developers. Confirmation of embedded IDs should become part of the standard validation process for new models. The second step will take more effort: it involves developing and deploying a software feature I call the “model transponder function” into all models, progressively over time as part of the regular model development cycle. The transponder would be called once each time the model code is executed. I will describe the salient features of the transponder function, and how it can automate model tracking as well as the inventory attestation process, in more detail in my presentation. The model transponder will be a novelty in this industry, but it is something any competent IT professional could develop in a month or so.
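One way such a transponder might look is sketched below in Python. This is purely illustrative, not a description of any firm's actual system: the record fields, the `sink` callable standing in for a central usage store, and the placeholder model are all assumptions.

```python
import getpass
import json
import socket
import time

MODEL_ID = "EQ-04217"  # embedded per step one (illustrative ID)


def transponder(model_id: str, sink=None) -> dict:
    """Record one model execution: which model ran, where, when, and by whom.

    In production the record would be shipped asynchronously to a central
    usage store; here `sink` is any callable (a queue, a file writer) that
    accepts the serialized record, so the sketch stays self-contained.
    """
    record = {
        "model_id": model_id,              # identity, from the embedded ID
        "timestamp": time.time(),          # when the model ran
        "host": socket.gethostname(),      # where it ran
        "user": getpass.getuser(),         # which account/process ran it
    }
    if sink is not None:
        sink(json.dumps(record))           # ship to the central usage store
    return record


def run_model(spot: float, strike: float, sink=None) -> float:
    """Placeholder model entry point: ping the transponder, then compute."""
    transponder(MODEL_ID, sink)            # called once per execution
    return max(spot - strike, 0.0)
```

With every production model emitting a record like this, questions such as “how often was this model run last year, and by which entities?” reduce to a simple query against the usage store, and the inventory attestation can be reconciled against observed executions rather than manual surveys.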
Finally, what regulatory changes do you foresee in the future? And do you have any advice for your peers on how best to handle them?
Regulators seem to be raising the bar on requirements for model development, validation and governance each year. (Model governance is a framework of policies and procedures that oversees the entire life-cycle of a model, including development and validation.) Input data quality and model inventory are two particular areas of increasing focus in just the last few years. There will be continual pressure to achieve full compliance with SR11-7/OCC 2011-12, a goal that I fear many banks simply will not have the resources to achieve.
Model validation and review has been an essentially stagnant discipline for most of the last 18 years. Since the release of OCC 2000-16 (the predecessor to SR11-7), validators have been performing their model reviews in basically the same way at every firm, with incremental improvements in documentation and testing along the way to be sure, but it is still very much the same manual process it was when I first started performing model validations back in 2003. I believe this process is going to change dramatically over the next five or so years, as the maturation of machine learning and big data will have a major disruptive but constructive effect on how model validation is performed. These technologies will free knowledgeable validators from many of the more tedious and/or time-consuming aspects of the validation process, such as generation of test suites, creation of benchmark and challenger models, and assessment and remediation of input data quality. Model developers, validators and regulators will do well to take note of this pending disruption, for it will offer many opportunities for those who stay abreast of these emerging disciplines.