The Center for Financial Professionals interviewed Jon Hill ahead of the 5th Annual Risk Americas Convention, where he will be leading the Model Risk Masterclass and presenting in the Stress Testing and Model Risk stream of the main Convention. Jon gives us some insights on the topics he will be discussing at the Convention.
Jon, please tell our readers a little bit about yourself and your experience within model risk management.
Jon Hill is an Executive Director at Morgan Stanley with over eighteen years of experience in various areas of quantitative finance. He is currently the Global Head of the Market and Operational Risk Model Validation team within the Model Risk Group at Morgan Stanley, comprising eight Ph.D.- and Masters-level quants in New York and Budapest. Jon's team is responsible for the ongoing validation of all Morgan Stanley firmwide market and operational risk models, including Value at Risk (VaR), Stressed VaR, Incremental Risk Charge, Comprehensive Risk Measure and the Advanced Measurement Approach model for operational risk. His team is also responsible for validating the Risk Weighted Asset market and operational risk projection models that are used for the annual CCAR/DFAST stress tests.
You are set to lead the Model Risk Masterclass as part of the Risk Americas 2016 Convention. What regulatory guidelines and requirements will you be discussing at the Masterclass for model risk and model validation?
SR 11-7 (OCC 2011-12) is the current guidance for model risk management and validation practices provided by the US regulators, the FRB and OCC, for all financial institutions doing business in the US. This guideline requires all conforming institutions to develop formal policies and procedures for mitigating all forms of model risk. The complement of models that falls under this mandate is very broad: it applies to models used for pricing and risk estimation as well as to models used to support business decision-making (such as econometric and forecasting models). Generally, the UK regulator, the PRA, and many European regulators follow the guidelines set down by the US regulators.
SR 15-18, issued in 2015, elaborates further on the roles of models, overlays, benchmarks, sensitivity analysis and scenario design used by large, complex firms for capital planning purposes.
What are the key considerations for CCAR/ DFAST validations?
First and foremost, all models used for the 2016 CCAR/DFAST stress test submissions must be validated to perform reliably under the outsized input shocks mandated by the FRB for the 'adverse' and 'severely adverse' scenarios, as well as for the BHC (Bank Holding Company) scenarios. It is important to understand that this applies to all 'feeder' models, that is, models whose output is used as input to other CCAR models. This is a new requirement that was not in place for CCAR stress testing in previous years. An example would be pricing models that compute risk factors (Greeks) used as input to a firm's historical simulation VaR engine. Large financial institutions typically use 1,000 or more pricing models for VaR inputs, and each of these models will need to be validated to perform reliably under severely adverse shocks to its inputs.
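The feeder-model relationship described above can be illustrated with a minimal sketch of a historical-simulation VaR calculation. All names and numbers here are hypothetical, not drawn from any firm's actual engine: a pricing model is assumed to supply a delta (Greek) that maps historical risk-factor shocks to portfolio P&L, from which the empirical loss quantile is read off.

```python
import numpy as np

def historical_var(pnl: np.ndarray, confidence: float = 0.99) -> float:
    """One-day historical-simulation VaR: the loss at the given
    confidence level, read from the empirical P&L distribution.
    Reported as a positive loss number."""
    return -np.percentile(pnl, 100 * (1 - confidence))

# Hypothetical feeder model: a pricing model supplies a delta that
# translates each historical risk-factor shock into a P&L scenario.
rng = np.random.default_rng(0)
factor_shocks = rng.normal(0.0, 0.01, size=250)  # 250 daily factor returns
delta = 1_000_000                                # $ P&L per unit shock
pnl = delta * factor_shocks                      # scenario P&L vector

var_99 = historical_var(pnl, confidence=0.99)
```

The validation point in the passage above is that the delta itself comes from an upstream pricing model; if that feeder model misbehaves under severely adverse shocks, every downstream VaR figure inherits the error, which is why the feeder models must be validated under the same stressed inputs.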
You will also be presenting within the Risk Americas 2016 main Convention on "Reviewing The Origins And Comprehensive History Of Model Risk". Why do you believe it is important to review the historical nature and origins of models?
Model risk did not first appear during the market meltdown of 1987 (caused partly by oversold portfolio insurance); it has been a constant companion in all areas of human endeavour for a very long time, at least 35,000 years. I believe that in order to achieve a holistic appreciation for the omnipresence of model risk and its inevitability throughout human history, it is very useful to study historic examples drawn from different periods of time and from situations seemingly unrelated to model risk in finance. And yet, when examined through the lens of hindsight, these historical episodes offer up very recognizable examples of the types of model risk that are encountered in finance to this day. Examples are a large Swedish warship that capsized upon its launch in 1628 and an obscure naval engagement in 1914 that nearly caused the British to lose to an inferior enemy fleet due to a classic form of model error, an error that would be repeated some 68 years later!
As the 18th century British statesman Edmund Burke so wisely put it, “Those who don’t know their history are destined to repeat it.”
In less than one hour, this presentation will offer an abbreviated narrative history of model risk that traverses 35 thousand years and 35 million miles of time and space, beginning with a Paleolithic-era cave in France and ending on the surface of the planet Mars.
Without giving too much away, what do you believe are the most effective, industry-proven methods for mitigating model risk?
No model can perfectly describe all aspects of what is being modelled; the gap between a model and the reality being modelled is a systemic source of risk that can never be completely eliminated, only mitigated. This is especially true for complex models used in finance. The current best practice in the financial industry for model risk mitigation relies on three Lines of Defence (LODs).
The 1st LOD is the testing and challenge performed by model developers to satisfy themselves that the model is correctly implemented and performing as intended. The 2nd LOD is the additional stress testing and demonstrable effective challenge performed by a qualified quantitative model validation team that is as independent of the development organization as possible (i.e. model validators should not report into either the developers or the users of the models they validate). The 3rd LOD is independent review of the 1st and 2nd LODs by the firm's Internal Audit department, which should also be as independent as possible within the same firm. IA should ensure that the 1st and 2nd LODs have performed their roles in compliance with the firm's formal model risk management policy and procedures as well as SR 11-7 expectations. A 4th LOD is the supervisory role of Federal regulators such as the FRB and OCC, who review the first three LODs during frequent bank examinations.
George Box put it best in 1987: "Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful." The purpose of model validation, therefore, is not to determine whether a model is right, in the way that we consider models in physics that can be derived from first principles to be right, but only whether or not it is useful for the intended application. All useful models in all arenas of human enterprise must share this single common attribute: to be useful, a model must be able to reduce complexity. That is the true purpose of the models we create. If a model does not reduce complexity, it will not be useful.
How much of an impact have recent regulatory guidelines and requirements had on the model risk department in the last few years, and how do you see the role of the model risk professional changing over the next 12 months?
The OCC 2000-16 bulletin, issued in the year 2000, broke new ground by establishing regulatory guidelines for model risk management and validation to be performed by all conforming banks and financial institutions. For the first time the playing field was levelled across the industry. OCC 2000-16 mandated the establishment of model validation teams that were as independent as possible from model developers, and articulated a requirement for documentation "sufficiently detailed to support independent replication of the model by qualified professionals (i.e. quants)". This was followed 11 years later by SR 11-7 (OCC 2011-12), which greatly expanded upon OCC 2000-16 and raised the bar for independent validation even further. These two documents completely changed the way that model risk is recognized, assessed and mitigated at modern financial institutions. In subsequent years regulators have consistently raised the bar for independent model risk management ever higher, particularly with respect to the use of robust models for the Federally mandated CCAR and DFAST stress tests now required of all SIFIs (Systemically Important Financial Institutions). All models used by a business must now be validated before being put into production; all material changes to existing models must also be similarly validated. One of the most far-reaching consequences of this regulatory focus on model validation is to empower model validation teams with life-or-death authority over all models used by a firm, a level of authority they could only dream of 10-15 years ago.
As one department head at a leading investment bank recently put it to his model developers, “If you can’t convince the model validation team that your model is sound and implemented correctly, then you will never be able to convince the regulators.”