By Daniel Hoyt, Head of Model Validation, Euroclear
What, for you, are the benefits of attending a conference like ‘Model Risk Management Europe’, and what have attendees learnt from your session?
It is always helpful to meet with other model risk management professionals to get another perspective on how this relatively new discipline is evolving in different organisations. Although all of our institutions are unique and face their own challenges, I believe that we can always find something to take away. This is a particularly important thing to remember when coming from a slightly exotic segment of the financial industry, namely the world of central securities depositories.
Time will tell what people actually take away from my session. Although the definition of ‘model’ has been a topic for some time now, I think there were still some interesting points to explore. On one hand there are new types of models – or should I say potential models – to consider. Also there is the important question of what happens to things which are not models. I hope that I triggered some fruitful discussions on these topics.
How does the US definition of a model impact the inventory and treatment of models across the institution?
One positive effect of the broad model definition in SR 11-7 was that it expanded management’s attention beyond the pricing and capital models which had traditionally been the focus of model governance. This is to be welcomed, as banks use many very important models which do not fit neatly into these two categories. It is possible that model governance would have naturally expanded to include such models, but SR 11-7 certainly helped the process along.
An interesting side effect of having defined ‘models’ is that you implicitly define ‘non-models’ (‘calculation mechanisms’ in the Bank of England’s grander terminology). Some non-models may be just as important for an institution as its models – the Vasicek formula in the Pillar 1 credit risk capital calculation springs to mind. Clearly such non-models do not need ‘validation’ in the same sense as a model – it’s not clear that the Vasicek formula would pass anyway. But there will be ambiguous rules or cases which need interpreting, and part of the governance should be how well the varying interpretations reflect the reality of how models (or non-models) are being used.
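For readers who want the concrete object behind that example, the Vasicek (ASRF) formula underlying the Basel IRB credit risk capital charge can be sketched in a few lines. This is a simplified illustration only – it omits the maturity adjustment and uses purely illustrative parameter values, not regulatory prescriptions:

```python
from math import sqrt
from statistics import NormalDist


def vasicek_capital(pd_, lgd, rho, q=0.999):
    """Capital per unit of exposure under the Vasicek/ASRF model.

    Conditional default probability at the q-th percentile of the
    single systematic factor, minus expected loss already priced in.
    (Simplified sketch: no maturity adjustment, no EAD scaling.)
    """
    n = NormalDist()
    cond_pd = n.cdf(
        (n.inv_cdf(pd_) + sqrt(rho) * n.inv_cdf(q)) / sqrt(1 - rho)
    )
    return lgd * (cond_pd - pd_)


# Illustrative inputs: 1% PD, 45% LGD, 12% asset correlation.
k = vasicek_capital(0.01, 0.45, 0.12)
```

Note that with zero asset correlation the conditional default probability collapses to the unconditional PD and the capital charge vanishes – the formula’s entire output hinges on that one correlation input, which is exactly the kind of judgement that deserves scrutiny even if the calculation itself is a ‘non-model’.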
What are some of the criteria and justifications that contribute to model classification?
There are many criteria of course, but for me two of the most confusing for model (or non-model?) owners can be usage and complexity. A calculation used in one context may be a model but a non-model in another context. For example, a manager may use a structured approach to help rank order potential projects and so decide where to spend her limited resources. So long as this remains a simple input to the management decision and is not meant to replace the structured measurement of operational risks, there is a reasonable case for saying this is not a model. Should this rank ordering be mistaken for a measurement of risk – completing project #1 means I have reduced my risk more than if project #2 were completed – then this should be treated as a model.
Complexity can also be a confusing issue for some parts of the organisation. For many people in many institutions any complex spreadsheet is a model, and a model is always a complex spreadsheet or something more sophisticated. This confuses computational complexity with conceptual complexity. Suppose a bank does some lending against planes as collateral. It’s not a big business for the bank, so they apply a single haircut on the collateral. This is very simple computationally, but conceptually it makes a big statement: that all planes are created equal when it comes to price stability. Of course this is a model. It may not be a bad one – perhaps the business is small enough that it’s not worth investing in a better haircut calculation – but such a judgement needs to be validated and monitored like any other model.
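A minimal sketch makes the point: the flat haircut is a one-line calculation, yet that one line embeds the modelling assumption that all aircraft collateral shares a single price-volatility profile. The 30% figure is purely illustrative:

```python
def lending_value(market_value, haircut=0.30):
    """Collateral value recognised for lending after a flat haircut.

    Computationally trivial, but conceptually a model: the single
    haircut asserts that every plane's price is equally stable.
    (Illustrative sketch; the 30% haircut is not from any source.)
    """
    return market_value * (1 - haircut)


# A new widebody and a 25-year-old freighter get identical treatment:
v1 = lending_value(100_000_000)
v2 = lending_value(5_000_000)
```

The validation question is not whether the arithmetic is right – it obviously is – but whether the conceptual assumption behind the default parameter holds for the portfolio it is applied to.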
How can institutions implement a uniform definition of a model across jurisdictions?
From one perspective, having a single definition of ‘model’ is not overly complex so long as the definition is broad enough. I doubt that many regulators would complain about a broad (but reasonable) model definition, even if the regulator’s own focus is much narrower. Indeed, the ECB guide to internal models (EGIM) requires banks to have a model risk management framework which sets out the definition of ‘model’ to which the framework applies. So even though much of the ECB’s focus is on Pillar 1 capital models, one reading of EGIM suggests that banks are (rightly) expected to cast their net wider.
A perceived lack of urgency can be a key obstacle to overcome when applying a uniform model definition. Entities in jurisdictions with less rigorous model risk management regulatory requirements are likely to question why they need to comply with an SR 11-7 style definition. While it is true that applying a broader definition may lead to increased costs, there is another important factor for model risk managers to consider. If one takes a narrow view that ‘models’ are just for pricing or regulatory capital, this will not stop model risk from materialising in the models that have been mislabelled as ‘tools’. So the apparent cost savings of less model risk management can be quickly eradicated.
In your opinion what does the future hold for model risk management?
Like much of risk management, and life in general, I expect more use of automation. I think this is already happening in some of the easier cases, such as model monitoring or validation where tests can be automated and batched. But I think we will see more. Some commentators have raised the idea of ‘smart’ models – models which know their own ID and broadcast when and where they are being used. Perhaps we can imagine smarter models which can identify when results of a run are too unreliable to use. Or maybe we could even have PD models which are able to re-calibrate themselves when portfolio characteristics change?
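As an illustration of the kind of monitoring test that is already easy to automate and batch, here is a sketch of the Population Stability Index, a widely used drift statistic for detecting when a model’s input population has shifted. The alert thresholds shown are a common rule of thumb rather than any standard:

```python
import math


def psi(expected, actual):
    """Population Stability Index between two binned distributions.

    Both inputs are lists of bin proportions summing to 1 (and, for
    this simple sketch, assumed strictly positive). PSI is 0 when the
    distributions match and grows as they diverge.
    """
    return sum(
        (a - e) * math.log(a / e) for e, a in zip(expected, actual)
    )


def drift_alert(expected, actual, threshold=0.25):
    """Batchable check: flag a model when its input population drifts.

    The 0.25 threshold (with ~0.1 as a watch level) is a common
    industry rule of thumb, not a regulatory figure.
    """
    return psi(expected, actual) > threshold


# Development-time score distribution vs. current portfolio:
baseline = [0.25, 0.25, 0.25, 0.25]
current = [0.40, 0.30, 0.20, 0.10]
flagged = drift_alert(baseline, current)
```

A ‘smarter’ model in the sense described above might run such a check on itself at each scoring run and refuse to return results – or trigger re-calibration – when the drift statistic breaches its threshold.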
Another change we may see is that model risk management becomes a (proper) subset of a new ‘quantitative’ risk management. As mentioned under #2, non-models deserve just as much attention as models, and sometimes more. But the issue is potentially wider. We can expect more and more decisions to be made based on data, and I don’t just mean ‘big’ data here. Even if there are no ‘models’ in the production or processing of such data, one can always ask whether the data is the right data for making the decision at hand and how it should be interpreted.