Managing models in a recessionary environment and the proactive identification of changes

Any views expressed are solely those of the author and do not necessarily represent the opinions of Freddie Mac or its Board of Directors.

George Soulellis, Chief Enterprise Model Risk Officer, Freddie Mac

How can financial institutions effectively spot performance deterioration or degradation?

As models are mere simulations of the true data-generating process, model performance degradation is almost universally not a matter of if but of when. Performance degradation typically arises from one of the following factors:

    a)  A model that was ‘overfit’, or tuned too closely to the data it was built on, and now, when subjected to unseen data, assumes relationships that were predicated on random rather than real effects; or
    b)  A significant change in the macroeconomic environment that has yielded new relationships in customer, consumer, or institutional behaviour that the model’s parameterization cannot, de facto, capture.

A high-frequency model monitoring plan with rigorous diagnostics and measures is key to spotting model degradation in a timely manner. Diagnostic measures should be appropriate to the model’s objective – for example, a predictive model should be monitored on measures of accuracy and rank-ordering power, with accompanying thresholds reflective of the institution’s tolerance for risk. Another important theme to consider is persistence – degradation should persist across several monitoring periods before action is taken, so that resources are not invested in remedying what was essentially a false alarm, or false positive.
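As a minimal sketch of such a persistence rule – assuming monthly monitoring of a binary-outcome predictive model, with AUC as the rank-ordering diagnostic; the threshold and window below are illustrative rather than prescriptive:

```python
# Flag degradation only when a diagnostic breaches its threshold
# persistently, not on a one-off basis. Threshold and window are
# illustrative assumptions, not an institutional standard.
from sklearn.metrics import roc_auc_score

AUC_FLOOR = 0.70          # reflects the institution's tolerance for risk
PERSISTENCE_WINDOW = 3    # consecutive breaches required before acting

def monthly_auc(y_true, y_score):
    """Rank-ordering diagnostic for one monitoring period."""
    return roc_auc_score(y_true, y_score)

def degradation_flag(auc_history):
    """True only if the last PERSISTENCE_WINDOW periods all breach the
    floor, filtering out one-off false positives."""
    recent = auc_history[-PERSISTENCE_WINDOW:]
    return len(recent) == PERSISTENCE_WINDOW and all(a < AUC_FLOOR for a in recent)

# Usage: append each period's AUC; act only on persistent breaches.
history = [0.74, 0.69, 0.68, 0.67]
print(degradation_flag(history))  # True: three consecutive breaches
```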

 

How has an evolving environment caused a reduction in model development timelines?

The frequency of changes and adjustments to models and their projections goes hand in hand with an evolving and changing macro environment. Model change timelines necessarily need to shorten in order to capture quickly changing dynamics accurately.

This holds primarily for high-frequency recalibration efforts rather than full-scale redevelopments/rebuilds – although ways to accelerate full model rebuilds without compromising strong governance and controls have been top of mind for many financial institutions in recent times. Model development timelines can often be reduced by applying governance review procedures in parallel with the development steps; however, this needs to be conducted in a way that preserves the independent challenge of the review function.

 

Why are machine learning models sensitive to change?

When we speak of sensitivity to change, we mean that the ML algorithm’s ability to predict accurately is susceptible to changes in the environment – as opposed to how sensitive the output is to changes in the inputs – a nuanced difference. In this sense, machine learning models are sensitive to change because of their tendency to overfit to the data they were built on. That tendency is a by-product of their complex, highly parameterized, non-parametric constructs. Whilst many ML algorithms are adept at identifying non-linearities in data, they often do so at a very granular level and pick up random effects. Regularization is thus key with ML models – it is often prudent to go with a simpler construct and forsake a degree of accuracy on the development dataset to produce a more reliable model in the long run. This is commonly referred to as the bias–variance trade-off.
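The trade-off can be made concrete with a small experiment. The sketch below – on synthetic data, with illustrative hyperparameters – compares a fully grown decision tree against a depth-limited (regularized) one; the unconstrained tree fits the training data almost perfectly but typically generalizes worse:

```python
# An unconstrained tree overfits (near-perfect train AUC, weaker test AUC);
# limiting depth is a simple regularization that forsakes training-set
# accuracy for out-of-sample reliability. Data and settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for depth in (None, 3):  # None = fully grown (overfit); 3 = regularized
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    train_auc = roc_auc_score(y_tr, tree.predict_proba(X_tr)[:, 1])
    test_auc = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])
    print(f"max_depth={depth}: train AUC={train_auc:.2f}, test AUC={test_auc:.2f}")
```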

 

How can financial institutions use credit modelling effectively in a downturn?

At many financial institutions, credit decisions are largely supported by models and predictive algorithms. Downturn conditions, however, often present new challenges to credit models. Certain macro conditions may be unprecedented, which presents a problem for the model’s ability to forecast. It is in these times that human judgement must play a role in order to better manage model risk and uncertainty. There are steps that can be taken here. First and foremost, the range of the data underpinning the credit model must be made known, to assess whether the model is being asked to extrapolate in a significant way; if it is, judgement must be applied to either temper or amplify the model’s projections. Secondly, margins of conservatism may be applied to promote the necessary safety and soundness considerations.
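As a hedged illustration of those two steps – the `model_pd` callable, the feature ranges, and the margin of 1.25 below are all assumptions for the sketch, not a prescribed methodology:

```python
# Flag when scoring inputs fall outside the range of the development data
# (significant extrapolation), and apply an illustrative margin of
# conservatism to the model's projection when they do.
import numpy as np

def fit_ranges(X_dev):
    """Record the per-feature min/max observed in the development data."""
    return X_dev.min(axis=0), X_dev.max(axis=0)

def extrapolating(x, lo, hi):
    """True if any scoring input lies outside the development range."""
    return bool(np.any(x < lo) or np.any(x > hi))

def adjusted_pd(x, model_pd, lo, hi, margin=1.25):
    """Temper reliance on the raw projection under extrapolation by
    applying a conservative multiplicative overlay (capped at 1.0).
    model_pd is assumed to return a probability of default."""
    pd = model_pd(x)
    return min(pd * margin, 1.0) if extrapolating(x, lo, hi) else pd
```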

 

How can financial institutions identify when a model should be decommissioned?

Decommissioning a model occurs when a) it is no longer able to achieve its objective – be it to predict or to simulate, for example – or b) there is an opportunity cost in not employing a new model that can take advantage of certain opportunities (more recent and relevant data, for example).

Decommissioning, as with all stages of the model lifecycle, requires resources and may introduce other risks. For example, a model planned for retirement may be significantly intertwined with a company’s business processes, requiring care when disentangling its code and replacing it with that of its successor.

For a model to be planned for decommissioning, its performance thresholds will need to have been breached consistently, with no sustainable improvement exhibited via recalibration – or a sizeable business opportunity will need to have been identified.
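A simple, illustrative encoding of that trigger – the six-period window and the boolean inputs below are assumptions for the sketch, not a regulatory standard:

```python
# Recommend retirement only when threshold breaches persist across the
# window AND recalibration has been attempted without sustainable
# improvement. Window length and criteria are illustrative.

def recommend_decommission(breaches, recalibrated, improved_after_recal,
                           window=6):
    """breaches: one bool per monitoring period (threshold breached?).
    recalibrated / improved_after_recal: outcome of remediation attempts."""
    persistent = len(breaches) >= window and all(breaches[-window:])
    return persistent and recalibrated and not improved_after_recal
```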

 

George will be speaking at our upcoming Advanced Model Risk Congress
