Credit risk or social engineering?

By Brandon Davies, Former Head of Market Risk, Barclays
Until very recently no one would have thought of asking the above question. The development of credit risk modeling during my executive career was extraordinary, but we are now embarking on developments that take us beyond a simply better understanding of credit risk and towards developments with profound ethical issues, and consequent reputational risk to those who pursue them.

The first reference to a credit risk model I can find is that developed by a British MP, Archibald Hutcheson, to assess the value of the South Sea Company in 1720.
Hutcheson used financial ratios combining accounting information and market price information to judge the value of an investment (in equity and debt) and the likelihood of those investments defaulting. He examined a range of financial ratios covering the company's debt, equity and cash flow, and made a judgment call on what the financial performance of the company was likely to be in the future.

This form of model is known as a “univariate” model: the resulting “grade” is based on judgment, and the contribution of each piece of the analysis to the resulting Probability of Default (PD), or PD combined with Loss Given Default (LGD), is not disclosed. Indeed, it may not even be known, as it may vary with the individual's experience. Such models are still in widespread use today, and it was this form of model that formed the basis of my education as a credit risk officer in Barclays Bank some 40 years ago. It has also been adapted as the basis for personal credit grading, which focuses on personal income and the ability to repay debt.

As any equity broker's report will evidence, these models are still very commonly used. Many banks still use them to make judgment calls on lending to private companies (typically, so-called Small and Medium Enterprises, or SMEs), in modified form to back personal lending decisions, and even in lending to public companies.

The most important development of these models resulted from the work of Edward I. Altman, who developed multivariate models in which a limited number of defined financial ratios, identified as having statistical explanatory power in differentiating defaulting firms from non-defaulting firms, are combined. Fixed weights are ascribed to each variable, allowing the model to create a ‘score’ for each credit, which is then converted into a Probability of Default (PD).
Financial ratios measuring profitability, leverage and liquidity are the most commonly used variables.

Altman’s work is best appreciated through a study of his Z-Score and Zeta models, which have been restructured to fit many different types of company.
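To make the scoring mechanics concrete, here is a minimal sketch of Altman's original (1968) Z-score for publicly traded manufacturing firms. The weights and zone cut-offs are the published ones; the balance-sheet inputs are invented purely for illustration.

```python
# Altman's original (1968) Z-score for public manufacturing firms.
# Weights and zone cut-offs are Altman's published values; the sample
# balance-sheet figures below are purely illustrative.

def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets           # liquidity
    x2 = retained_earnings / total_assets         # cumulative profitability
    x3 = ebit / total_assets                      # operating profitability
    x4 = market_value_equity / total_liabilities  # leverage
    x5 = sales / total_assets                     # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

z = altman_z(working_capital=25, retained_earnings=40, ebit=30,
             market_value_equity=120, sales=200,
             total_assets=250, total_liabilities=100)

if z > 2.99:
    zone = "safe"
elif z > 1.81:
    zone = "grey"
else:
    zone = "distress"
print(f"Z = {z:.2f} ({zone} zone)")   # Z = 2.26, grey zone
```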

Whilst these models have proved themselves in practice, there is little economic theory to back the choice of any particular financial ratio as a predictor of default.

Economic theory did, however, feature in the next big development in credit modeling, which came as a result of the insights of Black and Scholes (1973) and Merton (1974) into options pricing.

In Merton-based credit models, equity in a leveraged firm is modeled as a call option on the firm’s assets. If at expiration of the option (usually modeled over a one-year period) the market value of the firm’s assets exceeds the value of its debt, the firm’s shareholders will exercise the option to “repurchase” the company’s assets by repaying the debt. If the opposite is true, the shareholders will default.
The Distance to Default (DD) is defined as the number of standard deviations between current asset values and the debt repayment amount: the higher the Distance to Default, the lower the Probability of Default (PD).

To convert a Distance to Default (DD) into a Probability of Default (PD), Merton assumed asset values were lognormally distributed, whereas in their version of the model Kealhofer, McQuown and Vasicek (see Moody’s KMV) estimated an empirical PD from historic default experience. These models also gave insight into the appropriate leverage for a given company or industry: a firm’s leverage has the effect of magnifying its underlying asset volatility, so firms with low asset value volatility can support higher leverage than firms with high asset value volatility.
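A minimal sketch of the calculation follows, assuming the asset value, drift and volatility are already known; in practice they are unobservable and must be backed out from the firm's equity value and equity volatility, and the KMV variant maps DD to an empirical PD rather than using the normal distribution as below. All input figures are invented for illustration.

```python
# Merton distance to default and PD under the lognormal-asset assumption.
from math import log, sqrt
from statistics import NormalDist

def merton_dd_pd(V, D, mu, sigma, T=1.0):
    """DD and PD over a horizon of T years.

    V: current market value of assets; D: debt due at T;
    mu: asset drift; sigma: asset volatility (both annualised).
    """
    dd = (log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    pd = NormalDist().cdf(-dd)   # Merton's lognormal mapping of DD to PD
    return dd, pd

# Illustrative figures: assets 120, debt due in one year 100,
# asset drift 5%, asset volatility 20%.
dd, pd = merton_dd_pd(V=120, D=100, mu=0.05, sigma=0.20)
print(f"DD = {dd:.2f} standard deviations, PD = {pd:.2%}")  # DD ~1.06, PD ~14%
```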

The development of deep and liquid bond markets led Robert A. Jarrow to develop a model to discriminate defaulters from non-defaulters based on default probabilities generated from information in the bond market.

Forms of this model are used extensively in the pricing and trading of bonds issued by “risky” entities such as corporates, and in pricing debt securitisations.
It uses the differential pricing of so-called risk-free (sovereign) bonds and so-called risky (corporate) bonds to price the risky bond.
The model is now also used extensively to price Credit Default Swaps.

Jarrow’s approach has grown into a class of models known as ‘Reduced Form’ or Intensity-Based Models. These models view default as a sudden, unexpected event (consistent with empirical observation) that occurs randomly, with a probability determined by an intensity or “hazard” function of latent state variables.
The models use observable risky debt prices to ascertain the stochastic jump process governing default.
The credit spread can be viewed as a measure of the expected cost of default (CS = PD x LGD).
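As a back-of-envelope illustration of that identity, assuming a constant default intensity and invented yield and LGD figures, the observed spread can be inverted to give a market-implied (risk-neutral) default probability:

```python
# Reduced-form back-of-envelope: under a constant hazard rate and the
# identity CS ~ PD x LGD, the spread implies a default intensity.
# All figures below are made up for illustration, not market data.
from math import exp

risk_free_yield = 0.030          # sovereign bond yield
risky_yield     = 0.048          # corporate bond yield
lgd             = 0.60           # loss given default (1 - recovery rate)

credit_spread = risky_yield - risk_free_yield   # 180 bp
hazard = credit_spread / lgd                    # implied intensity, ~3% p.a.

for t in (1, 5, 10):
    survival = exp(-hazard * t)                 # P(no default by year t)
    print(f"{t:>2}y: implied PD = {1 - survival:.2%}")
```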

As there is no theoretical guidance on how to characterise the default intensity process, these models are said to be a-theoretic: they are grounded less in the economics driving default than in mathematical tractability.

The development of these different models pretty much tracks the development of my own career over 40 years, from loans officer, to structuring large corporate loans, to trading bonds and securitizing the bank’s assets as its treasurer. Now a non-executive board director in FinTech and retail finance, I believe I may be facing the biggest modeling challenge of my career, as we engage with “big data” and behaviour modeling in the world of surveillance capitalism*.

In today’s world, data brokers mine personal location and transaction data to create a picture of a person’s life. The data is purchased from a variety of sources using cross-device tracking from smartphones, tablets, televisions and so on, and is then sold to a range of industries, chiefly banks, insurers, retailers, telecoms, media companies and governments. By connecting people’s behaviour in the real world with what they do online, these brokers claim to offer a 360-degree view of the customer.

Brokers include Acxiom, Oracle (which operates the largest of the data marketplaces) and Experian. Oracle claims to sell data on 300m people worldwide, covering 80% of the US Internet population, with some 30,000 data attributes on each individual.

Acxiom and Experian, which are both also credit reference agencies, use demographic, sociographic, lifestyle, cultural, mortgage and property data to categorise individuals. By operating across different data sources they can connect them to gain a more complete view of people. The anonymity of the data is often claimed, but is questionable given the widespread use of location data.

It is often stated that, because of the capability of companies such as Google and Facebook to turn data into a business model, surveillance capitalism has become the default model for capital accumulation in Silicon Valley. This is not about the nature of digital technology but about a new form of capitalism that has found a way to use the technology. It works by providing free services that billions of people cheerfully use, enabling the providers of those services to monitor the behaviour of those users in astonishing detail, often without their explicit consent. These behaviours are fed into advanced manufacturing processes known as ‘machine intelligence’ and fabricated into predictions of individuals’ buying behaviour, now and in the future. Finally, these predictions are inputs to trading in a new kind of marketplace, characterised by highly tailored propositions and, increasingly, by the assignment of the purchase itself to the surveillance capitalist’s algorithms.

We are all caught up in this new capitalism, either building new businesses or defending old ones, but my concern is that it has profound consequences for democracy, because asymmetry of knowledge translates into asymmetry of power. The problem with this new capitalism is that it has grown up unregulated for 20 years, at least in part because of the interests of states, and particularly of the USA. The surveillance business model is held to have originated with Google in around 2001 (the year of 9/11), but as a result of 9/11 the USA was less concerned with regulating companies than with using new information technology to improve security and its own surveillance capabilities.

The call for regulation is now gaining momentum, and the GDPR will certainly not be the last word on the subject. I think this is long overdue, because surveillance capitalism can easily morph into the surveillance state, and by that I do not just mean in China. NatWest, it is reported in the FT of 13/14 April 2019, has worked with the UK government-backed Behavioural Insights Team to “nudge” people to save more money. A laudable ambition, or a business opportunity? What products, what controls, what advice? All seem to me to be relevant questions before we allow surveillance capitalists to work too closely with surveillance states.

Why am I concerned? Because a business model that has had free rein for two decades will have made plenty of missteps along the way. We have seen the effects on the banking industry of “light touch” regulation through the scandals it produced. “Here we go again” is not somewhere I want our industry to be, and I certainly do not want any of the companies whose boards I sit on to be there.


* The Age of Surveillance Capitalism by Shoshana Zuboff (professor emerita at Harvard Business School).
