New developments and advances in HPC (high performance computing) for credit risk and exposure analytics

By Assad Bouayoun, Director, XVA Senior Quant, Scotiabank.

Assad, can you please tell the Risk Insights readers a little bit about yourself, your experiences and what your current professional focus is?

I am a senior XVA quantitative analyst at Scotiabank, with more than 15 years' experience in leading banks. I have designed industry-standard hedging and pricing systems, first as a single-asset quant (equity derivatives at Commerzbank, credit derivatives at Credit Agricole), then as an XVA quant, at Lloyds in model validation and at RBS in model development. I have extensive experience in developing enterprise-wide analytics to improve the financial management of derivative portfolios, in particular large-scale hybrid Monte Carlo and exposure computation. After developing a prototype XVA platform that integrates advanced technologies to enable fast and accurate XVA and sensitivity computation, I am now participating in its productionisation.

At Risk EMEA 2018 you will be discussing ‘New developments and advances in HPC (High Performance Computing) for Credit Risk and Exposure Analytics’ – why is this a key talking point in the industry right now?

I will give concrete examples of how to integrate innovations in quantitative research (AAD) and in technology (GPU, QPU and cloud) to meet the challenges of the new business (resource management) and regulatory (FRTB) environment in the financial industry.

Banks are facing several major transformations that are forcing a rethink of their risk systems:

  • The new regulations (in particular FRTB) enforce an alignment between front-office, XVA, market risk and credit risk management models. This alignment requires the integration of the corresponding analytics libraries and their reorganisation into a more modular framework to cater for different users, data and methodologies.
  • The corresponding sensitivity computations for different stress scenarios, together with the back-testing requirements, have increased the compute power and data capacity needed by at least two orders of magnitude.
  • Computations must be carried out in a secure environment and results must be delivered under hard-deadline constraints.
  • Computing regulatory capital without trying to actually manage risk, for example through stress testing, would be a lost opportunity.

Technology and quantitative research and development have progressed quickly, offering a wide range of new opportunities that are now reaching production-level quality. In my presentation, I will show that it is possible to improve speed through massive parallelisation on GPUs and through AAD for sensitivity computation. I demonstrated in a previous presentation that computation time can be reduced by two to three orders of magnitude on an average portfolio (1,000 trades over a 30-year span in 10 currencies). QPUs can help with optimisation, and the scalability issue can be addressed by dynamic allocation on the cloud.
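To make the GPU-plus-AAD combination concrete, here is a minimal, purely illustrative sketch written with JAX as a stand-in for a production GPU/AAD stack; the product, model and parameters are my own assumptions, not the platform discussed in the talk. All Monte Carlo paths are simulated in one vectorised batch, and the exposure sensitivities come from a single reverse-mode (adjoint) sweep instead of bump-and-revalue.

```python
# Illustrative sketch only: vectorised Monte Carlo exposure with adjoint
# (reverse-mode) sensitivities, using JAX. The payoff, dynamics and figures
# are assumptions chosen for brevity, not a production XVA model.
import jax
import jax.numpy as jnp

N_PATHS, N_STEPS, MATURITY = 100_000, 120, 10.0

def expected_positive_exposure(spot, vol, rate=0.02, strike=1.20):
    """Average positive mark-to-market of a long FX forward under GBM."""
    dt = MATURITY / N_STEPS
    z = jax.random.normal(jax.random.PRNGKey(0), (N_PATHS, N_STEPS))
    # One vectorised batch of paths: the kind of work a GPU parallelises well.
    increments = (rate - 0.5 * vol**2) * dt + vol * jnp.sqrt(dt) * z
    paths = spot * jnp.exp(jnp.cumsum(increments, axis=1))
    return jnp.maximum(paths - strike, 0.0).mean()

# Reverse-mode (adjoint) differentiation: every requested sensitivity comes
# from one backward sweep, instead of re-running the simulation per bump.
epe_and_greeks = jax.value_and_grad(expected_positive_exposure, argnums=(0, 1))
epe, (d_spot, d_vol) = epe_and_greeks(1.25, 0.15)
print(f"EPE={epe:.4f}  dEPE/dSpot={d_spot:.4f}  dEPE/dVol={d_vol:.4f}")
```

The same code runs unchanged on a GPU backend, which is the point: the parallelism and the adjoint sweep are properties of the framework rather than of any particular payoff.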

Scalability can be addressed using elastic grids in the cloud, and agreements between regulators and cloud providers have made their use possible for critical applications. I also see a convergence toward more robust software. Some parts will have to be bought from specialist companies (numerical algorithms, AAD tools, data transformation); other parts will have to be “proven mathematically” or written under extremely tight control.

It would be more appropriate to break the pricing and risk computation down into different components to cater for different workflows, methodologies and users. These components would need to be orchestrated by specialised software that can manage hybrid hardware, its utilisation and the data transfers. The key talking point will be to devise the right combination of innovations that fits the needs of financial firms.

Why is there a need for increasing computing power to meet regulatory obligations and risk surveillance?

Regulators ask banks to compute PV, XVA and all their sensitivities every day in order to deduce the capital requirements for a base scenario and a series of stress scenarios. A responsible financial institution would also manage its capital requirements by computing their sensitivities to different variables, for different combinations of methodologies and for different business units in different regulatory zones. Back-testing conditions also require precise sensitivities. It is easy to imagine the explosion of computing power required.
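As a purely illustrative back-of-the-envelope, with figures that are my own assumptions rather than regulatory or bank numbers, the sketch below shows why the workload explodes under bump-and-revalue and why an adjoint approach changes the picture.

```python
# Back-of-the-envelope sketch (assumed figures) of full-revaluation workload
# once stress scenarios and sensitivities are layered on the base run.
scenarios = 1 + 20          # base + stress scenarios (assumption)
risk_factors = 200          # market risk factors to bump (assumption)
paths, steps = 10_000, 360  # Monte Carlo paths and time steps (assumption)

# Bump-and-revalue: one extra full simulation per risk factor and scenario.
bump_work = scenarios * (1 + risk_factors) * paths * steps

# Adjoint (AAD): roughly one forward plus one backward sweep per scenario,
# independent of the number of sensitivities (constant overhead taken as ~4x).
aad_work = scenarios * 4 * paths * steps

print(f"bump-and-revalue work units: {bump_work:,}")
print(f"adjoint (AAD) work units:    {aad_work:,}")
print(f"ratio: ~{bump_work / aad_work:.0f}x")
```

Even with these modest assumptions, the daily bump-and-revalue requirement is roughly fifty times larger than sensitivities obtained by adjoint methods, before adding back-testing runs, business-unit splits and regulatory zones.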

Is there a decent alternative? Some of these calculations are clearly mandatory, and the opportunity cost of not using advanced models, documented by various consulting firms, often dwarfs the additional computation cost.

Why might cloud computing be a top talking point within the next 6 months?

Cloud computing is more than just a way to subcontract the management of hardware. It forces the standardisation of software and enables interconnection through diverse orchestration tools. For example, nothing will one day prevent a company from saving its trade portfolio to a share on the cloud, paying a cloud data service for market data, using a cloud-based analytics library with its static data and market conventions to compute its prices and exposures, passing the results to a cloud-based database, and letting users access them via a cloud-based visualisation service.
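The sketch below is a hypothetical illustration of that kind of orchestrated workflow; the service classes are in-memory stand-ins invented for this example, not real cloud APIs, and in practice each step would call a vendor- or firm-specific service.

```python
# Hypothetical orchestration sketch of the workflow described above.
# Every class here is a toy stand-in for a real cloud service.

class CloudShare:
    """Stand-in for cloud object storage holding the trade portfolio."""
    def load(self, uri):
        return [{"trade_id": "T1", "notional": 1_000_000}]

class MarketDataService:
    """Stand-in for a paid cloud market-data feed."""
    def fetch(self, asof):
        return {"EURUSD": 1.08, "asof": asof}

class AnalyticsService:
    """Stand-in for a cloud-hosted pricing and exposure library."""
    def compute(self, portfolio, market_data, measures):
        return {measure: 0.0 for measure in measures}  # placeholder results

def run_daily_exposure_job():
    portfolio = CloudShare().load("cloud://bank/portfolios/eod")
    market_data = MarketDataService().fetch(asof="today")
    results = AnalyticsService().compute(portfolio, market_data,
                                         measures=["PV", "EPE", "CVA"])
    # In practice the results would go to a cloud database and a
    # cloud-based visualisation service rather than standard output.
    print(results)

run_daily_exposure_job()
```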

It is difficult to predict which services will be developed internally and which will be provided by third parties. But it is reasonable to assert that the cloud is becoming an ecosystem in which the cloud provider ensures the availability of computing resources, the security of data and reliability. This could extend to insuring against disruption, helping to monetise data, and enforcing regulations and security policies.

Finally, what challenges do you foresee in the future? And have you got any advice for your peers on how to best handle them?

I see three main challenges:

  • The main challenge is to select carefully a reasonable path through all these changes and to manage all the moving parts without sinking into chaos. Complex organisations tend to let concurrent and incompatible projects start, and these issues always become apparent too late. The pressure to justify the initial cost can also create real IT monsters.
  • I also feel that the bottleneck is always linked to a shortage of relevant skills in strategic roles when the change occurs. Setting up a multi-disciplinary team with domain expertise as well as strong quantitative finance and computer science skills to accompany the changes is a good start.
  • The evolution of risk systems has been difficult. Developing ad hoc systems or buying off-the-shelf software has proved costly and inefficient, as integration and co-evolution are often overlooked. Building them as service components on the cloud, able to operate with other services under independent orchestration, should give a great deal of flexibility and allow for a safer evolution.

 

Assad will also be speaking at the 7th Annual Risk EMEA Summit. Will you be joining us?…