By Saqib Jamshed, Head of Model Risk Governance, The Options Clearing Corporation
What, for you, are the benefits of attending a conference like the ‘Operational Risk Management Congress’ and what have attendees learnt from your session?
You get to interact with your peers and learn from their experiences. There is also an opportunity to engage with the regulators in a more informal setting, and you can get a sense of which items they deem most important at present. Vendors also offer information about how their technology solutions can potentially help with some of the issues you face at work.
We looked at the current landscape of AI and ML model development and deployment in the financial industry: what the promise is, what the major hurdles are, and how you can approach adoption of AI and ML techniques using a cost/benefit framework.
How can AI and machine learning technology be used within operational risk?
Operational risk is an area that touches most parts of today’s organizations, and AI and machine learning processes and models are being actively explored in many of those areas. Application of such processes is being marketed as potentially reducing costs, increasing revenues and bringing more efficiency to the enterprise. It is very pertinent to ask whether those benefits will come with increased operational risk. I would lean towards saying no. When you automate or simplify a process, you take away elements of uncertainty and human error, and if these processes accomplish that, they will reduce operational risk from an overall perspective. However, we have to be mindful that such processes may also introduce idiosyncratic operational risk into the equation. That is why organizations are being very careful in deploying them. The headline risk is just too great here.
What potential does AI and machine learning pose for increased fraud risk and detection of patterns?
If you have a library of scenarios that capture fraud risk or other patterns that pose a threat to the organization, you can “train” your AI or ML algorithms to potentially detect fraudulent activity. A major hurdle in this regard is the availability of appropriate data sets. Companies currently don’t have enough internal data to be able to test their models with enough rigor. Technology focused organizations have dealt with the issue by assembling the data themselves over a significant period of time. Ethical and privacy concerns are potential minefields for adopting that course of action though.
There is potential for companies to collaborate and look at establishing industry standards to speed up potential deployment of AI and ML models. It is an opportunity rather than a roadblock.
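The training approach described above can be sketched in miniature. This is an illustrative example only, not a production fraud model: the transaction features, the labeled "scenario library" and the nearest-centroid classifier are all hypothetical assumptions chosen to show how labeled fraud patterns can train a detector.

```python
import math

# Hypothetical labeled scenario library: (amount, hour-of-day) pairs
# tagged as fraudulent (1) or legitimate (0). A real training set would
# need far richer features and far more history, which is exactly the
# data-availability hurdle discussed above.
training_data = [
    ((25.0, 14), 0), ((40.0, 11), 0), ((60.0, 16), 0), ((30.0, 10), 0),
    ((950.0, 3), 1), ((1200.0, 2), 1), ((800.0, 4), 1), ((1100.0, 1), 1),
]

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# "Train" by computing one centroid per class from the labeled scenarios.
legit_centroid = centroid([x for x, y in training_data if y == 0])
fraud_centroid = centroid([x for x, y in training_data if y == 1])

def classify(tx):
    """Flag a transaction as fraud (1) if it lies closer to the fraud centroid."""
    d_legit = math.dist(tx, legit_centroid)
    d_fraud = math.dist(tx, fraud_centroid)
    return 1 if d_fraud < d_legit else 0

print(classify((1000.0, 3)))  # large small-hours transaction -> 1 (flagged)
print(classify((35.0, 12)))   # ordinary daytime transaction  -> 0 (clean)
```

The rigor concern raised above shows up directly here: with only eight labeled scenarios, the decision boundary is dominated by whatever patterns happen to be in the library, which is why pooled or industry-standard data sets would materially improve such models.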
What challenges do we face implementing the use of AI in operational risk?
I have mentioned the paucity of data sets already. Another area of concern is the lack of appropriate skill sets in the first line. Data scientists and AI “specialists” are streaming into organizations, but the first line is failing to provide the resources and a roadmap for how these approaches can add value to the bottom line quickly. Leadership in this context has to come from the first line. In other words, war is too important to be left to the Generals alone…
How do you see cyber security threats being combated in the future?
Cyber security is definitely an area where AI models and techniques can contribute a lot. Intrusions and attacks on IT infrastructure are becoming more sophisticated, and malevolent actors are now actively collaborating in real time. Countering such threats will require a response that also adapts in real time. Once again, this calls for organizations to co-operate and share data and information. Only then will the promise be fulfilled.