By Sray Agarwal, Specialist – Data Science & AI, Publicis Sapient
What, for you, are the benefits of attending a conference like the ‘X-Tech Europe Summit’ and what have attendees learnt from your session?
Conferences like the X-Tech Europe Summit do a great job of providing a one-stop platform to acquaint oneself with innovative disruptions in the financial world. My session detailed the social implications of such disruptions caused by the advent of innovations in AI and machine learning. In the session, I covered how to incorporate ethics and accountability while being innovative. I spoke about the importance of fairness in financial services and went through a use case to showcase its implementation.
In your opinion, what are some of the key opportunities AI can provide for the financial services landscape?
How can risk professionals detect bias in AI models and why is it so important to remove bias and ensure fairness in AI?
While machine learning and AI are technologies often dissociated from human thinking, they are always based on algorithms created by humans. And like anything created by humans, these algorithms are prone to incorporating the biases of their creators. These biases can take a number of forms. In this case, we are talking specifically about the unfair treatment of individuals who are part of a protected group, such as a particular race or gender.
For example, AI has been widely used to assess standardized testing in the US, and recent studies suggest that it could yield unfavorable results for certain demographic groups. It also plays a deciding role in hiring decisions, with up to 72% of résumés in the US never being viewed by a human. And famously, Google’s photo recognition AI led to Black people being misidentified as primates.
Non-discrimination is an important goal for any algorithm. But the need for fair, bias-free AI and machine learning goes further for businesses. That’s because biased algorithms can result in AI that makes costly mistakes, reduces customer satisfaction and ultimately damages a brand’s reputation.
FS firms are using AI for a wide range of operations and customer journeys. For instance, it is already being used to decide mortgage, savings and student loan rates; the outcomes of credit card and loan applications; and insurance policy terms. It also affects other outcomes, such as credit card fraud prediction.[1] AI is also used for virtual assistants that help customers improve their financial health.
If the algorithms used in these financial decisions are subject to bias, they could negatively impact the way millions of consumers and businesses borrow, save and manage their money.
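One common way risk professionals can put the bias detection described above into practice is the disparate impact ratio (the "four-fifths rule"). The sketch below is purely illustrative, with made-up loan-approval data and a hypothetical helper function, not a method taken from the interview:

```python
# Hypothetical sketch: detecting bias in loan-approval decisions with the
# disparate impact ratio. All data below is invented for illustration.

def disparate_impact(approved, group):
    """Ratio of approval rates: unprivileged group over privileged group.

    approved: list of 0/1 decisions; group: 1 = privileged, 0 = unprivileged.
    A ratio below 0.8 is commonly treated as evidence of adverse impact.
    """
    priv = [a for a, g in zip(approved, group) if g == 1]
    unpriv = [a for a, g in zip(approved, group) if g == 0]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Toy data: 10 applicants, approval decisions and protected-group membership.
approved = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
group    = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

ratio = disparate_impact(approved, group)
print(f"Disparate impact ratio: {ratio:.2f}")  # well below 0.8 here, flagging potential bias
```

In practice, production toolkits compute this and many related fairness metrics directly from model outputs, but the underlying arithmetic is as simple as shown here.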
In your opinion, what are the current issues surrounding different fairness metrics?
Can you provide our readers with an overview of the ‘accuracy-cost-fairness’ trade-off?
Whenever fairness constraints are introduced in machine learning, the model’s accuracy may drop by a few notches. In such cases, there may also be an impact on cost (cost here refers to the monetary loss due to false positive and false negative predictions), be it positive (when the total cost goes down) or negative (when the total cost goes up).
The trade-off method illustrates which fairness metric to choose and to what degree the fairness constraint (both the constraint type and its strictness) needs to be applied so that the impact on cost and accuracy is minimised. It also covers methods by which, in some cases, the fairness constraints can be tuned to decrease the overall cost at the expense of only a slight change in accuracy.
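The accuracy-cost-fairness trade-off can be made concrete with a small sketch. Everything below is assumed for illustration: the per-error costs, the toy labels and the two sets of predictions (one unconstrained model, one with a fairness constraint applied) are invented, and demographic parity is used as the fairness metric:

```python
# Illustrative sketch of the accuracy-cost-fairness trade-off.
# Cost figures, labels and predictions are made up for demonstration.

FP_COST = 500   # assumed monetary loss per false positive (e.g. bad loan approved)
FN_COST = 200   # assumed monetary loss per false negative (e.g. good applicant rejected)

def evaluate(y_true, y_pred, group):
    """Return (accuracy, total cost, demographic-parity gap) for one model."""
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    fp = sum(p == 1 and t == 0 for t, p in zip(y_true, y_pred))
    fn = sum(p == 0 and t == 1 for t, p in zip(y_true, y_pred))
    cost = fp * FP_COST + fn * FN_COST
    # Demographic-parity gap: difference in positive-prediction rates between groups.
    rate = lambda g: sum(p for p, gr in zip(y_pred, group) if gr == g) / group.count(g)
    dp_gap = abs(rate(1) - rate(0))
    return acc, cost, dp_gap

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group  = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

unconstrained = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]  # more accurate, larger parity gap
constrained   = [1, 0, 0, 1, 0, 1, 0, 1, 1, 0]  # fairness constraint applied

for name, pred in [("unconstrained", unconstrained), ("constrained", constrained)]:
    acc, cost, gap = evaluate(y_true, pred, group)
    print(f"{name}: accuracy={acc:.2f} cost={cost} dp_gap={gap:.2f}")
```

On this toy data the constrained model halves the parity gap while losing some accuracy and incurring a higher total cost; with different error patterns the cost impact can go the other way, which is exactly the trade-off described above.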
How do you think AI will impact financial services in the next 12 months?
It’s hard to say what will happen in the next 12 months given the pace at which AI is being innovated. Nevertheless, I foresee a myriad of functions in FS using AI. As stated earlier, it will be used not only for operational efficiency but also for revenue generation, automation and, above all, for enhancing customer experience. However, to top it all, given the strict regulations and recommendations around AI governance and ethics, I feel that FS firms will start adopting more Safe AI practices across all of their business verticals.
These practices may soon become an intrinsic part of the AI and data science lifecycle and will eventually be a must-do before, during and after any modelling process. Also, with customers becoming more aware of privacy and their rights, FS firms will have to be very careful in how they build and use AI models. Almost all banking services will carry a touch of Safe AI rather than complex AI.