Standard Chartered's Kumar: “We are eliminating unjust bias for more resilient AI models”

By Neeti Aggarwal

Financial institutions are exploring new initiatives for greater transparency in AI, owing to concerns about potential bias and unfairness in its decision outcomes.

  • Complex machine learning-based algorithms can act like a ‘black box’, resulting in biased outcomes.
  • Several banks are exploring new techniques and partnerships to improve AI transparency.
  • Standard Chartered recently collaborated with a US-based start-up to improve its credit risk decisioning AI models and mitigate unfair bias.

Artificial intelligence (AI) and machine learning (ML) are gaining rapid adoption in financial services, especially in predictive analytics, transforming customer engagement and improving operational efficiency. By 2022, banks are expected to spend about $12.3 billion on AI and cognitive technologies, according to The Asian Banker estimates.

However, with this increase in use there is also growing concern about the lack of transparency in the logic behind decision-making models that act like a ‘black box’.

Bias in AI creates a trust gap

Human biases can creep in unintentionally while models are built, and flaws in training data or algorithm logic can result in inaccurate or biased outcomes. These risks create a trust gap.

Sam Kumar, global head of analytics and data management at Standard Chartered (StanChart), commented, “Because AI and ML offer raw horsepower and access to more sophisticated algorithms than before, increasingly the industry is using wider data sets to build predictive capabilities. This combination leads to increasing opaqueness around the criteria that is driving the outcomes, which adds to the challenge of explaining the outcome.”

In a regulated industry like financial services, the implications of inaccurate predictions and biases can be huge. The Monetary Authority of Singapore (MAS) is now developing a framework around the responsible use of AI in credit assessment, while the EU’s General Data Protection Regulation (GDPR) stipulates that individuals have the right to meaningful information about the logic of automated decisions concerning them.

Initiatives by banks

Several financial institutions are now exploring initiatives to address this. For instance, Capital One has undertaken research, partnerships and multidisciplinary internal working groups to manage the risks and biases in AI. Bank of America announced that it is working with Harvard University towards greater transparency and responsible use of AI.

HSBC has developed its principles for the ethical use of AI and a governance framework around it, and is also working with technology partners to monitor bias in AI algorithms. UOB in Singapore is working with AI company Element AI to develop metrics for banks to assess how fair their models are in credit scoring evaluation, as part of the Veritas consortium led by MAS. Several technology companies, including Google, Temenos, IBM and Fiddler Labs, are focusing on explainable AI.

Recently, Standard Chartered announced that it has partnered with Truera, a US-based start-up, to eliminate unjust biases in its decision-making process and develop more resilient and validated models. The bank started experimenting with AI four years ago, primarily in client engagement and recommendations, and has implemented AI across 14 of its key retail banking markets.

In the last 18 months, it has progressed to exploring the use of AI and ML in credit decisioning.

Kumar shared that the bank does not use data points such as gender or marital status when building its predictive models. “But once the models are developed the inherent challenge is to ensure that unjust bias is not introduced through other variables that can act as proxies for the very data points that we sought to remove,” he added.
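One common way to surface such proxy variables, shown here as an illustration rather than the bank’s disclosed method, is to test how well the withheld attribute can be predicted from the features the model does use. The sketch below uses Python with scikit-learn on synthetic data; every column name and figure is a hypothetical assumption.

```python
# Proxy check: can the withheld attribute be recovered from the features the
# model actually uses? High predictability means some features act as proxies
# for the very data point that was removed. Synthetic data; names hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5_000
gender = rng.integers(0, 2, size=n)  # withheld attribute, excluded from the model
features = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, size=n) + 5_000 * gender,  # deliberate proxy
    "tenure_months": rng.integers(1, 240, size=n),
})

# Probe model: AUC near 0.5 means no proxy signal; well above 0.5 is a red flag.
probe = GradientBoostingClassifier()
auc = cross_val_score(probe, features, gender, scoring="roc_auc", cv=5)
print(f"proxy AUC: {auc.mean():.2f}")
```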

“We run the algorithm and data used through a secondary platform that gives us transparency into the identification of the potential build-ups of unjust bias. Once we have the model and have scored the clients, we then map that correlation back against gender, macro factors, other data items (which we do not use in the model) to make sure that unjust bias does not indirectly creep in,” explained Kumar.
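The mapping Kumar describes can be approximated post hoc: score the portfolio, then compare score distributions and correlations across attributes that were deliberately withheld from the model. Below is a minimal, self-contained sketch on synthetic data; the data frame, column names and the point-biserial test are illustrative assumptions, not Truera’s or StanChart’s actual tooling.

```python
# Post-hoc check in the spirit Kumar describes: once clients are scored, map
# the scores back against attributes withheld from the model (gender, macro
# factors, etc.) to catch bias that crept in indirectly.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 5_000
gender = rng.choice(["female", "male"], size=n)
is_female = (gender == "female").astype(int)
# Stand-in for model-produced credit scores, with a deliberate drift injected
# so the check has something to find.
score = np.clip(rng.uniform(0, 1, size=n) + 0.05 * is_female, 0, 1)
applications = pd.DataFrame({"gender": gender, "score": score})

# Compare score distributions across the withheld attribute.
for group, s in applications.groupby("gender")["score"]:
    print(f"{group}: mean score {s.mean():.3f}")

# Point-biserial correlation between the binary attribute and the scores;
# a materially non-zero |r| flags indirect bias worth investigating.
r, p = stats.pointbiserialr(is_female, applications["score"])
print(f"correlation r={r:.3f}, p={p:.3g}")
```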

The pilot is so far limited to credit risk decisioning models, but StanChart plans to extend it to its marketing and other machine learning models over the next 12 months.

As AI becomes mainstream, especially in prescriptive decision making, it will become increasingly important for institutions to proactively assess and validate their decision-making models and to devise governance frameworks around them to build greater trust.


