The Asian Banker | Sunday, 22 December 2024

CBA shares AI model to combat tech abuse


Commonwealth Bank of Australia (CBA) is taking steps to help reduce technology-facilitated abuse internationally by making its artificial intelligence (AI) and machine learning (ML) techniques available for free to any bank in the world.

The AI model helps identify digital payment transactions that include harassing, threatening or offensive messages, a practice referred to as technology-facilitated abuse.

Angela MacMillan, group customer advocate at CBA, said: “Financial abuse occurs when money is used to gain control over a partner and is one of the most powerful ways to keep someone trapped in an abusive relationship. Sadly, we see that perpetrators use all kinds of ways to circumvent existing measures, such as using the messaging field to send offensive or threatening messages when making a digital transaction.

“We developed this technology because we noticed that some customers were using transaction descriptions as a way to harass or threaten others. By using this model we can scan unusual transactional activity and identify patterns and instances deemed to be high risk so that the bank can investigate these and take action.”

The model detects around 1,500 high-risk cases annually.
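
CBA has not published the model’s internals in this announcement, but the general approach it describes, scanning payment descriptions and flagging high-risk patterns for human investigation, can be pictured with a toy scorer in Python. Everything below (the placeholder term pattern, the low-value-payment heuristic and the threshold) is an illustrative assumption, not the bank’s actual method:

```python
import re
from dataclasses import dataclass

# Placeholder pattern standing in for a real abusive-language lexicon or classifier.
ABUSIVE_PATTERN = re.compile(r"\b(placeholder_threat|placeholder_slur)\b", re.IGNORECASE)
# Hypothetical heuristic: very low-value payments with text often exist only to carry a message.
LOW_VALUE_CENTS = 100

@dataclass
class Transaction:
    amount_cents: int
    description: str

def risk_score(history: list[Transaction]) -> float:
    """Score a sender-to-recipient payment history; higher means riskier."""
    score = 0.0
    for t in history:
        if ABUSIVE_PATTERN.search(t.description):
            score += 1.0  # explicit abusive language in the description
        if t.amount_cents <= LOW_VALUE_CENTS and t.description.strip():
            score += 0.2  # low-value payment apparently used as a messaging channel
    return score

def flag_for_review(history: list[Transaction], threshold: float = 1.0) -> bool:
    """Return True if the history should be queued for manual investigation."""
    return risk_score(history) >= threshold

history = [Transaction(1, "placeholder_threat you"), Transaction(1, "pick up")]
print(flag_for_review(history))  # True: abusive term plus message-only micro-payments
```

In practice the flagged cases would feed a human review queue rather than trigger automatic action, matching the bank’s description of investigating high-risk instances.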

“By sharing our source code and model with any bank in the world, it will help financial institutions have better visibility of technology-facilitated abuse. This can help to inform action the bank may choose to take to help protect customers,” said MacMillan.

The use of AI demonstrates how innovative technology can create a safer banking experience for all customers, especially those in vulnerable circumstances.

The model and source code are available this week on GitHub, the world’s largest platform for hosting source code. The model was built by CBA, and the source code was developed in partnership with the bank’s exclusive partner and global AI leader, H2O.ai.
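
Because the artefact is distributed through H2O.ai, a bank evaluating it would plausibly load it with H2O’s open-source Python client. The model path and feature column below are hypothetical; only the h2o calls themselves (h2o.init, h2o.load_model, H2OFrame, predict) are the library’s documented API:

```python
import h2o

h2o.init()  # start or connect to a local H2O cluster

# Hypothetical path: substitute the model binary downloaded from the
# GitHub repository referenced in the announcement.
model = h2o.load_model("models/cba_abuse_detection_model")

# Hypothetical feature column; the shared model's schema defines the real one.
frame = h2o.H2OFrame({"description": ["example transaction message"]})
print(model.predict(frame))  # per-row classification of the description
```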

The AI model complements the bank’s automatic block filter, introduced across its digital banking channels in 2020 to stop transaction descriptions that include threatening, harassing or abusive language.

That filter has blocked nearly one million transactions since its introduction. The AI and ML model goes further, detecting more insidious forms of abuse in transactions; the bank can then manually review flagged instances and take action. The model has been fully operational since 2021, detecting more than 1,500 cases per year.
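
The block filter itself can be pictured as a simple word-list check applied before a payment is processed. A minimal sketch, assuming a maintained blocklist (the bank’s production filter and its word list are not public):

```python
BLOCKLIST = {"placeholder_slur", "placeholder_threat"}  # stand-ins for the real list

def is_blocked(description: str) -> bool:
    """Reject a payment whose description contains a blocklisted word."""
    words = {w.strip(".,!?").lower() for w in description.split()}
    return not words.isdisjoint(BLOCKLIST)

def submit_payment(description: str) -> str:
    # In production the transaction is rejected before it reaches the recipient.
    return "BLOCKED" if is_blocked(description) else "ACCEPTED"

print(submit_payment("rent for March"))            # ACCEPTED
print(submit_payment("placeholder_threat again"))  # BLOCKED
```

A filter like this catches explicit terms outright, which is why the AI model is needed as a second layer for abuse that evades a fixed word list.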

This announcement follows the bank’s pilot with the NSW Police earlier this year to refer perpetrators of financial abuse to the police, with customer consent.

Re-disseminated by The Asian Banker
