Banks have used AI for decades, but ChatGPT bias changes everything
Banks may have been using AI for decades, but ChatGPT and its in-built biases present a massive problem for the industry, writes Lewis Z Liu
Several days ago, I had a fascinating conversation with executives at one of the world’s major central banks. We spent considerable time discussing how AI governance needs to evolve in banking and why this shift will fundamentally change how we finance the world.
Here’s what most people don’t realise: banks were using “AI” decades before ChatGPT made headlines. Back in the 1980s and 1990s, they called it “applied statistics”. Later it became “machine learning”. Now it’s been rebranded as “AI”. These systems have been making credit card decisions, calculating FICO scores, detecting fraud and powering automated trading for decades.
Because financial services demand extraordinary precision and operate under intense regulation, banks developed something called “model risk management”, essentially AI governance before anyone called it that. My father actually wrote the playbook for this at Bank of America and now teaches it at Duke University, updated for today’s generative AI world. While not exactly riveting dinner conversation for most, these family discussions have opened my eyes to a looming crisis.
The old rules vs. the new ChatGPT reality
Here’s the problem: traditional banking AI was transparent…
© City A.M.
