
Opinion: The Ontology Problem Wall Street Won't Discuss

12.02.2026

I study how financial institutions govern artificial intelligence. I’ve spent years examining the frameworks, analysing the data, reviewing the models. And I’ve arrived at a conclusion that I cannot shake: We are approaching a moment when financial crime will become definitionally incoherent.

Not harder to detect. Not harder to prosecute. Impossible to define in the first place.

This isn’t hyperbole. It’s the logical terminus of forces already in motion. And almost no one in a position of authority is willing to say it plainly.

So let me say it.

Every system of financial crime enforcement rests on a single assumption: that legitimate activity and illegitimate activity are different things.

Different in ways we can specify. Different in ways we can detect. Different in ways that, ultimately, a judge or jury can evaluate. The entire apparatus of compliance, investigation, and prosecution exists because we believe that fraud looks different from non-fraud, that money laundering looks different from ordinary movement of funds, that manipulation looks different from trading.

This assumption is so foundational that we rarely examine it. We debate how to catch criminals, not whether the category of criminal will remain stable. We argue about detection methods, not about whether detection is philosophically possible.

It’s time to examine what we’ve taken for granted.

Fraud detection works by establishing what normal looks like, then flagging what doesn’t fit.

Normal transaction volumes. Normal timing patterns. Normal geographic distributions. Normal relationships between accounts. You build a statistical portrait of legitimate behavior, and you watch for deviations. The deviation is your signal. The signal is your case.
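The baseline-and-deviation idea can be sketched in a few lines. This is a deliberately minimal illustration of the statistical principle, not any institution's actual system; the three-standard-deviation threshold and the toy transaction amounts are assumptions for the example.

```python
# Minimal sketch of baseline-deviation flagging (illustrative only;
# production systems use far richer features and learned models).
from statistics import mean, stdev

def build_baseline(amounts):
    """Summarise 'normal' as a mean and standard deviation."""
    return mean(amounts), stdev(amounts)

def flag_outliers(amounts, baseline, threshold=3.0):
    """Flag any amount more than `threshold` std devs from the mean."""
    mu, sigma = baseline
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

history = [42.0, 55.0, 48.0, 61.0, 50.0, 47.0, 53.0, 58.0]
baseline = build_baseline(history)
print(flag_outliers([49.0, 52.0, 9500.0], baseline))  # only 9500.0 is flagged
```

The deviation is the signal: anything close to the learned portrait of normal passes, and anything far from it becomes a case.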

This approach has worked for decades because human behavior has structure. People wake at certain hours, spend in certain patterns, move money for certain reasons. Even sophisticated criminals, trying to disguise their activity, leave traces. They’re human. They have habits. They make mistakes. The baseline catches them.

Generative AI doesn’t have habits. It doesn’t make mistakes. It produces outputs optimised against whatever objective function it’s given. And increasingly, that objective function is: look normal.

The most advanced fraud detection models are neural networks trained on massive datasets of legitimate activity. They learn what normal looks like, in extraordinary detail, and they flag what doesn’t match.

Now consider the adversary. A generative AI trained on the same data, or data like it, learning the same patterns, producing synthetic transactions that are statistically indistinguishable from the real thing. Same distributions. Same temporal signatures. Same relational structures.

The fraud doesn’t deviate from the baseline. It is the baseline, regenerated.

How do you detect a fake that is, mathematically, more authentic than the original?
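A toy numerical experiment makes the problem concrete. The distributions, thresholds, and Gaussian model below are assumptions chosen for illustration: a detector fits the statistics of legitimate activity, and an adversary simply samples from those same fitted statistics. Because nothing deviates, almost nothing is flagged.

```python
# Toy illustration: synthetic activity drawn from the detector's own
# baseline distribution evades deviation-based flagging almost entirely.
# (Illustrative assumption, not a description of a real attack.)
import random
from statistics import mean, stdev

random.seed(0)

# "Legitimate" activity the detector trains on.
legit = [random.gauss(50.0, 5.0) for _ in range(1000)]
mu, sigma = mean(legit), stdev(legit)  # the detector's learned baseline

def is_flagged(amount, threshold=3.0):
    """Deviation-based detector: flag amounts far from the baseline."""
    return abs(amount - mu) / sigma > threshold

# The adversary regenerates the baseline: samples from the fitted model.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]
flag_rate = sum(is_flagged(a) for a in synthetic) / len(synthetic)
print(f"share of synthetic transactions flagged: {flag_rate:.1%}")
```

Under a three-sigma rule, well under one percent of the synthetic transactions are flagged, and that residue is exactly the false-positive rate the detector would show on genuine activity. The detector cannot distinguish the regenerated baseline from the real one, because statistically there is nothing to distinguish.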

Consider identity itself. Synthetic identity fraud is now the fastest-growing financial crime in the United States. These aren’t stolen identities. They’re constructed ones. A real Social Security number, often belonging to a child or deceased person, combined with fabricated personal details. A name that was never given to anyone. An address history that maps to real locations but no real resident. An employment record that checks out because it was built to check out.

The numbers are staggering. TransUnion reports that synthetic fraud attempts grew 184% between 2019 and 2023. In just six months, from late 2023 to early 2024, incidents surged another 153%. By late 2024, U.S. lenders faced $3.3 billion in exposure to synthetic identities.

© News18