
Another Dark Side of AI Deepfakes: The Rise of Video ‘Model’ Jobs Powering Online Scams

16.03.2026

Cybercriminals around the world are recruiting models for AI-powered deepfake video chats, linked to multibillion-dollar criminal scams.

BY CHLOE AIELLO, REPORTER @CHLOBO_ILO

Illustration: Inc; Photo: Getty Images

There’s a new category of AI job that’s gaining popularity among criminal recruiters. Recruitment listings for AI face “models” have surged within the past year, and seek applicants who can carry out financial scams via video chat. The models conducting the video chats typically appear as other people thanks to deepfake technology, so their real faces may never be seen.

A Wired investigation found that within the past year alone, dozens of online recruitment channels have popped up on secure messaging app Telegram. These channels seek people—many of them young women—for “AI face model” jobs. The jobs vary, but typically entail making contact with a mark, then messaging them repeatedly to build up a rapport that leads to financial requests, in a practice known as “pig-butchering.” Based on applications for the model roles, reviewed by Wired, some of the individuals know the kind of work they are applying for requires scamming others, and may even have past experience in it. Hotspots for recruiting include countries like Turkey, Russia, Ukraine, Belarus, Cambodia and multiple others in Asia and Southeast Asia.

The scams are often multi-fold: luring vulnerable workers into exploitative roles that help criminal enterprises extract billions from unsuspecting victims.

The job listings in some cases contain veiled if not explicit references to crypto schemes, gold trading, and romance scams. They may offer applicants contracts of varying lengths, and require workers to send messages, share photos, and make audio and video calls. Ideal candidates speak multiple languages.


The groups behind these listings can be multibillion-dollar criminal enterprises that work their employees in conditions not unlike human trafficking. One post called for “approximately 100 video calls per day,” according to Wired. Some companies take their workers’ passports, advertise 12-hour work days, and offer very little time off. While some people are in fact trafficked into working for the scams, other models reportedly retain at least limited control over their working conditions. One victim told Wired, however, that it was difficult to determine whether models were working voluntarily, because workers were beaten and sexually harassed.

Wired noted that it reported about two dozen channels seeking to recruit AI models to Telegram, but that the company did not take any of them down. Telegram did not immediately respond to Inc.’s request for comment, but told Wired that scam content is “explicitly forbidden” on the platform. The company noted that with AI model recruitment in particular, “there are legitimate reasons one might give their likeness, and so such content must be examined on a case-by-case basis.”

There’s big money on the line. A March report from the Consumer Federation of America (CFA), a Washington, D.C.-based nonprofit, estimated that Americans lose some $119 billion annually to scams. The FBI’s estimate was more conservative: $16.6 billion lost in 2024. But the agency noted that its figures likely understate the problem, as many victims are too ashamed to report their losses, according to NBC News. The most popular social media platforms for scammers who target Americans are Meta’s Facebook, Instagram, and WhatsApp, according to the CFA.


© Inc.com