YouTube Is Releasing an AI Tool That Lets You Deepfake Yourself
Looking to integrate more AI, YouTube is allowing users to create deepfake avatars of themselves.
BY MOSES JEANFRANCOIS, NEWS WRITER @MOSESJEANS
YouTube Shorts is jumping into the AI trend. A new feature is coming to the platform that will allow users to create an AI avatar of themselves and deepfake themselves into videos.
Back in January, YouTube CEO Neal Mohan wrote in a blog post that more AI systems and models would be added to the platform. The latest feature lets users create an AI avatar that looks and sounds like them and that can be placed into a YouTube Short.
The photorealistic avatar will be prompt-based, producing eight-second generations, but users will be able to string multiple clips together back-to-back. The feature builds on the earlier integration of Google's Veo models into YouTube Shorts, which generate cinematic video clips from text, image, or video prompts.
Because people's likenesses are involved, the new feature raises deepfake concerns, an issue Mohan said the platform will monitor. "We're also building on the foundation of Content ID – a system our partners have trusted for well over a decade – to equip creators with new tools to manage the use of their likeness in AI-generated content."
Under the feature's rules, users must be 18 or older to create and use an avatar for Shorts. In 2025, YouTube rolled out an AI age-estimation tool that detects a user's age even if they misstated it when creating an account; the company hopes this will head off underage use of the new avatar feature.
Consumers are increasingly worried about deepfakes: 72 percent express daily concern about deepfakes scamming them out of money or sensitive information, according to Jumio, an identity intelligence company. In 2025, users on Elon Musk's X began using the Grok chatbot to create sexualized deepfake photos and videos of people without their consent, an issue that has prompted multiple investigations in the E.U., the U.K., and the U.S.
According to Mohan, YouTube is committed to protecting “creative integrity” and will continue to “remove any harmful synthetic media that violates our Community Guidelines.”
