
AI deepfakes of girls are flooding schools. Teachers need more training to help stop it

26.03.2026

AI-generated deepfakes are a crisis in schools. Enactment of the federal Take It Down Act, which criminalized nonconsensual intimate imagery, hasn't had much impact.

In every girls’ bathroom at Canyon Crest Academy high school in San Diego, there is a placard detailing the Title IX rights of sexual assault and harassment victims. But in this digital age, there’s something painfully lost in the fine print: explicit deepfakes.

Explicit deepfakes are artificial intelligence-generated, nonconsensual sexual images and videos, and they are one of the fastest-growing forms of online abuse. With a simple selfie or TikTok dance, perpetrators can use free “nudifying” websites to create hyperrealistic yet entirely fake nudes and sexual videos. These materials can then be shared, sold or used to blackmail, harass and humiliate victims.

This is a dire issue in high schools full of digital natives: 98% of deepfake videos online are sexually explicit, and 40% of high school students know of deepfakes of themselves or their classmates, including AI-generated explicit content.


On May 19, 2025, the Take It Down Act was signed into law, the first federal legislation to criminalize nonconsensual intimate imagery, including AI-generated deepfakes. It requires platforms to remove such material within 48 hours.

When this act was passed, we were elated. We’d been working on the Center for Gender Equitable AI’s #StopExplicitDeepfakes Campaign to digitally mobilize young people to push Congress to pass the Take It Down Act. Yet when we walked back through our school doors, little had changed. There were no new posters. No new school policies.


Looking closer at that bathroom placard, you’ll find mention of “computer-generated images of a sexual nature” buried in the ninth item in a list of 12 examples. The word “deepfakes” never appears. Neither does “AI.” A victim who just discovered a classmate had made a fake photo or video of her may have to read the placard several times before realizing that it might apply to her — if she thought to look at all.

To be clear, the San Dieguito Union High School District, where Canyon Crest Academy is located, has been proactive on AI. The district updated its student technology policies in December 2024 to address the use of AI tools, adopted a formal AI policy in January 2026 and actively filters deepfake-generating websites on school Wi-Fi and district devices.


Yet while meaningful protections exist, students still have little way of knowing whether they apply to them in the age of AI. 

In Oregon last year, two high school students — Julianne Huang and Richa Pandit — gave a presentation on explicit deepfakes to an audience of educators and administrators at the Multnomah Education Service District and Clackamas Education Service District’s AI Steering Committee. Their presentation led one educator to offer the telling reflection that “hearing the student side of how AI can be harmful was helpful.”

Huang and Pandit came to the same realization we had: that the passage of the Take It Down Act hasn’t yet made a big difference in our nation’s schools, many of which still have no idea how pervasive explicit deepfakes are. And those that do know continue to lack the resources and guidance to protect students from them.

The gap stems not from indifference but rather a lack of resources and guidance to address the scope of the deepfake problem among students. Officials at San Dieguito Union High School District, for instance, have expressed openness to exploring deepfake-specific initiatives.

According to a 2024 Education Week Research Center survey of more than 1,100 educators nationwide, 67% of teachers, principals and district leaders believed their students had been misled by a deepfake — yet most had no training to address it. Of those surveyed, 56% had received no training on AI-generated deepfakes. Among those who had, only 8% rated it excellent or good, while 36% said it was mediocre or poor. Notably, of the untrained majority, 31% said that they wanted training but hadn’t received any.

A big obstacle to countering AI-generated deepfakes is that many educators simply don’t know about this spreading digital crime, let alone have the resources to protect their students from it.

Speaking with administrators and educators motivated Huang and Pandit to team up with us to develop a program called STOP, which provides resources and training to help kindergarten through 12th-grade school districts reform their AI policies and offer recourse to victims of explicit deepfakes.

The STOP campaign, which launched in March, implements three core initiatives: advocacy for anti-explicit deepfake school policies, distribution of awareness and resource posters to be displayed in bathrooms and counseling offices, and workshops for explicit deepfake awareness and AI literacy training. 

But the burden shouldn’t rest solely with teachers and administrators. 


Parents, ask your child’s school whether it has a policy on explicit deepfakes.

Educators, start the conversation on explicit deepfakes in your next meeting. The Title IX placards exist because we agreed that students deserve to know their rights. Explicit deepfakes demand the same.

Emma Le and Stephanie Choi are executives at the Center for Gender Equitable AI, the largest youth-led nonprofit organization for gender equity in artificial intelligence and emerging technologies. They are also AI Safety Policy Fellows at the Berkeley AI Safety Initiative at UC Berkeley.


© San Francisco Chronicle