AI content regulation in India: When labelling norms are ineffective
Images of the Union finance minister are manipulated through Artificial Intelligence (AI) deepfakes and misused to perpetrate financial fraud. This is, unfortunately, not a hypothetical case; many such cases have surfaced lately. Celebrities and non-celebrities alike have fallen victim to AI-enabled deepfakes, often involving sexual imagery that harms their privacy and dignity. There are now reports of attempts to manipulate voter choice in the upcoming elections using AI-generated fake images of actors, in which their likeness appears to endorse or criticise a party. Digital arrest scams, too, increasingly use AI deepfake imagery and voice to perpetrate fraud.
The question before us, therefore, is: can the proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (hereafter, the Intermediary Guidelines) meet this challenge? Apart from introducing new vocabulary, such as “synthetically generated information” (SGI), an umbrella term that includes AI deepfakes, the guidelines also call for labelling online posts as SGI if any such content is used. The label should cover at least 10% of a visual post or appear within the first 10% of an audio post containing SGI. Such compliance is limited to intermediaries whose computer resources enable, permit, or facilitate the creation, generation, modification, or alteration of information as SGI. Such labelling helps primarily in terms of transparency, informing users about…