Google’s SynthID is the latest tool for catching AI-made content. What is AI ‘watermarking’ and does it work?
Last month, Google announced SynthID Detector, a new tool to detect AI-generated content. Google claims it can identify AI-generated text, images, video and audio.
But there are some caveats. One of them is that the tool is currently only available to “early testers” through a waitlist.
The main catch is that SynthID primarily works for content that’s been generated using a Google AI service – such as Gemini for text, Veo for video, Imagen for images, or Lyria for audio.
If you run something generated with ChatGPT through Google's detector tool, it won't be flagged.
That’s because, strictly speaking, the tool can’t detect the presence of AI-generated content or distinguish it from other kinds of content. Instead, it detects the presence of a “watermark” that Google’s AI products (and a couple of others) embed in their output through the use of SynthID.
A watermark is a special machine-readable element embedded in an image, video, sound or text. Digital watermarks ensure that information about the origins or authorship of content travels with it. They have long been used to assert authorship of creative works, and more recently to address misinformation challenges in the media.
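To give a rough sense of how a statistical text watermark can be detected, here is a toy sketch in Python. It is not Google's actual SynthID scheme (which is unpublished in full detail); it illustrates a simplified "green list" approach, in which the generator secretly favours certain tokens based on the preceding token, and the detector measures how often those favoured tokens appear. All function names and the hashing scheme here are illustrative assumptions.

```python
import hashlib

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    # Toy scheme: hash (previous token, candidate token) and mark
    # roughly half the vocabulary as "green" (watermark-favoured).
    greens = set()
    for tok in vocab:
        digest = hashlib.sha256((prev_token + "|" + tok).encode()).digest()
        if digest[0] % 2 == 0:
            greens.add(tok)
    return greens

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # A watermarking generator would have preferred green tokens, so
    # watermarked text shows a green fraction well above the ~0.5
    # expected by chance. Ordinary text should sit near 0.5.
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

A detector built this way does not "recognise AI writing" at all: it only checks whether the text's token statistics match the secret bias the generator embedded, which is why such a tool works only on output from models that applied the same watermark.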
SynthID embeds watermarks in the........
© The Conversation
