
Opinion | Netanyahu Rumours: Why AI's Ability To Make Us Doubt Reality Is A Bigger Problem

24.03.2026

A section of social media users claimed that the Prime Minister of Israel, Benjamin Netanyahu, was dead. And then, the AI games began.

Media theorists, technologists, and philosophers have been talking about the pros and cons of AI-generated content for quite a long time now. Constructing a false narrative, or fake news, is now easy. What was once done by special-effects experts in Hollywood, including the creation of hyper-realistic videos of unreal events and synthetic voice changes, can now be done by anyone using widely accessible software. And people are doing it. Fake war footage and staged political speech visuals frequently go viral, only to be debunked later. Viral clips and news on social media are now met with immediate scepticism.

Last week, another interesting episode occurred, revealing yet another challenge posed by AI. This, too, was a piece of fake news. A section of social media users claimed that the Prime Minister of Israel, Benjamin Netanyahu, was dead. At first, it appeared to be a silly rumour—a piece of conspiracy theory. But soon, the rumour gained traction, and those who circulated the claim began presenting “supporting evidence" for it. The Prime Minister had been absent from public appearances, and his son had remained mysteriously silent on Twitter. These observations, though weak on their own, were framed as if they were solid proof of the claim.

As the rumour spread, it began to sound credible to many, not because of strong evidence, but mainly because of the conviction of those presenting it, as well as repetition and speculation. In the absence of verified information, speculation took over. As is often the case with social media, uncertainty was exploited by some as fertile ground for spreading misinformation.

Just when the rumour started gaining momentum, Israeli authorities responded by releasing footage of Netanyahu addressing a press conference. Usually, such a video would have been sufficient to quell the rumours. However, this time, the response was different. Some users examined the video using Grok and claimed that it was AI-generated. Self-styled experts, after consulting Grok, also asserted that the footage had been created using AI tools. This line of reasoning was bought and popularised by some. Reality itself was suspected to be artificially generated, while an imagined alternative was presented as the truth.

A second video was released soon after, in which Netanyahu was seen in a coffee shop, casually interacting with people. Sceptics were quick to dismiss this video as well, claiming that it, too, was AI-generated footage. The same claims were repeated about a third video, in which the Prime Minister was seen speaking to people in front of multiple cameras. Another clip came out soon after, showing Netanyahu alongside the US ambassador. The rapid release of one video after another, especially the last one featuring the US ambassador, gradually weakened the narrative. Over time, the rumour lost momentum, and many who had previously asserted his death retreated from their positions. Perhaps they were mistaken. Perhaps the videos were not artificial after all.

The whole episode is interesting, not because it was the first instance of misinformation spread on social media, but because it revealed a deeper and more troubling shift in how we interpret reality. Avid social media users are generally aware of the dangers of AI-generated content. They are familiar with fabricated clips of UFOs and supernatural phenomena, as well as convincing videos of celebrities endorsing fake products. The intention behind most of these videos is clear: to construct an alternate reality and present it as truth. AI tools have certainly simplified things. You no longer need to train yourself in complicated software. You simply need to find the right tool and wording to create a video that brings your imagination to life. The result? The boundary between reality and fabricated content has become increasingly blurred, at least on social media, for now.

The Netanyahu case, however, is different. In this case, AI-generated content was not used to fabricate an alternate reality. Nor was AI directly used to make a false narrative appear real. Instead, reality itself was dismissed as AI-generated content. To maintain the false narrative, authentic footage was labelled as artificially generated. AI was thus used indirectly to undermine reality. This inversion suggests something important: AI need not generate content to make you suspicious of a particular incident. The very idea of AI tools can make people doubt what they see, leading to the formation of false narratives. The influence of AI tools extends beyond the direct creation of falsehoods. AI affects our ability to recognise truth, even in cases where it is not used to generate content.

This raises a deeper concern. The real problem is not the tools that can fabricate hyper-realistic images, but the erosion of trust that these tools bring about. The availability and accessibility of hyper-realistic generative technologies introduce a layer of suspicion about the truth of every piece of information one comes across. Every time one watches a clip or video online, one is compelled to doubt its veracity. Worse, when one dismisses something as fake, one must also question whether that scepticism is misplaced.

The consequences of this may not be apparent now, but they are far-reaching. The very idea of AI, and its perceived potential, can undermine trust in institutions, media, and even interpersonal communication. Bad actors are already exploiting that doubt to make people question reality. The final and most essential command of the Party in Orwell’s 1984, that one should reject the evidence of one’s eyes and ears, is coming true in a bizarre way. When one has to doubt visual or auditory evidence, informed discourse begins to weaken. In such an environment, rumours can thrive, as verification becomes increasingly difficult.

The Indian context presents a particularly complex scenario. In the past decade, we have witnessed a swift expansion in digital access. Affordable smartphones and widespread internet access have brought millions of new users online. While this digital expansion has created unprecedented opportunities for communication, education, and economic advancement, it has also introduced new vulnerabilities.

A significant portion of the population is still developing digital literacy, which is the ability to critically evaluate online content and understand the technologies that produce it. For many, the content produced by artificial intelligence can seem indistinguishable from reality. This makes them vulnerable to manipulation, whether in the form of misinformation, fraudulent schemes, or propaganda.

Another group of users consists of those who are aware of the potential of AI but lack the expertise to consistently differentiate genuine content from fabricated content. This partial awareness presents its own set of challenges. Rather than being easily deceived, these users may become indiscriminately sceptical, distrusting even legitimate information. This creates a different form of vulnerability: not naivety, but cynicism.

This emerging condition bears a striking resemblance to the classic Brain in a Vat scenario, though in a more localised and technologically mediated form. In the traditional thought experiment, one cannot distinguish between genuine reality and a simulated one because all experiences are potentially fabricated. Today, while our immediate surroundings remain real, much of what we know about the wider world reaches us through digital networks increasingly saturated with AI-generated and AI-questioned content. As a result, we find ourselves in a comparable epistemic position: unable, in many cases, to decisively determine whether what we are presented with reflects reality or an artificial construction. The Netanyahu episode illustrates this vividly; not as total deception, but as a condition where even authentic evidence can be rendered suspect. In this sense, we are not brains in vats, but perceivers within an informational environment where the distinction between the real and the simulated is no longer reliably accessible.

The writer is a commentator with a research degree in philosophy from the University of Sheffield, focusing on the intersections of culture, history, and politics. Views expressed in the above piece are personal and solely those of the writer. They do not necessarily reflect News18’s views.


© News18