
Breaking the hypnotic spell of misinformation


A video began circulating on Facebook shortly before the Irish presidential election in October. It was a report by the national broadcaster, RTÉ, with bombshell news.

Frontrunner candidate Catherine Connolly told a campaign event she was bowing out of the race. A crestfallen supporter shouted out “No, Catherine” before the clip cut to a reporter explaining what would come next. The election was off and her leading rival would be acclaimed.

A shocking development only days before the election. Except the whole thing was fake.

Ireland's new President Catherine Connolly was elected in a landslide vote at the end of October, despite the circulation of an AI-generated deepfake video which falsely claimed her withdrawal from the race. (Pool/Getty Images)

Ms. Connolly slammed the video as “a disgraceful attempt to mislead voters and undermine our democracy.” Meta eventually agreed to take it down and Ms. Connolly went on to win handily. But the video – which can still be seen, though is now branded clearly as being AI-generated – is an example of how dangerous false information can be.

Society can fight back against what has become a hypnotic stream of fakery. Society must. A world in which illusion, fraud and lies are the common currency becomes one in which there is no agreed-upon version of truth, undermining the very concept of reality.

Right around this moment it may be tempting for readers to think: well, I'm not on social media, so I'm probably missing the worst of this garbage. Unfortunately, between the rise of generative AI and the viral power of bots, the trash has a way of seeping through to everyone.

Consider the artificial intelligence synopses that appear at the top of web search results. Data show that fewer and fewer people are scrolling down and clicking on links to find the answers they were seeking. But relying on the synopsis is risky, given that AI draws on whatever information is available, and the source material is increasingly unsound.

The number of phony scientific papers is doubling every 18 months, posing real dangers when AI scrapes up false information and uses it in response to health queries.

A source of deliberately bad information is Russia, which

© The Globe and Mail