Big Tech v. Me
Last April, I was hiding from the world in a whitewashed village among the low mountains north of Valencia, Spain, when I got a call from a lawyer friend of mine back in Vancouver. Reidar Mogerman helped pioneer class-action law in Canada, and he had a proposal for me. He told me how American authors and lawyers had launched a wave of lawsuits against some of the world’s biggest, richest tech companies, alleging that they’d used copyrighted books, without permission or compensation, to develop artificial intelligence.
Reidar saw potential for a similar case on behalf of Canadian writers. He wanted to know if I’d be the representative plaintiff—the person whose name would stand for every wronged writer in the suit. I was skeptical. I’m an author and journalist, but when I read news reports about copyrighted work being used to train AI, I never assumed my writing was included. Surely they couldn’t have taken from everyone.
I asked how he could be sure that my books had helped develop AI models. Because, he said, his colleagues checked. Of the four copyrighted non-fiction books I’ve authored or co-authored, at least three—The 100-Mile Diet, The Once and Future World and The Day the World Stops Shopping—appeared in datasets known to have been used to train some of the world’s biggest large language models. These systems analyze the material they’re fed and discern patterns and associations so intricately that they can predict appropriate responses to an incredible array of human inquiries. The result is generative artificial intelligence: AI products that can speak human, such as ChatGPT. The datasets they feed on are huge digital repositories of human expression, containing literature, scientific papers, social media posts and far more. The law professor Edward Lee, from Santa Clara University, has described big tech’s use of these datasets as “eating the world.”
When I learned that my copyrighted work had helped fuel this explosion, I thought of Sex Pistols singer Johnny Rotten’s final words on stage, before his band broke up: “Ever get the feeling you’ve been cheated?” I’d been wronged in ways both personal and universal. I thought about the great care that writers take with others’ intellectual property. If I quote more than a few lines from someone else’s work, I have to seek permission. If I even borrow too heavily from another writer’s ideas, I commit plagiarism. Yet the tech companies consumed copyrighted works with such apparent gusto that Wired magazine described it as “slurping.”
Because they have eaten so many fruits of the human mind, these models “know” far more than any single person—in this sense, they are superhuman. A typical chatbot can dish dating advice, write an essay on the Richard Wagamese novel Medicine Walk, translate “this sword is too heavy” into Old English, rattle off dozens of recipes that call for large amounts of parsley and so, so much more.
AI adoption is growing even faster than cloud computing or mobile apps did during their booms in the 2010s. In Canada, business use of AI has doubled since last year. And though we are not yet three years into AI’s coming of age, nearly 30 per cent of adults in a recent U.S. Pew Research survey said they interact with it multiple times a day.
The tech firms’ approach to copyright suggested to me an unnervingly cavalier attitude, even scorn, toward the human project: our species’ evolving expression of ideas and values. It felt like a quiet colonization of that realm—which is also the world of the writer—by something cold, commodified and transactional.
An important distinction: the lawsuits Reidar proposed weren’t about putting AI on trial. They were aimed at big tech, a sector whose past behaviour leads me and many others to doubt it is the best custodian of the tools it creates. The industry already stands accused of designing games and social media to be addictive; of rewarding online hate, conflict and disinformation to boost user engagement; of invading our private lives to harvest our data; of permitting a tsunami of extreme pornography to distort human sexuality; and of creating a world where we have to remind each other to “touch grass.”
Artificial intelligence is the industry’s most transformative technology yet. Depending on who you ask, it could kill us all, or guide us into a glorious future beyond the death of the sun. It feels like we’re encountering a future once limited to science fiction. The questions it raises are new and important; in the words of no less a personage than Melania Trump, “The robots are here.”
By summer’s end, I had signed on as representative plaintiff in national class-action cases against four companies: Meta, Databricks, Nvidia and Anthropic (which is heavily backed by Amazon). These are the purveyors of large language model products we know by approachable names like Claude and Llama. (Less approachable: Nvidia’s NeMo Megatron, which sounds like a giant robot bad guy unleashed by an evil corporation in a Hollywood film.)
There will likely be more such lawsuits. This September, Anthropic agreed in a U.S. case to pay a total of US$1.5 billion to hundreds of thousands of authors to settle their action against the company—though, as of this writing, the deal still needs to be approved by the courts. In Canada, news publishers have launched a case against OpenAI, the company that created ChatGPT. My own suit against Meta has a parallel class action in Quebec, represented there by Montreal author Taras…