Will Copyright Doom Generative AI?
New large language models (LLMs) seem to appear every day, yet the controversy surrounding the legality of training them on copyrighted material continues to rage. Big Media is both threatened by generative artificial intelligence (AI) – chatbots that write novels and news copy, image and music generators that create artwork or songs to order in the style of any artist whose work is accessible on the internet – and determined to grab a share of the wealth it generates. The list of its pending lawsuits against the titans of tech is long, and their resolution is distant. And the techies are striking back: Meta, Google, OpenAI and others have asked the Trump administration to declare that it’s legally permissible to use copyrighted material to train AI models.
Before we consider whether copyright owners can prevent use of their content to train generative AI, let’s first ask whether they should be able to do so. The question is easier, or at least clearer, if we are prepared to attribute agency to a computer and judge its activities as if they were undertaken by humans. Of course, machines don’t think or create like humans – they just do what we tell them to do. Until very recently, it was easy to see computers as sophisticated tools subservient to human agency, regurgitating pre-loaded content and crunching numbers. Today, we converse with chatbots the way we would with a research or coding assistant, and with image generators the way art directors guide human illustrators and graphic designers.
Much as it discomfits us, generative AI learns and, at some level, “thinks.” Trained on a significant slice of human knowledge, ChatGPT aced the “Turing test” – the famous measure of a machine’s ability to exhibit human-like intelligent behavior – the day it was released. Since then, chatbots have passed the bar and medical licensing exams and solved long-standing math conundrums.

© The Times of Israel (Blogs)
