Can bigger-is-better ‘scaling laws’ keep AI improving forever? History says we can’t be too sure
OpenAI chief executive Sam Altman – perhaps the most prominent face of the artificial intelligence (AI) boom that accelerated with the launch of ChatGPT in 2022 – loves scaling laws.
These widely admired rules of thumb linking the size of an AI model with its capabilities inform much of the AI industry’s headlong rush to buy up powerful computer chips, build unimaginably large data centres, and re-open shuttered nuclear plants.
As Altman argued in a blog post earlier this year, the thinking is that the “intelligence” of an AI model “roughly equals the log of the resources used to train and run it” – meaning you can steadily produce better performance by exponentially increasing the scale of data and computing power involved.
First observed in 2020 and further refined in 2022, the scaling laws for large language models (LLMs) come from drawing lines on charts of experimental data. For engineers, they give a simple formula that tells you how big to build the next model and what performance increase to expect.
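To see why “drawing lines on charts” yields a usable formula, here is a minimal sketch (with made-up constants, not real training data) of the core idea: if loss falls as a power law in training compute, the relationship is a straight line on log-log axes, and fitting that line lets an engineer extrapolate to a bigger model. The constants `a_true` and `b_true` below are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical scaling law: loss L(C) = a * C**(-b), where C is training
# compute in FLOPs. On log-log axes this is a straight line, which is how
# the scaling laws were first spotted in experimental data.
rng = np.random.default_rng(0)
a_true, b_true = 5.0, 0.05            # made-up constants for illustration
compute = np.logspace(18, 24, 20)     # FLOPs, spanning six orders of magnitude
loss = a_true * compute**(-b_true) * rng.normal(1.0, 0.01, size=20)  # noisy "experiments"

# Fit a straight line in log-log space to recover the exponent.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
b_fit, a_fit = -slope, np.exp(intercept)

# Extrapolation is the practical payoff: predict the loss you would get
# from a 10x increase in compute before spending the money.
predicted = a_fit * (10 * compute[-1]) ** (-b_fit)
```

The bet the industry is making is that this extrapolation step keeps working outside the range of the data the line was fitted on.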
Will the scaling laws keep on scaling as AI models get bigger and bigger? AI companies are betting hundreds of billions of dollars that they will – but history suggests it is not always so simple.
Scaling laws can be wonderful. Modern aerodynamics is built on them, for example.
Using an........