If AI can provide a better diagnosis than a doctor, what’s the prognosis for medics?

30.11.2024

AI means too many (different) things to too many people. We need better ways of talking – and thinking – about it. Cue Drew Breunig, a gifted geek and cultural anthropologist, who has come up with a neat categorisation of the technology into three use cases: gods, interns and cogs.

“Gods”, in this sense, would be “super-intelligent, artificial entities that do things autonomously”. In other words, the AGI (artificial general intelligence) that OpenAI’s Sam Altman and his crowd are trying to build (at unconscionable expense), while at the same time warning that it could be an existential threat to humanity. AI gods are, Breunig says, the “human replacement use cases”. They require gigantic models and stupendous amounts of “compute”, water and electricity (not to mention the associated CO2 emissions).

“Interns” are “supervised co-pilots that collaborate with experts, focusing on grunt work”. In other words, things such as ChatGPT, Claude, Llama and similar large language models (LLMs). Their defining quality is that they are meant to be used and supervised by experts. They have a high tolerance for errors because the experts they are assisting are checking their output, preventing embarrassing mistakes from going further. They do the boring work: remembering documentation and navigating references, filling in the details after the broad strokes are defined…

© The Guardian

