The Carney government’s embrace of AI will put lives at risk
The Mark Carney government has made “deploying AI at scale” a cornerstone of its attempt to make government more productive and to slash costs by cutting 28,000 jobs by 2029, with the goal of saving $60 billion over several years.
There are many reasons to be skeptical of the government’s AI strategy. Savings projections from digitalization should be taken with a grain of salt. For example, the Phoenix pay system, designed to automate the federal payroll, was supposed to save $70 million per year. Instead, unable to handle the complexity of paying hundreds of thousands of public servants, it has cost the federal government $4.34 billion in repair efforts so far, with that figure still climbing.
However, unrealized savings are the least of the concerns arising from the government’s wholehearted embrace of AI, including algorithm-based tools. Deploying these technologies as cost-cutting measures will not only result in worse service for Canadians but also put lives at risk, as has already happened here and in other countries.
If the federal government is intent on exploring the use of AI (however it is defined) in government, it should not do so as a cost-cutting measure, but only after careful, case-by-case deliberation that pays close attention to how this (or any) technology interacts with the people using, and affected by, the tech in question.
The problems begin with the technologies themselves. Simplifying greatly, consider two general forms of AI.
The first, “generative AI” such as ChatGPT, produces probabilistic output, predicting which word is most likely to come next in a sequence based on its training data. Its output looks like human thought, but the system is only reproducing patterns in its data. As such, it is prone to “hallucinations,” which can involve presenting false information as true.
These are not technically incorrect outputs per se, because the program is simply doing what it is designed to do: produce probabilistically determined strings of words and sentences. Because this problem cannot be engineered away, the output can never be fully trusted.
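The next-word prediction described above can be illustrated with a toy sketch. This is not how ChatGPT or any real system is built (those use neural networks trained on vast corpora); it is a deliberately tiny bigram model, with made-up training text, showing only the core idea that each word is sampled in proportion to how often it followed the previous word in the training data.

```python
import random
from collections import defaultdict

# Hypothetical toy training text, invented for illustration only.
training_text = (
    "the system pays employees the system loses records "
    "the system pays employees on time"
).split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(training_text, training_text[1:]):
    counts[prev][nxt] += 1

def next_word(prev, rng):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Generate a five-word continuation starting from "the".
rng = random.Random(0)
out = ["the"]
for _ in range(5):
    out.append(next_word(out[-1], rng))
print(" ".join(out))
```

Every pair of adjacent words in the output is a pair seen in the training data, so the result looks locally fluent; nothing in the process checks whether the sentence as a whole is true. That gap between statistical plausibility and truth is what the hallucination problem scales up to.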
Trust in government a key factor in acceptance of greater federal use of AI
