Without stronger privacy laws, Australians are guinea pigs in a real-time dystopian AI experiment
Say cheese! A decision last week greenlighting Bunnings’ use of facial recognition technology to routinely monitor customers provides a not-so-happy snap of how ill-prepared Australia is for the coming AI storm.
On its face, the administrative review tribunal decision to overrule the privacy commissioner’s finding that Bunnings’ use of intrusive, high-impact AI was unlawful is a technical call. But the impact will be material.
Expect retailers and others operating in public spaces to scale up the capture of our biometric information, matching it against massive and often inaccurate external databases to make real-time decisions about whether we can access what used to be the commons.
Bunnings argues it was acting on concerns about in-store violence, but it chose a crude tech solution that dehumanises customers and staff in the name of their safety.
Covertly tracking customers has transformed the marketplace from a place of human connection to one of automated checkouts, monitored customers and staff forced to don body-cams to document the inevitable backlash.
Our outdated privacy laws, which have not had a serious makeover in 40 years, are a feature not a bug of this dystopian cycle of automation and pivotal to the broader direction big tech is taking us.
Significant privacy reform had been a priority of the former attorney general Mark Dreyfus, and he managed to get a modest round of changes focused on children’s privacy through the parliament before he fell victim to internal factional machinations after the 2025 election.
