
Britain is racing to build killer drones, but what if hackers turn them against us?

13.03.2026

Last week the world watched as US President Donald Trump launched a massive strike against Iran using jets and drones, killing the country’s leadership and plunging the Middle East into war.


It came 48 hours after a stark warning from the UK’s armed forces minister Al Carns in which he suggested that Britain will have to fight a high-tech war against a major adversary in as little as three years.

While the UK is currently only assisting the US when it comes to defensive strikes, the actions of the US are sure to have accelerated Carns’ timeline somewhat.

Carns’ statement was part of a warning that the country is not ready for the conflict he predicts, and that much of its military is stuck in the 1980s.

These kinds of acknowledgements are happening across Europe as countries gain an increased awareness of global instability, the scale of the threat they face, and the reduced role the US will play in their defence.

However, in all of this rush to make up for lost time and fill the gap left by America, it is crucial that safety is not sacrificed on the altar of speed.

What I mean is that, in the rush to create the next generation of autonomous systems en masse (something which, to be clear, is needed), we must not risk accidentally creating the tools of our own destruction.

Take autonomous drones, for example: the technology to hack and take control of enemy drones is already being developed, and as far back as 2011 there were warnings that drones could be turned around and used as projectiles by spoofing their GPS systems.

Since then, the use of drones has exploded to a scale that would have been unimaginable fifteen years ago, and last year a UN report warned of the threat of terrorists taking control of autonomous drones, cars and trucks to use as weapons.

It’s worth clarifying that drones being turned back on their operators is the worst-case scenario, but far from the only threat. Hostile actors could also access highly sensitive information or disable defence capabilities.

I’ve seen first-hand the rate of development of defence tech, and through my own work building digital infrastructure for machine learning alongside NATO and companies like BAE Systems, I’ve also been part of the push to make sure the tech is secure.

That is especially difficult when it comes to battlefield tech that requires machine learning, because for a device like an autonomous drone to work best on a constantly changing battlefield it requires constantly updating information.

That need for information means sensitive data is spread across devices, networks and sometimes jurisdictions, making it near-impossible to guarantee it is being held safely.

When that data ships to a third-party server before returning to the edge (a smartphone, an electric vehicle or a drone, say), a window for interception opens, making it a juicy target for hackers looking to steal information or take control.

This is just one aspect of the incredibly tricky conundrum of how to keep our autonomous systems effective and, crucially, on our side.

There are two main ways we can safeguard against these worst-case scenarios. The first is to make these systems more secure at every level, from the network to the sensors. That task is made immeasurably easier if the data stays local, on the devices being trained and updated with fresh information, rather than being sent to a third-party cloud or datacentre.

Scaleout is just one of the firms building AI model training infrastructure that doesn’t require sensitive data to be funnelled through multiple pipelines. A federated machine learning approach means the information is not centralised; when it is, it becomes a target for malicious actors hunting for a treasure trove of sensitive data to use to their advantage.

The second is making sure a human stays in the loop.
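To make the federated idea concrete, here is a minimal, purely illustrative sketch of federated averaging: each device trains on its own data locally and shares only a model parameter update, which a coordinator averages. The toy model, data and learning rate are all hypothetical; real systems such as Scaleout's use far richer models and secure channels.

```python
# Illustrative sketch of federated averaging (not any firm's actual system).
# Raw data never leaves each device; only updated model parameters are shared.

def local_update(weight, local_data, lr=0.1):
    """On-device step: one gradient descent update of a 1-D model y = w*x."""
    grad = sum(2 * x * (weight * x - y) for x, y in local_data) / len(local_data)
    return weight - lr * grad  # only this number is transmitted, not the data

def federated_average(updates):
    """Coordinator averages parameter updates; it never sees the raw data."""
    return sum(updates) / len(updates)

# Each 'device' holds its own sensitive data locally (all sampled from y = 2x).
device_data = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
    [(0.5, 1.0), (1.5, 3.0)],
]

w = 0.0  # shared global model weight
for _ in range(50):  # communication rounds
    updates = [local_update(w, data) for data in device_data]
    w = federated_average(updates)

print(round(w, 2))  # converges towards the true slope, 2.0
```

The point of the pattern is in `local_update`: the sensitive samples stay on the device, so there is no central trove for an attacker to intercept or raid.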

Even if there is a cyberattack that compromises a drone or causes it to go rogue, a human overseer should always be on hand to take action and shut the system down before it can do any harm.
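That oversight requirement can be sketched as a simple gate: the autonomous system proposes actions, nothing executes without operator approval, and the operator can halt everything at any time. All names and actions below are hypothetical illustrations, not a real weapons interface.

```python
# Hypothetical sketch of a human-in-the-loop gate for an autonomous system.
class HumanInTheLoopController:
    def __init__(self):
        self.killed = False

    def kill(self):
        """Operator override: immediately halt all autonomous action."""
        self.killed = True

    def execute(self, action, operator_approves):
        """Run an action only if the system is live and a human approved it."""
        if self.killed:
            return "HALTED: kill switch engaged"
        if not operator_approves(action):
            return f"BLOCKED: {action} denied by operator"
        return f"EXECUTED: {action}"

controller = HumanInTheLoopController()
approve_surveillance_only = lambda action: action == "surveil"

r1 = controller.execute("surveil", approve_surveillance_only)  # approved
r2 = controller.execute("strike", approve_surveillance_only)   # blocked
controller.kill()
r3 = controller.execute("surveil", approve_surveillance_only)  # halted
print(r1, r2, r3, sep="\n")
```

The design point is that approval and the kill switch sit outside the autonomous logic, so even a compromised decision layer cannot act without a human-controlled gate.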

While this may again sound obvious, a human in the loop cannot be guaranteed. One need look only to the US ‘Department of War’ and its dispute with Anthropic over the use of its AI systems.

Negotiations broke down in part because the firm wouldn’t agree to wording that would allow the US to use Anthropic’s AI for fully autonomous weapons such as a single drone or a swarm of them. Anthropic correctly argues that these weapons shouldn’t exist without human oversight.

We can’t afford to shy away from investing in defence innovation. The way weapons have been deployed in Iran - some of which have been retrofitted and redeployed from the battlefields of Ukraine - illustrates better than any verbal warning that the battlefield is evolving at a blistering pace.

To be left behind is to lose before the war has even begun. But we also can’t afford to be careless with our security.

We need to be building the weapons of the future, but we shouldn’t be building them for the benefit of the enemy.

Andreas Hellander is the co-founder and CEO of AI infrastructure firm Scaleout and an AI and cybersecurity expert - his company has worked with NATO and defence companies like BAE Systems.

LBC Opinion provides a platform for diverse opinions on current affairs and matters of public interest.

The views expressed are those of the authors and do not necessarily reflect the official LBC position.

To contact us email opinion@lbc.co.uk


© LBC