
Does ‘federated unlearning’ in AI improve data privacy, or create a new cybersecurity risk?

13.04.2026

As the capacity of artificial intelligence (AI) increases at an exponential rate, so do concerns about the privacy of user data.

Increasingly, organizations around the world are adopting federated learning, which enables AI training without centralizing sensitive data. This allows hospitals, banks and government agencies to collaborate while keeping data local, an approach regarded as a major advance in privacy.

Federated unlearning promises that user data can be removed from a trained AI system. A hospital, for example, could ask its AI system to forget a patient’s data.

In the European Union, this is defined as the “right to be forgotten.” Similar data deletion rights exist globally, though with different legal strengths and technical interpretations.

But what if the request to forget is not itself trustworthy? Our research shows that while federated unlearning appears to be a natural extension of data rights, it also introduces hidden security risks that can undermine trust in our digital world.


New stealth vulnerabilities

In federated unlearning, participants first train local models on their own data, then send updates for those models to a central server. The server aggregates…
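The train-locally-then-aggregate loop described above can be sketched in a few lines. This is a minimal illustration of federated averaging, not any particular system's implementation; the toy one-dimensional "model" and the function names (`local_update`, `aggregate`) are assumptions made for clarity.

```python
# Minimal sketch of the federated training loop: each participant
# updates the shared model on its own private data, and the central
# server only ever sees (and averages) the resulting model updates.
# The toy model here is just a list of weights; names are illustrative.

def local_update(global_weights, local_data, lr=0.1):
    """A participant nudges the global weights toward its private data
    (a toy gradient step); raw data never leaves the participant."""
    return [w - lr * (w - x) for w, x in zip(global_weights, local_data)]

def aggregate(updates):
    """The central server averages the participants' model updates."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

# Toy run: three participants, each holding private local data.
global_weights = [0.0, 0.0]
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

for _ in range(50):  # a few federation rounds
    updates = [local_update(global_weights, data) for data in clients]
    global_weights = aggregate(updates)
```

After enough rounds the shared model converges toward the average of all participants' data, which is what makes a later "forget my contribution" request technically delicate: one participant's influence is blended into every weight.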

© The Conversation