
Time to regulate AI

09.09.2025

Artificial Intelligence is the most transformative technology of our era, shaping every area of human activity, but its immense power carries immense risks. The opacity of its algorithms heightens the danger – there are numerous instances where it has caused grievous harm to society. These harms can propagate at lightning speed, leaving little scope for correction or course reversal.

The risks also extend to systemic instability, as evidenced by AI-driven financial “flash crashes” – on 6 May 2010, the Dow Jones fell more than 1,000 points in just 10 minutes, erasing about $1 trillion in equity, though 70 per cent of the losses were recovered by the end of the day. Left unregulated, such risks can undermine trust in institutions and destabilize markets. But far more sinister is AI’s capacity to inflict physical violence. In 2020, a UN report indicated that Turkish-made Kargu-2 drones, powered by AI-based image recognition, may have attacked human combatants without direct human oversight, marking what may be the first recorded incident of autonomous lethal force.


Reports from Gaza in 2023–24 suggested that Israel used an AI system known as “Lavender” to automatically generate target lists for bombing campaigns. Such automated decision-making may have caused unintended civilian deaths by lowering the threshold for strikes, raising grave questions about the morality, legality and accountability of using violence against civilians. AI systems are prone to inheriting their creators’ biases and the hidden biases in their training data, replicating and intensifying human prejudices while feigning impartiality.


AI’s scale, invisibility, and speed make it more dangerous; once embedded in automated decision-making systems, biased outcomes can affect millions of people at once. Such abuses erode trust, deepen inequalities, and perpetuate systemic injustice. A case in point is the COMPAS algorithm used in US courts to predict the risk of reoffending. The system disproportionately labelled Black defendants “high risk” compared with white defendants, even when the latter had worse criminal histories, affecting bail and sentencing decisions. Amazon’s AI hiring tool was scrapped after it was found to discriminate against female applicants.

Apple’s credit card algorithm came under fire in 2019 when women, including high-profile applicants, received significantly lower credit limits than men with identical financial…

© The Statesman