Opinion | India’s AI Safety Norms: Platform Approach Will Make The Nation A Global Leader
Artificial intelligence (AI) is transforming societies, economies, and governance structures worldwide. However, its rapid evolution has raised critical concerns about safety, ethics, and accountability. Governments across the world have begun implementing regulatory frameworks to ensure AI development aligns with societal interests. India, too, has initiated efforts to establish AI safety norms, but a fundamental question remains: whose safety are these regulations prioritising?
Are they aimed at protecting citizens, the government, data privacy, or corporate interests? A comparative analysis against global approaches provides insight into India’s stance and its implications.
On 30 January, India’s Union Minister for Electronics & Information Technology, Railways, and Information & Broadcasting, Ashwini Vaishnaw, announced the establishment of an AI Safety Institute. According to the government’s Press Information Bureau (PIB) release, the aim is to address artificial intelligence’s complex challenges through a strategic, multi-institutional, techno-legal approach. Eight projects will run in parallel to ensure data privacy, reduce biases, and make AI systems transparent and accountable.
The first of these projects, ‘Machine Unlearning’, is being spearheaded by IIT Jodhpur. Since data privacy is paramount, this initiative aims to allow AI systems to “forget” specific data, acting as a selective eraser for sensitive or outdated information.
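To make the idea concrete, here is a minimal sketch of the simplest form of machine unlearning: retraining a model from scratch on whatever data remains after the targeted records are deleted, so the new model provably never saw them. The toy dataset, the scikit-learn classifier, and the deletion indices are illustrative assumptions of this article, not the IIT Jodhpur project’s actual method, which would aim for far cheaper, approximate techniques.

```python
# Illustrative sketch only: "exact" unlearning by retraining on the retained data.
# Dataset, model choice and deletion indices are hypothetical, for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # toy feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels

model = LogisticRegression().fit(X, y)    # original model trained on all records

forget_idx = np.arange(50)                # records a user asked to be "forgotten"
keep = np.setdiff1d(np.arange(len(X)), forget_idx)

# Baseline unlearning guarantee: retrain from scratch on the retained data only,
# so the deleted records can no longer influence the model's parameters.
unlearned_model = LogisticRegression().fit(X[keep], y[keep])
```

Full retraining is expensive at scale, which is precisely why selective, targeted “forgetting” is an active research problem rather than a solved one.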
IIT Roorkee’s ‘Synthetic Data Generation’ project creates artificial data that mimics real-world information. This allows AI systems to be trained without compromising individual privacy. It’s a clever workaround that could set a new standard in data protection.
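A brief sketch of the general technique, under assumptions of my own (a toy two-column dataset and a Gaussian-mixture generator, not the IIT Roorkee project’s actual pipeline): a generative model is fitted to real records, and downstream systems are then trained on samples drawn from that model instead of the originals.

```python
# Illustrative sketch only: synthetic data generation via a simple generative model.
# The data and the Gaussian-mixture choice are assumptions for demonstration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical "real" records, e.g. (age, income) pairs
real_data = rng.normal(loc=[30, 50000], scale=[8, 12000], size=(500, 2))

# Fit a generative model to the statistical structure of the real data
gmm = GaussianMixture(n_components=3, random_state=1).fit(real_data)

# Sample artificial records that mimic that structure
synthetic_data, _ = gmm.sample(500)

# Downstream models can now train on synthetic_data instead of real_data,
# so no individual's actual record is exposed.
```

In practice, the privacy benefit depends on how faithfully the generator avoids memorising individual records, which is why techniques such as differential privacy are often layered on top.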
The ‘AI Bias Mitigation Strategy’ being developed at NIT Raipur tackles one of the most pressing concerns in AI ethics. Identifying and reducing biases does not just make AI fairer; it........
