Global Trends in AI Governance
Many countries are moving from focusing on ethics and human rights to promoting innovation and economic growth through AI.
Only a few jurisdictions, such as China, the EU, Canada, Korea, Peru, and the U.S., have passed AI laws; others, including the U.K., Japan, and Brazil, have draft Bills pending.
Around 85 countries, along with the African Union, have released national AI strategy documents to guide ethical and inclusive AI development.
India’s Unique Approach
India has no formal AI law or officially approved national strategy; the 2018 NITI Aayog strategy document remains a set of recommendations without funding or official backing.
Instead, India is pursuing the IndiaAI mission, which promotes innovation and responsible AI use across seven focus areas.
An advisory group is drafting governance recommendations, but it’s unclear if these will become official policy.
This flexible approach lets India adapt to fast-changing AI developments, but it lacks a long-term vision, milestones, and accountability mechanisms.
Key Risks and Gaps
AI adoption is growing rapidly in India, but regulations remain voluntary and unclear, especially in sensitive sectors like healthcare, banking, and education.
There is little public awareness or discussion around AI model fairness, safety, and impact on society, jobs, and privacy.
AI-generated content has already caused social harm in India, highlighting the urgent need for oversight and safeguards.
Learning from Global Models
India can learn from the EU's comprehensive, rights-based approach exemplified by the General Data Protection Regulation (GDPR) and from China's targeted, sector-specific AI regulations.
The Digital Personal Data Protection (DPDP) Act, 2023 gives India a base for a centralised governance model for AI.
A short-term goal could be a formal AI policy outlining India’s vision, responsible authority, priority sectors, and ethical guidelines.
Public engagement and open discussions on AI use and impact are essential to build trust and ensure responsible adoption.