Expansion of AI Surveillance in India
The Indian government has implemented AI-powered facial recognition and surveillance systems, with plans for 50 AI satellites to strengthen policing and surveillance.
These systems risk creating dragnet surveillance, where personal data is indiscriminately collected from all citizens, not just suspects.
India’s ambitious plans raise concerns about balancing security needs with privacy rights.
Legal and Constitutional Concerns
Article 21 of the Constitution, which guarantees the right to life and personal liberty, has been judicially interpreted to include the right to privacy, and AI surveillance could infringe upon this fundamental right.
The K.S. Puttaswamy judgment (2017) stressed that privacy, including informational privacy, must be protected through strong legal safeguards.
The Digital Personal Data Protection Act (DPDPA) (2023) includes problematic provisions:
Section 7(g) allows the government to process personal data without consent during medical emergencies such as epidemics, a ground broad enough to enable unchecked surveillance.
Section 7(i) exempts the government from obtaining consent to process employment-related data, a significant carve-out given that the government is the largest employer in India.
Section 15(c) obliges citizens to provide accurate data and not suppress any information, which could expose them to punitive action for minor errors or outdated records.
Regulatory Gaps and Global Comparisons
India lacks comprehensive AI regulation, with no public guidelines on how surveillance data is collected, processed, or stored.
The EU’s Artificial Intelligence Act takes a risk-based approach, restricting high-risk surveillance activities, like real-time biometric identification, unless justified by serious crime or imminent threat.
In contrast, India’s AI technologies, including facial recognition and CCTV surveillance, are deployed with minimal oversight, putting citizens' data at significant risk of misuse.
Need for a Comprehensive Regulatory Framework
A transparent regulatory framework is essential for AI surveillance in India to ensure data collection practices are disclosed, with explicit consent mechanisms.
Exemptions should be narrowly drawn and subject to independent judicial oversight to ensure surveillance does not violate constitutional rights.
Risk-based categorization, similar to the EU's approach, would help manage AI activities that pose significant privacy and civil liberty risks.
Provisions such as Sections 7(g), 7(i), and 15(c) of the DPDPA need to be reconsidered to prevent the misuse of personal data and ensure stronger privacy protections for citizens.
Transparent data practices and third-party audits are critical to safeguard against abuses and uphold the constitutional right to privacy.