The Great Indian ‘AaI’ Tamasha: Navigating the Gold Rush Without Losing Our Pyjamas

In 1870, when the first direct telegraph link between India and London opened, the subcontinent's communication landscape transformed almost overnight. Messages that once took months by sea mail could suddenly traverse continents in hours. Local merchants, initially skeptical, soon found themselves dependent on these lightning-fast transmissions for trading commodities. The colonial government, caught off guard by the technology's rapid adoption, responded with the Indian Telegraph Act of 1885: a framework that would, remarkably, govern Indian telecommunications for the next 138 years despite being conceived in an era of horse-drawn carriages and steam engines.
History, they say, doesn't repeat itself, but it often rhymes. Today, we find ourselves in a strikingly similar position. In 2025, India stands at the forefront of global AI adoption: by one industry survey, 96% of professionals already incorporate AI tools into their daily workflows. The nation's AI market is projected to reach US$28.8 billion by year's end, with startups attracting unprecedented investment and the government pledging infrastructure commitments like "10,000 GPUs by mid-2025."
Surveyed small and medium businesses report 78% revenue growth after AI integration, and India has topped global charts for AI skill penetration. Yet beneath this technological renaissance lies a troubling regulatory vacuum, one that threatens to undermine the very benefits this digital revolution promises.
Picture this: a bustling metro at peak hour, where everyone's scrambling to board the AI express, but no one's quite sure which line it is or where it's headed. While we're jugaad-ing AI into every corner of life—from restaurant menus to agricultural forecasting—our regulatory framework remains as sturdy as a Bollywood villain's disguise. It's a Dangal-level victory with a Kalki-level trophy.
India's AI regulation today is shockingly inadequate for the technologies it aims to govern. The Information Technology Act, 2000, enacted when dial-up was cutting-edge technology, still serves as the primary legal instrument for governing advanced machine learning systems. The 2021 "Principles for Responsible AI" and subsequent advisories on issues like deepfakes represent well-intentioned but ultimately toothless measures. The government's latest "AI Governance Guidelines" report emphasizes harm minimization but relies primarily on "voluntary commitments" from industry players—an approach that history suggests is insufficient for meaningful protection.
The fundamental issue isn't a dearth of ideas but a lack of enforceability. Consider the March 2024 advisory requiring platforms to label AI-generated content: without specified penalties or oversight mechanisms, compliance remains effectively optional. Meanwhile, the existing legal framework strains under conceptual tensions, such as the increasingly blurred line between "intermediaries" and "publishers" when applied to generative AI systems.
The regulatory gaps are substantial and concerning. By one estimate, over 78% of Indian AI systems exhibit algorithmic bias, producing real-world discrimination in employment, financial services, and healthcare. NITI Aayog's principles acknowledge fairness as a goal, but without required audits or compliance metrics, these remain aspirational rather than operational.
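What would an operational fairness requirement actually measure? A minimal sketch, assuming a hypothetical decision log with demographic group labels, is the disparate-impact ratio: the lowest group's approval rate divided by the highest, with anything under 0.8 treated as a red flag (a convention borrowed from US employment law, offered here only as an illustration of a checkable threshold).

```python
from collections import defaultdict

def disparate_impact(decisions):
    """Per-group approval rates and the disparate-impact ratio.

    `decisions` is a list of (group, approved) pairs, e.g. rows
    from a hiring or loan model's decision log.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical decision log: (demographic group, was the applicant approved?)
log = ([("A", True)] * 80 + [("A", False)] * 20
       + [("B", True)] * 55 + [("B", False)] * 45)

rates, ratio = disparate_impact(log)
print(rates)                     # {'A': 0.8, 'B': 0.55}
print(f"DI ratio: {ratio:.2f}")  # 0.69 -- below the 0.8 line, so the gap must be explained
```

The point is not this particular metric; it is that a binding audit regime needs some number a regulator can compute and a statute can name.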
Privacy protections fare no better. The Digital Personal Data Protection Act, 2023 represents progress but fails to address AI-specific data collection challenges. When 93% of surveyed businesses cite AI-driven efficiency improvements, who is verifying that sensitive information, including Aadhaar details, receives appropriate protection? The deepfake crisis further illustrates the inadequacy: recent incidents involving prominent public figures have demonstrated the potential for widespread misinformation, yet the official response has been limited to labeling recommendations without enforcement mechanisms.
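Regulation aside, deployers can already practise basic data hygiene. A minimal sketch, assuming text is about to be logged or sent to a third-party model, that masks Aadhaar-shaped identifiers before they leave the premises (the regex and redaction policy are illustrative assumptions, not a compliance standard):

```python
import re

# Aadhaar numbers are 12 digits, usually written in 4-4-4 groups.
# This pattern is an illustrative approximation: a production filter
# would also validate the Verhoeff check digit and handle OCR noise.
AADHAAR_RE = re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b")

def redact_aadhaar(text: str) -> str:
    """Mask anything Aadhaar-shaped before the text is logged
    or forwarded to an external AI service."""
    return AADHAAR_RE.sub("XXXX-XXXX-XXXX", text)

print(redact_aadhaar("KYC note: Aadhaar 2345 6789 0123, income 6 LPA"))
# KYC note: Aadhaar XXXX-XXXX-XXXX, income 6 LPA
```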
Perhaps most troubling is the accountability vacuum. When automated systems make consequential decisions—rejecting job applicants, denying loans, or influencing healthcare—liability remains dangerously undefined. Does responsibility rest with the developer, the deployer, or some distributed combination of the two? Current legislation offers little clarity, leaving affected individuals with limited recourse.
To address these challenges, India requires a comprehensive regulatory approach that balances innovation with meaningful oversight. First, we need dedicated legislation rather than piecemeal advisories. A standalone AI Act, drawing lessons from the EU's approach while adapting to India's unique context, would provide clarity and coherence. This legislation should establish tiered risk categories with corresponding requirements, recognizing that applications in healthcare or public services demand stricter standards than entertainment uses.
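To make "tiered risk" concrete, here is a sketch of how a statute might map use cases to obligations. The tiers and duties below are assumptions modeled loosely on the EU AI Act's taxonomy, not draft Indian law:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social scoring)"
    HIGH = "pre-market impact assessment, audits, human oversight"
    LIMITED = "transparency duties such as AI-content labelling"
    MINIMAL = "voluntary codes of practice"

# Illustrative mapping only -- a real Act would define the categories
# and their boundary tests in the statute itself.
USE_CASE_TIER = {
    "medical_triage": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "entertainment_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    tier = USE_CASE_TIER.get(use_case, RiskTier.HIGH)  # unknown use -> strictest review
    return f"{use_case}: {tier.name} -> {tier.value}"

print(obligations("medical_triage"))
# medical_triage: HIGH -> pre-market impact assessment, audits, human oversight
```

Defaulting unclassified uses to the strictest tier is itself a policy choice; the sketch simply shows that the choice can be made explicit and testable.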
Second, mandatory algorithmic impact assessments must replace voluntary commitments. Companies deploying high-risk AI systems should be required to evaluate potential biases, privacy implications, and security vulnerabilities before market introduction. These assessments should be verified by the proposed AI Safety Institute, which must be properly resourced and empowered to act as a genuine watchdog rather than a ceremonial body.
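In code terms, an impact assessment is just a structured filing that a regulator can check mechanically before sign-off. A sketch with hypothetical fields (the real schema would come from the proposed AI Safety Institute, which does not yet prescribe one; the disparate-impact figure reuses the audit metric sketched earlier):

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ImpactAssessment:
    """A hypothetical pre-deployment filing for a high-risk AI system,
    structured so completeness can be verified mechanically.
    All field names are assumptions, not a prescribed format."""
    system_name: str
    intended_use: str
    bias_metrics: dict          # e.g. {"disparate_impact_ratio": 0.69}
    privacy_review: str         # summary of DPDP-style data handling
    security_findings: list = field(default_factory=list)

    def is_complete(self) -> bool:
        return all([self.system_name, self.intended_use,
                    self.bias_metrics, self.privacy_review])

report = ImpactAssessment(
    system_name="loan-screener-v2",
    intended_use="consumer credit pre-screening",
    bias_metrics={"disparate_impact_ratio": 0.69},
    privacy_review="no Aadhaar data retained beyond the KYC window",
)
print(json.dumps(asdict(report), indent=2))
print("ready to file:", report.is_complete())
```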
For deepfakes and synthetic media, we need a three-pronged strategy: mandatory watermarking technology, platform responsibility for detection, and meaningful penalties for malicious distribution. Blockchain-based metadata systems offer promising avenues for origin tracing, but they require regulatory backing to achieve widespread adoption.
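A sketch of what such origin tracing could look like, assuming a C2PA-style signed manifest. The HMAC below stands in for a proper asymmetric publisher signature, and anchoring the digest on a blockchain is one optional final step rather than a requirement of the scheme:

```python
import hashlib, hmac, json, time

# Demo signing key -- a real deployment would use a publisher-bound
# asymmetric key, with the digest optionally anchored on a public
# ledger for tamper-evident timestamping.
PUBLISHER_KEY = b"demo-key-not-for-production"

def provenance_manifest(media_bytes: bytes, creator: str) -> dict:
    """Build a signed manifest recording what the media is and who made it."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "created_at": int(time.time()),
        "ai_generated": True,   # the label the 2024 advisory asks platforms to surface
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def matches(media_bytes: bytes, manifest: dict) -> bool:
    # Any edit to the media breaks the recorded hash.
    return manifest["sha256"] == hashlib.sha256(media_bytes).hexdigest()

m = provenance_manifest(b"<video bytes>", creator="studio@example.in")
print(matches(b"<video bytes>", m))     # True
print(matches(b"<tampered bytes>", m))  # False
```

None of this requires exotic infrastructure; what it requires is the legal mandate the current advisory lacks.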
The liability framework demands immediate clarification. Following the EU model, legislation should establish that both developers and deployers share responsibility, with proportionality based on knowledge and control. The IT Act must be updated to define AI-specific roles and obligations, providing courts with clear guidance for dispute resolution.
Finally, India must dramatically strengthen institutional capacity. MeitY officials require specialized training in AI ethics and governance, research funding should prioritize safety and auditability techniques, and regulatory expertise should be cultivated through academic partnerships and international collaboration.
India's AI journey presents tremendous opportunity. With appropriate guardrails, we can harness technological advancement while preventing harmful outcomes. The current regulatory landscape recalls a classic scene from 3 Idiots—running fast without direction leads to cliff edges. By establishing clear boundaries and expectations, we don't inhibit innovation; we channel it productively. Our digital future depends not just on accelerating AI adoption but on ensuring these powerful tools serve the broader public interest. As we stand at this critical juncture, the choices we make today will determine whether India's AI revolution becomes a model of balanced progress or a cautionary tale of technology outpacing governance.