India’s government has introduced a series of amendments and proposals to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, significantly strengthening its ability to regulate and swiftly remove online content. These changes, rolled out primarily in 2025–2026, target the rapid spread of misinformation, deepfakes, AI-generated synthetic media, and content deemed a threat to national security, public order, or decency. While officials describe them as essential for curbing harmful material in a country with over a billion internet users, critics argue they risk over-censorship, reduced due process, and a chilling effect on free speech.
Faster Takedown Timelines for Unlawful Content
One of the most immediate changes came through amendments notified in February 2026, effective from February 20. Social media platforms and intermediaries, including Meta, YouTube, and X, must now remove or disable access to “unlawful content” within three hours of receiving a government or court order—down sharply from the previous 36-hour window. For non-consensual intimate imagery, morphed photos, or certain deepfake content, the deadline is even tighter at two hours (previously 24 hours).
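To make the arithmetic of the new windows concrete, here is a minimal sketch in Python. The category names and the helper itself are hypothetical illustrations, not anything prescribed by the rules; it simply maps a content category to its removal window and computes the deadline from the time an order is received.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical category-to-window mapping reflecting the amended timelines:
# 3 hours for general "unlawful content" orders (previously 36 hours),
# 2 hours for intimate imagery / certain deepfake content (previously 24 hours).
REMOVAL_WINDOWS = {
    "unlawful_content": timedelta(hours=3),
    "intimate_imagery_or_deepfake": timedelta(hours=2),
}

def removal_deadline(order_received_at: datetime, category: str) -> datetime:
    """Return the latest time by which the content must be removed or disabled."""
    return order_received_at + REMOVAL_WINDOWS[category]

# Example: an order received at 09:00 IST for a morphed-image complaint
IST = timezone(timedelta(hours=5, minutes=30))
received = datetime(2026, 2, 21, 9, 0, tzinfo=IST)
print(removal_deadline(received, "intimate_imagery_or_deepfake"))  # 11:00 IST the same day
```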
The rules also address synthetically generated information (SGI)—audio, video, or images created or altered using AI or computer tools to appear authentic. Platforms are required to:
- Deploy automated tools where feasible to proactively detect and block illegal synthetic content.
- Clearly label AI-generated material and add traceable metadata.
- Prevent users from removing or tampering with these labels.
These measures aim to prevent the viral spread of deepfakes used for harassment, impersonation, scams, or election-related misinformation. Compliance with these obligations does not jeopardize a platform’s “safe harbour” protection under Section 79 of the IT Act, which shields intermediaries from liability for user-generated content provided they exercise due diligence.
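The rules do not prescribe a specific labelling or metadata format. As a purely illustrative sketch, the Python snippet below embeds a synthetic-content label and a traceable content hash into a PNG’s text chunks using Pillow; the field names are hypothetical, and a production system would more likely rely on a signed provenance standard such as C2PA so that labels cannot be silently stripped or altered.

```python
import hashlib
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_png(src_path: str, dst_path: str, generator: str) -> None:
    """Embed an 'AI-generated' label and a content hash into a PNG's metadata.
    Field names here are illustrative only, not mandated by the rules."""
    img = Image.open(src_path)
    digest = hashlib.sha256(img.tobytes()).hexdigest()

    meta = PngInfo()
    meta.add_text("SyntheticContent", "true")         # hypothetical label key
    meta.add_text("Generator", generator)             # tool that produced the image
    meta.add_text("ContentHash", f"sha256:{digest}")  # ties the label to the pixel data
    img.save(dst_path, format="PNG", pnginfo=meta)

def read_labels(path: str) -> dict:
    """Read back the embedded text chunks to verify the labels are present."""
    return dict(Image.open(path).text)
```

Note that plain text chunks like these can be removed by re-encoding the file, which is precisely why the rules also oblige platforms to prevent users from tampering with labels; cryptographically signed provenance manifests address that gap more robustly.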
Proposed Expansion of Blocking Powers
On March 30, 2026, the Ministry of Electronics and Information Technology (MeitY) released the draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Second Amendment Rules, 2026 for public comments (deadline: April 14, 2026). The proposals seek to close what the government calls regulatory gaps in user-generated news and current affairs content.
Key elements include:
- Extending oversight mechanisms—specifically Rules 14 (Inter-Departmental Committee), 15 (blocking procedures), and 16 (emergency blocking)—to intermediaries and to individual users or creators who are “not publishers” but host, share, or upload news and current affairs content. This could allow direct takedown or blocking notices to influencers, YouTubers, or ordinary social media users posting news clips or commentary.
- Making compliance with MeitY’s clarifications, advisories, directions, standard operating procedures, or guidelines a formal due diligence requirement. Failure to follow them could risk loss of safe harbour protection, exposing platforms to legal liability for user content.
- Strengthening enforcement by giving statutory backing to government directives, reducing ambiguity in implementation.
These changes would bring a wider range of user-generated content under the three-tier grievance and oversight system, which previously applied mainly to established digital news publishers.
Decentralizing Blocking Authority
Parallel discussions are underway to decentralize powers under Section 69A of the IT Act, which currently allows the government to block content threatening sovereignty, security, public order, or related grounds. Proposals would empower multiple ministries—including Home Affairs, Defence, External Affairs, and Information & Broadcasting—to issue blocking orders directly, rather than routing everything through a single nodal authority in MeitY. This aims to enable faster responses to emerging threats like deepfakes or coordinated misinformation campaigns.
Complementing this is the Ministry of Home Affairs’ Sahyog portal, which automates takedown requests from central, state, and even district-level agencies under Section 79(3)(b). Platforms that fail to act expeditiously risk losing safe harbour. Hundreds of such requests have already been processed through the portal in recent years.
Government’s Rationale
Officials maintain that these layered reforms are necessary in the age of AI-driven content. Viral deepfakes and synthetic media can cause real harm—ranging from personal defamation and gender-based violence to threats against public order and national security—within minutes. Shorter timelines and broader enforcement tools allow quicker intervention without waiting for full judicial processes in every case. The focus remains on protecting users while ensuring platforms fulfill their due diligence obligations. India’s massive digital ecosystem demands agile regulation, and safe harbour continues to be available for compliant intermediaries.
Concerns Over Free Speech and Due Process
Digital rights groups, including the Internet Freedom Foundation, have raised strong objections. They describe the draft amendments as a “massive expansion of executive control” that could enable unconstitutional censorship. Key worries include:
- Over-removal of legitimate content: Three-hour deadlines leave little room for platforms to assess context, potentially leading to hasty takedowns of satire, criticism, or factual reporting.
- Chilling effect on users and creators: Bringing ordinary individuals sharing news under blocking rules blurs the line between publishers and users, possibly discouraging open discourse on current affairs.
- Reduced oversight: Decentralized powers and binding advisories may multiply orders with fewer checks, while emergency blocking provisions already allow interim action with limited immediate safeguards.
- Transparency and accountability: Critics question whether sufficient judicial review exists, especially when platforms may prioritize compliance to protect their operations in India’s large market.
Platforms face significant technical and operational burdens, and some have highlighted practical challenges in meeting ultra-short timelines consistently.
A Continuing Trend
These developments form part of an ongoing evolution of India’s digital regulatory framework since the 2021 Rules. Earlier amendments in 2025 refined takedown procedures, requiring that orders be issued by senior officials and be accompanied by reasons. The 2026 changes build on that foundation, responding to the explosion of AI tools and the speed of online information flows.
The draft Second Amendment remains open for stakeholder feedback, and proposals on Section 69A decentralization are still under inter-ministerial discussion. Outcomes could shift based on inputs from platforms, civil society, and the public.
Ultimately, India’s new digital rules reflect a broader global push by governments to assert greater authority over online platforms amid concerns over disinformation and harmful content. The central tension—balancing genuine harms against the risk of excessive control over expression—will likely continue to shape debates and potential legal challenges in the coming months. For the official texts and latest updates, refer to MeitY notifications and analyses from independent sources.