The Takedown Machine: How India’s Home Ministry Is Blocking 290 Online Content Pieces Every Day
Within a Year of Being Empowered to Directly Issue Takedown Notices, the Government Has Blocked Over 1.1 Lakh Pieces of Content—Raising Questions About Transparency, Accountability, and the Balance Between Regulation and Free Speech
Within a year of being empowered to directly issue takedown notices for online content, the Union Home Ministry issued an average of 290 such notices every day, according to the Ministry’s data. On March 13, 2024, the Indian Cyber Crime Coordination Centre (I4C) was designated as the Ministry’s agency to perform the functions under Section 79(3)(b) of the Information Technology Act, 2000.
According to the Ministry’s annual report for 2024-25, published on Wednesday, till March 31, 2025, “1,11,185 suspicious online content have been blocked under Section 79(3)(b) of IT Act.” This is not a minor administrative exercise. It is a significant expansion of the state’s power to control online speech, and it raises fundamental questions about the balance between the need to curb harmful content and the right to free expression.
The Legal Framework
Section 79(1) of the IT Act shields online platforms and other intermediaries from legal liability for content posted by users, but Section 79(3)(b) provides that this shield will not apply if they fail to take down content flagged by government authorities. In other words, platforms that do not comply with government takedown notices lose their “safe harbour” protection and become liable for the content posted by their users.
The provision is designed to give the government a mechanism to remove content that is illegal or harmful. But its application has been controversial. Critics argue that it gives the government sweeping powers to silence dissent, that it lacks adequate safeguards against abuse, and that it places an impossible burden on platforms to comply with notices that may be legally questionable.
The Scale of Takedowns
The numbers are striking. Over 1.1 lakh pieces of content were blocked in a single year, an average of about 290 per day. These are not random posts; they are pieces of content that the government has determined, through a process that is not fully transparent, to be “suspicious” and in need of removal.
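The daily figure can be sanity-checked from the report’s own dates. A minimal sketch, assuming the counting window runs from the I4C’s designation on March 13, 2024 to the March 31, 2025 cut-off quoted above (that span is an inference, not stated in the report):

```python
from datetime import date

# Date range implied by the article: I4C designated March 13, 2024;
# blocking figure reported "till March 31, 2025".
start = date(2024, 3, 13)
end = date(2025, 3, 31)
days = (end - start).days

blocked = 111_185  # "1,11,185 suspicious online content" (1.11 lakh)
per_day = blocked / days

print(days)            # 383
print(round(per_day))  # 290
```

The 290-per-day average in the headline follows directly from dividing the blocked-content total by the 383-day window.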
What kind of content is being blocked? The Ministry’s report does not specify. The data is aggregated, with no breakdown by type of content, by platform, or by the grounds for blocking. This lack of transparency makes it difficult to assess whether the power is being used appropriately.
Previous reporting has shed some light. The Hindu reported on March 29, 2025, that nearly a third of the 66 takedown notices sent to X (formerly Twitter) by the I4C sought removal of content about Union Ministers and Central government agencies. This suggests that a significant portion of the takedown notices target political speech.
The Three-Hour Deadline
Social media platforms and other intermediaries are required to remove unlawful content within three hours of receiving an order from a court of competent jurisdiction or a reasoned intimation from the Appropriate Government or its agency, the Ministry of Electronics and Information Technology recently informed Parliament.
Three hours is an extremely short window. For a platform with millions of users and billions of posts, identifying and removing a specific piece of content within three hours is a significant operational challenge. The requirement places platforms in a difficult position: comply quickly or risk losing their legal protection.
The short deadline also makes meaningful review of the government’s order difficult. Platforms have little time to assess whether the content is actually unlawful, whether the order is legally sound, or whether the government has followed the proper procedures. The rational response is to take down the content first and ask questions later.
The Sahyog Portal and the Karnataka High Court
Social media platform X challenged this provision, along with the Sahyog portal that enables police across the country to send such notices through a common platform, in the Karnataka High Court; the court dismissed the petition in 2025.
The Sahyog portal is a government platform that automates the process of sending takedown notices to intermediaries. It allows law enforcement agencies across the country to issue notices through a common interface, streamlining the process and making it easier for the government to demand removals.
X’s challenge argued that the portal bypassed the procedural safeguards that government authorities must mandatorily follow before blocking public access to online content under Section 69A of the IT Act. The company contended that the government was using the “safe harbour” regime to nudge social media intermediaries into blocking content and restricting free speech and expression.
The Karnataka High Court’s rejection of X’s petition means that the Sahyog portal remains operational, and the government’s streamlined takedown process continues.
The Rise in Cybersecurity Incidents
Separately, on March 24, the Ministry informed the Lok Sabha that the number of cybersecurity incidents reported in India has risen sharply over the past five years, according to data tracked by the Indian Computer Emergency Response Team (CERT-In), the national agency responsible for responding to cyber threats.
CERT-In, which functions under Section 70B of the Information Technology Act, 2000, recorded 29.44 lakh cybersecurity incidents in 2025, the highest figure in the last five years, up from 20.41 lakh in 2024. According to CERT-In, the highest number of incidents was reported from the National Capital Territory of Delhi.
The rise in cybersecurity incidents is a genuine concern. Cyber threats—from phishing and ransomware to data breaches and cyber espionage—are increasing. The government has a legitimate interest in protecting citizens from these threats.
But the connection between cybersecurity and content takedowns is not always clear. Much of the content being blocked under Section 79(3)(b) is not directly related to cybersecurity. It includes political speech, criticism of the government, and other content that may be uncomfortable for those in power.
The Balance Between Regulation and Free Speech
The government’s power to block online content is not unlimited. The Supreme Court’s 2015 judgment in Shreya Singhal v. Union of India established important safeguards. The Court held that blocking orders must be for reasons prescribed in Article 19(2) of the Constitution, must be given in writing, and must be subject to review by a committee. It also held that both the intermediary and the originator of the content must be heard before a blocking decision is taken.
But the Shreya Singhal framework applies to blocking under Section 69A, not to takedowns under Section 79(3)(b). The legal framework for takedowns is less clear, and the safeguards are weaker.
The result is a system where the government can demand the removal of content with little oversight, where platforms have little time to assess the legality of the demands, and where the public has no way of knowing what content has been removed or why.
The Transparency Deficit
One of the most troubling aspects of the current system is the lack of transparency. The government does not publish the content it blocks. It does not provide regular reports on the types of content removed or the grounds for removal. It does not give the public a way to know whether content they were trying to access has been taken down by government order.
This transparency deficit makes it difficult to hold the government accountable. It also makes it difficult for platforms to know whether they are complying appropriately. And it makes it impossible for citizens to know when their right to free speech has been curtailed.
The Need for Reform
The scale of takedowns—over 1.1 lakh pieces of content in a year—suggests that the current system is being used extensively. Whether it is being used appropriately is harder to determine.
Several reforms could help. First, the government should provide regular, detailed reports on the content it blocks, including the types of content, the grounds for blocking, and the platforms affected. Second, the three-hour deadline should be extended to allow for meaningful review. Third, there should be a clear mechanism for content creators to challenge takedowns and to seek restoration of content that was wrongly removed. Fourth, the legal framework for takedowns should be aligned with the safeguards established by the Supreme Court for blocking orders.
Conclusion: A Powerful Tool in Need of Oversight
The government’s power to issue takedown notices is a powerful tool. It can be used to remove genuinely harmful content—child sexual abuse material, content that incites violence, material that threatens national security. But it can also be used to silence dissent, to suppress criticism, to control the flow of information.
The numbers—290 notices a day, 1.1 lakh pieces of content blocked—suggest that the tool is being used extensively. Whether it is being used wisely is a question that requires more transparency, more accountability, and more oversight.
In a democracy, the government’s power to control speech must be exercised with restraint, and it must be subject to scrutiny. The current system, with its short deadlines, lack of transparency, and weak safeguards, falls short of that standard.
Q&A: Unpacking the Government’s Content Takedown Powers
Q1: What is the legal basis for the government’s content takedown powers?
A: Under Section 79(3)(b) of the Information Technology Act, 2000, social media platforms lose their “safe harbour” protection (immunity from liability for user content) if they fail to take down content flagged by government authorities. On March 13, 2024, the Indian Cyber Crime Coordination Centre (I4C) was designated as the Home Ministry’s agency to issue such takedown notices. Platforms must remove flagged content within three hours of receiving an order.
Q2: How many content takedown notices has the government issued?
A: According to the Home Ministry’s annual report for 2024-25, 1,11,185 pieces of “suspicious online content” were blocked under Section 79(3)(b) of the IT Act from March 13, 2024, to March 31, 2025, an average of approximately 290 pieces of content per day. The report does not specify the types of content blocked or the grounds for blocking.
Q3: What concerns have been raised about the takedown process?
A: Critics raise several concerns: the three-hour deadline is extremely short, making meaningful review of the government’s order difficult; the process lacks transparency—the government does not publish what content is blocked or why; platforms may err on the side of removing content rather than risk losing legal protection; and the safeguards established by the Supreme Court in Shreya Singhal for blocking under Section 69A do not clearly apply to takedowns under Section 79(3)(b).
Q4: What was the Sahyog portal, and what did the Karnataka High Court rule?
A: The Sahyog portal is a government platform that automates sending takedown notices to intermediaries, enabling police across the country to issue notices through a common interface. Social media platform X challenged the provision and the portal in the Karnataka High Court, arguing it bypassed procedural safeguards. The court rejected the petition in 2025, allowing the portal to remain operational.
Q5: What is the trend in cybersecurity incidents in India?
A: According to CERT-In data, cybersecurity incidents have risen sharply: 14,02,809 in 2021; 13,91,457 in 2022; 15,92,917 in 2023; 20,41,360 in 2024; and 29,44,248 in 2025—the highest in five years. The National Capital Territory of Delhi reported the highest number of incidents. CERT-In is the national agency responsible for responding to cyber threats under Section 70B of the IT Act. However, the relationship between these cybersecurity incidents and content takedowns is not always clear.
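The year-on-year trend in those CERT-In figures can be computed directly. A minimal sketch using only the counts quoted above, with the lakh figures expanded to full numbers:

```python
# CERT-In incident counts quoted in the article (lakh figures expanded).
incidents = {
    2021: 1_402_809,
    2022: 1_391_457,
    2023: 1_592_917,
    2024: 2_041_360,
    2025: 2_944_248,
}

# Year-on-year percentage growth for each consecutive pair of years.
years = sorted(incidents)
for prev, curr in zip(years, years[1:]):
    growth = (incidents[curr] - incidents[prev]) / incidents[prev] * 100
    print(f"{prev} -> {curr}: {growth:+.1f}%")
```

The series dips slightly in 2022 and then accelerates, with the 2024-to-2025 jump of roughly 44% being the steepest rise in the five-year window.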
