The Third Way: India’s Ambitious Vision for AI Governance at a Crossroads

As the AI Impact Summit Convenes in New Delhi, the World Watches Whether a Middle Path Can Succeed

The AI Impact Summit now underway in New Delhi brings together world leaders, technology experts, and policymakers at a moment of profound contradiction. Artificial intelligence is advancing at a pace that defies comprehension, reshaping industries, societies, and the very fabric of human interaction. Yet the governance frameworks meant to guide this transformation remain fragmented, contested, and uncertain.

The European Union has pursued a compliance-heavy regime, codifying risks and requiring extensive documentation from AI developers. The United States has largely adopted a hands-off approach, trusting markets and voluntary commitments to steer innovation in beneficial directions. China has built a centralised state model, integrating AI development into broader strategies of technological sovereignty and social control.

Each of these approaches reflects the economic contexts, policy traditions, and political systems from which they emerged. None transfers neatly to the global majority—the countries of the Global South that are grappling with how to harness AI’s potential while protecting their populations from its harms.

India, as the host of the Summit, has positioned itself as offering something different. It calls this a “Third Way” for AI governance—a path that recognises opportunities for countries to enter AI markets while acknowledging that existing governance strategies were designed for different circumstances. The question is whether this Third Way can succeed where others have fallen short, and whether India can build the coordination and capacity needed to make it real.

The India AI Mohatty: A Framework Takes Shape

In November 2025, the Indian government released its AI Mohatty, a set of guidelines that represent a distinctive approach to AI governance. One of the framework’s architects, reflecting in a recent Techlawtopia essay, described it as not merely a regulatory framework but a governance framework encompassing adoption, diffusion, diplomacy, and capacity-building.

This distinction matters. Regulation is about rules—what is permitted, what is prohibited, what must be disclosed. Governance is broader. It encompasses how technology is developed, how it spreads through society, how countries negotiate its international dimensions, and how institutions build the capacity to manage its effects. The India AI Mohatty attempts to address all of these dimensions, not just the narrow question of compliance.

The framework prioritises scaling AI for inclusive development. It identifies key sectors where AI can make a difference: healthcare, where diagnostic tools can reach remote populations; agriculture, where precision farming can increase yields; education, where personalised learning can adapt to students’ needs; and public administration, where efficiency gains can improve service delivery. In each of these domains, the goal is not innovation for its own sake but innovation that serves human development.

At the same time, the framework works through existing legal structures. It does not propose a radical overhaul of India’s regulatory apparatus but seeks to adapt and extend what already exists. This pragmatic approach recognises that governance cannot be built from scratch; it must be layered onto institutions that have developed over decades. The challenge is to make those institutions capable of addressing problems they were never designed to handle.

The framework is designed to be agile and forward-looking. It translates high-level principles into practical guidelines while allowing room for evolution as the technology matures. This is essential in a field where change is constant and prediction is impossible. A rigid framework would be obsolete before it was implemented. An agile framework can adapt as circumstances demand.

The February 10 Amendments: A First Test

On February 10, 2026, just days before the AI Impact Summit, the government announced amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules. These amendments make it mandatory for intermediaries to label AI-generated information and impose a three-hour takedown window for harmful content.

This is the first instance of any government mandating disclosure of AI-generated content. It represents a significant intervention in the digital ecosystem, requiring platforms to identify when content has been created by algorithms rather than humans. The logic is straightforward: if people cannot distinguish between human and machine-generated content, they cannot make informed judgments about what they see and hear. Disclosure restores some of that capacity.

The three-hour takedown window is more controversial. It requires platforms to remove content deemed harmful within a very short timeframe—a sharp reduction from previous requirements. The concern is that such rapid removal may preclude meaningful review, leading platforms to err on the side of censorship rather than risk penalties. When the cost of keeping content up is legal liability, and the cost of taking it down is merely the displeasure of users, the rational choice is to take it down.

The challenge of implementation and enforcement at scale is immense. India must apply these rules to tech behemoths with global operations, vast resources, and sophisticated legal teams. It must do so in ways that respect human rights and democratic norms, not just in letter but in spirit. And it must coordinate with other countries, because AI does not respect borders and neither will the companies that develop it. Without international coordination, even the most well-designed rules will be circumvented by those who can move their operations to more permissive jurisdictions.

The Global South Dimension

For the Global South, the stakes of AI governance could not be higher. The concentration of AI investment in a handful of countries and companies creates an uneven landscape for AI diffusion. Countries that lack the resources to develop their own AI systems become dependent on external providers. That dependence brings risks: loss of control over critical infrastructure, exposure to surveillance and manipulation, and the imposition of values and priorities that may not align with local contexts.

These risks are not merely hypothetical. They are already playing out in domains ranging from agriculture to healthcare to education. When a country relies on AI systems developed elsewhere, it inherits the biases embedded in those systems—biases that reflect the data and assumptions of their creators. It also inherits the vulnerabilities: if a system fails, if it is hacked, if it is used for purposes its users did not intend, the consequences are borne locally while the control remains elsewhere.

India’s approach—emphasising strategic autonomy, public-private partnerships, and governance tailored to local context—offers an alternative path. It recognises that countries need not choose between complete dependence and total autarky. They can build their own capabilities while engaging with the global system. They can develop standards that reflect their own values while coordinating with others. They can participate in AI development without being dominated by it.

This requires research infrastructure across middle powers. No single country, not even one as large as India, can assess every risk that AI presents. Shared safety evaluation frameworks allow countries to pool their expertise and compare their findings; collaborative research networks enable scientists and engineers to work together across borders. Together, these mechanisms create a collective capacity that exceeds what any nation could achieve on its own.

Given its size, scale, and leading role in AI infrastructure, India is uniquely positioned to convene this coordination. It has the technical talent, the institutional experience, and the diplomatic relationships to bring middle powers together around shared approaches to AI governance. The AI Impact Summit is an opportunity to begin that work.

The Critical Gap: Protecting People

Yet governance coordination means little if the framework itself has gaps. A governance approach that accelerates AI adoption while providing no protection for workers being displaced is not a balanced model for others to follow. The benefits of AI will not be distributed automatically; they will be captured by those with the power and position to claim them. Workers who lose their jobs to automation, communities that are disrupted by technological change, and populations that are harmed by algorithmic decisions need protection that markets will not provide.

Without a shared understanding of the minimum measures needed to mandate transparency and accountability from AI developers, the risks will accumulate. Without protections for whistleblowers who expose wrongdoing, the public will never know what is being done in its name. Without safeguards for the vulnerable populations most at risk of harm, the costs of AI will be borne by those least able to bear them. Without public awareness and agency, citizens will be subjects of technological change rather than participants in shaping it.

Even well-meaning coordination is likely to fall flat if these dimensions are neglected. The people on whom innovation depends—workers, consumers, citizens—must be central to any governance framework that claims to serve the public interest. A Third Way that forgets this is not a third way at all; it is just another variant of top-down control.

What Inclusive AI Governance Could Look Like

The AI Impact Summit represents a genuine opportunity to shape what inclusive AI governance coordination could look like. The elements are beginning to come into focus.

First, robust public-private partnerships across the technology stack. AI development is not a single activity but a chain of activities, from foundational research to application development to deployment and maintenance. Partnerships that engage public institutions and private firms at each stage can ensure that gains are distributed more equitably than they would be if left to market forces alone.

Second, positioning India as a hub for agile collective governance among middle powers. The countries that are neither dominant players nor passive recipients of technology have common interests in shaping AI’s development. They need forums where they can coordinate their approaches, share their experiences, and amplify their influence. India, with its democratic institutions, technical capacity, and diplomatic reach, is well-placed to host such forums.

Third, building the research infrastructure that enables informed governance. Safety evaluation, risk assessment, and impact analysis require scientific capacity that most countries lack individually. By pooling resources and sharing findings, middle powers can develop the evidence base they need to make good policy.

Fourth, embedding protection for workers and vulnerable populations in the governance framework itself. This means not just safety nets after displacement occurs, but proactive measures to ensure that workers have the skills and opportunities to participate in the AI economy. It means designing systems that are accountable to those they affect, not just to those who deploy them.

The Next Twelve Months

The coming year will determine whether India’s model can successfully integrate innovation, security, and human welfare—or whether the gaps create the very instability that governance is meant to prevent.

The challenges are formidable. The technology is evolving faster than any governance process can adapt. The interests at stake are enormous, and the actors involved are powerful. The international landscape is fragmented, with competing approaches pulling in different directions. And the domestic political environment is complex, with multiple constituencies demanding attention.

Yet the opportunity is also real. India has positioned itself at the centre of global conversations about AI governance. It has developed a framework that reflects its own circumstances while offering lessons for others. It has convening power that can bring diverse actors together. And it has a stake in the outcome that is large enough to motivate sustained effort.

The choices India makes now will determine whether the Third Way becomes a model worth following or a missed opportunity. They will shape not only India’s own AI future but the options available to other countries in the Global South. They will influence whether AI development proceeds along paths that concentrate power and wealth or along paths that distribute them more broadly.

The AI Impact Summit is a moment of possibility. What comes after will be a test of whether that possibility can be realised.

Q&A: Unpacking India’s Third Way for AI Governance

Q1: What is the “Third Way” for AI governance that India is proposing?

A: The Third Way refers to India’s distinctive approach to AI governance, which differs from the three dominant models: the EU’s compliance-heavy regime, the U.S.’s hands-off market approach, and China’s centralised state model. India’s framework, articulated in the November 2025 AI Mohatty, encompasses not just regulation but broader governance including adoption, diffusion, diplomacy, and capacity-building. It prioritises scaling AI for inclusive development in sectors like healthcare, agriculture, education, and public administration, while working through existing legal structures. The approach is designed to be agile and forward-looking, allowing room for evolution as technology matures.

Q2: What are the key features of the February 10, 2026 amendments to IT Rules?

A: The amendments make two significant changes. First, they mandate that intermediaries label AI-generated information, requiring disclosure when content is created by algorithms rather than humans. This is the first instance of any government mandating such disclosure. Second, they impose a three-hour takedown window for harmful content, sharply reducing the previous timeframe. While intended to address the spread of harmful AI-generated content, this rapid takedown requirement raises concerns about platforms erring on the side of censorship rather than conducting meaningful review. Implementation and enforcement at scale, against global tech companies and in ways that respect human rights, will require international coordination.

Q3: Why is India’s approach particularly significant for the Global South?

A: The concentration of AI investment in a few countries and companies creates an uneven landscape for AI diffusion globally. Countries of the Global South face dependence on external AI systems, which brings risks of lost control over critical infrastructure, exposure to surveillance, and imposition of foreign values and priorities. India’s emphasis on strategic autonomy, public-private partnerships, and governance tailored to local context offers an alternative path. By building research infrastructure, shared safety evaluation frameworks, and collaborative networks among middle powers, India can help create collective capacity that allows Global South countries to participate in AI development rather than being dominated by it.

Q4: What is the “critical gap” in current AI governance approaches that India must address?

A: The critical gap is the protection of people—workers being displaced, vulnerable populations at risk of harm, and citizens who need awareness and agency. A governance framework that accelerates AI adoption while providing no protection for those affected is not balanced or sustainable. Without shared understanding of minimum measures to mandate transparency and accountability from AI developers, protect whistleblowers, safeguard vulnerable populations, and encourage public engagement, even well-meaning coordination will fail. The Third Way must embed these protections proactively, not as afterthoughts.

Q5: What will determine whether India’s Third Way succeeds in the next 12 months?

A: Several factors will be decisive: whether India can successfully implement and enforce the new IT Rules against global tech companies while respecting rights and norms; whether it can build effective international coordination among middle powers through shared research infrastructure and safety evaluation frameworks; whether public-private partnerships can distribute gains equitably rather than concentrating them; and whether the governance framework can evolve as technology changes. The choices India makes now will determine whether its model becomes a path worth following for other nations or whether gaps create the instability that governance is meant to prevent.
