Military AI and the Urgency of Guardrails: Why India’s Abstention Matters
Just days before the India AI Impact Summit, India abstained from signing a pledge to govern the deployment of artificial intelligence in warfare at the third global summit on Responsible Artificial Intelligence in the Military Domain (REAIM). The governance of military AI often falls outside mainstream conversations on AI regulation, but given its national security implications, it must become a higher priority.
Only 35 of the 85 participating countries signed the ‘Pathways to Action’ Declaration; the United States, India, and China were among those that did not. The previous summit saw some 60 countries sign the Blueprint for Action, so this year’s tally represents a considerable decline. The drop points to the challenges of governing military AI and the strain they place on states’ commitments. These challenges need to be considered as India navigates how to govern military AI without curbing its own technological development.
The Strategic Reluctance
The challenges are multifold. The first issue with governing military AI is the nature of the technology itself. AI is a dual-use technology—it has both civilian and military applications that are being developed in parallel. This makes it hard to verify compliance with any military AI-related constraints, since it can be difficult to discern the end to which research and development is directed.
Typically, in the context of arms control, technologies seen as ‘game-changing’ and offering widespread benefits have been harder to restrict. As its use cases expand, AI is increasingly gaining this reputation, with applications ranging from logistics and management to direct combat functions. The perceived military advantage discourages regulation. Furthermore, states that have already invested heavily in AI can utilise civilian-sector R&D for military purposes, making them reluctant to commit to measures that could curb their growth.
AI is already used for a range of benign military purposes across the Indian subcontinent, such as equipment maintenance, data analysis, and streamlining logistics. The elephant in the room, however, is the more complex question of what to do about lethal autonomous weapons systems (LAWS), among the most controversial applications of AI. The UN Convention on Certain Conventional Weapons’ Group of Governmental Experts convened twice last year but failed to reach any conclusions or issue concrete recommendations. This failure stems from the challenges of governing AI itself, magnified in the higher-stakes case of LAWS, as well as from conundrums unique to autonomous weapons.
The Definitional Deadlock
There is no international consensus on the definition of LAWS. Countries with limited AI investments and less pressing strategic concerns are keen to have a legally binding instrument in place. By contrast, states with significant AI investments or acute security concerns have either maintained ambiguous positions, as India has, or opposed binding frameworks outright, as Israel has.
Technologically advanced states also tend to push for definitions with a higher ‘threshold’ as to what constitutes LAWS to maximise their freedom of action, while states that lack that capacity push for more restrictive definitions. This is a classic negotiating dynamic: those who have the technology want to preserve their advantage, and those who do not have it want to constrain it.
While there may be a widespread sense that LAWS need some form of regulatory framework, the finer details have become mired in definitional conundrums, hindering any agreement. The absence of a specific definition makes it difficult to establish binding terms, as ideas about what constitutes autonomy vary. Does autonomy mean the system can select and engage targets without human intervention? Or does it require some higher level of decision-making capability? The answers to these questions fundamentally shape what would be regulated.
India’s Calculated Stance
India’s position on military AI is complex, reflecting both its economic focus on AI R&D and its security compulsions. While it continues to align with broader ideas such as the need for ‘responsible’ use, it has signed neither the 2024 Blueprint for Action in Korea nor the ‘Pathways to Action’ declaration.
India has also maintained that a legally binding instrument on LAWS would be “premature.” Given the security concerns in its immediate neighbourhood, these decisions can be seen as a means to avoid curbing its own development. India faces threats from state and non-state actors across its borders, and AI-enabled defence systems could provide crucial advantages in surveillance, targeting, and response.
The assertion that a binding instrument is premature makes sense, given the limited publicly known use of military AI. Most military AI applications remain in development or are used in supporting roles rather than direct combat. Rushing into binding agreements before the technology has matured could lock in disadvantages or prove unenforceable.
Moral arguments that call for a ban are unlikely to succeed, considering the lack of strong norms against military AI. Unlike chemical or biological weapons, which have been widely stigmatised, AI does not yet carry that baggage. The international community has not developed the same visceral aversion to autonomous systems that it has to poison gas or germ warfare.
Toward a Non-Binding Mechanism
However, concerns about accountability and widespread discomfort with the idea of machines being responsible for the loss of human life make the moment ripe for a non-binding mechanism. While even such an agreement will not be easy to reach, the following provisions could help ensure transparency and the safe deployment of military AI.
First, AI-augmented autonomous decision-making should not be integrated into any country’s nuclear forces. The stakes are simply too high. Nuclear command and control requires human judgment, human accountability, and even human fallibility in ways that machines cannot replicate, and an accident or miscalculation could have catastrophic consequences.
Second, given the complexities of verifying compliance, there should be voluntary confidence-building mechanisms in place that allow states to share data on their development of military AI. Transparency reduces suspicion. If states know what their potential adversaries are doing, they are less likely to assume the worst and more likely to engage in dialogue.
Third, given the lack of a clear definition, an accepted risk hierarchy of military AI use cases should be created. This could serve as a starting point for states to develop their own military AI frameworks. Some uses are less controversial than others. Logistics and maintenance AI raise fewer ethical concerns than autonomous targeting. By creating a hierarchy, states could agree to regulate the highest-risk applications while allowing lower-risk ones to proceed.
The Way Forward
Arguably, India should utilise the opportunity to push for a non-binding framework rooted in its principles of accountability and aligned with its interests. Such a framework could build norms gradually, without the rigidity of a binding treaty that might be ignored or violated.
India has often positioned itself as a bridge between the developed and developing worlds, between different interests and perspectives. On military AI, it could play a similar role—advocating for responsible use while preserving the space for technological development; engaging with both those who want binding restrictions and those who resist them; and helping to build the consensus that will eventually be necessary.
The governance of military AI is too important to be left to the technology’s pace alone. It requires deliberate, sustained, and inclusive international dialogue. India’s abstention from the REAIM declaration is not a rejection of that dialogue but a statement that the terms must be right. The challenge now is to work toward terms that can command broad support.
Q&A: Unpacking Military AI Governance
Q1: Why did India abstain from signing the REAIM ‘Pathways to Action’ declaration?
India, along with the US and China, did not sign the declaration. This reflects a strategic reluctance to commit to binding frameworks that could curb technological development in a dual-use field where military and civilian applications are deeply intertwined. Given India’s security concerns in its neighbourhood and its investments in AI R&D, it seeks to preserve flexibility while aligning with broader principles of responsible use.
Q2: What makes governing military AI particularly challenging?
Several factors contribute: AI is dual-use, making verification of compliance difficult; it is perceived as ‘game-changing’, creating military advantages that states are reluctant to constrain; and there is no international consensus on defining lethal autonomous weapons systems (LAWS). Technologically advanced states push for higher-threshold definitions to preserve freedom of action, while others seek more restrictive definitions, creating a definitional deadlock.
Q3: What is the definitional deadlock around LAWS?
There is no agreed definition of what constitutes a lethal autonomous weapons system. Does autonomy mean selecting and engaging targets without human intervention? Does it require higher-level decision-making capability? The answers fundamentally shape what would be regulated. Technologically advanced states favour narrow, high-threshold definitions to maximise their freedom; others push for broader, more restrictive definitions. This deadlock has prevented any binding agreement at UN forums.
Q4: Why has India called a legally binding instrument on LAWS “premature”?
Given the limited publicly known use of military AI, binding agreements risk locking in disadvantages before the technology has matured. India also has pressing security concerns that AI-enabled defence systems could address. Moral arguments for a ban are unlikely to succeed given the absence of strong norms against military AI. India prefers to preserve development space while engaging in dialogue on responsible use.
Q5: What non-binding mechanisms could help govern military AI?
Three provisions could help: (1) a commitment not to use AI-augmented autonomous decision-making alongside nuclear forces, given catastrophic risks; (2) voluntary confidence-building mechanisms allowing states to share data on military AI development, reducing suspicion through transparency; (3) an accepted risk hierarchy of military AI use cases, distinguishing less controversial applications (logistics) from higher-risk ones (autonomous targeting) to focus regulation where it matters most.
