The Future Of Wars: Armies And Algorithms
The recent military flare-up between India and Pakistan has evoked familiar images: jets scrambling, radar systems on alert, and missile arsenals primed. Yet, a deeper transformation is quietly reshaping the nature of conflict. Today, AI-powered drones are already deployed on battlefields worldwide. When I was invited to judge the India qualifying rounds of the International Criminal Court moot court competition earlier this year, the proposition included a unique aspect: deploying automated border security systems to halt unlawful migration. Such exposures, whether real or hypothetical, pique my curiosity. Perhaps, in the near future, war may not commence with the roar of engines or missile launches, but with a few lines of code.
AI is fundamentally changing warfare, subtly yet profoundly. Unlike conventional weapons, AI is accessible, decentralised, and increasingly deployable. A machine-learning model developed for one purpose can be readily repurposed for surveillance, misinformation, or even targeted attacks. The military monopoly once exclusive to nation-states is eroding. Power, once concentrated in institutions, is diffusing into data centres and cloud servers.
From Border Tensions to Information Warfare
AI tools are manipulating digital spaces with real-world consequences. Deepfake videos can incite unrest. Social media bots can inflame communal tensions. Facial recognition models can become targeting systems. Cyberattacks powered by machine intelligence can cripple vital infrastructure – electric grids, banking networks, even hospital systems – without physical border crossings.
Consequently, conflict transcends conventional geography. The battlefield now encompasses our information systems, public discourse, and civilian infrastructure. The greatest threat may no longer be a missile launched across a border, but an algorithm subtly embedded in everyday digital platforms. Even more concerning is the blurring of the line between peace and conflict. Traditional warfare followed a predictable pattern: soldiers, borders, and defined battlefields. Today, conflicts can emerge in less visible, more insidious ways.
A Legal and Security Vacuum
This rapid transformation raises challenging questions about accountability and regulation. If an autonomous drone mistakenly targets civilians due to faulty AI input, who bears responsibility? The deploying commander? The code’s developer? Or the machine itself? During speaking engagements and lectures, I often emphasise that the core issues concerning any AI system remain consistent when assessed from a regulatory or accountability standpoint.
International humanitarian law is ill-equipped to address these complexities. The Geneva Conventions were drafted for a world of soldiers and battlefields, not algorithms and cloud servers. Even domestic laws in India and elsewhere have yet to adapt to AI’s dual-use nature – where the same technology can power medical breakthroughs or military strikes.
Furthermore, internal threats now rival external ones. A politically motivated group operating domestically can weaponise AI to disrupt elections, sow discord, or cripple digital infrastructure without possessing any physical weaponry. This accessibility makes AI not just a strategic tool, but a potential force multiplier for both democratic destabilisation and state repression.
AI and the ‘Uneasy’ Question of Power
It is crucial to consider how AI impacts power itself.
While currently dominated by large corporations and governments, AI possesses qualities that challenge traditional capitalist logic. It can generate outputs – text, images, decisions – at near-zero marginal cost, without human labour. Theoretically, this could undermine capitalism’s reliance on labour exploitation and scarcity. Moreover, open-source AI movements, where powerful tools are publicly available, undermine the monopolistic control typical of capitalist systems.
However, this potential does not guarantee positive outcomes. The same tools challenging capitalism’s foundations can also reinforce them. Surveillance capitalism thrives on AI. So does gig economy exploitation. In warfare, AI could further concentrate power in the hands of tech-military elites unless democratically regulated.
Therefore, AI is a double-edged sword. It can decentralise power yet also concentrate it more effectively. It can democratise innovation yet also militarise public life. Whether it becomes anti-capitalist, hyper-capitalist, or something else entirely depends on the legal and political frameworks we establish now.
India Must Lead in Responsible AI
India, with its robust IT sector, strategic geopolitical position, and strong democratic tradition, must lead in shaping global AI norms. This includes banning fully autonomous lethal weapons, establishing clear lines of accountability, and strengthening cyber-resilience in critical sectors.
Just as India championed nuclear disarmament and non-alignment, it must now advocate for ethical AI use in international forums. This requires urgently updating domestic laws on AI, data governance, and algorithmic transparency to meet the scale of the challenge. Readers may refer to my earlier opinion piece on why India must strive for AI sovereignty.
It is equally crucial for our armed forces to cultivate a deep, practical understanding of complex AI systems. As military operations increasingly rely on algorithmic decision-making, whether for autonomous drones, predictive surveillance, or battlefield logistics, simply procuring advanced technologies is insufficient. Strategic advantage will hinge on how well military personnel understand, interpret, and critically evaluate these systems, including their limitations, biases, and vulnerabilities. Without this competence, there is a genuine risk of over-reliance on opaque ‘black box’ technologies, susceptible to malfunction or adversarial manipulation.
A technologically literate military leadership, supported by interdisciplinary expertise in law, ethics, and computer science, is essential to ensure AI tools serve national security without compromising democratic accountability or operational integrity.
The next war may not commence with a siren but with a software update. Its battlefield may not be in the mountains or skies, but in servers, platforms, and minds. To preserve peace, national sovereignty, and democratic integrity, we must prepare not only our armies but also our laws, policies, and public understanding.
We are not merely building machines; we are building futures. The question is: who controls them, and to what end?