7 March 2026

When warfare runs on artificial intelligence

From target analysis to autonomous drones, the conflict with Iran shows how AI is changing the battlefield. But international law lags behind

The reignited conflict between the United States, Israel and Iran is a geopolitical clash and, at the same time, a window into the future of warfare. Missiles guided by algorithms, analytics systems that select targets, drones that make increasingly autonomous decisions: artificial intelligence is already on the battlefield.

While missiles fall on Tehran, diplomats, jurists and researchers are meeting thousands of miles away in Geneva to discuss an increasingly pressing question: how to regulate the use of AI in armed conflict. Technology, however, moves much faster than politics.

According to political scientist Michael Horowitz of the University of Pennsylvania, artificial intelligence systems are already used by the U.S. military to analyze intelligence, coordinate logistics and support military decision-making. Among the most frequently cited tools is Project Maven, a platform that uses automated image and data analysis to suggest military targets.

The illusion of “more accurate” warfare

One of the most recurring promises of military artificial intelligence is accuracy. Faster algorithms capable of analyzing huge amounts of data should reduce errors and, in theory, civilian casualties as well.

Recent conflicts, from Ukraine to Gaza, have already seen AI used to identify targets or guide drones. Yet civilian casualties remain high. As noted in Nature, Craig Jones, a political geographer at Newcastle University, sees no evidence that these systems really reduce this kind of “error.” In some cases, he argues, they may even amplify it.

The problem is structural: algorithms, just like humans, reason over incomplete data and complex contexts, but without moral experience or direct responsibility.

The red line of autonomous weapons

The most controversial point concerns so-called fully autonomous weapons: systems that can detect, select and engage targets without human intervention.

For many researchers, this prospect is not only technologically risky but also ethically unacceptable. International humanitarian law requires distinguishing between combatants and civilians, an assessment that demands context, interpretation and judgment.

According to political scientist Toni Erskine of the Australian National University, completely replacing humans in the military decision chain could even exacerbate conflicts instead of containing them. Her research group published a set of policy recommendations in 2025 to limit the use of AI in weapon systems.

Meanwhile, employees of several tech companies are signing internal petitions to prevent their models from being used for surveillance systems or lethal autonomous weapons.

An international law that is still incomplete

Diplomatically, the issue has been on the table for years. Under the UN Convention on Certain Conventional Weapons (CCW), states have long discussed the possibility of limiting or banning autonomous weapon systems.

The problem is political before it is legal. The states most advanced in the military development of AI (the United States, Israel and China) are also the least supportive of new restrictions. In other words, those leading the technological race are unlikely to agree to slow it down.

The result is a paradox: algorithmic warfare advances rapidly, while the global rules that should govern it remain at the draft stage.

Reviewed and language edited by Stefano Cisternino