
AI & Warfare: a new era of conflict


In a world where artificial intelligence increasingly shapes the habits of our daily lives, recent developments in the newly erupted conflict between Iran and the United States-Israel alliance have shown that AI has also opened a new era of warfare, sparking fresh debates on ethics in the military field.

In conducting the latest attacks against Tehran, the Pentagon relied on agreements secured with the multinationals OpenAI and Palantir and on extensive use of artificial intelligence in strategic planning. The result was more than 900 strikes on Iranian soil in under 12 hours, with decision-making times cut from several weeks to just a few hours. The most advanced systems are reportedly even capable of assessing the legal basis for an attack by analyzing international law and the laws of war.

However, the most recent and most debated events of the conflict have highlighted the major risks of this wholesale delegation: with thousands of military targets flagged by databases, military authorities risk placing excessive trust in algorithms, reducing rational human oversight to a mere formal endorsement. The tragic consequences were seen in the bombing of the Minab school, an attack in which several children perished. The strike occurred because the school had been classified as a military base, a classification later found to rest on observations dating back to the 2013-2016 period.

This considerable oversight then triggered a long series of communication errors in intelligence and diplomacy. Immediately after the attack, the CIA denied U.S. involvement, citing videos that appeared to show the missile used was not a Tomahawk, a weapon fielded only by the United States and its closest allies; the agency contradicted itself the following day when other videos confirmed that a Tomahawk had in fact been used. The diplomatic blunder deepened when President Donald Trump declared that the missile used in the bombing was Iranian, despite the earlier confirmation that a Tomahawk had been employed.

This general lack of oversight in the deployment of artificial intelligence is now observable not only in large military operations but also in simple everyday actions: 63.8% of young Europeans used generative AI in 2025, and approximately 78% of companies have delegated at least one job function to AI. On the scientific side, studies have shown that repeated and prolonged use of AI can cause various long-term harmful effects, such as a greater tendency to forget information, slower cognitive processing, growing dependence on digital tools, and shorter attention spans. Taken together, these data points signal a risk of decision-making dependence on algorithms, both in our everyday mental lives and in the most delicate and high-risk processes of modern war strategy.

Filippo Setti

My name is Filippo Setti, born in September 2006 in Reggio Emilia; I currently live in Venice for the PISE program. I have cultivated a great passion for writing for years, the main reason I joined this project, and in September 2025 I self-published a collection of poems.