Editorial call for April 2026
Can a machine control who lives and who dies?
In December 2024, the UN General Assembly adopted a resolution on lethal autonomous weapons systems by a large majority; several major military powers opposed it or abstained.
As AI enters state structures, a critical line comes into view: the automation of sovereign functions. At its extreme, this means delegating the use of lethal force to algorithmic systems.
If algorithms decide who lives and who dies, how do we preserve democratic accountability, political responsibility, and moral judgment?
Angles to explore
Philosophy
Just war theory rests on human intention and moral agency. Can an algorithm be “guilty”? Can it ever be “justified”?
Politics
If an autonomous system commits a war crime, who is accountable: the programmer, the commander, or the political authority that approved its deployment?
Economics
Integrating AI into military operations creates structural dependence on private technology providers. What are the strategic risks of outsourcing core sovereign capabilities? Can profit-driven business models align with long-term state interests?