
No Algorithm Can Decide Who Lives and Who Dies

3 min read

No algorithm can decide who lives and who dies. No machine is truly autonomous. Every autonomous machine hides something in the fetishism of its autonomy.

In his masterpiece Liquid Modernity, Zygmunt Bauman describes how the exterritorial capital of modernity has subdued governments to deregulation, and populations to extreme individualization and pervasive insecurity. He later summarized his work in a straightforward message: “No one is in control. That is the major source of contemporary fear”. In May 2025, the NGO Human Rights Watch backed the urgent call to ban unmanned weapons of war for the very reason that “there is no human control” and that “it is very difficult to hold anyone accountable”. Yet, as Bauman remarks throughout his work, there does exist someone in control, just as there is someone accountable for the damage done by LAWS (lethal autonomous weapon systems).

It is crucial, first of all, to clarify that both remote or automated weapons and autonomous weapons require sensor inputs and must be activated. Autonomous weapons, once activated, interpret goals and their environment in order to act independently, with predictable overall behaviour but variable specific actions. Our first criticism should therefore be addressed to questions that hypothesize the fully autonomous functioning of a vehicle. It is wrong to anthropomorphize the machine and detach it from the sociality behind its activation, whether that sociality is seen as a lobby of imperialist warmongers or as the software in the absence of which the hardware cannot exist. As Carl Schmitt argues, “war has its own grammar (military-technical laws), but politics remains its brain. It does not have its own logic”. Unsurprisingly, in 2017 the British Ministry of Defence asserted that “The UK … has no intention of developing autonomous weapon systems as it wants commanders and politicians to act as decision makers and to retain responsibility”.

Secondly, we should look at the states that support LAWS: a 2023 CCW compilation shows a sudden shift in US and UK definitions from 2022 to 2023, from ensuring that “the autonomous functions in weapons systems must not be designed to be used to conduct attacks that would not be the responsibility of the human command” to a generic “recognizing” that AI research may produce “novel and more sophisticated weapons”. Surely that research was doing great on Palestinian ground, as in Ukraine. Regarding the latter: in 2022, in the same CCW compilation, Russia reportedly asserted that LAWS are “fully autonomous unmanned technical means… carrying out combat… without any involvement of the operator”. What is to be grasped from these declarations? Though nothing is explicit, one can recognize, in the willingness of countries that make massive use of LAWS to use them as a shield for their crimes, a tendency and an interest in upholding the full autonomy of the machine in order to foster ambiguity: to create a world in which no one is accountable, that is, a level of ambiguity such that people stop asking questions. An example? Bombing the Shajareh Tayyebeh girls’ elementary school in Iran, then employing what we could call a “lucid ambiguity” to side with those carrying out investigations to ascertain the matter, and, eventually, claiming that “it was done by Iran” itself. Of course, the perpetrators are known, yet our focus must be directed toward this desire for ambiguity.

The fetishism of machine autonomy, in the Marxian sense, is an additional step toward a structured sensibility about war. Memes about US interventions in oil-rich areas or regime change in Venezuela, set to comical backdrops like the Macarena, are an all-too-predictable reaction from those who witness this theater of madness and seek refuge by making it an object of laughter. On the other hand, it is no coincidence that the White House chose the Macarena as the soundtrack for its media campaign, which also serves as a backdrop for the operation “Epic Fury” in Iran. The symbolic violence of laughter, in Pierre Bourdieu’s terms, may in the future divide the “normal” citizen from the “frenetic” conspiracy theorist. The fetishism of autonomy fuels this. Will we imagine, in the future, an AI tribunal, or will we be able to direct our doubts beyond the protective shield of those who control it?

Filippo Giorgio Canestri


Hi! I'm Filippo, and I've been in Venice since this year as a PISE student. From what I've studied so far, I'm interested in moral philosophy, the history of sociology, and the history of the anti-colonial struggle in the second half of the twentieth century.