Inside the messy ethics of making war with machines

Source: MIT Technology Review


In this article, written by Arthur Holland Michel for the MIT Technology Review, the use of intelligent autonomous weapons (weapons capable of identifying and destroying targets without human intervention) in conflict is explored through anticipatory stories set in a city turned battlefield and in a command center.

 

In the first story, soldiers moving from building to building are assisted by a technological device: a targeting system integrated into their helmets.

In the second story, a commander receives an alert from a conversational agent (like Cortana, the AI in Halo). The agent relays information gathered from satellites, such as abnormal movements of missile launchers in the field. Upstream, the agent has already ordered the artillery to target the vehicles in question, because the bot has statistically concluded that their movement constitutes a threat. And so on.

 

But the temporality of these anticipatory narratives is not so distant; it is closer than we think. In May 2023, a meeting of states parties to the UN Convention on Certain Conventional Weapons (CCW) took place. These discussions demonstrate that the way war is waged has changed, far beyond the use of remotely piloted weapons. The profound change is that humans will no longer be the only ones making decisions on the battlefield: they are beginning to be replaced by AI. This technological evolution in the art of warfare forces us to rethink the concept of responsibility.


