Defense

In the 19th century, battles were fought in open fields, hidden from public scrutiny. In the 20th century, and especially from the 1930s onward, the battlefield moved into cities, where victory or defeat was shaped by urban buildings and architecture. In the 21st century, the battlefield has moved on to cyberspace, where it is no longer constrained by physical realities. Aggressor and defender no longer engage face to face, but on screens thousands of miles apart.


Augmented soldiers, whether wearing exoskeletons or AR helmets, can gather and transmit real-time information, helping them better identify their targets.

The Russian-Ukrainian conflict has shown that the art of warfare is undergoing yet another technological evolution. We have certainly not reached the massive deployment of autonomous military robots like the Terminator, but we are already in a theater of operations where semi-autonomous drones can destroy their targets with precision. Some of these drones can even launch surprise attacks on targets that never hear them coming and cannot defend themselves.

When Artificial Intelligence (AI) is used in conjunction with semi-autonomous lethal weapons or drones, it is used offensively. AI is also deployed defensively, on the premise that only an AI can stop another AI, or, in well-defined cases, to improve defensive capabilities and protect soldiers.

Not often mentioned but equally important is the principle of responsibility in the use of autonomous lethal weapons. By responsibility, we mean determining who, between the AI and the soldier (regardless of rank), is accountable for taking a human life.


Up to what point (the point of no return for humanity) are armies willing to deploy autonomous lethal weapons without those weapons turning against their creators?

As discussed in this podcast, some Western armies are bringing science fiction authors into their military programs to help anticipate this point of no return.

As I was writing this, I was reminded of Robert A. Heinlein's novel Starship Troopers, in which soldiers are equipped with exoskeletons that multiply their strength on the battlefield. The book was published in… 1959.


The intelligent autonomous weapons once seen in movies, science fiction stories, and even video games (Battlefield or Call of Duty) are no longer imaginary. Even if, for the moment, they remain modest in scale compared with their fictional counterparts, their development is well underway.

We explore the extent to which artificial intelligence presages a third revolution in modern warfare.

A global test case for the wartime use of AI has been seeded deep in the drama and pain of the current Ukrainian conflict. The direct protagonists, as well as those who have remained on the sidelines, are leveraging AI in hopes of decisively turning the tide.

As the days of conflict turn into months, we uncover ample proof of the need to take the time to assess the role, the impact, and the limits of AI in supporting human decision-making.

There are multiple takeaways in the various contributions below. Slaughterbots, Turkish drones, anomaly detection software, Russian troll farms, and deepfakes are just some of the applications being tested and refined on the battlefield.
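Of these applications, anomaly detection is the easiest to demystify with a sketch. Below is a minimal, purely illustrative example in Python; the function flag_anomalies and its parameters are hypothetical, not any fielded system, and real battlefield software is far more sophisticated. The core idea, though, really is this simple: flag readings that deviate sharply from a recent baseline.

```python
import numpy as np

def flag_anomalies(samples: np.ndarray, window: int = 100,
                   threshold: float = 3.0) -> np.ndarray:
    """Toy z-score detector (illustrative only, not a fielded system).

    A reading is flagged as anomalous when it lies more than `threshold`
    standard deviations from the mean of the preceding `window` readings.
    """
    flags = np.zeros(len(samples), dtype=bool)
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        # Guard against a flat baseline, then apply the z-score test.
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            flags[i] = True
    return flags

# Example: a sudden spike in otherwise steady sensor readings gets flagged.
readings = np.concatenate([np.random.normal(10, 1, 500), [25.0]])
print(flag_anomalies(readings)[-1])  # True: the final spike stands out
```

A fielded detector would model many correlated signals at once and adapt its baseline continuously, but the underlying statistical principle, comparing new observations against an expected baseline, remains the same.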

Yet, in spite of the diversity of these examples, we can ask why AI hasn’t been used even more extensively in the present crisis.

Under what conditions can AI work with humanity, and to what degree, if any, can AI for war be considered ethical?