Hello, fellow humans,
It's Tom here, deputizing for Captain Tristan on today's voyage through cyberspace.
I don't want to sound alarmist, but it seems that we've now entered the era of AI warfare.
Militaries have, of course, been testing lethal autonomous weapons (LAWs) for years. The landmine is commonly considered to be the first LAW, but some trace their origins all the way back to 162 BC, when the Syrian-Greek army sent 30 drunken elephants rampaging through a battlefield.
Last month, however, these systems reached a terrifying new milestone: a conflict that's been described as the "first AI war."
The title was granted to Israel's devastating operation in Gaza, during which AI was used to intercept rockets fired by Hamas, select strike targets, and locate rocket launchers.
"This is the first time [AI] was used broadly across an operation," a senior Israeli officer told Nikkei.
The officer acknowledged that the systems didn't work perfectly. But he added that the conflict had been a useful testing ground.
"This is an opportunity for us to train our algorithms using real-time data."
Israel is far from alone in developing AI for the military. In January, Robert Work, a former deputy secretary of defense who now co-leads the US National Security Commission on Artificial Intelligence, said his country has a "moral imperative" to explore LAWs.
The US has been testing military AI systems on land, at sea, and in the skies. A notable recent example involved pitting fighter jets controlled by algorithms against Air Force pilots in simulated dogfights.
China has conducted similar trials. In both countries, the AI systems defeated the human pilots.
Smaller nations are also rushing to bring AI to the battlefield. Britain just used AI in an army operation for the first time, while France recently tested Spot the robot dog in a series of military exercises.
Military bigwigs say the systems can reduce human casualties — as long as they don't malfunction, of course.
I'd like to think that we'll eventually leave the robots to duke it out between themselves, while we hop on a spaceship to Mars. But for now, it seems more likely that superpowers will deploy LAWs to subjugate smaller adversaries, as well as their own citizens.