The use of drones to kill suspected terrorists is controversial, but as long as a human being decides whether to fire the missile, it is not a radical shift in how humanity wages war. Since the first archer fired the first arrow, warriors have been inventing ways to strike their enemies while removing themselves from harm’s way.
Soon, however, military robots will be able to pick out human targets on the battlefield and decide on their own whether to go for the kill. An Air Force report predicted two years ago that “by 2030 machine capabilities will have increased to the point that humans will have become the weakest component in a wide array of systems.” A 2011 Defense Department road map for ground-based weapons states: “There is an ongoing push to increase autonomy, with a current goal of ‘supervised autonomy,’ but with an ultimate goal of full autonomy.”
The Pentagon still requires autonomous weapons to have a “man in the loop.” The robot or drone can train its sights on a target, but a human operator must decide whether to fire. But full autonomy with no human controller would have clear advantages. As other nations develop this capacity, the United States will feel compelled to stay ahead. A robotic arms race seems inevitable unless nations collectively decide to avoid one.
I have heard few discussions of robotic warfare without someone joking about The Matrix or The Terminator; the danger of delegating warfare to machines has been a central theme of modern science fiction. Now science is catching up to fiction. And one doesn’t have to believe the movie version of autonomous robots becoming sentient to be troubled by the prospect of their deployment on the battlefield.
After all, the decisions ethical soldiers must make are extraordinarily complex and human. Could a machine soldier distinguish as well as a human can between combatants and civilians, especially in societies where combatants don’t wear uniforms and civilians are often armed? Would we trust machines to determine the value of a human life, as soldiers must do when deciding whether firing on a lawful target is worth the loss of civilians nearby? Could a machine recognize surrender? And if a machine breaks the law, who will be held accountable?
Some argue that these concerns can be addressed if we program war-fighting robots to apply the Geneva Conventions. Machines would prove more ethical than humans on the battlefield, this thinking goes, never acting out of panic or anger or a desire for self-preservation. But most experts believe it is unlikely that advances in artificial intelligence could ever give robots an artificial conscience, and even if that were possible, machines that can kill autonomously would almost certainly be ready before the breakthroughs needed to “humanize” them.
Of course, human soldiers can also be “programmed” to commit unspeakable crimes. But because most human beings also have inherent limits – rooted in morality, empathy, capacity for revulsion, loyalty to community or fear of punishment – tyrants cannot always count on human armies to do their bidding. Think of the leaders who did not seize power, or stay in it, because their troops would not fire on their people.
Nations have succeeded before in banning classes of weapons – chemical and biological weapons, cluster munitions, landmines, blinding lasers. It should be possible to forge a treaty banning offensive weapons capable of killing without human intervention. A choice must be made before the technology proliferates.
• Tom Malinowski is Washington, D.C., director at Human Rights Watch.