The dangerous future of autonomous killer robots
The use of drones to kill suspected terrorists is controversial, but as long as a human being decides whether to fire the missile, it is not a radical shift in how humanity wages war. Since the first archer fired the first arrow, warriors have been inventing ways to strike their enemies while removing themselves from harm’s way.
Soon, however, military robots will be able to pick out human targets on the battlefield and decide on their own whether to go for the kill. An Air Force report predicted two years ago that “by 2030 machine capabilities will have increased to the point that humans will have become the weakest component in a wide array of systems.” A 2011 Defense Department road map for ground-based weapons states: “There is an ongoing push to increase autonomy, with a current goal of ‘supervised autonomy,’ but with an ultimate goal of full autonomy.”
The Pentagon still requires autonomous weapons to have a “man in the loop.” The robot or drone can train its sights on a target, but a human operator must decide whether to fire. But full autonomy with no human controller would have clear advantages. As other nations develop this capacity, the United States will feel compelled to stay ahead. A robotic arms race seems inevitable unless nations collectively decide to avoid one.