
Last week I attended a seminar entitled “How to NOT build a Terminator” by Ronald Arkin, director of the Mobile Robot Laboratory at the Georgia Institute of Technology. The talk explored how roboticists should approach the ethics of robots with lethal autonomy, especially in light of increased military interest in robotics. Advocacy groups around the world are calling for preemptive actions ranging from a moratorium on robots capable of deadly force to a total ban on robotics research.

Especially over the past year, drones have been a constant source of both excitement and fear. From Amazon Prime Air to “signature strikes” in Pakistan, drones have captured the public’s attention.

But as a roboticist, it frustrates me that public conversations surrounding “killer robots” have little to do with actual robotics. So, I’d like to address some common concerns and misconceptions about robots to make the discussion more productive.

First of all, I’m tired of reading headlines like “When will drones stop killing innocent people in Yemen?” or “U.S. drone struck a wedding convoy, killing 14” or “Drones Kill Civilians using NSA data.” While true in the most technical sense, headlines like these confuse the weapon with the soldier. Consider how strange it would be to see a headline like “When will guns stop killing civilians in Pakistan?”

We must stop assigning moral agency to UAVs or any similarly non-sentient tools. Drones cannot be morally culpable for their actions. Language that attributes an operator’s actions to the machine needlessly distracts from the legitimate moral and legal concerns surrounding drone strikes.

So what about machines that can actually “decide” to kill a human?

Some groups, like the Campaign to Stop Killer Robots, have been pushing for international bans on “systems that, once activated, can select and engage targets without further intervention by a human.”

I think their hearts are in the right place, but their efforts are being misdirected. Let’s examine their largest “problems with killer robots”:

1. How do we maintain control over fully autonomous weapons?

Arguably, robots are much easier to control than human soldiers. Autonomous robots act according to very rigid standards, unlike humans, who can disobey orders.

2. Robots lack human judgment required to distinguish between soldiers and civilians.

Two words: land mines. Mines are very simple killer robots; they detect their environments, make a decision and actuate with lethal force. My point is not that either weapon is ethical, but that deploying weapons which cannot discriminate between friend and foe is nothing new to military ethics. To direct this critique solely at robots ironically misses the real target.

If land mines are too simple an example, consider cruise missiles. Cruise missiles are the quintessential killer robot of the modern arsenal. The commander in the field gives them a kill mission, and they autonomously navigate to, track and destroy their target.

Clearly, we already deploy killer robots which don’t discriminate at all. It seems to me a step in the right direction to introduce robots which can decide not to kill.

3. Replacing human troops with machines makes going to war easier, and hence more likely.

I sympathize with this very real concern, but again it’s nothing new. The invention of the musket was greeted in much the same way, as were most new warfare technologies. Focusing all our efforts on banning the tool won’t remove the underlying moral considerations or make warfare more just.

Besides, it’s impossible to separate core technologies from their potential for military use. We can’t have GPS without the ability to do targeted remote strikes. We can’t have the internet (a military technology) without the capability for domestic surveillance. The capabilities Dr. Kumar is developing for search-and-rescue quadrotors at Penn are the same ones required by seek-and-destroy robots. New technologies can always be misused.

Collin Boots is a master’s student from Redwood Falls, Minn., studying robotics. Email him at cboots@seas.upenn.edu or follow him @LotofTinyRobots.
