If a robot soldier commits a war crime, who is held accountable?
You can’t punish a collection of parts and coding algorithms. But can you blame a human commander, who gave a legal order only to see the robot carry it out incorrectly? And what about defense manufacturers, which are often immune from the kinds of lawsuits that would plague civilian outfits if their products cost lives?
The culpability question is one of a host of thorny moral dilemmas presented by lethal robots. On the one hand, if effective, robot soldiers could replace ground troops and prevent thousands of American casualties. And robots aren’t susceptible to many of the weaknesses that plague humans: exhaustion, sickness, infection, emotion, indecision.
But even if robot warriors can keep American lives out of danger, can they be trusted with the complicated combat decisions now left to human judgment?
Rep. Jim McGovern thinks not.
The Massachusetts Democrat is part of a crusade for an international ban on killer robots — machines that can decide without human input whom to target and when to use force.
The only way to stop killer robots, said McGovern and a series of panelists he assembled for a Capitol Hill briefing this week, is to ban them before they even exist. Much like drones, once someone gets a killer robot, it’s only a matter of time before everyone else is racing to catch up. And despite some countries’ commitment to evaluating the technology responsibly, good intentions never won an arms race.
“The only thing harder than getting a ban in place is getting a ban in place after something is developed,” McGovern said.
McGovern is racing technology, but he believes he has time: He thinks it will take another two to three decades before the technology would be available.
McGovern’s Tuesday panel is part of an ongoing effort by anti-robot activists to raise awareness about the issue. They hope lawmakers will share their concerns and join their push for a worldwide ban. “The U.S. should show leadership on this,” said Human Rights Watch’s Steve Goose. “If the U.S. were able to get out in front … it would lead the way for many other nations.”
So why is it so important that robots never see the battlefield? For some of the panelists, the issue is a moral one. “Do we really want to establish a precedent where it’s OK for machines to take the lives of human beings?” said Dr. Peter Asaro, a founder of the International Committee for Robot Arms Control.
For most, though, the chief worry is judgment, and humans’ innate ability to read context. “Soldiers have to rely on intention or subtle clues,” said Bonnie Docherty, an arms expert at Human Rights Watch and a lecturer at Harvard Law. “We have serious concerns that a fully autonomous weapon could ever reach that level.”
Especially in battlefields where soldiers aren’t always wearing distinguishing uniforms, the ability to recognize actions from other humans becomes important. Even in cases where a robot can tell friend from foe, it might have trouble recognizing if the enemy is surrendering or is wounded.
Media depictions like Terminator have anthropomorphized warrior robots, which “implies a level of cognitive ability that these machines do not have,” said Paul Scharre, who has worked on the Defense Department’s autonomous-weapon policies. “Images from science fiction are not very accurate or very helpful.”
Killer robots won’t look like humans, and they probably won’t act like them either. “What [robots] really lack is a meaningful understanding of context and situation,” Asaro said. “It’s hard to believe that a machine could be making those kinds of meaningful choices about life and death.”
Other concerns include the possibility of malicious hackers taking over a robot army. And then there’s the possibility of a “flash war” starting over a mistake. If one robot malfunctions and fires, robots on the other side could return fire automatically, starting a conflict at the speed of circuitry before a human could intervene.
The arms-race worry is very real, Asaro said. Unlike nuclear weapons, which require extreme technical sophistication, killer robots won’t be hard to replicate. “Once these technologies are in existence, they’ll proliferate widely,” he said. “There are even software systems that could be implemented through the Internet.”
Despite all these concerns, robot advocates say the rush to ban the technology outright is ill-conceived. While preaching caution on development, they also say it’s important to test the systems’ limits before crafting policy.
They fear a ban based on imaginations of an android toting a machine gun could interfere with lifesaving technologies like rapid-response air-defense missiles. And while context recognition remains a huge challenge, advocates say it’s at least worth exploring whether robot warriors can actually reduce civilian casualties in some circumstances.
There’s also the challenge of enforcement. Even if a ban were enacted, it would be hard to tell whether a drone fired a missile on its own, or whether some other weapons system was acting on the commands of a human rather than an algorithm.
It’s not that anyone has killer-robot plans just yet. In fact, the panelists agreed the U.S. has been thoughtful and responsible in approaching the issue. The Defense Department even issued a policy statement on the technology in late 2012 that established a five- to 10-year moratorium on developing killer robots.
But an American stand-alone policy might not be enough. According to Scharre, at least 23 countries have joined the race to build armed drones. It’s not hard to imagine a similar push to build machines that could replace combat soldiers — with or without U.S. involvement.
Meanwhile, the issue will only get trickier. We won’t make the jump from a flesh-and-blood soldier to a T-1000 overnight, but some combat systems could gradually phase in more and more autonomy.
Some robots will have in-the-loop systems, where human operators monitor actions and can override at any point. The longer-term prospect is an out-of-the-loop robot, one that carries out missions with minimal supervision and no possibility for human control.
Panelists agreed that the best chance for a ban will probably come wrapped in language other than “robot ban.” They hope to persuade countries to agree to something in more positive language — that their autonomous weapons will have a human operator monitoring them and ready to take over at any time.
Regardless of just what is allowed, it’s important that militaries know where to draw the line before they have the technology to build killer robots. A treaty “frees up weapons developers to know what they’re allowed to do,” Scharre said.
As robots get more complex — and better able to read and respond to human cues — it’s likely some advocates will argue they deserve a more prominent place in combat. But for McGovern and his allies, such weapons would have to meet a challenge they now deem impossible: Can you build a robot not only with a brain but with a soul?