One Congressman’s Crusade to Save the World From Killer Robots

Robots could one day keep troops out of combat, but opponents say deploying them will have dangerous consequences.

National Journal
Alex Brown
July 17, 2014, 4:15 p.m.

If a robot soldier commits a war crime, who is held accountable?

You can’t punish a collection of parts and coding algorithms. But can you blame a human commander, who gave a legal order only to see the robot carry it out incorrectly? And what about the defense manufacturers, which are often immune from the kind of lawsuits that would plague civilian outfits if their products cost lives?

The culpability question is one of a host of thorny moral dilemmas presented by lethal robots. On the one hand, if effective, robot soldiers could replace ground troops and prevent thousands of American casualties. And robots aren’t susceptible to many of the weaknesses that plague humans: exhaustion, sickness, infection, emotion, indecision.

But even if robot warriors can keep American lives out of danger, can they be trusted with the complicated combat decisions now left to human judgment?

Rep. Jim McGovern thinks not.

The Massachusetts Democrat is part of a crusade for an international ban on killer robots — machines that can decide without human input whom to target and when to use force.

The only way to stop killer robots, said McGovern and a series of panelists he assembled for a Capitol Hill briefing this week, is to ban them before they even exist. Much like drones, once someone gets a killer robot, it’s only a matter of time before everyone else is racing to catch up. And despite some countries’ commitment to evaluating the technology responsibly, good intentions never won an arms race.

“The only thing harder than getting a ban in place is getting a ban in place after something is developed,” McGovern said.

McGovern is racing technology, but he believes he has time: He estimates the technology is still two to three decades away.

McGovern’s Tuesday panel is part of an ongoing effort by anti-robot activists to raise awareness about the issue. They hope lawmakers will share their concerns and join their push for a worldwide ban. “The U.S. should show leadership on this,” said Human Rights Watch’s Steve Goose. “If the U.S. were able to get out in front … it would lead the way for many other nations.”

So why is it so important that robots never see the battlefield? For some of the panelists, the issue is a moral one. “Do we really want to establish a precedent where it’s OK for machines to take the lives of human beings?” said Dr. Peter Asaro, a founder of the International Committee for Robot Arms Control.

For most, though, the chief worry is judgment, and humans’ innate ability to read context. “Soldiers have to rely on intention or subtle clues,” said Bonnie Docherty, an arms expert at Human Rights Watch and a lecturer at Harvard Law. “We have serious concerns that a fully autonomous weapon could ever reach that level.”

Especially in battlefields where soldiers aren’t always wearing distinguishing uniforms, the ability to recognize actions from other humans becomes important. Even in cases where a robot can tell friend from foe, it might have trouble recognizing if the enemy is surrendering or is wounded.

Media depictions like Terminator have anthropomorphized warrior robots, which “implies a level of cognitive ability that these machines do not have,” said Paul Scharre, who has worked on the Defense Department’s autonomous-weapon policies. “Images from science fiction are not very accurate or very helpful.”

Killer robots won’t look like humans, and they probably won’t act like them either. “What [robots] really lack is a meaningful understanding of context and situation,” Asaro said. “It’s hard to believe that a machine could be making those kinds of meaningful choices about life and death.”

Other concerns include the possibility of malicious hackers taking over a robot army. And then there’s the possibility of a “flash war” starting over a mistake. If one robot malfunctions and fires, robots on the other side could return fire automatically, starting a conflict at the speed of circuitry before a human could intervene.

The arms-race worry is very real, Asaro said. Unlike nuclear weapons, which require extreme technical sophistication, killer robots won’t be hard to replicate. “Once these technologies are in existence, they’ll proliferate widely,” he said. “There are even software systems that could be implemented through the Internet.”

Despite all these concerns, robot advocates say the rush to ban the technology outright is ill-conceived. While preaching caution on development, they also say it’s important to test the systems’ limits before crafting policy.

They fear a ban based on imaginations of an android toting a machine gun could interfere with lifesaving technologies like rapid-response air-defense missiles. And while context recognition remains a huge challenge, advocates say it’s at least worth exploring whether robot warriors can actually reduce civilian casualties in some circumstances.

There’s also the challenge of enforcement. Even if a ban were enacted, it would be hard to tell whether a drone fired a missile on its own, or whether another weapons system was operating under the commands of a human or an algorithm.

It’s not that anyone has killer-robot plans just yet. In fact, the panelists agreed the U.S. has been thoughtful and responsible in approaching the issue. The Defense Department even issued a policy statement on the technology in late 2012 that established a five- to 10-year moratorium on developing killer robots.

But an American stand-alone policy might not be enough. According to Scharre, at least 23 countries have joined the race to build armed drones. It’s not hard to imagine a similar push to build machines that could replace combat soldiers — with or without U.S. involvement.

Meanwhile, the issue will only get trickier. We won’t make the jump from a flesh-and-blood soldier to a T-1000, but some combat systems could gradually phase in more and more autonomy.

Some robots will have in-the-loop systems, where human operators monitor actions and can override at any point. The longer-term prospect is an out-of-the-loop robot, one that carries out missions with minimal supervision and no possibility for human control.

Panelists agreed that the best chance for a ban will probably come wrapped in language other than “robot ban.” They hope to persuade countries to agree to something in more positive language — that their autonomous weapons will have a human operator monitoring them and ready to take over at any time.

Regardless of just what is allowed, it’s important that militaries know where to draw the line before they have the technology to build killer robots. A treaty “frees up weapons developers to know what they’re allowed to do,” Scharre said.

As robots get more complex — and better able to read and respond to human cues — it’s likely some advocates will argue they deserve a more prominent place in combat. But for McGovern and his allies, such weapons would have to meet a challenge they now deem impossible: Can you build a robot not only with a brain but with a soul?
