Soon, Drones May Be Able to Make Lethal Decisions on Their Own

A NASA Global Hawk unmanned aerial vehicle, or drone aircraft, takes off during a Hurricane and Severe Storm Sentinel, or HS3, mission at NASA's Wallops Flight Facility in Wallops Island, Virginia, on September 10, 2013.
Joshua Foust, Defense One
Oct. 8, 2013, 5:53 a.m.

Scientists, engineers and policymakers are all figuring out how drones can be used better and more smartly: more precisely, with less damage to civilians, longer range and better staying power. One method under development is increasing the autonomy of the drone itself.

Eventually, drones may have the technical ability to make even lethal decisions autonomously: to respond to a programmed set of inputs, select a target and fire their weapons without a human reviewing or checking the result. Yet the idea of the U.S. military deploying a lethal autonomous robot, or LAR, is sparking controversy. Though autonomy might address some of the current downsides of how drones are used, it introduces new downsides that policymakers are only just learning to grapple with.
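
To make the concept concrete, consider what such a decision loop might look like in code. This is a minimal, hypothetical sketch of the sense-select-engage logic at issue; every name, threshold and data field below is invented for illustration and describes no real system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    """A hypothetical sensor track; every field is invented for illustration."""
    track_id: int
    hostile_confidence: float   # classifier score in [0, 1]
    collateral_estimate: int    # estimated civilians near the target

# Hypothetical rules of engagement encoded as fixed thresholds.
CONFIDENCE_FLOOR = 0.95
COLLATERAL_CEILING = 0

def select_target(tracks: list[Track]) -> Optional[Track]:
    """The contested step: choose a target with no human review of the result."""
    candidates = [
        t for t in tracks
        if t.hostile_confidence >= CONFIDENCE_FLOOR
        and t.collateral_estimate <= COLLATERAL_CEILING
    ]
    # Engage the highest-confidence candidate, or nothing at all.
    return max(candidates, key=lambda t: t.hostile_confidence, default=None)
```

The controversy is precisely that this selection step, from classification through the decision to fire, would run with no human reviewing or checking the result.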

The basic conceit behind a LAR is that it can outperform and outthink a human operator. “If a drone’s system is sophisticated enough, it could be less emotional, more selective and able to provide force in a way that achieves a tactical objective with the least harm,” said Purdue University Professor Samuel Liles. “A lethal autonomous robot can aim better, target better, select better, and in general be a better asset with the linked ISR [intelligence, surveillance, and reconnaissance] packages it can run.”

Though the pace of drone strikes has slowed — only 21 have struck Pakistan in 2013, versus 122 in 2010, according to the New America Foundation — unmanned vehicles remain a staple of the American counterinsurgency toolkit. But drones have built-in vulnerabilities that military planners have not yet grappled with. Last year, for example, an aerospace engineer told the House Homeland Security Committee that with some inexpensive equipment he could hack into a drone and hijack it for some rogue purpose.

Drones have been hackable for years. In 2009, defense officials told reporters that Iranian-backed militias used $26 of off-the-shelf software to intercept the video feeds of drones flying over Iraq. And in 2011, it was reported that a virus had infected some drone control systems at Creech Air Force Base in Nevada, raising concerns about the security of unmanned aircraft.
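
The 2009 interception was possible because those early video downlinks were broadcast unencrypted, so any receiver tuned to the right frequency could decode them. The standard fix is to encrypt the link, making captured traffic useless without the key. Here is a minimal sketch using Python's cryptography library; the frame data and key handling are illustrative, not how any fielded datalink works:

```python
from cryptography.fernet import Fernet

# A shared symmetric key provisioned to the aircraft and ground station
# before the mission; distributing and rotating keys across a fleet is
# the hard part in practice.
key = Fernet.generate_key()
link = Fernet(key)

frame = b"raw video frame bytes"     # stand-in for sensor data

ciphertext = link.encrypt(frame)     # what actually crosses the airwaves
# An eavesdropper capturing `ciphertext` learns nothing useful; only a
# receiver holding `key` can recover the frame.
assert link.decrypt(ciphertext) == frame
```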

It may be that the only way to make a drone truly secure is to allow it to make its own decisions without a human controller: if it receives no outside commands, then it cannot be hacked (at least not as easily). And that is where LARs might be most attractive.
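
In security terms, this is an argument about attack surface: a remotely piloted aircraft must accept messages from outside, and every accepted message is a potential exploit, while a fully autonomous one need accept none after launch. A toy sketch of the contrast, with an interface invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Vehicle:
    """Toy stand-in for an aircraft; the interface is invented for illustration."""
    position: float = 0.0
    actions: list = field(default_factory=list)

    def execute(self, action: str) -> None:
        self.actions.append(action)

def remotely_piloted_step(vehicle: Vehicle, uplink_message: str) -> None:
    # Trusts a message from outside every cycle: the uplink itself is
    # the attack surface (it can be jammed, spoofed, or hijacked).
    vehicle.execute(uplink_message)

def autonomous_step(vehicle: Vehicle) -> None:
    # Accepts no input after launch: behavior depends only on onboard
    # sensing and mission logic loaded beforehand, leaving no command
    # channel for an attacker to exploit.
    observation = vehicle.position            # stand-in for real sensing
    vehicle.execute("loiter" if observation > 10.0 else "advance")
```

The trade, of course, is the subject of the rest of this debate: closing the command channel also closes the channel through which a human could call off a strike.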

Though they do not yet exist, and are not possible with current technology, LARs are the subject of fierce debate in academia, the military and policy circles. Still, many treat their development as inevitable. But how practical would LARs be on the battlefield?

Heather Roff, a visiting professor at the University of Denver, said many conflicts, such as the civil war in Syria, are too complex for LARs. “It’s one thing to use them in a conventional conflict,” where large militaries fight away from cities, “but we tend to fight asymmetric battles. And interventions are not only military campaigns — the civilian effects matter.”

Roff says that because LARs are not sophisticated enough to meaningfully distinguish between civilians and militants in a complex, urban environment, they probably would not be effective at achieving a constructive military end — if only because of how a civilian population would likely react to self-governing machines firing weapons at their city. “The idea that you could solve that crisis with a robotic weapon is naïve and dangerous,” she said.

Any autonomous weapons system is unlikely to be used by the military, except in extraordinary circumstances, argued Will McCants, a fellow at the Brookings Saban Center and director of its project on U.S. Relations with the Islamic World. “You could imagine a scenario,” he said, “in which LAR planes hunted surface-to-air missiles as part of a campaign to destroy Syria’s air defenses.” It would remove the risk to U.S. pilots while exclusively targeting war equipment that has no civilian purpose.

But such a campaign is unlikely ever to happen. “Ultimately, the national security staff,” he said, referring to the officials and advisers who make up the National Security Council, “does not want to give up control of the conflict.” The politics of the decision to deploy any kind of autonomous weaponry matters as much as the capability of the technology itself. “With an autonomous system, the consequences of failure are worse in the public’s mind. There’s something about human error that makes people more comfortable with collateral damage if a person does it,” McCants said.

That’s not to say anyone is truly comfortable with collateral damage. “They’d rather own these kinds of decisions themselves and be able to chalk it up to human error,” McCants said. Political issues aside, B.J. Strawser, assistant professor at the Naval Postgraduate School, says that LARs simply could not be used effectively in a place like Syria. “You’d need exceedingly careful and restrictive ROEs [rules of engagement], and I worry that anyone could carry that out effectively, autonomous weapon or not,” he said.

“I don’t think any actor, human or not, is capable of carrying out the refined, precise ROEs that would enable an armed intervention to be helpful in Syria.”
