Late Sunday, 116 entrepreneurs, including Elon Musk, released a letter to the United Nations warning of the dangerous "Pandora's Box" presented by weapons that make their own decisions about when to kill. Publications including The Guardian and The Washington Post ran headlines saying Musk and his cosigners had called for a "ban" on "killer robots."
Those headlines were misleading. The letter doesn't explicitly call for a ban, although one of the organizers has suggested it does. Rather, it offers technical advice to a UN committee on autonomous weapons formed in December. The group's warning that autonomous machines "can be weapons of terror" makes sense. But trying to ban them outright is probably a waste of time.
That's not because it's impossible to ban weapons technologies. Some 192 nations have signed the Chemical Weapons Convention, which prohibits the production and use of chemical weapons, for example. An international agreement blocking use of laser weapons intended to cause permanent blindness is holding up nicely.
Weapons systems that make their own decisions are a very different, and much broader, category. The line between weapons controlled by humans and those that fire autonomously is blurry, and many nations, including the US, have begun the process of crossing it. Moreover, technologies such as robotic aircraft and ground vehicles have proved so useful that armed forces may find it irresistible to give them more independence, including the authority to kill.
A recent report on artificial intelligence and war commissioned by the Office of the Director of National Intelligence concluded that the technology is set to massively magnify military power. Greg Allen, coauthor of the report and now an adjunct fellow at the Center for a New American Security, a nonpartisan think tank, doesn't expect the US and other countries to be able to stop themselves from building arsenals of weapons that can decide when to fire. "You are unlikely to achieve a full ban of autonomous weapons," he says. "The temptation for using them is going to be very intense."
The US Department of Defense has a policy to keep a "human in the loop" when deploying lethal force. Pentagon spokesperson Roger Cabiness said that the US has declined to endorse a ban on autonomous weapons, noting that the department's Law of War Manual specifies that autonomy can help forces meet their legal and ethical obligations. "For example, commanders can use precision-guided weapon systems with homing functions to reduce the risk of civilian casualties," said Cabiness. In 2015, the UK government responded to calls for a ban on autonomous weapons by saying there was no need for one, and that existing international law was sufficient.
You don't have to look far to find weapons already making their own decisions to some degree. One is the AEGIS ship-based missile and aircraft-defense system used by the US Navy. It is capable of engaging approaching planes or missiles without human intervention, according to a CNAS report.
Other examples include a drone called the Harpy, developed in Israel, which patrols an area searching for radar signals. If it detects one, it automatically dive-bombs the signal's source. Manufacturer Israel Aerospace Industries markets the Harpy as a "'Fire and Forget' autonomous weapon."
Musk signed an earlier letter in 2015 alongside thousands of AI experts in academia and industry that called for a ban on offensive use of autonomous weapons. Like Sunday's letter, it was supported, and published, by the Future of Life Institute, an organization that ponders long-term effects of AI and other technologies, and to which Musk has gifted $10 million.
Toby Walsh, an AI professor at the University of New South Wales, coordinated the latest letter, a spokesperson for the university told WIRED late Monday. The spokesperson said that listing autonomous weapons under the United Nations Convention on Certain Conventional Weapons would "effectively" be a ban. The full name of the convention describes it as providing for both prohibitions and restrictions on the use of weapons.
Rebecca Crootof, a researcher at Yale Law School, says people concerned about autonomous weapons systems should consider more constructive alternatives to campaigning for a total ban.
"That time and energy would be much better spent developing regulations," she says. International laws such as the Geneva Convention that restrict the activities of human soldiers could be adapted to govern what robot soldiers can do on the battlefield, for example. Other regulations short of a ban could try to clear up the murky question of who is held legally accountable when a piece of software makes a bad decision, say by killing civilians.
UPDATE, August 22, 11:45 am ET: This story has been updated to include comments from a representative of the University of New South Wales.
UPDATE, August 24, 12:45 pm ET: This story has been updated to include comments from the US Defense Department.