by Nafeez Ahmed
12 May 2016
from Medium Website
Dr. Nafeez Ahmed is an investigative journalist, bestselling author and international security scholar. A former Guardian writer, he writes the 'System Shift' column for VICE's Motherboard, and is a weekly columnist for Middle East Eye. He is the winner of a 2015 Project Censored Award for Outstanding Investigative Journalism for his Guardian work, and was twice selected in the Evening Standard's top 1,000 most globally influential Londoners, in 2014 and 2015. Nafeez has also written and reported for The Independent, Sydney Morning Herald, The Age, The Scotsman, Foreign Policy, The Atlantic, Quartz, Prospect, New Statesman, Le Monde diplomatique, New Internationalist, The Ecologist, Alternet, Counterpunch and Truthout, among others.
Imagine one of these
giant robot dog things being weaponized
and chasing you
through the jungle because you turned up
on a Pentagon kill
list after posting angry stuff on social media
Official US defence and NATO
documents
confirm that autonomous weapon
systems
will kill targets, including
civilians, based on
tweets, blogs and Instagram.
An unclassified 2016 Department of Defense (DoD) document,
the
Human Systems Roadmap Review,
reveals that the US military plans to create artificially
intelligent (AI)
autonomous weapon systems, which will use predictive social media
analytics to make decisions on lethal force with minimal human
involvement.
Despite official insistence that humans will retain a "meaningful"
degree of control over autonomous weapon systems, this and other
Pentagon documents dated from 2015 to 2016 confirm that US military
planners are already developing technologies designed to enable
swarms of "self-aware" interconnected robots to design and execute
kill operations against robot-selected targets.
More alarmingly, the documents show that the DoD believes that
within just fifteen years, it will be feasible for mission planning,
target selection and the deployment of lethal force to be
delegated entirely to autonomous weapon systems across air, land and sea.
The Pentagon expects AI threat assessments for these autonomous operations to be derived from massive data sets including blogs, websites, and multimedia posts on social media platforms such as Twitter and Instagram.
The raft of Pentagon documentation flatly contradicts Deputy Defense Secretary Robert Work's denial that the DoD is planning to develop killer robots.
In a widely reported March conversation with Washington Post columnist David Ignatius, Work conceded that this stance may change as rival powers work to create such technologies:
"We might be going up against a
competitor that is more willing to delegate authority to
machines than we are, and as that competition unfolds we will
have to make decisions on how we best can compete."
But, he insisted,
"We will not delegate lethal
authority to a machine to make a decision," except for "cyber or
electronic warfare."
He lied.
Official US defence and NATO documents dissected by
INSURGE intelligence reveal that
Western governments are already planning to develop autonomous
weapons systems with the capacity to make decisions on lethal force
- and that such systems, in the future, are even expected to make
decisions on acceptable levels of "collateral damage."
Behind public talks, a secret arms race
Efforts to create autonomous robot killers have evolved over the
last decade, but have come to a head this year.
A National Defense Industrial Association (NDIA) conference on
Ground Robotics Capabilities in March hosted government officials
and industry leaders confirming that the Pentagon was developing
robot teams that would be able to use lethal force without direction
from human operators.
In April, government representatives and international NGOs convened
at the United Nations in Geneva to discuss the legal and ethical
issues surrounding lethal autonomous weapon systems (LAWS).
That month, the UK government launched a parliamentary inquiry into
robotics and AI.
And earlier in May, the White House Office of Science and Technology Policy announced a series of public workshops on the wide-ranging social and economic implications of AI.
Prototype Terminator Bots?
Most media outlets have reported the fact that so far, governments
have not ruled out the long-term possibility that intelligent robots
could be eventually authorized to make decisions to kill human
targets autonomously.
But contrary to Robert Work's claim, active research and
development efforts to explore this possibility are already
underway. The plans can be gleaned from several unclassified
Pentagon documents in the public record that have gone unnoticed,
until now.
Among them is a document released in February 2016 from the Pentagon's Human Systems Community of Interest (HSCOI), which is listed on the Defense Innovation Marketplace.
The document shows not only that the Pentagon is actively creating
lethal autonomous weapon systems, but that a crucial component of
the decision-making process for such robotic systems will include
complex Big Data models, one of whose inputs will be public social
media posts.
Robots that kill 'like people'
The HSCOI is a little-known multi-agency research and development network seeded by the Office of the Secretary of Defense (OSD), which acts as a central hub for a vast range of science and technology work across US military and intelligence agencies.
The document is a 53-page presentation prepared by HSCOI chair, Dr.
John Tangney, who is Director of the Office of Naval
Research's Human and Bioengineered Systems Division.
Titled Human Systems Roadmap Review, the
slides were presented at the NDIA's Human Systems Conference in
February.
The document says that one of the five "building blocks" of the
Human Systems program is to,
"Network-enable, autonomous weapons
hardened to operate in a future Cyber/EW [electronic warfare]
Environment." This would allow for "cooperative weapon concepts
in communications-denied environments."
But then the document goes further, identifying one of the "focus areas" for science and technology development as,
"Autonomous Weapons: Systems that
can take action, when needed", along with "Architectures for
Autonomous Agents and Synthetic Teammates."
The final objective is the establishment
of "autonomous control of multiple unmanned systems for military
operations."
Such autonomous systems must be capable of selecting and engaging
targets by themselves - with human "control" drastically minimized
to affirming that the operation remains within the parameters of the
Commander's "intent."
The document explicitly asserts that these new autonomous weapon
systems should be able to respond to threats without human
involvement, but in a way that simulates human behavior and
cognition.
The DoD's HSCOI program must,
"bridge the gap between high
fidelity simulations of human cognition in laboratory tasks and
complex, dynamic environments."
Referring to the "Mechanisms of
Cognitive Processing" of autonomous systems, the document highlights
the need for:
"More robust, valid, and integrated
mechanisms that enable constructive agents that truly think and
act like people."
The Pentagon's ultimate goal is to develop,
"Autonomous control of multiple
weapon systems with fewer personnel" as a "force multiplier."
The new systems must display,
"highly reliable autonomous
cooperative behavior" to allow "agile and robust mission
effectiveness across a wide range of situations, and with the
many ambiguities associated with the 'fog of war.'"
Resurrecting the human terrain
The HSCOI consists of senior officials from the US Army, Navy, Marine Corps, Air Force and the Defense Advanced Research Projects Agency (DARPA), and is overseen by the Assistant Secretary of Defense for Research & Engineering and the Assistant Secretary of Defense for Health Affairs.
HSCOI's work goes well beyond simply creating autonomous weapons
systems. An integral part of this is simultaneously advancing
human-machine interfaces and predictive analytics.
The latter includes what a HSCOI brochure for the technology
industry, 'Challenges, Opportunities and Future Efforts', describes
as creating,
"models for socially-based threat
prediction" as part of "human activity ISR."
This is shorthand for intelligence,
surveillance and reconnaissance of a population in an 'area of
interest', by collecting and analyzing data on the behaviors,
culture, social structure, networks, relationships, motivation,
intent, vulnerabilities, and capabilities of a human group.
The idea, according to the brochure, is to bring together open
source data from a wide spectrum, including social media sources, in
a single analytical interface that can,
"display knowledge of beliefs,
attitudes and norms that motivate in uncertain environments; use
that knowledge to construct courses of action to achieve
Commander's intent and minimize unintended consequences; [and]
construct models to allow accurate forecasts of predicted
events."
The Human Systems Roadmap Review
document from February 2016 shows that this area of development is a
legacy of the Pentagon's controversial "human terrain" program.
The Human Terrain System (HTS) was a US Army Training and
Doctrine Command (TRADOC) program established in 2006, which
embedded social scientists in the field to augment counterinsurgency
operations in theaters like Iraq and Afghanistan.
The idea was to use social scientists and cultural anthropologists
to provide the US military actionable insight into local populations
to facilitate operations - in other words, to weaponize social
science.
The $725 million program was
shut down in September 2014 in the
wake of growing controversy over its sheer incompetence.
The HSCOI program that replaces it includes social sciences but the
greater emphasis is now on combining them with predictive
computational models based on Big Data.
The brochure puts the projected budget
for the new human systems project at $450 million.
The Pentagon's Human Systems Roadmap Review demonstrates that far
from being eliminated, the HTS paradigm has been upgraded as part of
a wider multi-agency program that involves integrating Big Data
analytics with human-machine interfaces, and ultimately autonomous
weapon systems.
The new science of social media crystal ball gazing
The 2016 human systems roadmap explains that the Pentagon's "vision"
is to use "effective engagement with the dynamic human terrain to
make better courses of action and predict human responses to our
actions" based on,
"predictive analytics for
multi-source data."
Are those 'soldiers' in the photo human, or are they really humanoid (killer) robots?
In a slide entitled, 'Exploiting Social Data, Dominating Human
Terrain, Effective Engagement,' the document provides further detail
on the Pentagon's goals:
"Effectively evaluate/engage social
influence groups in the op-environment to understand and exploit
support, threats, and vulnerabilities throughout the conflict
space. Master the new information environment with capability to
exploit new data sources rapidly."
The Pentagon wants to draw on massive
repositories of open source data that can support,
"predictive, autonomous analytics to
forecast and mitigate human threats and events."
This means not just developing,
"behavioral models that reveal
sociocultural uncertainty and mission risk", but creating
"forecast models for novel threats and critical events with
48-72 hour timeframes", and even establishing technology that
will use such data to "provide real-time situation awareness."
According to the document,
"full spectrum social media
analysis" is to play a huge role in this modeling, to support
"I/W [irregular warfare], information operations, and strategic
communications."
This is broken down further into three
core areas:
"Media predictive analytics;
Content-based text and video retrieval; Social media
exploitation for intel."
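To make concrete what "media predictive analytics" with a "48-72 hour" forecasting horizon could mean in practice, here is a minimal sketch in Python: count keyword-bearing posts per day and extrapolate the next three days with a simple moving average. The posts, the keyword watch-list and the forecasting rule are all invented for illustration; the Pentagon's actual models are not public and are certainly far more elaborate.

```python
# Minimal sketch of "media predictive analytics": count keyword mentions per day
# in a stream of timestamped posts, then extrapolate the next 48-72 hours with a
# simple moving average. Posts, keywords and the forecast rule are invented for
# illustration only.
from collections import Counter
from datetime import date, timedelta

# Hypothetical timestamped posts (in practice: millions of tweets/blog entries).
posts = [
    (date(2016, 5, 1), "protest planned near the base this weekend"),
    (date(2016, 5, 1), "weather is great today"),
    (date(2016, 5, 2), "join the protest, bring banners"),
    (date(2016, 5, 3), "another protest announced, roads blocked"),
    (date(2016, 5, 3), "march and protest downtown tomorrow"),
]
keywords = {"protest", "march", "blockade"}  # hypothetical watch-list of terms

# Daily counts of posts containing at least one watched keyword.
daily = Counter(day for day, text in posts
                if keywords & set(text.lower().split()))

days = sorted(daily)
window = days[-3:]                       # last three observed days
avg = sum(daily[d] for d in window) / len(window)

# Naive 72-hour "forecast": assume the recent average rate continues.
for step in range(1, 4):
    future = days[-1] + timedelta(days=step)
    print(f"{future}: expected ~{avg:.1f} keyword-bearing posts")
```

Even this toy version exposes the core assumption: that chatter about a topic reliably predicts real-world events.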
The document refers to the use of social
media data to forecast future threats and, on this basis,
automatically develop recommendations for a "course of action" (CoA).
Under the title 'Weak Signal Analysis & Social Network Analysis for
Threat Forecasting', the Pentagon highlights the need to:
"Develop real-time understanding of
uncertain context with low-cost tools that are easy to train,
reduce analyst workload, and inform COA [course of action]
selection/analysis."
In other words, the human input into the
development of course of action "selection/analysis" must be
increasingly reduced, and replaced with automated predictive
analytical models that draw extensively on social media data.
This can even be used to inform soldiers of real-time threats using
augmented reality during operations. The document refers to "Social
Media Fusion to alert tactical edge Soldiers" and "Person of
Interest recognition and associated relations."
The idea is to identify potential targets - 'persons of interest'
- and their networks, in real-time, using social media data as
'intelligence.'
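The "Person of Interest recognition and associated relations" capability is, at its core, social network analysis: build a graph of who mentions or links to whom, then rank accounts by centrality. A minimal sketch using the networkx library is below; the accounts and mention edges are invented for illustration, and a real system would ingest live platform data at scale.

```python
# Minimal social-network-analysis sketch: rank accounts in a mention graph by
# centrality, the kind of scoring that underpins "person of interest" flagging.
# Accounts and mention edges are invented for illustration.
import networkx as nx

mentions = [  # (author, mentioned_account) pairs, e.g. scraped from posts
    ("alice", "activist_blog"),
    ("bob", "activist_blog"),
    ("carol", "activist_blog"),
    ("activist_blog", "news_site"),
    ("dave", "news_site"),
    ("carol", "bob"),
]

G = nx.DiGraph()
G.add_edges_from(mentions)

# PageRank-style centrality: accounts many others point to score highest.
scores = nx.pagerank(G)
for account, score in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{account}: {score:.3f}")
```

Note what the score actually measures: prominence in the network, not hostility - which is exactly how a widely linked pacifist blog can end up flagged.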
Meaningful human control without humans
Both the US and British governments are therefore rapidly attempting
to redefine "human control" and "human intent" in the context of
autonomous systems.
Among the problems that emerged at the UN meetings in April is a tendency to dilute the criteria by which an autonomous weapon system can be described as being under "meaningful" human control.
A separate Pentagon document dated March 2016 - a set of
presentation slides for that month's IEEE Conference on Cognitive
Methods in Situation Awareness & Decision Support - insists that
DoD policy is to ensure that autonomous systems ultimately operate
under human supervision:
"[The] main benefits of autonomous
capabilities are to extend and complement human performance, not
necessarily provide a direct replacement of humans."
Unfortunately, there is a 'but'.
The March document, Autonomous Horizons: System Autonomy in the
Air Force, was authored by Dr. Greg Zacharias, Chief
Scientist of the US Air Force. The IEEE conference where it was
presented was sponsored by two leading government defense
contractors, Lockheed Martin and United Technologies Corporation,
among other patrons.
Further passages of the document are revealing:
"Autonomous decisions can lead to
high-regret actions, especially in uncertain environments."
In particular, the document observes:
"Some DoD activity, such as force
application, will occur in complex, unpredictable, and contested
environments. Risk is high."
The solution, supposedly, is to design
machines that basically think, learn and problem solve like humans.
An autonomous AI system should,
"be congruent with the way humans
parse the problem" and driven by "aiding/automation knowledge
management processes along lines of the way humans solve problem
[sic]."
A section titled 'AFRL [Air Force
Research Laboratory] Roadmap for Autonomy' thus demonstrates how by
2020, the US Air Force envisages "Machine-Assisted Ops compressing
the kill chain."
The bottom of the slide reads:
"Decisions at the Speed of
Computing."
This two-staged "kill chain" is broken
down as follows:
-
firstly, "Defensive system mgr
[manager] IDs threats & recommends actions"
-
secondly, "Intelligence analytic
system fuses INT [intelligence] data & cues analyst of
threats"
In this structure, a lethal autonomous weapon system draws on fused intelligence data to identify a threat and recommend an "action", with the analyst merely cued to the machine's assessment.
The analyst's role here is simply to authorize the kill, but in
reality the essential importance of human control - assessment of
the integrity of the kill decision - has been relegated to the end
of an entirely automated analytical process, as a mere
perfunctory obligation.
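Purely as a schematic of that control flow - machine identifies threats, fuses intelligence, recommends an action, human signs off at the end - the sketch below shows how thin the "human in the loop" becomes once the pipeline is automated end to end. Every function, data structure and threshold here is hypothetical; it illustrates the decision structure described in the slides, not any real system.

```python
# Schematic of the two-stage decision pipeline described in the slides:
# the machine detects, fuses and recommends; the human appears only as a
# final yes/no gate on the machine's recommendation. Entirely hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    target_id: str
    confidence: float
    proposed_action: str

def defensive_system_manager(sensor_data: list[str]) -> list[str]:
    # Stage 1 (hypothetical): "IDs threats & recommends actions".
    return [d for d in sensor_data if "anomaly" in d]

def intelligence_analytic_system(threats: list[str]) -> Recommendation:
    # Stage 2 (hypothetical): "fuses INT data & cues analyst of threats".
    return Recommendation(target_id=threats[0], confidence=0.87,
                          proposed_action="engage")

def analyst_review(rec: Recommendation) -> bool:
    # The only human step: a single approve/deny on the machine's output.
    print(f"Cueing analyst: {rec}")
    return rec.confidence > 0.8   # in practice, a person clicking a button

sensor_feed = ["routine traffic", "anomaly: pattern match on watch-list"]
threats = defensive_system_manager(sensor_feed)
if threats:
    rec = intelligence_analytic_system(threats)
    print("authorized" if analyst_review(rec) else "denied")
```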
By 2030, the document sees human involvement in this process as
being reduced even further to an absolute minimum.
While a human operator may be kept "in
the loop" (in the document's words) the Pentagon looks forward to a
fully autonomous system consisting of:
"Optimized platform operations
delivering integrated ISR [intelligence, surveillance and
reconnaissance] and weapon effects."
The goal, in other words, is a single
integrated lethal autonomous weapon system combining full spectrum
analysis of all data sources with "weapon effects" - that is,
target selection and execution.
The document takes pains to layer this vision with a sense of ever-present human oversight.
AI "system
self-awareness"
Yet an even more blunt assertion of the Pentagon's objective is laid
out in a third document, a set of slides titled DoD Autonomy Roadmap
presented exactly a year earlier at the NDIA's Defense Tech Expo.
The document authored by Dr. Jon Bornstein, who leads the
DoD's Autonomy Community of Interest (ACOI), begins by framing its
contents with the caveat:
"Neither Warfighter nor machine is
truly autonomous."
Yet it goes on to call for machine
agents to develop:
"Perception, reasoning, and
intelligence allow[ing] for entities to have existence, intent,
relationships, and understanding in the battle space relative to
a mission."
This will be the foundation for two
types of weapon systems:
In the near term, machine agents will be
able,
"to evolve behaviors over time based
on a complex and ever-changing knowledge base of the battle
space... in the context of mission, background knowledge, intent,
and sensor information."
However, it is the Pentagon's "far term"
vision for machine agents as "self-aware" systems that is
particularly disturbing:
"Far Term:
-
Ontologies adjusted through
common-sense knowledge via intuition
-
Learning approaches based on
self-exploration and social interactions
-
Shared cognition
-
Behavioral stability through
self-modification
-
System self-awareness"
It is in this context of the
"self-awareness" of an autonomous weapon system that the document
clarifies the need for the system to autonomously develop forward
decisions for action, namely:
"Autonomous systems that
appropriately use internal model-based/deliberative planning
approaches and sensing/perception driven actions/control."
The Pentagon specifically hopes to
create what it calls "trusted autonomous systems", that is, machine
agents whose behavior and reasoning can be fully understood, and
therefore "trusted" by humans:
"Collaboration means there must be
an understanding of and confidence in behaviors and decision
making across a range of conditions. Agent transparency enables
the human to understand what the agent is doing and why."
Once again, this is to facilitate a
process by which humans are increasingly removed from the nitty
gritty of operations.
In the "Mid Term", there will be "Improved methods for sharing of
authority" between humans and machines.
In the "Far Term", this will have
evolved to a machine system functioning autonomously on the basis of
"Awareness of 'commanders intent'" and the "use of indirect feedback
mechanisms."
This will finally create the capacity to deploy,
"Scalable Teaming of Autonomous
Systems (STAS)", free of overt human direction, in which
multiple machine agents display "shared perception, intent and
execution."
Teams of autonomous weapon systems will
display,
-
"Robust self-organization,
adaptation, and collaboration"
-
"Dynamic adaption, ability to
self-organize and dynamically restructure"
-
"Agent-to-agent collaboration"
Notice the lack of human collaboration.
The "far term" vision for such "self-aware" autonomous weapon
systems is not, as Robert Work claimed, limited to cyber or
electronic warfare, but will include:
These operations might even take place
in tight urban environments,
"in close proximity to other manned
& unmanned systems including crowded military & civilian areas."
The document admits, though, that the Pentagon's major challenge is
to mitigate against unpredictable environments and emergent
behavior.
Autonomous systems are,
"difficult to assure correct
behavior in a countless number of environmental conditions" and
are "difficult to sufficiently capture and understand all
intended and unintended consequences."
Terminator teams, led by humans
The Autonomy roadmap document clearly confirms that the Pentagon's
final objective is to delegate the bulk of military operations to
autonomous machines, capable of inflicting "Collective Defeat of
Hard and Deeply Buried Targets."
One type of machine agent is the "Autonomous Squad Member (Army)",
which,
"Integrates machine semantic
understanding, reasoning, and perception into a ground robotic
system", and displays "early implementation of a goal reasoning
model, Goal-Directed Autonomy (GDA) to provide the robot the
ability to self-select new goals when it encounters an
unanticipated situation."
Human team members in the squad must be
able "to understand an intelligent agent's intent, performance,
future plans and reasoning processes."
Another type is described under the header, 'Autonomy for Air Combat
Missions Team (AF).'
Such an autonomous air team, the document envisages,
"Develops goal-directed reasoning,
machine learning and operator interaction techniques to enable
management of multiple, team UAVs."
This will achieve:
"Autonomous decision and team
learning enable the TBM [Tactical Battle Manager] to maximize
team effectiveness and survivability."
TBM refers to battle management autonomy software for unmanned aircraft.
The Pentagon still, of course, wants to ensure that there remains a
human manual override, which the document describes as enabling a
human supervisor,
"to 'call a play' or manually
control the system."
Targeting evil antiwar bloggers
Yet the biggest challenge, nowhere acknowledged in any of the
documents, is ensuring that automated AI target selection actually
selects real threats, rather than generating or pursuing false
positives.
According to the Human Systems roadmap document, the Pentagon has
already demonstrated extensive AI analytical capabilities in
real-time social media analysis, through a NATO live exercise last
year.
During the exercise, Trident Juncture - NATO's largest exercise in
a decade - US military personnel,
"curated over 2M [million] relevant
tweets, including information attacks (trolling) and other
conflicts in the information space, including 6 months of
baseline analysis."
They also,
"curated and analyzed over 20K [i.e.
20,000] tweets and 700 Instagrams during the exercise."
The Pentagon document thus emphasizes that the US Army and Navy can
now already,
"provide real-time situation
awareness and automated analytics of social media sources with
low manning, at affordable cost", so that military leaders can
"rapidly see whole patterns of data flow and critical pieces of
data" and therefore "discern actionable information readily."
The primary contributor to the Trident Juncture social media
analysis for NATO, which occurred over two weeks from late October
to early November 2015, was a team led by information scientist
Professor Nitin Agarwal of the University of Arkansas, Little
Rock.
Agarwal's project was funded by the US Office of Naval Research, Air
Force Research Laboratory and Army Research Office, and conducted in
collaboration with NATO's Allied Joint Force Command and NATO
Strategic Communications Center of Excellence.
Slides from a
conference presentation about the
research show that the NATO-backed project attempted to identify a
hostile blog network during the exercise containing "anti-NATO and
anti-US propaganda."
Among the top seven blogs identified as key nodes for anti-NATO
internet traffic were websites run by,
-
Andreas Speck, an antiwar
activist
-
War Resisters International (WRI)
-
Egyptian democracy campaigner
Maikel Nabil Sanad,
...along with some Spanish language
anti-militarism sites.
Andreas Speck is a former staffer at WRI, which is an
international network of pacifist NGOs with offices and members in
the UK, Western Europe and the US. One of its funders is the Joseph
Rowntree Charitable Trust.
The WRI is fundamentally committed to nonviolence, and campaigns
against war and militarism in all forms.
Most of the blogs identified by Agarwal's NATO project are
affiliated to the WRI, including for instance nomilservice.com,
WRI's Egyptian affiliate founded by Maikel Nabil, which campaigns
against compulsory military service in Egypt.
Nabil was nominated for the Nobel Peace
Prize and even supported by the White House for his conscientious
objection to Egyptian military atrocities.
The NATO project urges:
"These 7 blogs need to be further
monitored."
The project was touted by Agarwal as a
great success:
it managed to extract 635 identity
markers through metadata from the blog network, including 65
email addresses, 3 "persons", and 67 phone numbers.
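Harvesting "identity markers" like email addresses and phone numbers from scraped blog pages is routine pattern matching. A minimal sketch with regular expressions is below; the sample text is invented and the patterns are deliberately simplistic compared with what production scrapers use.

```python
# Minimal sketch of metadata extraction from scraped blog text: pull email
# addresses and phone-number-like strings with regular expressions.
# The sample text and patterns are illustrative only.
import re

page_text = """
Contact us at info@example.org or press@example.org.
Call +1 501-555-0100 for interviews.
"""

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

emails = EMAIL_RE.findall(page_text)
phones = PHONE_RE.findall(page_text)

print("emails:", emails)   # ['info@example.org', 'press@example.org']
print("phones:", [p.strip() for p in phones])
```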
This is the same sort of metadata that
is routinely used to help identify human targets for drone strikes
- the
vast majority of whom are not terrorists,
but civilians.
Agarwal's conference slides list three Pentagon-funded tools that
his team created for this sort of social media analysis:
Flagging up an Egyptian democracy
activist like Maikel Nabil as a hostile entity promoting anti-NATO
and anti-US propaganda demonstrates that when such automated AI
tools are applied to war theatres in complex environments (think
Pakistan, Afghanistan and Yemen), the potential to identify
individuals or groups critical of US policy as terrorism threats is
all too real.
This case demonstrates how deeply flawed the Pentagon's automation
ambitions really are.
Even with the final input of independent
human expert analysts, entirely peaceful pro-democracy campaigners
who oppose war are relegated by NATO to the status of potential
national security threats requiring further surveillance.
Compressing the kill chain
It's often assumed that DoD Directive 3000.09 issued in 2012,
'Autonomy in Weapon Systems', limits kill decisions to human
operators under the following stipulation in clause 4:
"Autonomous and semi-autonomous
weapon systems shall be designed to allow commanders and
operators to exercise appropriate levels of human judgment over
the use of force."
After several paragraphs underscoring
the necessity of target selection and execution being undertaken
under the oversight of a human operator, the Directive goes on to
open up the possibility of developing autonomous weapon systems
without any human oversight, albeit with the specific approval of
senior Pentagon officials:
"Autonomous weapon systems may be
used to apply non-lethal, non-kinetic force, such as some forms
of electronic attack, against materiel targets...
Autonomous or semi-autonomous weapon
systems intended to be used in a manner that falls outside the
policies in subparagraphs 4.c.(1) through 4.c.(3) must be
approved by,
-
the Under Secretary of
Defense for Policy (USD - P)
-
the Under Secretary of
Defense for Acquisition, Technology, and Logistics (USD
- AT&L)
-
the CJCS before formal
development and again before fielding"
Rather than prohibiting the development
of lethal autonomous weapon systems, the directive simply
consolidates all such developments under the explicit authorization
of the Pentagon's top technology chiefs.
Worse, the directive expires on 21st November 2022 - which is
around the time such technology is expected to become operational.
Indeed, later that year, Lieutenant Colonel Jeffrey S. Thurnher,
a US Army lawyer at the US Naval War College's International Law
Department, published a
position paper in the National
Defense University publication, Joint Force Quarterly.
If these puppies became self-aware, would they be cuter?
He argued that there were no substantive legal or ethical obstacles to developing fully autonomous killer robots - as long as such systems are designed in such a way as to maintain a semblance of human oversight through "appropriate control measures."
In the conclusions to his paper, titled No One At The Controls:
Legal Implications of Fully Autonomous Targeting, Thurnher
wrote:
"LARs [lethal autonomous robots]
have the unique potential to operate at a tempo faster than
humans can possibly achieve and to lethally strike even when
communications links have been severed.
Autonomous targeting technology will
likely proliferate to nations and groups around the world. To
prevent being surpassed by rivals, the United States should
fully commit itself to harnessing the potential of fully
autonomous targeting.
The feared legal concerns do not
appear to be an impediment to the development or deployment of
LARs. Thus, operational commanders should take the lead in
making this emerging technology a true force multiplier for the
joint force."
Lt. Col. Thurnher went on to become a
Legal Advisor for NATO Rapid Deployable Corps in Munster, Germany.
In this capacity, he was a contributor
to a little-known 2014 official policy guidance document for NATO
Allied Command Transformation,
Autonomy in Defence Systems.
The NATO document, which aims to provide expert legal advice to
government policymakers, sets out a position in which the deployment
of autonomous weapon systems for lethal combat - in particular the
delegation of targeting and kill decisions to machine agents - is
viewed as being perfectly legitimate in principle.
It is the responsibility of specific states, the document concludes,
to ensure that autonomous systems operate in compliance with
international law in practice - a caveat that also applies for the
use of autonomous systems for law-enforcement and self-defence.
In the future, though, the NATO document points to the development
of autonomous systems that can,
"reliably determine when foreseen
but unintentional harm to civilians is ethically permissible."
Acknowledging that currently only humans
are able to make a,
"judgment about the ethical
permissibility of foreseen but unintentional harm to civilians
(collateral damage)", the NATO policy document urges states
developing autonomous weapon systems to ensure that eventually
they "are able to integrate with collateral damage estimation
methodologies" so as to delegate targeting and kill decisions
accordingly.
The NATO position is particularly
extraordinary given that international law - such as the Geneva
Conventions - defines foreseen deaths of civilians caused by a
military action as intentional, precisely because they were foreseen
yet actioned anyway.
The Statute of the International Criminal Court (ICC)
identifies such actions as "war crimes", if a justifiable and direct
military advantage cannot be demonstrated:
"� making the civilian population or
individual civilians, not taking a direct part in hostilities,
the object of attack; launching an attack in the knowledge that
such attack will cause incidental loss of civilian life, injury
to civilians or damage to civilian objects which would be
clearly excessive in relation to the concrete and direct
military advantage anticipated; ... making civilian objects, that
is, objects that are not military objectives, the object of
attack."
And
customary international law
recognizes the following acts as war crimes:
"�launching an indiscriminate attack
resulting in loss of life or injury to civilians or damage to
civilian objects; launching an attack against works or
installations containing dangerous forces in the knowledge that
such attack will cause excessive incidental loss of civilian
life, injury to civilians or damage to civilian objects."
In other words, NATO's official policy
guidance on autonomous weapon systems sanitizes the potential for
automated war crimes.
The document actually encourages states
to eventually develop autonomous weapons capable of inflicting
"foreseen but unintentional" harm to civilians in the name of
securing a 'legitimate' military advantage.
Yet the NATO document does not stop there.
It even goes so far as to argue that
policymakers considering the development of autonomous weapon
systems for lethal combat should reflect on the possibility that
delegating target and kill decisions to machine agents would
minimize civilian casualties.
Skynet, anyone?
A new report (Autonomous Weapons and Operational Risk) by Paul Scharre, who led the Pentagon working group that drafted DoD Directive 3000.09 and now heads the future warfare program at the Center for a New American Security in Washington DC, does not mince words about the potentially "catastrophic" risks of relying on autonomous weapon systems.
"With an autonomous weapon," he
writes, "the damage potential before a human controller is able
to intervene could be far greater...
"In the most extreme case, an autonomous weapon could continue
engaging inappropriate targets until it exhausts its magazine,
potentially over a wide area.
If the failure mode is replicated in
other autonomous weapons of the same type, a military could face
the disturbing prospect of large numbers of autonomous weapons
failing simultaneously, with potentially catastrophic
consequences."
Scharre points out that,
"autonomous weapons pose a novel
risk of mass fratricide, with large numbers of weapons turning
on friendly forces," due to any number of potential reasons,
including "hacking, enemy behavioral manipulation, unexpected
interactions with the environment, or simple malfunctions or
software errors."
Noting that in the software industry,
for every 1,000 lines of code, there are between 15 and 50 errors,
Scharre points out that such marginal, routine errors could easily
accumulate to create unexpected results that could be missed even by
the most stringent testing and validation methods.
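The arithmetic behind that point is stark. At an industry-typical 15 to 50 defects per 1,000 lines of code, even hypothetical codebase sizes (chosen here purely for illustration) imply thousands of latent faults:

```python
# Back-of-the-envelope defect estimate: 15-50 defects per 1,000 lines of code,
# scaled to hypothetical codebase sizes chosen only for illustration.
DEFECTS_PER_KLOC = (15, 50)

for loc in (100_000, 1_000_000, 10_000_000):
    low = loc // 1000 * DEFECTS_PER_KLOC[0]
    high = loc // 1000 * DEFECTS_PER_KLOC[1]
    print(f"{loc:>12,} lines of code: roughly {low:,} to {high:,} latent defects")
```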
The more complex the system, the more difficult it will be to verify
and track the system's behavior under all possible conditions:
"� the number of potential
interactions within the system and with its environment is
simply too large."
The documents discussed here show that the Pentagon is taking pains to develop ways to mitigate these risks.
But as Scharre concludes,
"these risks cannot be eliminated
entirely. Complex tightly coupled systems are inherently
vulnerable to 'normal accidents.' The risk of accidents can be
reduced, but never can be entirely eliminated."
As the trajectory toward AI autonomy and
complexity accelerates, so does the risk that autonomous weapon
systems will, eventually, wreak havoc...