Article
Lethal Autonomous Weapon Systems and Responsibility Gaps
Author(s)
Anne Gerdes
DOI:10.17265/2159-5313/2018.05.004
Affiliation(s)
The University of Southern Denmark
ABSTRACT
This paper argues that the delegation of lethal decisions to autonomous weapon systems opens an unacceptable responsibility gap, which cannot be effectively countered unless we enforce a preemptive ban on lethal autonomous weapon systems (LAWS). Initially, the promises and perils of artificial intelligence are brought forward, pointing out (1) that it remains an open question whether moral decision making, understood as situated ethical judgement, is computationally tractable, and (2) that the kind of artificial intelligence required for ethical reasoning would imply a system capable of operating as an independent reasoner in novel contexts (sec. 2). In continuation thereof, issues of responsibility are discussed (sec. 3 and 3.1), and it is claimed that unacceptable responsibility gaps may occur, since unpredictability would presumably follow from full system autonomy. These circumstances call for a strong precautionary principle, in the form of a preemptive ban.
KEYWORDS
LAWS, artificial intelligence (AI), responsibility