
Affiliation(s): University of Southern Denmark

ABSTRACT

This paper argues that delegating lethal decisions to autonomous weapon systems opens an unacceptable responsibility gap, which cannot be effectively countered unless a preemptive ban on lethal autonomous weapon systems (LAWS) is enforced. First, the promises and perils of artificial intelligence are examined, pointing out (1) that it remains an open question whether moral decision making, understood as situated ethical judgement, is computationally tractable, and (2) that the kind of artificial intelligence required to carry out ethical reasoning would imply a system capable of operating as an independent reasoner in novel contexts (sec. 2). Subsequently, issues of responsibility are discussed (sec. 3 and 3.1), and it is argued that unacceptable responsibility gaps may arise, since unpredictability presumably follows from full system autonomy. These circumstances call for a strong precautionary principle, in the form of a preemptive ban.

KEYWORDS

LAWS, artificial intelligence (AI), responsibility


Copyright © 2001 - David Publishing Company All rights reserved, www.davidpublisher.com