Original Articles

A model of pathways to artificial superintelligence catastrophe for risk and decision analysis

Anthony M. Barrett & Seth D. Baum
Pages 397-414
Received 28 Aug 2015
Accepted 01 May 2016
Published online: 23 May 2016
 

Abstract

An artificial superintelligence (ASI) is an artificial intelligence that is significantly more intelligent than humans in all respects. Whilst ASI does not currently exist, some scholars propose that it could be created sometime in the future, and furthermore that its creation could cause a severe global catastrophe, possibly even resulting in human extinction. Given the high stakes, it is important to analyze ASI risk and factor the risk into decisions related to ASI research and development. This paper presents a graphical model of major pathways to ASI catastrophe, focusing on ASI created via recursive self-improvement. The model uses the established risk and decision analysis modelling paradigms of fault trees and influence diagrams to depict combinations of events and conditions that could lead to ASI catastrophe, as well as intervention options that could decrease risks. The events and conditions include select aspects of the ASI itself as well as the human process of ASI research, development and management. Model structure is derived from published literature on ASI risk. The model offers a foundation for rigorous quantitative evaluation and decision-making on the long-term risk of ASI catastrophe.
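To illustrate the fault-tree paradigm the abstract refers to, the following is a minimal sketch (not the paper's actual model): a toy top event whose probability is computed from AND/OR gates over hypothetical leaf events with placeholder probabilities. All event names and values below are assumptions for illustration only.

import math

def and_gate(*probs):
    # Probability that all independent child events occur.
    return math.prod(probs)

def or_gate(*probs):
    # Probability that at least one independent child event occurs.
    return 1.0 - math.prod(1.0 - p for p in probs)

# Hypothetical leaf events (placeholder probabilities, not from the paper).
p_seed_ai_built = 0.5        # a self-improving "seed" AI is created
p_takeoff_succeeds = 0.5     # recursive self-improvement reaches ASI
p_goals_unsafe = 0.5         # the ASI's goals are not human-compatible
p_containment_fails = 0.5    # confinement or deterrence measures fail

# Toy top event: an ASI exists AND its goals are unsafe AND safety measures fail.
p_asi_exists = and_gate(p_seed_ai_built, p_takeoff_succeeds)
p_catastrophe = and_gate(p_asi_exists, p_goals_unsafe, p_containment_fails)

print(f"Toy top-event probability: {p_catastrophe:.4f}")

In a quantified version of such a model, intervention options (the influence-diagram decision nodes) would change the leaf probabilities, and the effect on the top event could be compared across options.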

Acknowledgments

Thanks to Daniel Dewey, Nate Soares, Luke Muehlhauser, Miles Brundage, Kaj Sotala, Roman Yampolskiy, Eliezer Yudkowsky, Carl Shulman, Jeff Alstott, Steve Omohundro, Mark Waser, and two anonymous reviewers for comments on an earlier version of this paper, and to Stuart Armstrong and Anders Sandberg for helpful background discussion. Any remaining errors are the responsibility of the authors. Work on this paper was supported in part by a grant from the Future of Life Institute Fund. Any opinions, findings or recommendations in this document are those of the authors and do not necessarily reflect the views of the Global Catastrophic Risk Institute, the Future of Life Institute, or others.

Funding

This work was supported by the Future of Life Institute Fund [2015-143911].
