Predicting the development of artificial intelligence (AI) is a difficult project – but a vital one, according to some analysts. AI predictions already abound: but are they reliable? This paper starts by proposing a decomposition schema for classifying them. It then constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated by careful analysis of five famous AI predictions: the initial Dartmouth conference, Dreyfus's criticism of AI, Searle's Chinese room paper, Kurzweil's predictions in The Age of Spiritual Machines, and Omohundro's 'AI drives' paper. These case studies illustrate several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement, and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.
Research article
The errors, insights and lessons of famous AI predictions – and what they mean for the future
Stuart Armstrong (The Future of Humanity Institute & Oxford Martin School, University of Oxford, Suite 8, Littlegate House, 16/17 St Ebbe's Street, Oxford, OX1 1PT, UK; correspondence: stuart.armstrong@philosophy.ox.ac.uk), Kaj Sotala (Machine Intelligence Research Institute, 2721 Shattuck Avenue #1023, Berkeley, CA 94705, USA) & Seán S. Ó hÉigeartaigh (The Future of Humanity Institute & Oxford Martin School, University of Oxford, Suite 8, Littlegate House, 16/17 St Ebbe's Street, Oxford, OX1 1PT, UK)
Pages 317–342
Received 28 Mar 2013
Accepted 25 Apr 2013
Published online: 25 Apr 2014