Abstract

John Searle has used his Chinese room example to attack the idea of computationally reproducing intelligence. His arguments have variously assumed or (more recently) asserted that consciousness and intelligence are necessarily interdependent. This stance has allowed him to apply intuitive arguments about what could or could not be conscious to the issue of what could or could not be intelligent. I present a variety of arguments, theoretical and intuitive, to show that Searle is conflating mentality and semantics. By maintaining that distinction we can then address how to generate the semantics that intelligence requires. In Stevan Harnad's approach to symbol-grounding we have a plausible candidate for finding referential semantics without taking detours through an unanalysable consciousness. Artificial intelligence as normally construed does not require that philosophical problems about consciousness be resolved, let alone that consciousness should be computationally definable: Searle's arguments against strong AI are irrelevant to real-world AI.