Does the Chinese Room Thought Experiment disprove true AI and mind uploading?

by Angel Okoro

What about the Chinese Room thought experiment by Searle, doesn't that disprove true AI and mind uploading?

The Chinese Room Argument is a thought experiment devised by philosopher John Searle and published in his 1980 article “Minds, Brains, and Programs.” The experiment is directed against strong artificial intelligence (AI), specifically against the claim that computers may someday have cognitive states. Searle argues that cognitive states must have semantic content, yet programs are purely syntactic, and that computers are constrained by their formal structure in ways that prevent them from creating their own meaning.

To make this argument, Searle imagines a “Chinese Room” in which a person receives a string of Chinese characters and, by following a program of instructions, returns an appropriate response in Chinese. The person has no understanding of the Chinese language, yet by applying purely syntactic rules the person is able to mimic an understanding of Chinese. Searle argues that the ability to follow formal instructions and produce appropriate responses does not amount to understanding, because it lacks semantic content. By the same token, a computer substituted for the person would not understand Chinese.
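To make the point about syntax concrete, the room’s procedure can be caricatured as a lookup table that pairs input strings with output strings. The sketch below is a minimal Python illustration; the rule table and phrases are invented for this example and stand in for Searle’s vastly larger rulebook.

```python
# A caricature of the Chinese Room: a purely syntactic lookup from input
# symbols to output symbols. The rule table is invented for illustration;
# the program matches character strings without any notion of what they mean.
RULES = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I am fine, thank you."
    "你会说中文吗？": "会，我会说中文。",     # "Do you speak Chinese?" -> "Yes, I speak Chinese."
}

def chinese_room(message: str) -> str:
    # The operator (or computer) only checks which rule the symbols match;
    # no semantic content is attached to the question or the answer.
    return RULES.get(message, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))
```

Nothing in such a program attaches meaning to the characters it matches; it is syntax all the way down, which is exactly Searle’s point.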

Criticisms of the Chinese Room Argument fall into five main categories: the Systems Reply, the Virtual Minds Reply, the Robot Reply, the Intuition Reply, and the Brain Simulator Reply.

The Systems Reply: Some critics argue that while the person in the room does not understand Chinese, the room as an entire system, including the database, the instructions, and the processing mechanisms, might create its own understanding. Proponents of this reply argue that intelligence is a causal effect of the system’s operation rather than an innate possession of any one part. By comparison, the human brain does not carry intelligence on its own; rather, it gains intelligence through exposure to novel concepts.

The Virtual Minds Reply: Like the Systems Reply, the Virtual Minds Reply argues that understanding can be created by the system even if the person in the room does not understand. However, the Virtual Minds Reply goes further, adding that the system may create its own sub-systems: distinct virtual agents, separate from the system as a whole and from the room operator, that can understand Chinese independently. Philosophers in this camp argue that the test of strong AI should not be whether the computer itself understands Chinese, but whether, in the process of running the program, an understanding of Chinese is created.

The Robot Reply: The Robot Reply proposes that if the computer were embedded in a robotic body with access to the physical world, it could be stimulated by a variety of sensors and motors, much as the human brain is. By incorporating sensorimotor cues, the robot would be able to create understanding.

The Intuition Reply: Searle’s argument rests on the intuition that the computer cannot think or have understanding. Philosophers of this reply argue that a human-centric definition of understanding should not be relied upon to judge computers. The fact that humans cannot perceive understanding and semantic content in computers does not in itself negate their existence. Searle often argues from his own intuitions about the capabilities of inanimate objects, stating that “we find it natural to make metaphorical attributions of intentionality to them [objects]; but I take it no philosophical ice is cut by such examples.” His arguments stop short of offering neutral, universal facts, yet he uses this same lack of impartiality to dismiss the arguments of others. When debating the Systems Reply, Searle contends that “the systems reply simply begs the question by insisting without argument that the system must understand Chinese.” To Searle, it seems, insistence is only a fault when it comes from others.

The Brain Simulator Reply: This reply considers a computer that operates differently from current AI programs built on formal operations and instructions. This computer simulates every nerve and every neuronal firing that would occur in the brain of a native Chinese speaker. Since the computer operates exactly as the brain of the native speaker does, and processes information in the same way, it should also understand Chinese.
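To picture what such a neuron-level simulation might involve, the sketch below runs a handful of leaky integrate-and-fire neurons in Python. It is only a toy illustration of the idea; every parameter and connection in it is invented for this example, not drawn from any real brain.

```python
import numpy as np

# A minimal leaky integrate-and-fire simulation of a few neurons, meant only
# to illustrate the kind of neuron-level simulation the Brain Simulator Reply
# imagines. All parameters and weights are invented for illustration.
rng = np.random.default_rng(0)

n_neurons = 5
dt = 1.0                                   # time step (ms)
tau = 20.0                                 # membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0

weights = rng.normal(0.0, 2.0, size=(n_neurons, n_neurons))  # synaptic weights
voltage = np.full(n_neurons, v_rest)
spikes = np.zeros(n_neurons)

for t in range(100):
    external = rng.normal(1.5, 0.5, size=n_neurons)   # external driving current
    synaptic = weights @ spikes                        # input from last step's spikes
    # Leak toward the resting potential plus incoming current.
    voltage += dt / tau * (v_rest - voltage) + external + synaptic
    spikes = (voltage >= v_thresh).astype(float)       # which neurons fire now
    voltage[spikes.astype(bool)] = v_reset             # reset the neurons that fired
    if spikes.any():
        print(f"t={t} ms, firing neurons: {np.flatnonzero(spikes)}")
```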

Most rebuttals of this experiment have merit; however, I would like to present my own reply, a blend of several of the previous replies.

The Chinese Room is flawed due to its vague definitions of “semantic knowledge” and “meaning.” The International Encyclopedia of the Social & Behavioral Sciences defines semantic knowledge as “a type of long-term memory consisting of concepts, facts, ideas, and beliefs.” Semantic knowledge is an individual’s interpretation of words and objects. For example, answering the question ‘What does the word bookshelf mean?’ requires semantic memory. Recent advances in deep neural networks have enabled computers to better interpret the world. Through unsupervised machine learning, computers can draw inferences from datasets without labeled responses or human supervision. For example, computer scientist Dr. Jason Yosinski at Cornell University developed the Deep Visualization Toolbox for studying deep neural networks, which transform an input into an output through a series of non-linear hidden layers. The toolbox reveals the activations produced in those hidden layers.
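As a rough picture of what “a series of non-linear hidden layers” and their activations look like, the sketch below builds a tiny feedforward network in Python. The weights are random stand-ins rather than anything the toolbox actually learned; the point is only to show how an input is transformed layer by layer into an output whose intermediate activations can be inspected.

```python
import numpy as np

# A tiny feedforward network: input -> two non-linear hidden layers -> output.
# Weights are random stand-ins; a real network learns them from data. The point
# is that the activations of each hidden layer can be inspected on the way from
# input to output, which is what a visualization tool exposes.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

layer_sizes = [8, 16, 16, 4]   # input, hidden, hidden, output
weights = [rng.normal(0, 0.5, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    activations = []                  # the intermediate values we can look at
    h = x
    for i, (w, b) in enumerate(zip(weights, biases)):
        h = h @ w + b
        if i < len(weights) - 1:      # non-linearity on hidden layers only
            h = relu(h)
            activations.append(h)
    return h, activations

output, hidden = forward(rng.normal(size=8))
for i, act in enumerate(hidden, start=1):
    print(f"hidden layer {i}: {np.count_nonzero(act)} of {act.size} units active")
print("output:", output)
```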

Let us go back to our example: ‘What does the word bookshelf mean?’ The network behind the Deep Visualization Toolbox learned to recognize printed text in a variety of sizes, colors, and fonts, despite the fact that this was never manually coded into it. The only reason the network learned features like text detectors in its deep layers was to support final decisions in the output layer. For example, the text detector may provide evidence that a rectangular object is a book, and detecting multiple books next to each other may indicate a bookcase, which was in fact one of the input images used to train the network. If we were to ask the network for its own definition of a bookshelf, it might tell us that a bookshelf is an object holding multiple rectangular objects that contain text. This coincides with the definition of semantic knowledge as an individual’s stored interpretation, supporting the claim that computers can in fact form semantic knowledge.
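For readers who want to see this kind of intermediate feature on a real network, one approach is to register forward hooks on a pretrained convolutional network and record what each layer produces for an image. The sketch below is a minimal PyTorch version of that idea, not the Deep Visualization Toolbox itself; it assumes torchvision with pretrained AlexNet weights is available, and the image file name is hypothetical.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained AlexNet as a stand-in for the kind of network the
# Deep Visualization Toolbox examines.
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.eval()

# Record activations from each convolutional layer via forward hooks.
activations = {}
def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for idx, layer in enumerate(model.features):
    if isinstance(layer, torch.nn.Conv2d):
        layer.register_forward_hook(make_hook(f"conv{idx}"))

# Standard ImageNet preprocessing; "bookshelf.jpg" is a hypothetical file name.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open("bookshelf.jpg")).unsqueeze(0)

with torch.no_grad():
    logits = model(image)

# Each entry summarizes what an intermediate layer produced for the image.
for name, act in activations.items():
    print(name, tuple(act.shape), float(act.mean()))
print("predicted class index:", int(logits.argmax()))
```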

Searle argues that not only does the computer lack knowledge, it also lacks “intentionality.” Searle defines intentionality as “that feature of certain mental states by which they are directed at or about objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states; undirected forms of anxiety and depression are not.” To have intentionality, one must have mental states, such as thoughts and desires, tied to particular objects or concepts. It is not enough for the network to know of bookshelves; the network must have its own meaning, its own dispositions about bookshelves.

However, one could argue that the lack of meaning comes from a lack of context rather than from an innate inability of the computer itself. One could examine this through Mary’s Room, a thought experiment proposed by the philosopher Frank Jackson. Mary knows everything there is to know about the physical properties of color, yet she has never experienced color; therefore, she may or may not have a meaning for color. The thought experiment serves to question a materialist explanation of understanding and to ask whether meaning requires context and experience. It is incontrovertible that Mary, as a sighted human, has the ability to perceive color and to assign it meaning, yet it can be debated whether she lacks meaning in her current state. If having the physical knowledge of color, coupled with the ability to see color, is enough to verify that Mary can assign meaning to color, that should hold for computers as well. To return to the Robot Reply and the Intuition Reply, perhaps the lack of intentionality and meaning stems from a lack of personal sensation. If the computer were removed from its vacuum and placed into a “body” that allowed for sensory input and context, perhaps it would develop meaning and intentionality for the objects in its surroundings.

Baltes, Paul B., and Neil J. Smelser. International Encyclopedia of the Social & Behavioral Sciences. Elsevier, 2001.

Cole, David, "The Chinese Room Argument", The Stanford Encyclopedia of Philosophy (Spring 2020 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/spr2020/entries/chinese-room/.

Jackson, F., 1982, “Epiphenomenal Qualia”, Philosophical Quarterly, 32: 127–136.

Nida-Rümelin, Martine and Ó Conaill, Donnchadh, "Qualia: The Knowledge Argument", The Stanford Encyclopedia of Philosophy (Winter 2019 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/win2019/entries/qualia-knowledge/.

Searle, J., 1980, ‘Minds, Brains and Programs’, Behavioral and Brain Sciences, 3: 417–57

Yosinski, J., & Lipson, H. (2015). Understanding Neural Networks Through Deep Visualization. Deep Learning Workshop, 31st International Conference on Machine Learning. URL = http://yosinski.com/media/papers/Yosinski__2015__ICML_DL__Understanding_Neural_Networks_Through_Deep_Visualization__.pdf