Response to J. Bonilla’s ‘Your mind will never be uploaded to a computer’
Keith Wiley, Nov. 2018
In September 2017, professor of philosophy of science Jesús Zamora Bonilla published an article on the Mapping Ignorance website titled ‘Your mind will never be uploaded to a computer’ [Bonilla 2017]. Counterpoints to many of Bonilla’s claims have already been offered in the existing literature. Nevertheless, I provide the following response in an effort to move the ongoing metaphysical discussion forward, if possible. Admittedly, this response is longer than the original article, but I feel it offers a good opportunity to introduce and present these topics to a broader audience.
The first claim in Bonilla’s article is that the mind is not necessarily tantamount to information because certain biological functions, such as digestion, are not information, nor would an electronic copy of a person be capable of true digestion. Searle used the same example of digestion years earlier [Searle 1990], while variants such as the wet status of simulated water, the windy status of simulated tornadoes, the hot status of simulated fire, and the photochemical status of simulated photosynthesis have all been raised in a similar spirit [Azarian 2016, Berger & Koch 2014, Gennaro 2017]. Admittedly, Bonilla appears to be less concerned with the reality status of simulations and more with the purported equivalency between physical processes and information (or, presumably, information processing). I will connect these two concepts in the next paragraph. My response to the claim that simulations have incomplete reality status is that processes which are physical within a given reality can be conceived of as information-driven simulations from the vantage of another reality. A simulated creature in a virtual world governed by its own consistent physical laws simultaneously simulates digestion as viewed by us from the outside and has a true experience of digestion from its own internal point of view. Both interpretations are valid at the same time. Consider that there is a lively ongoing debate as to whether our own reality is some form of simulation within an outer reality [Moskowitz 2016]. I have always found this distinction, and the associated question, specious, since any reality is real from the inside. It means nothing to say we live in a simulation from our own point of view; reality is what we experience it to be. Our phenomenal experience lies beyond the reach of anyone else’s denial, as Descartes’ famous dictum settled centuries ago: I experience digestion, therefore I digest.
To Bonilla’s claim that the mind is not information (or information processing) because digestion is not information, the reader can now see how I arrive at the contrary position: digestion is, in fact, information processing after all. Any physical process or phenomenon can simultaneously be real from an internal frame of reference and be a simulated informational process from an external frame of reference. The mind is an informational process precisely because its transfer functions can be achieved computationally.
The next claim is that it would require quantum computers to reproduce neural processes. This claim was first brought to broad awareness by Penrose and Hameroff [Penrose 1994] and has been thoroughly considered in the years since. Much of the discussion has disagreed with Penrose and Hameroff, concluding that quantum mechanics plays no crucial role in how the brain processes sensory information, stores and retrieves memory, or generates behavior. Its role in the phenomenon of consciousness remains similarly dubious. At the very least, we needn’t conclusively presume the necessity of a quantum mechanical prosthetic brain, as Bonilla decisively phrases it. Rather, we may interpret the question as unresolved. Furthermore, even if quantum computers are required for a successful prosthetic brain, we will simply utilize such technology as deemed appropriate. Such confidence may seem blusterous at current technological levels, but blind, unintentional evolution’s capability to produce the brain, quantum mechanical or otherwise, demonstrates the plausibility of an engineered equivalent by reason of prototype: whatever already exists must be possible to exist.
The next claim perplexingly argues that mind uploading is a large undertaking and is therefore apparently unachievable. While it is true that inventing the prosthetic brain will involve tremendous complexity, this is nevertheless an unhelpful observation with regard to likely outcomes. At any point in history, the technology of the present would have appeared essentially unachievable to residents of earlier eras. Arthur C. Clarke gave this observation its most famous adage when he drew an equivalency between futuristic technology and magic [Clarke 1973]. Yet Bonilla argues that the challenge of building computers on par with the brain’s scale is reason in itself to judge the task nearly impossible. To the contrary, there is no apparent evidence that engineering projects of the brain’s scale exceed the reach of technological advancement.
This argument about the technical difficulty of the task recurs in a different form later in the article, which presents the challenge of cloning a human body and copying a brain scan into some sort of empty brain vessel. Of course, most speculations about mind uploading do not involve biologically grown or cloned bodies and brains, preferring depictions of computerized brains and robotic bodies. Consequently, Bonilla’s biological scenario seems out of place with regard to popular mind uploading speculation. This is not an inconsequential distinction, since it may be considerably easier to engineer artificial systems from scratch than to attempt to recreate and maintain the chaotic biological results of billions of years of blind natural selection, a process that unfolded with no attention whatsoever to ease of manufacture or maintenance.
Bonilla’s next objection to the plausibility of mind uploading is that we can never understand how the brain processes information because we lack a neural code Rosetta stone; consequently, any endeavor to solve this problem is fundamentally flawed. By analogy, he suggests that decoding a music CD would be impossible without already having a CD player’s software, and similarly that translating audio recordings of ancient spoken conversation is impossible without additional information about the speakers’ language. To begin, the analogies are flawed. The brain involves no additional missing hardware, or otherwise helpful information, as in the CD-vs-player dichotomy. More to the point, given both a CD and a CD player, sufficiently advanced investigative technology could quite likely decipher how the assemblage operates. Ancient languages similarly involve additional, unrecoverable information in the form of the unwritten, transient, societal agreement on the semantic meaning of arbitrary vocalizations. Much of the information crucial to deciphering ancient vocal recordings is simply missing from the recordings; it was encoded in the long-lost brains of the original speakers. I have previously discussed this notion of information written across the population of brains comprising a society’s knowledge in my book [Wiley 2014]. Studying how the brain works faces no corresponding challenge of missing information. Everything about the brain lies there before us to scrutinize to our heart’s content.
Furthermore, to say we know nothing about how the brain processes information, and never could, is an astonishing statement. The principal goal of modern neuroscience is to solve precisely this problem, and we have made great progress toward unweaving the connectionist algorithms by which numerous regions of the brain transform information from one representation into another, with ever deeper levels of the cortical processing system representing increasingly abstract cognitive concepts. There are many techniques and tools available for analyzing and comprehending neural function. In the worst case, even the most opaque information processing system can still be modeled in terms of its input/output functions. This approach closely mirrors behavioral psychology, but can just as easily be applied to raw mathematical functions, especially in terms of system identification, in which all inner details are abstracted away, leaving only the input and output data and the inferred transfer functions.
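To make the idea of system identification concrete, the following minimal sketch, a toy illustration of my own and not anything proposed by Bonilla or drawn from neuroscience practice, recovers a black box’s transfer function purely from observed input/output data. The `unknown_system` filter and all parameter choices are invented for the example.

```python
# Minimal sketch of black-box system identification: estimate a transfer
# function using only recorded input/output streams, never opening the box.
import numpy as np

rng = np.random.default_rng(0)

def unknown_system(x):
    """The 'black box' we pretend we cannot open: a fixed FIR filter plus noise."""
    true_kernel = np.array([0.5, 0.3, -0.2, 0.1])
    return np.convolve(x, true_kernel, mode="full")[: len(x)] + 0.01 * rng.standard_normal(len(x))

# 1. Probe the system with a known input and record its output.
x = rng.standard_normal(1000)
y = unknown_system(x)

# 2. Build a design matrix of lagged inputs and solve a least-squares problem
#    for the inferred transfer function (an FIR kernel).
n_lags = 8
X = np.column_stack([np.concatenate([np.zeros(k), x[: len(x) - k]]) for k in range(n_lags)])
estimated_kernel, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(estimated_kernel, 2))  # approximately [0.5, 0.3, -0.2, 0.1, 0, ...]
```

The point of the sketch is only that input/output observation alone suffices to characterize what a system does, even when its internals are treated as inaccessible.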
Some readers may counter that knowledge of input/output datastreams is insufficient for a complete neuroscience, that some critical aspect of the mind is not captured by such analysis, and that a complete neuroscience instead requires additional knowledge of the internal processing mechanism by which an input/output function is achieved. First, as just stated, the brain is not remotely a black box. We scan, dissect, stimulate, observe, and measure neural function at all levels, from the whole brain down to individual neurons, and even synapses, neurotransmitters, and ion channels. We already have an excellent understanding of how neurons operate chemically and electrically, and this knowledge is constantly improving. So we don’t have to treat neural components as black boxes; we can actually learn how they work.
Second, internal knowledge always drops you to the next level of abstraction. Once we know how the brain is wired, we still wonder how neurons work. Then we still wonder how molecules work. Then atoms, then quarks. Detractors of the you-know-nothing-if-you-don’t-know-everything persuasion can always move this goalpost down one level. It is therefore a lost cause to engage this claim in much earnest. Eventually we must determine a limiting level below which we admit we have lost sight of the forest for the trees. At whatever level we draw that line, we are black-boxing the underlying mechanisms from that point on. Thankfully, the determination of the correct level of abstraction needn’t be ad hoc; it can be made experimentally. If we set our limiting level at the neural code, build models on that theory, and find that the models fail to reproduce the functions and behaviors observed in biological brains, we will know we have set our limit too high. If we set our level at the absurdly low quark level, model accordingly, and produce models that are indistinguishable from brains, we will know we have chosen a level at or below the necessary one. At that point, we can even explore back up the abstraction hierarchy to see if we can get similar results for less effort.
Third, it is not clear what additional information should be required, metaphysically speaking, to claim a sufficient understanding of how the brain works. Tononi has suggested that a quantity of integrated information must be measured and that certain internal mechanisms will reveal higher degrees of integrated information than others, despite performing identical functions [Tononi 2004]. In other words, Tononi would judge two systems that perform identical input/output functions as having different levels of consciousness, and correspondingly different success at mind uploading’s purported goals, based on the internal differences in how the two systems perform their transfer functions. Searle has similarly described, with his Chinese Room argument, how a black box system, or certain partial components of such a system, such as the person inside the room, might be interpreted as lacking nuanced metaphysical properties that we would prefer our prosthetic brains to preserve [Searle 1980]. Tononi’s and Searle’s arguments have been debated at length in the literature and are not remotely settled in their favor. At the current time, it is reasonable to consider that the functional aspects of the brain, and even the metaphysics of consciousness, at least might be fully represented in input/output information processing terminology, if only because that is how the brain appears to actually operate.
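To illustrate the distinction at stake, consider a deliberately trivial pair of systems, invented here purely for illustration and not taken from Tononi or Searle, that are indistinguishable from the outside yet differ entirely on the inside:

```python
# Two toy systems with identical input/output behavior but different internals
# (a deliberately trivial illustration invented for this response).

def parity_by_arithmetic(n: int) -> bool:
    """Decide evenness by direct computation."""
    return n % 2 == 0

def parity_by_lookup(n: int) -> bool:
    """Decide evenness by consulting a table keyed on the last decimal digit."""
    table = {"0": True, "1": False, "2": True, "3": False, "4": True,
             "5": False, "6": True, "7": False, "8": True, "9": False}
    return table[str(abs(n))[-1]]

# Externally the two systems are indistinguishable...
assert all(parity_by_arithmetic(n) == parity_by_lookup(n) for n in range(-1000, 1000))
# ...yet an internalist criterion such as integrated information could, in principle,
# rate them differently despite their identical transfer functions.
```

The open question is whether any such internal difference carries metaphysical weight once the transfer functions themselves are identical.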
The next claim is that a brain requires complex interaction with its environment in order to matter in some way that mind uploading would fail to capture. This is the argument of embodied cognition, the claim that brains, and by extension minds, only operate properly in the context of an associated body and its sensory-motor interactions with the physical world [Lakoff 1980]. At first glance, it is not clear how this claim challenges mind uploading, since an obvious scenario involves equipping a mind upload with a prosthetic brain in a robotic body. However, it is an interesting metaphysical question whether such a physical system should be an absolute requirement for successful mind uploading. An obvious alternative, likely the one that inspired Bonilla’s challenge, is to process an uploaded mind’s neural functions on a computer that is not necessarily attached to any robotic body. There is the follow-up question of whether such a mind would nevertheless require at least a simulated virtual body in a virtual 3D world, or whether such a mind could simply float around in the intangible nonspatiality of abstract thought space. This last possibility touches on the medical condition of locked-in syndrome, which in the most extreme cases can leave a healthy mind completely cut off from sensory and motor access.
There are a few places along this spectrum where one may fall. Some people believe that real physical embodiment is required for successful mind uploading, and that virtual embodiment in a virtual world is insufficient in some metaphysical sense. Others would interpret existence in a virtual 3D world as adequate for the metaphysical properties under consideration, but would still judge the further case of total sensorimotor disconnection to be insufficient. However, as described above, we can brush all these deeper questions aside by confining our consideration of mind uploading to a fully embodied and physical robotic scenario. Whether the increasingly nonphysical or disembodied scenarios fail some fundamental metaphysical test, and whether such a test could ever be quantified in a falsifiable and detectable way, is a question we simply don’t have to resolve today. Robotic brains and bodies will suffice in the meantime.
The science, engineering, and philosophy of mind uploading are areas rich with speculation, prediction, and debate. Our knowledge of how the brain works is steadily advancing in the twenty-first century. One popular response is an Icarusian, Huxleyian, Shelleyian premonition that the brain either will, or morally should, remain an inscrutable domain of mystery. However, the actual progress coming out of neuroscience supports no such dire conclusion. The engineering required to replicate natural systems is steadily progressing toward the eventual capability both to fabricate at the microscopic scale and to achieve the macroscopic organizational complexity of vast neural systems. On the purely metaphysical questions, theories of consciousness, mind, and identity have continued to evolve over time. The writings on consciousness of the last half century have no comparison in antiquity. Concerns of substance dualism have been thoroughly jettisoned from respectable discourse, as shown in the treatises by Dennett, Chalmers, Koch, Searle, and others. Subtle questions, such as the thorny nature of epiphenomenalism, are barely a hundred years old. The careful presentations of the hard problem of consciousness, Nagel’s in the 1970s and Chalmers’ in the 1990s, are recent additions to the discussion. Penrose and Hameroff’s proposal, and Tononi and Koch’s, regardless of whether one finds purchase in such ideas, nevertheless indicate vibrant ongoing exploration of the nature of the mind and consciousness. For those of a physicalist, functionalist, and patternist persuasion concerning the mind, not only is mind uploading possible, it is inevitable.
Azarian, B. (2016) A neuroscientist explains why artificially intelligent robots will never have consciousness like humans. Raw Story.
Berger, K. and Koch, C. (2014) Ingenious: Christof Koch. The neuroscientist tackles consciousness and the self. Nautilus (19). http://nautil.us/issue/19/illusions/ingenious-christof-koch
Bonilla, J. Z. (2017) Your mind will never be uploaded to a computer. Mapping Ignorance. https://mappingignorance.org/2017/09/11/mind-will-never-uploaded-computer/
Clarke, A. C. (1973) Hazards of prophecy. In Profiles of the Future: An Inquiry into the Limits of the Possible. Harper & Row.
Gennaro, R. J. (2017) Consciousness. Routledge, 173–174.
Lakoff, G. and Johnson, M. (1980) Metaphors we live by. Univ. Chicago Press.
Moskowitz, C. (2016) Are we living in a computer simulation? Scientific American. https://www.scientificamerican.com/article/are-we-living-in-a-computer-simulation/
Penrose, R. (1994) Shadows of the mind: A search for the missing science of consciousness. Oxford Univ. Press.
Searle, J. (1990) Is the brain’s mind a computer program? Scientific American (262):1, 25–31.
Searle, J. (1980) Minds, brains, and programs. Behavioral and Brain Sciences (3), 417–424.
Tononi, G. (2004) An information integration theory of consciousness. BMC Neuroscience (5):42.
Wiley, K. (2014) A Taxonomy and Metaphysics of Mind-Uploading. Humanity+ Press and Alautun Press.