Summary of Winter 2019 Workshop

Whole Brain Emulation & AI Safety
Summary of the March 2019 Carboncopies workshop on whole brain emulation
Keith Wiley, 4/15/2019

On March 16th, 2019, the Carboncopies Foundation hosted its first workshop of 2019, entitled Whole Brain Emulation & AI Safety. The opening presentation was made by Randal Koene, founder and chairman of Carboncopies, followed by interviews between Dr. Koene and three leading researchers in the areas of whole brain emulation and AI safety: Jaan Tallinn, cofounder of Skype and cofounder of both the Centre for the Study of Existential Risk and the Future of Life Institute; Anders Sandberg of the Future of Humanity Institute at Oxford University; and Ben Goertzel, chief scientist at Hanson Robotics, chairman of the OpenCog Foundation, and chairman of the AGI Conference series. In addition, there were two panel sessions during the workshop, which consisted partly of discussion about the presentations and partly of questions from viewers. The panelists included Koene and Sandberg from the presentations, as well as Carboncopies board members Mallory Tackett, Abolfazl Alipour, and Keith Wiley, author of A Taxonomy and Metaphysics of Mind-Uploading.

Randal Koene gave the opening remarks, presenting what has become known as the “FOOM” scenario, in which AI suddenly surpasses a threshold of capability from which it rapidly accelerates beyond comprehension and control. In the context of Carboncopies’ focus, namely whole brain emulation (WBE), Koene introduced what was perhaps the primary question of the workshop: whether WBE alleviates AI safety concerns by enabling humanity to keep pace with advancing AI’s cognitive and computational abilities, or whether it exacerbates the risk, as might occur if a solitary bad actor, emulated and uploaded, were to become a hegemonic god of sorts. Koene presented another way in which WBE might have detrimental effects: neurological and WBE research might reveal crucial insights into the nature of intelligence that enable super-intelligent AI, and a subsequent harmful fast takeoff, that might otherwise never have been achieved. Koene also raised the question of how brain-computer interfaces (BCIs) might affect either the brain or the computer in question. What effects will such interfacing have on the respective systems? Are such effects even predictable?

Koene then interviewed Jaan Tallinn, who has taken the matter of AI safety quite seriously through his founding of, and involvement with, various AI safety organizations. Tallinn foresees a wider diversity of future scenarios than is generally acknowledged. He emphasized increasing optionality, the number of choices available to us, arguing that more optionality is almost always better and more desirable than less. He also noted that the more powerful decision makers in our society are notoriously focused on short-term risks and goals, to our global detriment. He questioned the practicality of using WBE to keep up with AI, offering the analogy that little could have been done to help horses maintain competitiveness with cars. Koene asked whether WBE itself, as opposed to AGI, might represent the takeoff scenario under concern. Tallinn responded that there might be an uncanny valley of highly capable WBEs that aren’t quite human. What if sacrificing part of a WBE’s humanity actually bestows some other advantage? In that way, WBE advancement might trigger a takeoff in ways that AGI would not. Tallinn concluded with an appeal for more coordination and cooperation in research to make progress on these issues.

Anders Sandberg expressed a concern that writings such as Nick Bostrom’s on the subject of existential risk might slow down AI research. Of course, that is precisely what some people prescribe on these matters. Sandberg made the interesting observation that while it might be possible to demonstrate that developing AGI is more favorable than not doing so, it does not necessarily follow that we can demonstrate that developing safe AGI is easier than developing a dangerous variant. He emphasized that, if possible, we should move AI safety risks to earlier points in time. Conversely, the worst scenario would be one in which AI works reasonably well right up until some turning point at which it suddenly becomes dangerous in a way that can no longer be avoided. Facing such dangers earlier, if possible, lets us contend with them while the AI is in an earlier and more manageable state. Koene and Sandberg then discussed the question of whether WBE represents an alleviation or an exacerbation of AI risk. Would the neuroscience underlying WBE research propel AI research more quickly? They concluded that WBE won’t necessarily reveal such advances; its findings and developments may not assist AI research very much. One can see how this conclusion might play out. While neuroscience may reveal the connectionist algorithms underlying intelligence, and in so doing accelerate AI research, WBE need not depend on furthering our understanding of neural algorithms so much as on developing the low-level processes of neuronal function and replication. At best, it is unclear whether WBE developments will play a significant role in AI research.

The final presentation was by Ben Goertzel, who is more immediately concerned with narrow AI than with AGI. Current AI applications focus on arguably the worst conceivable goals: selling, spying, and killing, as Goertzel put it. If AGI ultimately evolves from our contemporary narrow AI systems, it may turn out quite badly on the basis of its amoral (or immoral) antecedents. Goertzel asked how democratic, as opposed to centralized and elitist, our control of AI should be. Would we be safer with a small band of wise and tempered sages overseeing the world’s various AIs, or should we prefer the collective wisdom of the masses? We currently operate more like the first approach, but without having selected for the wisdom and sagacity just mentioned. Rather, the tiny cohort in charge of the world’s AIs simply consists of the corporate and military leaders who happen to grab the reins. Consequently, Goertzel advocated for the more democratic approach, in which collective human morality governs the direction of AI. Personally, I have my doubts about that approach. Winston Churchill’s famous quote leaps to mind, concerning whether the average voter really has the necessary information to make guiding decisions. But an even more dire argument against the wisdom of crowds is the observation that human crowds easily devolve into mobs, from which arise not wisdom, fairness, and liberty, but nationalism, fascism, and discrimination. Would the world’s demographic minorities really be pleased if the world’s AI were subject to the majority decisions of a few (or one) dominant world demographics? (See Alexander Hamilton’s warnings to Thomas Jefferson about the tyranny of the majority.) It is not obvious to me that the outcome here is anything short of disaster. Koene presented his primary question again: does WBE represent a worsening or a salvaging lever with respect to AI risk? Goertzel feels that WBE is a terrible way to develop self-improving AI; engineered AI would be a better approach to that end. Koene asked what we should be doing to maximize the likelihood of a favorable outcome, and Goertzel concluded his presentation with an excellent call to increase spending on globally beneficial technologies, to focus on compassion in our AI designs, and to move the bulk of our AI research spending away from marketing and weaponry and toward medicine, education, genomic research, agriculture, science, poetry, art, and philosophy.

There were two panel sessions during the workshop as well. These sessions fielded questions from the live online audience and involved discussion among Carboncopies board members, Sandberg, and the audience members themselves. I encourage readers to seek out the videos and transcripts of the workshop to delve into this additional material. The Carboncopies workshop series is now over a year old and has included several workshops, with many more to come. The day when whole brain emulation becomes possible draws nearer with each passing year, as do the serious issues of safety with regard to narrow AI, AGI, and WBE. Although the risks are real, so are the benefits. With continued involvement from researchers, communities, and the public, we will revel and prosper as we traverse the greatest transition in humanity’s history.

The Carboncopies Foundation’s next workshop is currently scheduled for June 1, 2019, with the topic “Review of the 2008 Oxford Roadmap on Whole Brain Emulation”.

Keith Wiley serves on the board of the Carboncopies Foundation as director of communications. His book, A Taxonomy and Metaphysics of Mind-Uploading, is available on Amazon.

The workshop’s archive page, https://carboncopies.org/workshop-2019-mar, includes links to the videos. The same links are offered below in chronological order: