Episode 6 | Transcript

Episode 6: “Mind Uploading in Popular Culture Part I”

Published Mon, 21 Jan 2019 | Transcript

Allen Sulzen: Welcome to the Carboncopies Podcast! This is episode six, and it's the first in our series on Mind Uploading in Popular Culture. I want to welcome Randal Koene, Chairman of Carboncopies, to the call.

Randal Koene: Hi, great to be here.

Allen Sulzen: Hey Randal. Great to have you on the podcast - finally live!

Randal Koene: Yeah, I mean it's fantastic that you're doing these podcasts.

Allen Sulzen: Yes. Thank you, I enjoy doing them! So I wanted to just give our listeners a quick introduction on you. As people may know, going to the carboncopies.org website, on our team page we've got some of this info available, but:

You're a neuroscientist, first and foremost; you got your PhD in Computational Neuroscience from McGill University, and you've got a Master's degree in Electrical Engineering. Sounds like a good combination for mind uploading and things like that. But can you tell us more about your background?

Randal Koene: Yeah, sure! And actually I'd like to say that you're kind of right. If you look at the sort of people who are working in neuroscience today, you really have people coming from, of course a multitude of directions, but there are two stark delineations there - or, one delineation and two starkly different approaches. One is really a very biology-focused approach where you come at it as, okay, the brain is an organ and what goes on in that; and treating it like that. And the other one is the more physics- and exact-sciences-backed approach where you come into it looking at what are the functions, what is the processing that's going on in here? And looking at it - you know, I hate to say it because that has sometimes been misconstrued - but looking at it as a computer. Of sorts. I'd like to say machine rather than computer, because when people think computer, they think digital, they think it's got to be binary or something. Or you know, like your laptop. It's not that. But it is a kind of machine. People also say the body's a machine, but this is an information processing machine, the brain. And so coming at it from an angle where you're already familiar with information theory or information processing (which is what I was doing in electrical engineering), or if you're a physicist, then you tend to think in terms of these sort of equations, formulas, and how you compute things, how do you make one thing turn into another, what reaction is happening there. And that tends to help, when people are considering, for instance, how you might communicate with the brain, how you would get in contact with the signals in the brain that the brain itself is using for different regions to communicate with one another, and to replace a part of the brain or to be able to copy what the brain is doing in some other medium like a computer.

When we're thinking about those sorts of things, having a mathematical background often helps just to get that perspective started. And I've noticed this whenever I was teaching - that when you get students from these two different big directions - the more biology-based direction or the more physics-based direction - that those with the physics backgrounds often have a pretty good time, you know, taking what they know from there and then adding to that some biology and being able to work with it. Whereas for the biologist it can be difficult to then suddenly learn a whole bunch of math, and how you would work with that, and then program, and things like that; and bring that into play. So for people who end up modeling, doing computational neuroscience like I did, it's helpful to have the sort of math and physics background first.

I'm saying this just because, you know, I hope that anyone who's interested in what we're about to talk about - that they might consider the importance of those fields, the STEM fields, but especially the more math- and physics-oriented ones, as something that you need as a basis to get into it.

I was already interested in the brain even before I knew that neuroscience was a thing or that neuroscience was what I was supposed to be getting into. I was interested in the brain because it seemed to me like the most important thing that we'd want to be able to carry over, from our limited lifespan that we have right now into a much, much longer lifespan. Because it's where WE live. It's where WE are. It's our personalities. And also, if we wanted to improve how we think, the brain was really something important to deal with. And it occurred to me that, you know, the same way that we work with molecules, and we work with atoms, and we work with chemistry, you can also work with whatever the brain is doing and whatever the body is doing that the brain depends upon. And this was sort of underscored by reading some science fiction. Like, you know, I often tell interviewers that I read Arthur C. Clarke's 'The City and the Stars' and I love the way that they treated everything as information in there, in that book. You know, it doesn't matter whether it's a city, or whether it is people, or whatnot. You can store them in a computer and you can recreate whatever you want, you know, from the ground up. They have atomic-precision manufacturing, something in that sense there. It's kind of like what we might call nanotechnology now.

Allen Sulzen: So you say you were interested in the brain since even before you knew what neuroscience was. What was your earliest kind of development of this interest?

Randal Koene: So, it came from that. It came from this thought that, you know, there's got to be more that we can do. There has to be a way that people could tackle really big projects or really long explorations of the universe or things that we just can't handle at the moment (and things that were interesting to me as well). And how could that be done? Well, it could be done if we had a handle on ourselves. If we didn't, if it wasn't just us and then, you know, nature takes its course and whatever happens to this human animal that we are - we are here and then we're not here and it's over and we don't really have any control, we don't have any ability to change that - but if we could do the sort of things that they do in 'The City and the Stars', well, you know, it's kind of like the Star Trek replicators: you can make anything; you can even make yourself again if you need to.

And so this sort of thinking about how the brain is really just another thing, another machine, something that has atoms that work together in a certain way - that whole line of thinking kind of led to, okay, well it's nothing special. It's complicated, but it's nothing special in the sense it's not magical. So it's not magical, it's physical. And, well, then I started studying physics because that seemed like the way to go. But eventually I realized, of course, that to really get into the brain and understand what's going on there, that's where neuroscience comes in. And the part of the brain that I was most interested in dealing with at the time was everything to do with memory because memory is where we store the characteristics that make us who we are. Everything that is personal about us is a type of memory. Whether it's the way that you move when you walk, whether it's the way that you talk, whether it's the way you play a piano, whether it's the way you interpret your own dreams or how you feel or how you're feeling, color, what you see; or, the memory of whatever happened on Wednesday, which is the type of memory people normally think about when they think about memory. They think about the sort of memory of events.

But all of that is memory. And from a physics point of view, memory is just any function, any operation, that has hysteresis - meaning that whatever happened in the past affects the output in the future. And synapses are like that: because synapses change, they have hysteresis. But there are lots of other ways you can accomplish that as well. That's why, you know, long-term potentiation (LTP) - a type of memory that is sort of a medium-term storage facility in the brain - counts too: it has an effect that changes how output is carried out in the future. All of these things are a type of memory.
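That definition - memory as any operation whose output depends on what happened in the past - can be put in a few lines of code. Here is a toy Python sketch (an editorial illustration, not anything discussed in the episode):

```python
class HystereticUnit:
    """Toy unit with hysteresis: its output depends not only on the
    current input but on the history of past inputs - the minimal
    'memory' property described above. Illustrative only."""

    def __init__(self, weight=1.0, learning_rate=0.1):
        self.weight = weight                   # internal state shaped by the past
        self.learning_rate = learning_rate

    def transmit(self, x):
        y = self.weight * x                    # output depends on current state
        self.weight += self.learning_rate * x  # past activity alters the state
        return y

unit = HystereticUnit()
first = unit.transmit(1.0)   # weight is still 1.0, so output is 1.0
second = unit.transmit(1.0)  # weight grew to 1.1, so the same input yields 1.1
```

The same input produces a different output the second time because the unit's state carries a trace of the past. That trace is the hysteresis, regardless of whether it is implemented by a changing synapse, by LTP, or by some other medium.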

And to study memory, well, one of the things you do in neuroscience is you look at areas like the hippocampus. Because that's an area that is extremely important - it's crucial for the ability to make new memories that we care about a lot, like these episodic event memories. And then it also immediately involves you in the complexity of the machinery of the brain, where, no, you don't just see something and then write it straight into the pattern that you will eventually have, the long-term memory. Instead you have a whole series of different types of memory. You capture something in short-term memory, where you can only hold a certain number of patterns at one time, sort of quickly looping through them for a period of time; then you can chemically alter the synapses (this is where the LTP comes in) so that you can store something for a longer period of time, for a number of days. And then you can alter the synapses themselves and change their shape, change a lot of things about them; and that's how you get this much longer-term memory.

But even that isn't the end. You know, after a while, as you store patterns, you've stored things you've experienced, and you gain new experiences. And then that night you reactivate some of the patterns that you've had in the past and you reactivate some of the new patterns, and because they are appearing in a new order, they associate with one another and you get new ways to recall older information, so that there are new pathways to get there. This is something that's often called long-lasting long-term memory. It's a process that can take 10 years or more to establish. Animals who don't even live that long never experience that kind of memory.

So we have all these different stages, temporally, in memory; but we also have all kinds of different types of memory. We have memory of language, memory of vision, memory of auditory things. Those are patterns in different parts of our brain, and to have them work together and be a single experience, you need to make connections between the patterns in these areas; and the hippocampus orchestrates a lot of that. So when a person loses the hippocampal area - when that area is no longer functioning - the consequences are severe. That area often gets damaged by hypoxia, by anything that leads to a lack of oxygen in that part of the brain; those cells die first. Say, for instance, you have a heart attack, or a stroke - often the cells in the hippocampus go first. And these people have incredible problems making new memories. Their established long-term memory is still there, but they won't be able to store any new memories.

So, you know, in addition to the long-term storage of memory (which happens a lot in our cortex, and everywhere else where there are synapses), and the process called Hebbian learning, where the synapses change, and patterns being these distributed patterns stored in the cortex - in addition to all that (which is one of the very important things to study), the hippocampus keeps coming up because it's got this weird, different function, where its storage is not very distributed. It's got these very focused packets of neurons that are active only for a specific event that happened, and they don't really store what that event meant. They just point to patterns elsewhere in the brain that were active at the time when you experienced this thing, and they reactivate those patterns.
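The Hebbian learning mentioned here - synapses strengthening when the neurons on both sides are active together - can be sketched in a few lines. This is a textbook toy in Python (an editorial illustration, not a model from the episode):

```python
def hebbian_update(weights, pre, post, lr=0.01):
    """One step of plain Hebbian learning: each weight grows in
    proportion to the co-activity of its presynaptic and
    postsynaptic neurons. Textbook toy, illustrative only."""
    return [[w + lr * post[i] * pre[j]
             for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

# Two input neurons, two output neurons, all weights start at zero.
w = [[0.0, 0.0],
     [0.0, 0.0]]
pre = [1.0, 0.0]   # only the first input neuron fires
post = [0.0, 1.0]  # only the second output neuron fires
w = hebbian_update(w, pre, post)
# Only the connection between the co-active pair is strengthened.
```

A distributed cortical pattern emerges when many such weights change at once; the hippocampal system described above works differently, pointing at those patterns rather than storing their content.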

So you've got these two different big systems there and you have to work with both of those to get a model of what's going on in memory. So, it's funny, that's the path that I ended up going down, working with hippocampus and prefrontal cortex, and memory in that, and building models about it - because you can't even think about one of those pieces without worrying about the others. Because you wonder, okay, if there's long-term memory in the cortex, how did it get there? And then there's this process of short-term memory that even makes it possible for the synapses to be reactivated often enough for it to get embedded in long-term memory. And you've got the hippocampus telling them what patterns those should be. And then when you're dealing with that, then you have to worry about the different temporal types of memory, and you have to worry about where the memory is coming from, and how you select which things you do remember and which things you don't - which things are emphasized by emotional context, for example - it quickly builds up into this whole picture. You realize why the brain has to be as complex as it is. And why, say, if you're looking at artificial intelligence and people say, well, "deep learning is it," deep learning is really just the way that patterns are stored in cortex.

Allen Sulzen: Right, it's not an AI system, it's a method that your brain currently uses, but it's one of many methods, right?

Randal Koene: It's one of many methods that the brain uses, and the brain uses all these different methods together for a reason. And that's the sort of stuff you kind of start discovering as you dig in, the way that I dug in when I was studying at McGill, and then later at Boston University, where I continued studying the same areas of the brain and these models. And this has continued on and on, because then I got a job in Spain, where I was offered a Directorship for a new Department of Neural Engineering they were starting, and they said, well, can't we build probes that would be able to detect what's going on in the hippocampus and the cortex? That didn't really carry on because the funding disappeared when the financial crisis hit Spain. But okay, you know, it was a good idea. And then later, as I headed to Silicon Valley, that idea sort of pulled through. And you can see that there's a whole chain of events and connections and things that went on, all of which forced me to immerse myself in the various complexities of how the brain works and how memory works.

So I'm just going to back pedal for a second and say why it is that memory is a thing I was focusing on, rather than, say, you know, like people who do BCI, "how does motor action happen" and that sort of thing. And it's really because it feels to me like memory is one of the most fundamental subsystems of how any processing system can get things done. Because if you want it to have complex thought, if you want to do complex thinking, you need to store intermediate results. You need to store things that you know, knowledge, as you proceed and work your way through a problem.

So problem solving requires memory. Problem solving is another thing I was studying in the prefrontal cortex. So it all comes down to: "What are the things that make us who we are? What are the things we call intelligence? What are the things that we'd like to augment?" And yeah, I believe that everything, including the body, is important - embodiment is important - and muscle control, and all that, is important; and, of course, helping people whose limbs are not working anymore is important; but if I have to choose something that I find most critical for understanding the sorts of things that matter to people interested in mind uploading, then, yes, I'll focus on memory first. And you can see this happening as well with others who share our interests. Like if you look at the Brain Preservation Foundation and Ken Hayworth there, you'll see him talking about memory all the time, because it's just such a fundamental component of this whole area, this new domain we're trying to map out.

Yeah, it really helps to have that background, just because it forces you to abstract things where you need to abstract, but also to identify the details of the complexity that matter at the same time. You're not going to get distracted by, say, oh, well, why is this particular neurotransmitter the way it is? And how does that synapse work? That's too detailed. You could easily replace the function of that with anything else that could accomplish the same input/output function. But you do have to focus this way on "how is information processed in the brain?" And that's what computational neuroscience really is about. And that's where it also very closely relates to artificial intelligence. And both of those domains are going to be supporting each other as the years progress. There are going to be things from AI that will be very helpful for analyzing data from the brain and understanding what the brain is doing, and also for building replacements for functions in the brain. And there are going to be things that we find in neuroscience that are absolutely essential for moving artificial intelligence forward. So these two things are closely related.

Allen Sulzen: Well, I think our listeners are going to get a lot out of that - a good perspective on how they can expand their careers, if that's the direction they're going. So thanks for that. We have an interesting podcast coming up for you today. We wanted to talk about a few different references in pop culture to mind uploading and whole brain emulation, or movies that feature technologies that would have required that as a given.

So let's move on to the first pop culture reference we've got. Let's dig into the movie Transcendence. Quick synopsis: we have Will Caster. He's a professor, a scientist, played by Johnny Depp. Alright. And the goal for him is to create the first artificial intelligence, basically putting his mind in a computer. Some people liked the movie; some people said it was lackluster. We want to get your opinion, Randal, on the technology in the movie, you know; we'll talk about different plot points, too, but just from that opening intro - in the first 20 minutes, they put Johnny Depp's brain in a computer. What were your pros and cons of those scenes? How did it look?

Randal Koene: I'm actually someone who liked the movie, even though I think that the second half of the movie probably lost a lot of people because it was going into, you know, the sort of crazy growth of whatever might happen, that would happen, if you had a mind upload -

Allen Sulzen: It moved so fast.

Randal Koene: Yeah, it moved really fast after that. And I think that might've been too fast for people, no matter whether you believe this is realistic or not. But the beginning part, you know, it's not a coincidence that I think that was well done. The science behind it was strongly informed by the advisors that the film had. And among the advisors they had Michel Maharbiz and Jose Carmena from Berkeley, who both work in BCI (Brain Computer Interfacing) and who have also expressed interest in whole brain emulation and related topics. And so they advised the movie - although the movie took some liberties. So, for instance, when you see them jacking in from an abandoned church and just using what amounts to an Ethernet cable to download the brain - I have no idea how that's supposed to work. I mean, how do you even get those terabytes of information through the Internet that fast? I don't understand what they were doing there, but yeah.

Allen Sulzen: They would have needed some really big connections.

Randal Koene: But I think that's just, you know, they took some liberties that probably just moved everything along, so that they wouldn't have to explain other arcane things - like, why are they storing things on a set of disks and then shipping the disks over or something like that -

Allen Sulzen: Right. It's easier for people to kind of see what they understand, right? Everybody's got an Ethernet port at home, they plug their computer into the wall. So maybe they're just trying to make it relatable for the layman.

Randal Koene: Yeah. With the danger that then people will say, "hey, but it's not quite like my home computer system, the brain is a little different." That's true, that's true. But they got the basics right, which is: yes, the brain is processing information; you can describe what's going on in the brain in terms of functions - these are the sort of things that the main character was working on, explaining what these functions were, how to make models of the brain - and you could record that. You can take data out of the brain, you can record what's going on in there, and you can use that to populate these models, so that you get a specific brain that will respond the way that character - the person, the personality you got that data from - would respond. And that is mind uploading.

Allen Sulzen: And I feel like that was pretty well displayed in the movie where they, you know, they made Will Caster, they made Johnny Depp's character very much seem like he was the same guy at the beginning and end. And I felt like that was a pretty powerful representation of mind uploading in film. They talked a lot about a technological singularity. Now the concept of a singularity is different than whole brain emulation, although they're related. Do you think you could help maybe separate the wheat from the chaff here?

Randal Koene: Yeah, it's actually a strange choice they made there, to attach so much about the singularity to this bit about mind uploading, because it's not that clear that carrying out mind uploading would quickly lead to the so-called singularity. Now the singularity, the way it's often described, is this idea that at some point, if you can create an artificial intelligence whose intelligence is at or above the level of human intelligence, and that also has the ability to improve itself, and to improve itself fairly rapidly just by changing its software, for example - then you would get a runaway process where the artificial intelligence becomes better and better so quickly, and continuously accelerates at how fast it's becoming better and more intelligent and more capable, that at some point, very soon after the first AI of that caliber, you would no longer be able to follow what it was doing, because it would become incomprehensible to humans, unpredictable to us. Kind of like how a whole horde of mice couldn't possibly predict what a human would do next, because they just don't understand how human brains work. And it would be the same with us towards these AI. That's the singularity in that sense. That's what, for example, MIRI (the Machine Intelligence Research Institute) and many others are thinking about, and how they describe the potential risks involved in AI as soon as it reaches that stage and what we should be thinking about there.

But the thing about the movie is, this movie was really about mind uploading. It was about taking a human brain, with the human brain's complexity, and what the human brain could know about itself by making these models - and taking that and moving it into a computer. But it's not immediately clear that the human brain is actually wired to improve itself in that fashion. And this sort of exponential curve of self-improvement - it's not clear that, say, the brain would be expressible in a simple function, where a "utility function," as they call it in AI, would be something that the brain could follow to keep on making itself better; to be able to test, "okay, am I now this much better? Am I performing this much better at solving these problems, and what do I have to change? Which parameters do I need to change to get even better at that?" So the self-improvement mechanism that would be necessary for this sort of "intelligence explosion" isn't clear in the case of mind uploading. No one has yet pointed out how that should work. If we want to be strict about it, we could say that people like the researchers at MIRI, and the FHI (the Future of Humanity Institute), and elsewhere, haven't really pointed out exactly what the mechanism would be for an intelligence explosion in AI either; because if we did know that, then probably someone would have implemented an AI that did this. The problem is that when you have a simple set of mental functions, cognitive functions, in an AI, and you can use that simple set of functions - very uniform functions where, let's say, every part basically follows the same function - then you can make predictions about how this would affect the utility function, where you're trying to improve, where you're trying to reach a certain goal, trying to solve a certain problem, and how you would be improving on that. It's relatively simple, then, to make predictions about that, and you could see a sort of exponential increase.
But with these very simple single-function AIs - where, say, you have just a big neural net or something like that, a deep learning net - it's not clear why these would be able to solve all the problems. How are they able to grasp context? How are they able to do what we do, for instance, where we select which things to remember and which things not to remember and what emphasis to give them, etc.? All this stuff - the complexity of our brain, which is not composed of a single function but has many different functions that work together - it's not clear that we know how to do that in AI. That's why we don't yet have AI that is able to do the things that humans do. We've got AI that can do certain things like, say, recognize images and give them a label; or that can translate, or understand language to some degree by at least parsing audio waveforms - again, giving it labels, labels that are the words that we tell it these things mean. But we don't really have an AI that can combine all of these things, that can also generate a personality, that can make its own goals, set its own goals, and solve complex problems. We don't have something quite as general as that. We don't have this thing called "artificial general intelligence." We don't really know what that is. And there isn't any clear evidence that a mind upload would lead to the kind of runaway scenario that you would see in a simple AI that has predictable development of its functions. So that connection in the movie was a bit forced, and I think that's also where it kind of started losing me - that's when you saw this whole runaway effect. It just, yeah. That's where -

Allen Sulzen: They did, right at the end, that scenario -

Randal Koene: Like there's this point where, in the movie Transcendence, it seems like they're forcing two things together because they want to talk about both. They want to talk about mind uploading and they want to talk about the singularity. Sometimes in movies it works better when you concentrate on one thing, one story you're trying to tell, one topic that you're trying to explain, that's already difficult enough to get across to your viewers. And that's where I think if you look at Altered Carbon, for instance, another interesting show, they made the good choice there that they're really just looking at mind uploading. They have some AI in there, but they're not talking about singularity because, you know, it's - at least not in the episodes that I've watched - they didn't really talk about a runaway singularity. They just talked about mind uploading most of the time.

Allen Sulzen: I would agree with that. I think Transcendence displayed a healthy amount of self-criticism too. You know, one of the first things, when Johnny Depp was uploaded, he said, "I need to get online." And then he started - he's like, "I need to buy stocks right now." Right? And they were showing some of the more conspiracy-theory, sinister side - like, "oh, maybe it would run away immediately" - but it also presented, by the end - and I'm not going to spoil anything here - a narrative that made you question whether or not he was a good guy or a bad guy. Right?

Randal Koene: Oh, absolutely. Yeah. Yeah. It was, you know - yeah, in that sense he was still human, because humans are always neither or both. There's always a little bit of both in us, right?

Allen Sulzen: So even though it was critically panned, it was a pretty interesting movie for thinking about the singularity, and I thought it was an interesting choice, too, to put mind uploading and the singularity in the same presentation. You know, I'm a fan of Superintelligence by Nick Bostrom, and in his book he talks a little bit about different avenues toward a singularity, and mentions that mind uploading is one of those, but that there are a lot of different ways it can go. So Transcendence obviously chose this avenue, right?

Randal Koene: Yeah. I talked to Nick about that quite extensively, because before he wrote the book, he held a workshop in the UK, at Oxford, and he gathered a bunch of people. I was among those people, and we sat down and had a number of sessions in this workshop to talk about the problems of a potential runaway AI and how it could happen. And of course my contribution was to talk about, okay, where does mind uploading fit into this? And I was trying to explain that, you know, before we make any grand statements about that - before we decide whether or not mind uploading is either A) something that could protect us against the dangers of runaway AI, or B) a possible avenue to runaway AI - we need to understand a little bit more about, first of all, what it takes to get runaway AI; and secondly, whether or not the human brain, when uploaded, fits that pattern. And I don't think that Nick really decided to spend a lot of time in his book talking about those things. So it feels like it got glossed over a little, and it probably deserves its own chapter, or an updated book, or something like that.

Allen Sulzen: Absolutely. So overall I was a big fan of the movie Transcendence. If you haven't seen it and you're listening to this podcast, it's a good primer on the singularity; it has some good scenes on mind uploading and some of the metaphysical and philosophical questions about what might happen - what could a runaway AI look like. It's realistic in its depiction of some of the potential dangers, and some of the potential benefits; it was a little bit light on the acting, but it had Morgan Freeman, Cillian Murphy, Paul Bettany - a star-studded cast. What could go wrong? Find out for yourself, right?

That's all for today's episode. Thanks for listening. If you enjoyed our podcast, please visit us at carboncopies.org.