Episode 7 | Transcript

Episode 7: “Kernel and Neuralink: Panel Discussion Part I”

Published Wed, 6 Mar 2019 | Transcript

Allen Sulzen: Hi there, and welcome back to the Carboncopies podcast. My name is Allen Sulzen. I host the podcast and I'm a proud volunteer at the Carboncopies Foundation. Before we get started, I wanted to announce an upcoming event. On March 16th, 2019, we'll have our next workshop, and if you'd prefer to hear it live rather than watching the events on YouTube, or listening through this podcast, you can RSVP and receive updates at carboncopies.org/rsvp. Go there now, and you can sign up, if that's an event that you'd like to attend. The topic for this upcoming workshop is whole brain emulation and AI safety. The speakers so far will include Ben Goertzel, Anders Sandberg, and Jaan Tallinn. If you haven't taken a look at carboncopies.org recently, it's worth perusing. Just one more quick plug before we move on. This podcast is free, and we hope it provides value to you.

Allen Sulzen: Whether you're just curious about mind uploading or you're involved directly in computational neuroscience or related fields, sharing this podcast goes a long way. We're currently published on Stitcher and Google Play Music. So if you use either of those services for podcasts, please like, subscribe, and share, and that will help us advance the discussion on whole brain emulation. So for this episode, we're on episode seven, we're going to continue where we left off in our workshop series. You're going to hear a moderated panel discussion between Professor Theodore Berger, Professor Tony Zador, Dr. Sean McCoola, Dr. Adam Marblestone, Dr. Randal Koene, Dr. Diana Deca, Dr. Sim Bamford, and others. Not all of them will speak in this section, because it's a longer discussion, so we're going to break this one up into a few different episodes. And Keith Wiley wrote up the following summary that helps us understand what this discussion was about: "In 2016 and 2017, commercial efforts emerged with highly outspoken founders who openly declare that their companies exist to create bridges between human and machine, and even to achieve neural prostheses to lift up humanity. What did these companies contribute that is difficult or impossible in academia? And which new hurdles are presently beyond their reach?" So we'll just get right into it now. You'll hear Dr. Randal Koene moderate. I give you our panel discussion.

Randal Koene: Let's get started at this point, and I'll just begin with a little more introduction- not much though. While most of the work that we discussed this morning has of course been happening in academic labs, and has been going on for decades in various labs and universities throughout the country and throughout the world, there has always been an attempt to also bring this into other areas of effort and funding. And what we've seen, for example, is that- sorry Keith, Keith was saying I'm creating motion sickness, so I should stop moving around. I think it's better to be right here for the microphone. I'll just try not to move around as much. I was wiggling a lot. But yeah, so, for instance, the place where we are right now: 3Scan, this company, this is a startup, and it started with a machine: the knife edge scanning microscope that was originally produced at Texas A&M University in its original form. The idea there was to use it to slice brains, and to image brains. So, this place where we are right now is actually a perfect example of a startup that got involved in this problem of connectomics, brain emulation, and neurotechnology. Since then, there have been a few more efforts. Of course, a lot of the efforts we see are sort of tangential. There are companies out there that are doing BCI of some kind where an EEG is involved and trying to make consumer products out of that.

Randal Koene: There's nothing wrong with that, but it doesn't get you whole brain emulation. And then of course, about two years ago (this actually goes back a little further than two years- I've been working with the people involved in that for a while longer) there was a sudden emergence of interest from a few well-known or high-profile entrepreneurs who got involved in this question. The most well-known is probably Elon Musk, who saw artificial intelligence coming up, and that this would change society, would change how we interact with each other and with machines and what this is all going to cause. He felt that perhaps part of the way to go, part of the way to ensure a future for humanity that is as desirable as possible, would be acting on the importance of being able to interact with, merge with, or in some way connect with what's going on there on that machine side of things. This is what got his interest going into the neurotechnology of neural interfaces and neural prosthesis, while Brian Johnson, the founder of Kernel, was also interested for some relatively similar reasons, although perhaps not with the same emphasis on what's going on in AI. From that, without getting into too many details, partly because I'm just not allowed to tell, two companies emerged, and they're both still active. They've had about two years to do something at this point, which again, I can't talk too much about, because products are not things we're supposed to talk about. And we're wondering, generally speaking, looking at these as models, looking at 3Scan as well, and other endeavors like that, where does this for-profit space fit into the whole? What works? What advantages do they have in some ways over working in a lab, working in academia, looking for grant funding and that sort of thing? What kind of hurdles do they also experience? What sort of parts of the problem are really well suited to work in a for-profit environment versus which parts are not? 
And I'm sure we all have a lot of opinions about that, but I've said enough for the moment. Before we make this a whole audience-wide discussion I want to just hand this over to Diana Deca for a moment so that she can say a few words, and maybe you could introduce yourself while you're at it.

Diana Deca: Thanks very much Randal. Really nice introduction. So, hi, my name is Diana Deca, I'm a postdoc at USC, I currently work on flies, and I'm trying to find out how a neuron works. In my PhD and my previous postdoc research, I've been watching a lot of neurons in the mouse brain and looking at spine and dendrite inputs just to see how all these thousands of inputs into one neuron gather into one output- this is my focus. At the same time I've been very interested in neuroprosthetics in a more philosophical sense because I'm interested in the concept of whole brain emulation. My background is also in philosophy, and I wanted to know whether it's possible for us to one day make perfect copies of ourselves that don't just correspond behaviorally to what our biological cells do, but also are based on a causal understanding of the inner workings of our brains. I think, until now, we've discussed all these startups that are getting into neuroscience, but I think one of the questions is: What is the goal of neuroscience, or are there several? Is the goal to rebuild the human brain? Is it to understand one process, and build out, and so on and so forth? It is not entirely clear to me, for example, with Kernel and Neuralink what their end goals are. I saw some interesting ideas there, for example, with Neuralink the wish seems to be to somehow connect the human brain to computers or AIs, but it is not entirely clear to me how that would happen. And, if it happened, what would be the next step? Let's say that we connect our brains to machines. It just means that there will be exchanges- voltage changes in our brains that somehow correspond to voltage changes in devices. That's maybe something we want to explore. But yes, I suppose all these have been goals in neuroscience and academic neuroscience in general.
It might've been forced into smaller questions just for the sake of funding, and now, I think, we have the opportunity to rethink the big questions in neuroscience and whether there is a model that can answer them. Anyone?

Randal Koene: Yup. So let's just open the floor to all now. I'm sure somebody has a comment or question related to startups in neuroscience. Anyone want to go next?

Theodore Berger: Okay. It was just a comment. It's not obvious to me what the goal of a startup would be in an area like neuroscience or partial brain emulation. Certainly, the most obvious goal would be a prosthesis; something that's a medical device- doesn't have to be a prosthesis, but something that's a medical device that could be used by some part of the medical community for a particular patient population, with the end point being to make money. The goal would be to help the patients of course, but to do that and make money. That's sort of an obvious goal, and there are some good examples of that already.

Theodore Berger: Beyond that, it's not clear to me what the goal is, and I don't think I've seen it defined well. In other words, I can imagine that someone has an idea that by studying the brain in a particular way and by developing brain computer interfaces, that there can be an avenue, a path for understanding brain computation better, and many of us have the belief that brain computation is better than machine computation, and it follows that we would be able to improve machine computation beyond its current borders. Beyond that kind of a general statement, I'm not sure how we move from that to exactly how to work that path. I don't mean to be critical of any companies that have gotten involved, because I don't have a good idea either. I think it's very difficult to see how studying even human brain computation could be used to improve machine computation. I would enjoy seeing a concept like that become well developed and a path from current status to future status become outlined. Even if it were just an example: not something that someone did, but just as an example, what would that look like? How might we do it? It would be incredibly interesting just to see an example be explored.

Jun Axup: Hi, yeah, to kind of follow up to that, I'll give some concrete examples. My name is Jun Axup, I'm scientific director and partner at IndieBio. We're a startup accelerator program here in San Francisco. We funded four neurotech companies, and for the purpose of this discussion I think two of them are somewhat relevant. I think the goal of uploading is obviously very lofty, and I like how the first speaker talked about how it's going to be a hundred year thing, and that in 30 years, someone's going to die and that knowledge probably isn't going to get transferred. I think a lot of the startups really need to hit some more near-term goals. With 3Scan, ultimately their technology can be used for uploading purposes and really understanding the connectome and everything, but they are first looking at medical purposes: drug discovery. This means interfacing with your big pharmas of the world who just really want to look at the things like cancer tissue micro-environments. That's a great way to really start and get in the door and get some revenue going while going after more ambitious projects. The two companies that we have that are relatively relevant: one is called Nuro (nuro.ca) and it's a brain computer interface. Basically, they have a headset that allows you to move a cursor on a screen. They've had quite a bit of computation around calling these signals out and making it more efficient. For most of us, of course, that kind of just sounds like, "oh, it's a new entertainment platform or something". But for them, they're going to go after locked-in patients: ICU patients, or, people who have had a stroke and can't move or talk at all. That's where it's, again, a prosthetic, essentially, an extension to allow people to start communicating. Then, you can imagine that kind of technology translating into something maybe more sophisticated for the average user in the future. Another one is called Truust neuro-imaging. 
They take EEG data and they do real time mapping and display of the way energy flows in your brain. They have a video of someone seeing a picture, and they see all these neurons firing in different places. With that particular technology, again, they were aiming for Alzheimer's researchers or people who are trying to do research and understand the connectome. But what they found is that there are actually other markets that this can be very useful for. For example, they ended up working with a company that works with flavors and fragrances, and they wanted a way to more quantifiably understand how these flavors were affecting people's sensory systems. That was a very interesting way to carve out a technology that seems to be just in the research field into industry, creating value for other industries that might not originally use this kind of technology. I think there are a lot of stepping stones all towards building up to that big goal, and we need to really focus on what those stepping stones are, because those technologies that are developed will be able to all come to fruition in the future too.

Brian ?: My name is Brian. Expanding on that, in terms of some of the near term goals, you mentioned the... What was it, Truust? Is it fMRI?

Jun Axup: It's MRI. Oh, sorry, sorry, it's EEG signals.

Brian ?: Oh, EEG signals. I know there's Openwater, and they're working on a wearable fMRI, so a more diagnostic approach where it's not just for the brain, but it could be for any kind of tissue: more in terms of diagnostics and in the health space. That's a near term solution. Ultimately I think their aim is not just reading neural activity, but also being able to use light to stimulate as well. There's diagnostics, health is probably the obvious one, but then also communication, which you also hinted at: people who are not capable of communicating with the outside world. We could also maybe expand upon that as a more general communication device. I think Facebook is touted to be working on something like that, but I'm not sure where that is, or if anybody else is thinking along those lines. To think about it more generically, there's the health diagnostic aspect, there's the communication aspect, and I don't know if anyone has any other ideas in terms of other categories of near term goals that would be feasible.

Diana Deca: I think that's a really good point that there can be near-term goals that can be addressed by smaller startups, which will then gather data, making the problem smaller overall. So instead of having a 20 or a 50 year timeline, it will be 15 years or so, assuming this can all come together and work. This also brings me to another issue that I would like to fit in here, which is: How do we distinguish between techniques and experimental methods of data acquisition? What are the good ones and what are the bad ones? In academic neuroscience, this is very easy to discuss, but I'm wondering how to do this in a startup setting. I can give one example of companies trying to develop a VR that is more immersive. One of the issues is that, if I get up and start walking, I'll have some sense of gravity and direction. That's because the vestibular system inside my inner ear is telling me I'm walking straight now, or I'm bending, or whatever. There is a way to stimulate our vestibular system with small electrical shocks that would give us the impression that in the arm, for example, we'd move like this. However, the vestibular system is a bit complex, and stimulating certain parts can have a certain effect, and stimulating the whole thing at the same time can have an effect which is not fully understood. What is the correct way? What are the right experiments to do if we wanted to have more immersive VR? At the same time, different companies went for the noninvasive version of it, which is easier. Then, as a neuroscientist, I have to ask the question: is this enough? Is this data conclusive and is this helping in neuroscience as well? I do not know the answer. It might be yes, but I just wanted to add that as a thing in the startup world: How can we distinguish between techniques and methods? Also, I don't want to make this too long, but I was thinking about the philosophical premises of emulation and neuroscience in general.
What is it, again, that we're aiming at, and, like professor Berger mentioned, what is the plan or the philosophical plan here for what is to be achieved in 20 years by connecting the human brain with AI or computers? I was thinking one of them might be- it's just a guess in the background of the people involved in this, but it's the fact that technology appeared rather late in the process of human evolution. Some people believe that the human brain has developed for a very long time to make us very good predators, which is shown in our behavior on this planet. The issue is that there might be a need for the human brain to be connected somehow to the technology it creates just to provide it a quick jump.

Randal Koene: Right. I think that judging by what I see here, we can just continue in the audience right now. From what I've been hearing just now, my summary comment so far is that it seems like there's no obvious reason why a startup would base what they're doing around helping neuroscience as much as possible. That's one really important thing. So when we ask "how does what this company is doing help neuroscience?" they have no particular reason to want to help neuroscience as such, even if that's the original reason why the founder of the company got into that. In the end, it still has to operate as a startup, and it can't make that the number one priority. That leaves, then, the selection of what you're working on: whether the type of thing you're trying to build, the product you're making, is something that intersects with neuroscience but is also suitable to do in a startup.

Randal Koene: 3Scan works because it's building a technology that is a tool that you can apply to brains, but you can also apply to other things. They managed to find themselves a market. Locked-in patients: you mentioned them as a target group for some kinds of technologies. That's an interesting one because it comes up a lot. Locked-in patients are patients who love to work with researchers on potential ways to get them a little bit out of that box. But then you run into problems like how many locked-in patients there are. How big is that market? How much money are you going to get from insurance companies for this sort of thing? There are so many issues there. Anyhow, that was just my summary. Let's continue with whomever.

Spencer?: Hi, my name is Spencer and I work, not in bio, but in artificial intelligence, which is closely related. A lot of the problems that I've had to work on over the last few years have been problems of classification and detection using artificial intelligence. Right now, the best way that we have to approach those problems is to do a naïve analysis where we just have a deep neural network where we feed the data in and come up with results. One thing about human consciousness and human observation abilities is that we can detect anomalies and create classification schemes really quickly, and those schemes can learn, change, and develop. One of the ways I think of it is, coming from the opposite direction: not how can we build the hardware that gets the operating system of the mind ready to be adjusted, but rather how can we understand and anticipate the way that consciousness would plug into a machine, by developing networks that make decisions and perform classifications closer to the way that humans do rather than just crunching numbers.

Spencer?: I think that there are tremendous applications there in computer vision and security, like anomaly detection. One really productive avenue is to improve artificial intelligence such that we can anticipate and prepare for the way that a consciousness would need to be within a network. I think that that would provide a lot of fertile ground for plugging in. I also just wanted to add the caveat that I think it's also important to consider the way that we can call on the state as well as industry to fund some of this research. Because I think that, especially in an era where there's dwindling funding for a lot of important projects, we need to also care deeply about getting the state involved at a fundamental, pure research level, but also supporting projects that are in private industry that are going to contribute to the benefit of everyone, which I imagine we would all agree, is the reason why we're here.

Speaker 6: I'm sorry. I want to pursue this. I heard what you said. Can you elaborate a bit more on how what you are doing interacts with the neuroscience part? I can understand that you're developing computational systems that deal with anomaly detection and classification, etc., but how are those systems based on the brain? How do you get your systems and neural systems to be closer together? I'm not sure how that works, if you can answer that.

Spencer?: Sorry, that was unclear. I'm not a neuroscientist, so the language that I think of it in is, regrettably, a kind of pop-neuroscience. Malcolm Gladwell has the idea of thin slicing. The traditional way to work with artificial intelligence to detect patterns is: you feed it a set of data, you teach it about the different exceptions to the data, it recursively runs those experiments over and over again, and that data set has to be confirmed. The most advanced artificial intelligence networks that we work with today still require a lot of human input for classification: to say this is what this item is, this is what this image depicts, et cetera. The beauty of the way that humans perceive the world and that human cognition operates is that from a very limited data set, you can get extremely accurate classification models that require very little information. There's a lot of inference that human consciousness is able to do because there's so much context available. If we can determine the way that you can make smarter inferences from smaller data sets, then it provides a really good opportunity to start figuring out where the base level mechanics of consciousness operate so you can plug back into that.

New Speaker: Okay, got it. Thank you.

New Speaker: I'm just going to dive in to continue the conversation. It's interesting, of course: with humans, we don't need very much labeled data, as machine learning people put it, to understand a new phenomenon that we observe in the world. But if you look at children and the way they get parented, they do get read to from picture story books, and continually asked to correctly identify all the objects on the page and in the story, and this goes on for years of childhood. It's probably the case that the amount of labeled training data that a baby or a child gets is still way less than we currently need to train convolutional neural networks to do similar tasks, so as the field progresses, it's going to get better at learning from data more efficiently.

Speaker 7: There's already an analogy, then, between what's happening in these neural networks and humans. I don't work in this space directly, so it's fun to hear people speculating about what commercial projects and neural interfaces might look like. I'm just throwing ideas in from the outside. One thing that's really striking is that you need FDA approval. If you want to drop networks of electrodes into people's brains, you need FDA approval. So the companies in this space are going to have to figure out a way to be allowed to do this with human brains, and therefore necessarily to do it with some medical application in mind at first. If you can get past that obstacle, if you could demonstrate that such a technology was practical and safe (and I think that's a long way down the road), you can imagine this being an enormously profitable business. It becomes a platform business. You can install apps in your brain and those apps can do whatever you want. Depending on which part of the brain it is, your hippocampus or your visual cortex or something else, suddenly you can have an app help you be more mindful of something, or help you notice patterns in your own behavior, or aid your memory- not in a repair sense, but in the sense that we all want better memory. We want the memory in our brains to be cross-indexed with our computers. We have some ability to do that right now through typing into a search box. It's obvious that that would be a hugely valuable domain to be able to address commercially. There are probably paths for companies that start as start-ups finding medical applications, to then get investment and value from the speculative field that they're trying to reach beyond that.

Diana Deca: I think this makes a nice circle of the discussion and brings us back to the title of the discussion, which is related to Kernel and Neuralink. I think what he just mentioned is the stepping stone: having an implant that can safely connect with the brain, has passed FDA approval, and allows us to do great things. That is probably the biggest milestone that many of these big startups are going for. But that usually takes 10 to 15 years in itself. That's actually the issue: what kind of model will support that kind of budget? It is very ambitious, but it can also be extremely profitable once it's there. The number of applications coming out of this is *inaudible*.

Speaker 7: This is from P.J. Manny, and the question is: "How do the priorities of proprietary IP, ROI, and market share within startup companies contrast with university research, which is by nature collaborative? What is the likelihood of sharing as financing moves to VC and traditional investment, which are less likely to share?" So yeah, in essence, the conflict between academia and industry. Clearly, Kernel and Neuralink are not going to be publishing peer reviewed science, perhaps, for initial products. Perhaps they'll do FDA approval, but they're not going to disclose the meat and potatoes.

Randal Koene: Well, I'm going to let Ted talk about this a little bit because, not just with Kernel and Neuralink, but you've had a lot of experience previously working either trying to spin off companies, or working for companies, and now you work with IB and stuff like that.

Theodore Berger: Yes, we've had experience with Kernel, and we've also spun out technologies that we've worked on that had been in the area of signal identification and that's been then used for perimeter detection and perimeter security. These started as basic issues in trying to understand how incorporating aspects of biological neural networks into otherwise artificial neural networks could make those artificial neural networks, or biologically realistic neural networks, better at detecting particular signals. It started with speech, and then non-speech sounds, et cetera. That collection of those things could then be used for perimeter security. We found that it was at times difficult to develop the right kind of relationship between the commercial entity and universities. This was work that was started at a university, and it was funded through grants and SBIRs (so to the company and to the university). So, we had a lot of things we had to worry about, while integrating the company efforts and the university efforts. But we found that as long as we were careful about filing for patents at the appropriate time, and then coordinating the publication with the patent submissions, as long as we kept up that relationship and the timing of those things so that they were done in the best interests of the company and in the best interests of the lab, we could make that work. It just required an understanding of the relationship between the demands of IP and the demands of filing for IP, and then coordinating the timing between the company and the university.

Theodore Berger: Conversation mattered a lot, and coordinating efforts mattered a lot. But otherwise, if you didn't pay attention to those things, there was a lot of friction. The biggest problem that we had, where we spun off a mathematical model of the synapse that was extremely useful for better academic and scientific understanding of glutamatergic synaptic transmission, was spun off to a set of investors who wanted to develop new drugs and use the model to develop new agents that could act at those receptors and channels. This was, as most people know, very important, because if you're trying to do that experimentally, the costs are astronomical and it takes a very long time. So, if you can short circuit that by having a mathematical model of a synapse, at least get the first few steps done in terms of identifying the chemical structure that could act at that synapse, you were at least ahead of the ballgame. That also worked as long as we kept up the timing. A real problem that remains a problem for both of those companies was that you worry about having the funds to support the personnel and the surroundings- the computational systems and the computational oomph to do all the work. But it's mostly personnel you have to worry about. But then to come up with the money to pay the licensing fees to the university and to keep up those licensing fees was a burden. Without having something that would generate income, the investors had to provide both the support for the personnel and had to provide the funding to keep the university happy about the relationship, and that was not easy. I mean it's the same old question about generating money and managing funds, but that was a difficulty.

Brian?/Spencer?: The university wouldn't accept equity?

Theodore Berger: In one of those two cases, they had equity in the company as well. I guess you could say it was a little too greedy on the part of the university to want both equity and payments on time. Payments fell behind in both of those two cases. Anybody else?

Speaker 7: One thing that I haven't heard talked about before in this discussion is getting into the brain without cracking open people's craniums and drilling and doing whatnot. There's something to be said about Jason Silva: he talks about the skin-bag bias, and how we shouldn't consider our cell phones to not be part of us, that they are extensions of our brain and already memory prostheses. I think that's taking a little bit too much poetic license, but what we're really trying to get to is getting into deep communication with our brains and listening to the symphony of the brain from inside the cranium (docked outside the cranium). The question I'd like to put on the table is: how do you get inside the brain in a mass market way without doing major surgeries on people? The only one I've heard thus far that may be viable is going through the nose, because apparently olfactory neurons are not bound. You can infect olfactory neurons, and they seem to not be as protected by the blood brain barrier- I don't know if this is true or not. Do you guys have any other ideas? Are we going to go through the nervous system? Are we going to do Ray Kurzweilian nanobots and whatnot? What are you guys' thoughts on that?

Randal Koene: We're almost at the end of this panel, so we're not going to be able to get into that in depth, but it's an incredibly fascinating topic. Maybe we can get into it at the end of the workshop or something. Just to give a quick summary of my thinking about it: The problem today is that of course we want, at the same time, to build products that are appealing to the mass market because that's where the money is. We also have this problem of breaking the skin. How do you get in there if you want to build something that's really a very high bandwidth, high resolution interface and that can go in both directions: that can read and that can write?

Randal Koene: Some people are looking for technological solutions that would be able to do that, that would somehow be able to create that connection without having to dig into the brain. But, I would say that all of those, without mentioning anyone in particular, are right now still very speculative and sort of serendipitous things. They might work, they might not work. You don't know how good they're going to be, both for reading and writing. On the other side, with the things we use today, electrodes, it's the steady path, the path where we know what's happening. We know that they're becoming smaller and more flexible so they don't cause inflammation in the brain, and the surgical procedures are getting better. So you could say, sure, this looks risky, but laser eye surgery could also seem risky, and plastic surgery can seem risky. So it's a question of how much risk can you accept? How good is the technology? How good is the procedure? That's the other side of it. Instead of the serendipitous projects, there's also the steady moving forward. How many thousands of recording sites and stimulation sites can you put on one electrode? How many electrodes can you put in at the same time in many different parts of the brain? What is the surgical procedure like? Both of those paths are worth exploring further, and they get you, probably, to different results, different types of interfaces you can build, different types of devices. It's a really complicated topic, because, of course, that's where the FDA gets involved right away. The FDA is now even getting involved with things that we might have called noninvasive: tDCS. That's the other side of the coin. Some things that we think are noninvasive are not necessarily non-harmful, just because they're non-invasive: Pumping a lot of radiation into the skull is also not a good idea.