Theories of Everything with Curt Jaimungal

Michael Levin Λ Joscha Bach on The Global Mind, Collective Intelligence, Agency, and Morphogenesis

November 9, 2022 · 2:00:52


Transcript

Enhanced with Timestamps
304 sentences 20,344 words
Method: api-polled Transcription time: 117m 59s
[0:00] The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region.
[0:26] I'm particularly liking their new Insider feature, which just launched this month. It gives me front-row access to The Economist's internal editorial debates, where senior editors argue through the news with world leaders and policy makers, and twice-weekly long-format shows, basically an extremely high-quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines.
[0:53] As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
[1:06] Michael Levin's work on understanding and controlling pattern formation in living systems has made him one of the most compelling biologists of our time. In translation, this means that his team is sussing out how to develop and regenerate limbs, how minds are generated, and even life extension, by manipulating electric signals rather than genetics or epigenetics. His work is something that I consider to be worthy of a Nobel Prize, and I don't think I've said that about anyone else on the podcast.
[1:28] Michael Levin's previous podcast on TOE is in the description. That's a solo episode with him, where we go into a two-hour deep dive, as well as a theolocution, that is, him and another guest, just like today, except between Karl Friston and Chris Fields, on consciousness. Joscha Bach is widely considered to be the pinnacle of an AI researcher, dealing with emotion, modeling, and multi-agent systems. A large focus of Bach's is to build a model of the mind for strong AI. Speaking of minds, Bach is one of the most inventive minds in the field of computer science.
[1:56] Biology has much to teach us about artificial intelligence, and vice versa. This discussion between two brilliant researchers is something that I'm extremely
[2:15] lucky, blessed, fortunate to be a part of, as well as us as a collective, as an audience, fortunate enough to witness. Approximately 30 minutes into the conversation, you'll see two sponsors. They are Masterworks and Roman. I implore you not to skip it, as firstly, watching it supports TOE directly, and secondly, they're fascinating companies in and of themselves. Additionally, you'll hear from one more, Trade Coffee, around the one and a half hour mark. Thank you, and enjoy this theolocution between Joscha Bach and Michael Levin.
[2:43] Welcome, both Professor Michael Levin and Joscha Bach. It's an honor to have you on the TOE podcast again, both of you, and now together. Thank you. It's great to be here. Likewise. I very much enjoy being here and look forward to this conversation. I look forward to it as well. So we'll start off with the question of: what is it that you, Michael, find most interesting about Joscha's work? And then, Joscha, the same question for you, toward Michael. Yeah.
[3:09] I really enjoy the breadth. So I've been looking, I think I've probably read almost everything on your website, you know, the short kind of blog pieces and everything. And, yeah, I'm a big fan of the breadth of tackling a lot of the different issues that you do with respect to, you know, computation and cognition and AI and ethics and everything. I really like that aspect of it. And Joscha?
[3:36] Yeah, my apologies, my blog is not up to date. I haven't done any updates on it for a few years now, I think. Of course, I'm still in the process of progressing and having new ideas. And the ideas that I had in recent years have a great overlap with a lot of the things that you are working on, and
[4:02] when I listened to your Lex podcast last night, there were many thoughts that you had that I had stumbled on that I've never heard from anybody else. And so I found this very fascinating and thought, maybe let's look at some of these thoughts first and then go from there and expand beyond those ideas. For instance, one thing I found after thinking about how cells work,
[4:30] kind of obvious, but missed by most people in neuroscience or in science in general, is that every cell has the ability to send multiple message types and receive multiple message types, and to do this conditionally, and to learn under which conditions to do that, and to modulate this. Also, every cell is an individual reinforcement learning agent, like a single-celled animal that tries to survive by cooperating with its environment and gets most of its rewards from its environment.
[4:56] And as a result, this means that every cell can in principle function like a neuron. It can fulfill the same learning and information processing tasks as a neuron. The only difference that exists with respect to neurons or the main difference is that they cannot do this over very long distances because they are mostly connected only to cells that are directly adjacent.
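Bach's picture of the cell as an individual reinforcement-learning agent can be illustrated with a deliberately toy sketch. This is not a biological model; the signal names, reward probabilities, and learning rate are all hypothetical. It shows a "cell" learning, from environmental reward alone, which of two message types to emit:

```python
import random

def train_cell(reward_prob, steps=2000, lr=0.1, seed=0):
    """Toy 'cell' that learns which of two signal types to emit.

    reward_prob maps a signal name to the probability that the
    environment rewards it (hypothetical numbers). The cell keeps a
    running value estimate per signal and updates it from reward alone,
    with no external teacher or gradient.
    """
    rng = random.Random(seed)
    value = {"signal_A": 0.0, "signal_B": 0.0}
    for _ in range(steps):
        # Epsilon-greedy: mostly emit the currently best signal,
        # occasionally explore the other one.
        if rng.random() < 0.1:
            signal = rng.choice(sorted(value))
        else:
            signal = max(value, key=value.get)
        reward = 1.0 if rng.random() < reward_prob[signal] else 0.0
        # Move the value estimate a small step toward the observed reward.
        value[signal] += lr * (reward - value[signal])
    return value

# Hypothetical environment in which signal_B pays off more often.
learned = train_cell({"signal_A": 0.2, "signal_B": 0.8})
```

After training, the value estimate for the more frequently rewarded signal ends up higher, i.e. the unit has "learned under which conditions" to prefer one message type, purely from local reward.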
[5:20] Of course, neurons also only communicate with adjacent cells, but the adjacency of neurons is such that they have axons, parts of the cell that reach very far through the organism. So in some sense, a neuron is a telegraph cell that uses very specific messages, encoded somewhat like Morse signals in extremely short, high-energy bursts, which allow it to send messages over very long distances very quickly, to move the muscles of an animal at the limit of what physics allows.
[5:50] So it can compete with other animals in search for food. And in order to make that happen, it also needs to have a model of the world that gets updated at this higher rate. So there is going to be an information processing system that is duplicating basically this cellular brain that is made from all the other cells in the body of the organism.
[6:08] and at some point these two systems get decoupled. They have their own codes, their own language, so to speak, but it still makes sense, I guess, to see the brain as a telegraphic extension of the community of cells in the body. And for me, this insight, which I stumbled on just by thinking about means and motive, that evolution would equip cells with this information processing if the organism lives long enough and if the cells share a common genetic destiny, so they can get attuned to each other in an organism,
[6:38] means that basically every organism has the potential to become intelligent, and, if it gets old enough to process enough data, to get to a very high degree of understanding of its environment in principle. So of course a normal house plant is not going to get very old compared to us, and its information processing is so much slower, so they're not going to be very smart.
[7:04] But at the level of ecosystems, it's conceivable that there is quite considerable intelligence. And then I stumbled on this notion that our ancestors thought that one day in fairyland equals seven years in human land, which is told in the old myth. And also at some point I revised my notion of what the spirit is. For instance, a spirit is an old word for the operating system for an autonomous robot.
[7:32] When this word was invented, the only autonomous robots that were known were people and plants and animals and nation states and ecosystems. There were no robots built by people yet, but there was this pattern of control in them that people could observe, one that was not directly tied to the hardware but was realized by the hardware, disembodied in a way.
[7:57] And this notion of spirit is something that we lost after the Enlightenment, when we tried to deal with the wrong Christian metaphysics and the superstition that came with it, and threw out a lot of babies with the bathwater. We suddenly lost a lot of concepts, especially this concept of software that existed before in a way, this software being a control pattern, or a pattern of causal structure, that exists at a certain level of coarse-graining,
[8:24] as some type of very, very specific physical law that exists by looking at reality from a certain angle. And what I liked about your work is that you systematically have focused on this direction of what a cell can do, that a cell is an agent, and that levels of agency emerge in the interaction between cells.
[8:47] You use a very clear language and clear concepts, and you obviously are driven by questions that you want to answer, which is unusual in science, I found. Most of our contemporaries in science get broken, if it doesn't happen earlier during the PhD, into people who apply methods in teams, instead of people who join academia because they think it's the most valuable thing they can do with their lives to pursue questions that they're interested in, want to make progress on.
[9:18] All right, Michael, there's plenty to respond to. Yeah, lots of ideas. I think your point is very interesting about what really, fundamentally, is the difference between neurons and other cells. Of course, evolutionarily, they're reusing machinery that has been around for a very long time, since the time of bacteria, basically, right? So our unicellular ancestors
[9:41] had a lot of the same machinery. And, I mean, of course, axons can be very long, but there are sort of intermediate structures, right? There are tunneling nanotubes and things that allow cells to connect to maybe five or ten cell diameters away, right? So not terribly long, but also not necessarily immediate neighbors. So that kind of architecture has been around for a while. And people like Gürol Süel look at
[10:08] very brain-like electrical signaling in bacterial colonies. So I think evolution began to reuse this toolkit, specifically of using this kind of communication, to scale up the computational and
[10:25] other kinds of tricks. Yeah, a really long time ago. And I like to imagine that if somebody had come to the people who were inventing connectionism and the first perceptrons and neural networks and so on, if somebody had come to them and said, oh, by the way, sorry, we're with the biologists, we got it wrong. Thinking isn't in the brain, it's in the liver.
[10:47] And so then the question is, what would they do, right? Would they have changed anything about what they were doing? Would they have said, ah, now we have to rethink our model, or would they have said, fine, who cares? It's exactly the same model, everything works just as well. So I often think about that question: what exactly do we mean by neurons? And isn't it interesting that
[11:06] we are able to steal most of the tools, the concepts, the frameworks, the math from neuroscience and apply it to problems in other spaces. So not movement in three-dimensional space with muscles, but, for example, movement through a morphospace, right? Anatomical morphospace. The techniques can't tell the difference. We use all the same stuff: optogenetics, neurotransmitter signaling, we model active inference and we see perceptual bistability, you name it. We take concepts from
[11:37] neuroscience, and we apply them elsewhere in the body. And generally speaking, everything works exactly the same. And that shows us, I think, what you were saying, that there's this really interesting kind of symmetry between these, that a lot of the distinctions that we've been making, in terms of having different departments and different PhD programs and other things that say, you know, this is neuroscience, this is developmental biology, a lot of these things are just not
[12:04] as firm distinctions as we used to think. I suspect that people who insist on strong disciplinary boundaries do this out of a protective impulse. And what I noticed by studying many disciplines when I was young is that the different methodologies are so incompatible across fields
[12:27] that when I was studying philosophy or psychology, I felt that computer scientists would be laughing about the methods that each of these fields uses to justify what it's doing. And this, I think, is indicative of a defect, because if you take science into the current regime of regulating it entirely by peer review, there is no external authority. Even the grant authorities are mostly filled with people who have been trained in the sciences,
[12:56] in existing paradigms, and are then funding the continuation of those paradigms from the outside. This meta-paradigmatic thinking does not really exist that much in a peer-reviewed paradigm. And ultimately, when you do peer review for a couple of generations, it also means that if your peers deteriorate, there is nothing that pulls your science back. And what I miss specifically, in a lot of the ways in which neuroscience is done, is what you call the engineering stance.
[13:25] And this engineering stance is very powerful, and you get it automatically when you're a computer scientist, because you don't really care what language it is written in. What you care about is what causal pattern is realized, and how can this be realized, and how could I do it? How would I do it? How can evolution do it? What means are at its disposal? And this determines the search space for the things that I'm looking for. But this requires that I think in causal systems.
[13:51] And this thinking in causal systems, well, it's possible not to do it as a computer scientist, but it is unusual outside of computer science. And once you realize this, it's very weird. Suddenly you have notions that try to replace causal structure with, say, evidence. And then you notice that,
[14:11] for instance, evidence-based medicine is not about probabilities of how something is realized and must work. Like, you see people on a cruise ship getting infected over distances, and you think, oh, this must be airborne. But no, there is no controlled study, so there is no evidence that it's airborne. And when you look at disciplines from the outside, like in this case the medical profession, or medical messaging and decision making, I get terrified, because it directly affects us.
[14:40] And in terms of neuroscience, of course, this is more theoretical for the most part, but there must be a reason why it is, for the most part, atheoretical, why there is no causal model that clinicians can use to explain what is happening in certain syndromes that people are exhibiting.
[14:59] I noticed this when I go to a doctor, even at a reputable institution like Stanford: most of the neurologists that I'm talking to are, at some level, dualists,
[15:15] in that they don't have a causal model of the way in which the brain is realizing things. There are a lot of studies which discover that very simple mechanisms, like the ability of human beings to use grammatical structure, are actually reflected in the brain. This is so amazing. Who would have thought?
[15:35] Yeah, but the developments that existed in computer science have led us on a completely different track. The perceptron is vaguely inspired by what the brain might be doing, but I think it's really a toy model or a caricature of what cells are doing. Not in the sense that it's inferior, it's amazing what you can brute force with the modern perceptron variations. The current machine learning systems are mind-blowing in what they can do.
[16:04] but they don't do it like biological organisms at all. It's very different. Cells do not form chains in which they compute weighted sums of real numbers. There is something going on that is roughly similar to it, but it's a self-organizing system that designs itself from the inside out, not by a machine-learning principle that is applied from the outside and updates weights after reading and comparing outputs and
[16:28] computing gradients through the system. So this perspective of local self-organization by reinforcement agents that try to trade rewards with each other, that is a perspective that I find totally fascinating. And I wish this would have come from neuroscience into computer science, but it hasn't.
[16:48] There are some people who have thought about these ideas to some degree, but there's been very little cross-pollination. And I think all this talk of neuroscience influencing computer science is mostly wishful thinking. Yeah, and I find this, what you were saying about the different disciplines: it's kind of amazing how, when I give a talk, I can always tell which department I'm in by which part of the talk makes people uncomfortable and upset.
[17:18] And it's always different depending on which department it is, right? So there are things you can say in one department that are completely obvious, and you say the same thing to another group of people and they throw tomatoes. For instance, in a neuroscience department, I could say,
[17:36] Information can be processed without changes in gene expression. You don't need changes in gene expression to process information because the processing inside a neural network runs on the physics of action potentials. So you can do all kinds of interesting information processing and you don't need transcriptional or genetic change for that.
[17:56] If I say the same thing
[18:26] Then again, in molecular cell biology: what do you mean? How can a collection of cells remember a spatial pattern? But in neuroscience or in an engineering department: yeah, of course. Of course they have electrical circuits that remember patterns and can do pattern completion and things like that.
[18:42] So, you know, views of causality, views of just lots of things like that, things that are very obvious to one group of people, are completely taboo elsewhere. And, as Joscha just said, that distinction impacts everything. It impacts education, it impacts grant reviews, because when these kinds of interdisciplinary grants come up,
[19:10] the study sections have a really hard time finding people who can actually review them. Because what often happens is you'll get some kind of computational biology grant proposal, and you'll have some people on the panel who are biologists and some who are the computational folks. And it's very hard to get people who can actually appreciate both sides of it and understand what's happening together, right? So they will each critique a certain part of it, and of the other part they say, I don't know what this is. And
[19:39] as a result, grants like that tend not to have a champion, you know, one person who can say, no, I get the whole thing, and I think it's really good, or not.
[19:48] So yeah, it's even to the point where I'm often asked, when people want to list me somewhere, they'll say, so what are you? What's your field? And I never know how to answer that question. To this day, it's been 30 years, I still don't know how to answer that question. I can't boil it down to one; it just wouldn't make any sense to say any of the traditional fields. So what do you say, Joscha, when someone asks you what field you're in?
[20:17] It depends on who's asking. So, for instance, I found it quite useful to sometimes say, sorry, I'm not a philosopher, but this or I'm not that interested in machine learning. And I did publish papers in philosophy and in machine learning, but it's not my specialty in the sense that I need to identify with it.
[20:44] And in some sense, I guess that these categories are important when you try to write a grant proposal or when you try to find a job in a particular institution and they need to fill a position. But for me, it's more what questions am I interested in? What is the thing that I want to make progress on? Or what is the thing that I want to build right now? And I guess that in terms of the intersection, I'm a cognitive scientist.
[21:13] So I was asking Michael, prior to you joining, Joscha, why is it, Michael, that you do podcasts? And if I understand correctly, part of the reason was because you think out loud and you like to hear the other person's thoughts, take notes, and express your own. And firstly, Michael, you can correct me if that's incorrect. And then secondly, Joscha, I'm curious for your answer to the same question: what is it that you get out of doing podcasts, other than, say, some marketing, if you were promoting something, which I don't imagine you are currently?
[21:41] No, I'm not marketing anything. What I like about podcasts is the ability to publish something in a format that is engaging to people who actually care about it.
[21:56] I like this informal way of holding on to some ideas, and I also like conversations as a medium to develop thought. It's this space in which we can reflect on each other, look into each other's minds, interact with the ideas of others in real time. The production format of a podcast creates a certain focus in the conversation that can be useful. And it's a pleasant kind of tension that forces you to stay on task.
[22:25] And I also found that it's generally useful to some people. The feedback that I get is that people tell me, I had this really important question, and I found this allowed me to make progress on it, and I feel much better now about these questions. Or, this clarified something for me that has plagued me for years and put me on track to solving it. Or, this has inspired the following work.
[22:53] So it's a form of publishing ideas and getting them into circulation in our global hive mind that is very informal in a way, but it's not useless. And it also spares me, in this instance at least, the work of cutting, editing and so on. But anyway, I'm very grateful that you provide the service of curating our conversation and putting it in a form that is useful to other people.
[23:21] Yeah, there are two things I was thinking of. One is that I have conversations with people all day long about these issues, right? People in my lab, collaborators, whatever. And of course, the vast majority of those conversations are not recorded, and they just sort of disappear into the ether. And then I take something away from it, and the other person takes something away from it. But I've often thought, isn't it a shame that all of this
[23:45] just kind of disappears? It would be amazing to have a record of it. Of course, not every conversation is gold, but a lot of them are useful and interesting, and there are plenty of people who could be interested and could benefit from it. So I really like this aspect, that we can have conversations and then they're sort of canned and out there for people who are interested. The other aspect of it, which I don't really understand, but it's kind of neat,
[24:15] is that when somebody asks me to pre-record a talk,
[24:20] it takes a crazy amount of time, because I keep stopping and realizing I could have said that better, let me start from the beginning. And it's just an incredible ordeal. Whereas something like this, that's real time, I'm sure has as many mistakes and things that I would have rather fixed later, but you can't do that, right? So you just sort of go with it, and that's it, and then it's done and you can move on. So I like that real-time aspect of it, because it helps you to get the ideas out without getting hung up and trying to redo things 50 times.
[24:51] Yeah, it's a format that allows tentativeness. When we publish, we have a culture in the sciences that requires us to publish the things that we can hope to prove, and to make the best proof that we can. But when we have anything complicated, especially when we take our engineering stance, we often cannot prove how things work.
[25:13] Instead our answers are in the realm of the possible and we need to discuss the possibilities and there is value in understanding these possibilities to direct our future experiments and the practical work that we do to see what's actually the case.
[25:28] And we don't really have a publication format for that. We don't get neuroscientists to publish their ideas on how the mind works because nobody has a theory that they can prove. And as a result, there is basically a vacuum where theories should be. And the theory building happens informally in conversations that basically requires personal contact, which is a big issue once conferences went virtual because that contact diminished.
[25:54] And you get a lot of important ideas by reading the publications and so on. But this what-could-be, connecting the dots, possibilities, ideas that might be proven wrong later, that we exchange just with the status of ideas, that is something that has a good place in a podcast. Now, are podcasts, not this TOE podcast, but podcasts in general, something new? I was thinking about this. Podcasts go back a while, and Rogan invented this long-form format, or popularized it. However,
[26:24] on television, there are interviews. So there's Oprah, and those are long, one hour; there's 60 Minutes. And then back in the 90s there was Charlie Rose, with a three-and-a-half-hour conversation, essentially a podcast, like a theolocution, with Freeman Dyson, Daniel Dennett, Stephen Jay Gould, Rupert Sheldrake, all of those on the same one.
[26:47] I think it's like blogging. Blogging is also not new.
[27:14] Being able to write text that you publish and people can follow what you are writing and so on, did exist in some sense before, but the internet made it possible to publish this for everyone. You don't need a publisher anymore.
[27:30] and you don't need a TV studio anymore. You don't need a broadcast station that is recording your talk show and sends it to an audience. There is no competition with all the other talk shows because there is no limitations on how many people can broadcast at the same time. And this allows an enormous diversity of thoughts and small productions that are done at a very low cost, lowering the threshold for putting something out there and seeing what happens.
[27:58] So in this sense, the ecosystem that emerged is new, because the variable that changed is the cost of producing a talk show. Right. Michael, do you agree?
[28:09] Yeah, I mean, yes, all of that, and also just the fact that, as you just said, these kinds of long-form things were fairly rare. So most of the time, if you're going to be in one of the traditional media, they tell you, okay, you've got three minutes, we're going to cut all this stuff and boil it down to three minutes. And this is often incredibly frustrating. And I understand, I mean, we're drowning in information. And so
[28:36] there is obviously a place for a very short statement on things, but the kind of stuff that we're talking about cannot be boiled down to TV sound bites; it just can't. And so the ability to have these long-form things, so that anybody who wants to really dig in can hear what the actual thought is, as opposed to something that's been boiled into a very, very short statement, I think is invaluable, just being able to have it out there for people to find.
[29:05] Now a brief note from two sponsors.
[29:22] Roman offers clinically proven medication to help treat hair loss. All of course from the comfort and privacy of your own home. Roman offers prescription medication and over the counter treatments. They also offer specially formulated shampoos and conditioners with ingredients that fortify and moisturize the hair to look fuller
[29:38] Research shows that 80% of men who use prescription hair loss medication had no further hair loss after two years. Roman is licensed, and the whole process is straightforward and discreet. Plans start at $20 a month. Currently, Roman has a special offer for TOE listeners, that is, you. Use the link ro.co/curt to get 20% off your first order. Again, that's ro.co/curt.
[30:03] The link is in the description, and you get 20% off. As the TOE project grows, we get plenty of sponsors coming. And I thought, you know, this one is a fascinating company. Our new sponsor is Masterworks. Masterworks is the only platform that allows you to invest in multi-million dollar works of art by Picasso, Banksy, and more.
[30:20] Masterworks is giving you access to invest in fine art, which is usually only accessible to multimillionaires or billionaires. The art that you see hanging in museums can now be partially owned by you. The inventive part is that you don't need to know the details of art or investing. Masterworks makes the whole process straightforward with a clean interface and exceptional customer service.
[30:39] They're innovating as more traditional investments suffer. Last month, we verified a sale which had a 21.5% return. So, for instance, if you had put $10,000 in, you would now have about $12,150. Welcome to our new sponsor Masterworks, and the link to them is in the description. Just so you know, there's in fact a waitlist to join their platform right now. However, TOE listeners can skip the waitlist by visiting the link in the description, as well as by using the promo code "theories of everything". Again, the link is in the description.
[31:04] What's some stance of yours, some belief, that has changed most drastically in the past few years, let's say three? And it could be anywhere from something abstruse and academic to more colloquial, like, I didn't realize the value of children, or, I overvalued children and now I'm stuck with them, like, geez, that was a mistake. Yeah. So something where I changed my mind was RNA-based memory transfer.
[31:31] And I think it's a super interesting idea in this context, because it's close to stuff that Michael has been working on and is interested in. There have been some experiments in the Soviet Union, I think in the 70s, where scientists took planaria
[31:50] trained them to learn something. I think they learned to be afraid of electric shocks and things like that. And then they put their brains into a blender, extracted the RNA, injected other planaria with it, and these other planaria had learned it. And I learned about this as a kid when I read Soviet science fiction literature in the 1980s. I grew up in East Germany. And the evil scientist harvested the brains of geniuses
[32:20] and injected himself with RNA extracted from these brains, and thereby acquired their skills. And even though I'm pretty sure this probably doesn't work if you do it at this level,
[32:33] this was inspired by the original research. I later heard nothing about this anymore, and so I dismissed it, like similar things I read in Sputnik and other Russian publications, which created their own mythological universe about ball lightning that is
[32:52] agentic and possibly sentient and so on and dismiss this all as basically another universe of another religious digest culture that is producing its own ideas that then later on get dissolved once science advances because everybody knows its synapses, its connections between neurons that matter. The RNA is not that important for the information processing. It might change some state, but you cannot learn something by extracting RNA and re-injecting it into the next organism because how would that work if it's done in the synapses?
[33:22] And then recently there were some papers which replicated the original research. It has been replicated from time to time in different types of organisms, though to my knowledge not, of course, in macaques, or even in mice. So it's not clear if their brains work according to the same principles as planaria. But planaria are not
[33:47] extremely simple organisms with only a handful of neurons; they are something intermediate. So their main architecture is different from ours, and the functioning principles of their neurons might be slightly different, but it's worth following this idea and going down that rabbit hole. And then I looked at it from my computer science engineering perspective, and I realized that there were always things about the synaptic story that I find confusing, because they're very difficult to implement.
[34:16] For instance, weight sharing. As a computer scientist, I require weight sharing; I don't know how to get around this. If I want to entrain computational primitives in a local area of my brain, for instance the ability to rotate something: rotation is some operator that I apply on a pattern, which allows this pattern to be represented in a slightly different way, to have the object rotated a few degrees.
[34:42] But an object doesn't consist of a single point, it consists of many features that all need to get the same rotation applied to them using the same mathematical primitives. So how do you implement the same operator across an entire brain area? Do you make many copies of the same pattern?
[35:01] And computer scientists solve that with convolutional neural networks, which basically use the same weights again and again in different areas, only training them once and making them available everywhere. And that would be very difficult to implement in synapses. Maybe there are ways, but it's not straightforward. Another thing is if we see how training works in babies, they learn something and then they get rid of the surplus synapses. Initially, they have much more connectivity than they need.
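The weight-sharing point can be made concrete with a toy comparison (my sketch, not anything from the conversation): one 3x3 kernel reused at every image position, versus a dense mapping that would need a separate weight for every input-output pair.

```python
import numpy as np

# Toy illustration of weight sharing: a single 3x3 kernel is reused at
# every position of the input, so one small set of parameters implements
# the same local operator everywhere. Sizes are arbitrary.

def conv2d(image, kernel):
    """Valid-mode 2D correlation with one shared kernel."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.random((28, 28))
kernel = rng.random((3, 3))

shared_params = kernel.size            # 9 weights, reused at all 676 positions
dense_params = image.size * (26 * 26)  # a dense layer to the same 26x26 output
print(shared_params, dense_params)     # 9 vs 529,984 parameters
```

The parameter count is the whole argument: the dense version has to relearn the same operator at every location, while the shared kernel stores it once.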
[35:31] And after they've trained, they optimize the way in which the wiring works by discarding the things they don't need to compute what they want to compute. So it's like culling the synapses. It does not freeze or etch the learning into the brain, but it optimizes the energy usage of the brain. Another issue is that
[35:52] patterns of activation are not completely stable in the brain. In the cortex, if you look, you find that they might have moved by the next day, or even rotated a little bit, which is also difficult to do with synapses. You cannot read out the weights and copy them somewhere else in an easy, straightforward fashion. And another issue is defragmentation. If you learn, for instance, your body map into a brain area,
[36:14] and then somebody changes your body map, because you have an accident and lose a finger, or somebody gives you an artificial limb and you start to integrate it into your body map: how do you shift all the representations around? How do you make space for something else and move it? Or also, initially, when you set up your maps via Hebbian learning, how do you make sure that the neighborhoods are always correct and you don't need to realign anything? And I guess you do need some kind of realignment. And all these things seem to be possible when you switch to a different paradigm.
[36:45] And so if you take this RNA-based theory seriously and go down this rabbit hole, what you get is that the neurons are not learning a local function over their neighbors, but they are learning how to respond to the shape of an incoming activation front, a spatiotemporal pattern in their neighborhood. And they are densely enough connected that the neighborhood is just a space around them.
[37:09] And in this space, they basically interpret this according to a certain topology, to say: this is maybe a convolution that gives me two-and-a-half D, or it gives me 2D or 1D, or whatever type of function it is that they want to compute. And they learn how to fire in response to those patterns, and thereby modulate the patterns when they're passed on. So the neurons act as something like a self-modulating ether through which wavefronts propagate
[37:38] that perform the computations. And they store the responses to the distributions of incoming signals, possibly in RNA. So you have little mixtapes, little tape fragments, that they store in the soma
[37:53] and that they can make more of very cheaply and easily. If they are successful mixtapes, if they're useful computational primitives that they discovered, they can distribute them to other neurons throughout the entire cortex. So neurons of the same type will gain the knowledge to apply the same computational primitives. Now, I don't know if the brain is doing that, if the human brain is using these principles, or if it's using them a lot, and how important this is and how many other mechanisms exist.
[38:20] But it's a mechanism that we haven't, to my knowledge, tried very much in AI and computer science. And it would work. There is a very close analog: the neural cellular automaton. So basically, instead of learning weight changes between adjacent neurons, what you learn is a global function that tells neurons how to respond to patterns in their neighborhood.
[38:47] And this function is the same for every point in your matrix. And you can learn arbitrary functions in this way. And what's nice about it is that you only need to learn computational primitives once. Our current neural networks need to learn the same linear algebra over and over again in many different corners of the network, because you need vector algebra for many kinds of operations that we perform, for instance operations in space, where we shift things around or rotate them.
[39:18] And if they could exchange these useful operations with each other, and just apply an operator whenever the environment dictates that it would be a good idea to apply it right now in this context, that could speed up learning; that could make training much more sample-efficient. And so it's something super interesting to try. And this is one of the rabbit holes I recently fell down, where I changed my thinking based on some experiment from neuroscience
[39:46] that doesn't have a very big impact on the mainstream of neuroscience, but that I found reflected in Michael's work with planaria.
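The neural cellular automaton Joscha describes, one shared rule applied at every point of a lattice, can be sketched minimally. This is a toy with a fixed random rule standing in for a learned one; all shapes and parameters here are illustrative assumptions, not any real model.

```python
import numpy as np

# Minimal neural-cellular-automaton-style update: every cell applies the
# SAME small rule (here a fixed random weight vector, standing in for a
# learned function) to its 3x3 neighborhood. The rule is global and
# stored once; only the state is local.

rng = np.random.default_rng(0)
W = rng.normal(size=9)          # shared rule: one weight per neighbor cell
b = 0.1

def step(state):
    """One synchronous update of the whole grid with the shared rule."""
    h, w = state.shape
    padded = np.pad(state, 1)   # zero boundary conditions
    new = np.empty_like(state)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i+3, j:j+3].ravel()
            new[i, j] = np.tanh(patch @ W + b)   # same W at every point
    return new

state = rng.normal(size=(16, 16))
for _ in range(5):
    state = step(state)
print(state.shape)  # grid shape preserved; only one 9-weight rule was stored
```

In a trained NCA, `W` would be a small network optimized by gradient descent, but the structural point is already visible: the computational primitive exists once and is applied everywhere, which is exactly the weight sharing that is hard to picture in purely synaptic terms.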
[39:56] Yeah, that's super interesting stuff. I can sprinkle a few details onto this. So the original finding in planaria was by a guy named James McConnell at Michigan, actually in the US, and that was in the early 60s. And then there was some really interesting Russian work that picked it up after that. We reproduced some of it recently using modern quantitative automation and things like this.
[40:25] One of the really cool aspects of this, and there's a whole community, by the way, with people like Randy Gallistel and Sam Gershman and, of course, David Glanzman. That story of memory residing in the precise details of the synapses is really starting to crack, actually, for a number of reasons. One of the cool things that was done in the Russian work, and it was also done later on by
[40:50] Doug Blackiston who's in my lab now as a staff scientist and other people is this.
[40:55] There are certain animals that go through larval stages, right? So what the Russians were using were beetle larvae, and Doug and other people used moths and butterflies. What happens is you train the larva. So here you've got a butterfly... a caterpillar. This caterpillar is a soft-bodied robot that lives in a two-dimensional world, eats leaves and so on, right? And so you train this thing for a particular task.
[41:22] Well, during metamorphosis it needs to become a moth or butterfly, which lives in a three-dimensional world, plus it's a hard-bodied creature, so the controller is completely different for running a caterpillar versus a butterfly. So during that process, what happens is the brain is basically dissolved. Most of the connections are broken, most of the cells are gone, they die, and a brand-new brain self-assembles.
[41:46] And you can ask all sorts of interesting philosophical questions of what it's like to be a creature whose brain is undergoing this massive change. But the information remains. And so one can ask, okay, this is, you know, certainly for computer science, it's amazing to have a memory medium that can survive this radical remodeling and reconstruction. And there's the RNA story, but also,
[42:11] You had mentioned, you know, does this work for mammals? So there was a guy in the 70s and 80s named George Ungar, who has tons of papers; he reproduced it in rats. His assay was fear of the dark, and he actually, by establishing this assay and then
[42:29] fractionating their brains and extracting this activity... now, he thought it was a peptide, not RNA, so he ended up with a thing called scotophobin, which turns out to be, I think, an eight-mer peptide or something. And the claim was that you can synthesize this scotophobin and transfer it from brain to brain.
[42:50] And that's what he thought it was. And then of course, I think David Glanzman favors RNA again. But yeah, I agree with you. I think that's a super important story of how it is that this kind of information can survive just massive remodeling of the cognitive substrate. What we did in planaria... you know, they have a true centralized brain. They have all the same neurotransmitters that we have. They're not a simple organism.
[43:19] What we did was repeat McConnell's first experiments, which is to train them on something. We trained them to recognize a laser-etched, kind of bumpy, pattern on the bottom of the dish, and to recognize that that's where their food was going to be found. So they made this association between this pattern and getting food. And then we cut their heads off, and we took the tails, and the tails sit there for 10 days doing nothing. And then eventually they grow a new brain.
[43:43] And what happens is that information is then imprinted onto the new brain and then you can recover behavioral evidence that they remember the information.
[43:52] So that's pretty cool too because it suggests that we don't know if the information is everywhere or if it's in other places in the peripheral nervous system or in the nerve core that we don't know where it is yet. But it's clear that it can move around, that the information can move around in the body because it can be in the posterior half and then imprinted onto the brain which actually drives all the behaviors.
[44:15] So thinking about that, I totally agree that this is a really important rabbit hole to go down. But there's an interesting puzzle here, which is this. It's one thing to remember things that are evolutionarily adaptive, like fear of the dark and things like this. But imagine, and this hasn't really been done well, but imagine for a moment
[44:39] if we could train them on something that is completely novel. Let's say we train them: three yellow light flashes means take a step to your left, otherwise you get shocked, something like that. And let's say they learn to do it. We haven't done this yet. But let's say this could work. One of the big puzzles is going to be: when you extract whatever it is that you extract, let's say it's RNA or protein, whatever it is, and you stick it into the brain of a recipient host,
[45:05] in order for that memory to transfer, one of the things that the host has to be able to do is decode it.
[45:10] In order to decode it, it's one thing if we share the same codebook, and by evolution, we could have the same codebook for things that come up all the time, like fear of the dark, fear, things like that. But how would the recipient look at a weird, some kind of crazy hairpin RNA structure and analyze it and be like, oh, yes, that's three light flashes, and then step to the left, I see. So you would need to be able to interpret somehow this structure and convert it back to the behavior.
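The decoding problem Michael raises can be put in a few lines of code: a stored token is only a memory relative to a decoder. Everything in this sketch is invented for illustration (the string is not real RNA, and both codebooks are made up).

```python
# Toy illustration of the shared-codebook problem: an engram is only
# meaningful relative to a decoder that maps it back to behavior.

ENGRAM = "GCUUAGC"   # stands in for whatever molecule carries the memory

# Donor and recipient can agree, via evolution, on codebooks for common,
# evolutionarily ancient associations...
shared_codebook = {"GCUUAGC": "avoid the dark"}

# ...but a newly invented, arbitrary association has no entry on the
# recipient's side, so the molecule alone cannot be interpreted.
novel_codebook = {}

print(shared_codebook.get(ENGRAM, "<uninterpretable>"))  # decodes
print(novel_codebook.get(ENGRAM, "<uninterpretable>"))   # does not
```

The experiment Levin proposes is, in these terms, a test of whether a transferred engram can be read out when the recipient demonstrably lacks any pre-existing entry for it.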
[45:40] And for behaviors that are truly arbitrary, that might be, I don't know actually how that would work. And so I think the frontier of this field is going to be to have a really convincing demonstration of a transfer of a memory that doesn't have a plausible pre-existing shared evolutionary decoding, because otherwise you have a real puzzle as to how the decoding is going to work. And then even without the transfer, you can also think of it a different way,
[46:10] Every memory is like a message, basically a transplanted message from your past self to your future self, meaning that you still have to decode your memories, whatever your memories are. In an important sense, those engrams, you have to decode them somehow. So that whole issue of encoding and decoding, whatever the substrate of memory is, is maybe one of the most important questions there are. One of the ways we can think about these engrams: I think that there are
[46:37] Priors that condition what kinds of features are being spawned in which context. For instance, when we see a new scene, the way that perception seems to be working is that we spawn lots and lots of feature controllers that then organize into objects that are controlled at the level of the scene. And this is basically a game engine that is forming in our brain.
[47:02] that is creating a population of interacting objects that are tuned to track our perceptual data at the lowest levels. So all the patterns that we get from our retina and so on are samples, noisy samples that are difficult to interpret, but we are matching them into these hierarchies of features that are translated into objects, which assign every feature to exactly one object, and every pixel, so to speak, to exactly one object,
[47:30] except in the case of transparency and use this to interpret the scene that is happening in front of us. And when we are in the dark, what happens is that we spawn lots of object controllers without being able to disprove them, because there is no data that forces us to reject them. And if you have a vivid imagination, especially as a child, you will fill this darkness automatically with lots of objects, many of which will be scary.
[47:55] And so I think that a lot of the fear of the dark doesn't need much encoding in our brain. It is just an artifact of the fact that there are scary things in the world, which we learn to represent at an early age, and we cannot disprove them in the dark; they will just spawn.
[48:12] I remember this vividly as a child that whenever I had to go into the dark basement to get some food in our house in the countryside, that this darkness automatically filled with all sorts of shapes and things and possibilities. And it took me later to learn that you need to be much more afraid of the ghosts that can hide in the light. So what would be the implications of if you were able to transfer memory for something that's
not trivial, so nothing like an archetype of fear of the dark, between mammals, like rats?
[48:50] And when I say transfer memory, I mean in this way that you blend up the brain. And also, can you explain what's meant by... I think I understand what it means to blend the brain of a planaria, but I don't think that's the same process that's going on in rats. Maybe it is. Well, Ungar did exactly the same thing. He would train rats for particular tasks. He would extract the brain, literally liquefy it to extract the chemical contents. He would then either inject the whole extract or a filtered extract, where you would divide it up, you'd fractionate it: here are the RNAs, here are the proteins,
[49:20] here are the other things. And then he would inject that liquid directly into the brains of recipient rats. So, you know, when you do that, you lose spatial structure on the input, because you just blended the brain; whatever spatial structure there was, you just destroyed it. Also on the recipient side:
[49:39] you just inject it. You're not finding the particular place where it's going to stick. You just inject this thing right in the middle of the brain; who knows where the fluid goes. There's no spatial specificity there whatsoever. If that works, what you're counting on is the ability of
[49:59] the brain to take
[50:21] You're basically asking the cells to take it up almost as a primitive animal would with taste or touch, right? That's kind of distributed all over the body, and you can sort of pick it up anywhere, and then you have to process this information. So you've got those issues right off the bat, right? You've destroyed the incoming spatial structure, you can't really count on where it's going to land in the brain, and then the third thing, as you just mentioned, is the idea that
[50:46] especially if we start with information
[50:57] that's kind of invented, you know, the three light flashes means move to your left. I mean, there's never been an evolutionary reason to have that encoded. As you just said, having a fear of the dark is absolutely a natural kind of thing that you can expect, and there are many other things like that. But something as contrived as three light flashes meaning you move to your left, there's no reason to think that we have a built-in way to recognize that. So when you as a recipient brain are handed this weird
[51:26] molecule with a particular structure, or a set of molecules, being able to analyze that, having the cells in your brain, or other parts of the body actually, analyze that and recover that original information, would be extremely puzzling. I actually don't know how that would work. And I'm a big fan of unlikely-sounding experiments that would have implications if they worked. So this is something that I think should absolutely be done. And at some point we'll do it, but we haven't done it yet.
[51:56] So how far did this research go? What is the complexity of things that could be transmitted via this route? I don't remember everything that he did. For the vast majority, he did not go
[52:14] far to test all the complexities. What he tried to do, because as you can imagine he faced incredible opposition, everybody sort of wanted to critique this thing, so he spent all of his time on it: he picked one simple assay, which was the fear-of-the-dark thing, and then he just bashed on it for 20 years, to finally try to crack it into the paradigm. He did not, as far as I know, do lots of different assays to try and make it more complex.
[52:43] I think it's very ripe for investigation. This is the kind of... Did anyone else build upon his work? Not that I know. I mean, David Glanzman is the best modern person who works on this, right? So he does Aplysia, and he does RNA. So he favors RNA. There's a little bit of work from Oded Rechavi in Israel with C. elegans. He's kind of looking into that.
[53:08] There's related work that has to do with cryogenics, which is this idea that if memories are a particular kind of dynamic electrical state, then some sort of cryogenic freezing is probably going to disrupt that. Whereas if it's a stable molecule, then it should survive. So again, I think there are people interested in that aspect of it, but I'm not sure they've done anything with it.
[53:36] There's also Gaurav Venkataraman. I think he is at Berkeley. He told me that he has been working on this for several years, but he said it's sociologically tricky. And that to me is fascinating, that we should care about that. What does he mean by that? Why do you care about what stupid people think?
[53:59] If this possibility exists that this works, the upside is so big that it's criminal to not research this. I think it's a disaster that you can read introductory textbooks on neuroscience and never ever hear about any of these experiments. Everybody who gets the introductory stuff on neuroscience only knows about information stored in the connectome. And this leads to, for instance, the Blue Brain Project,
[54:27] If RNA-based memory transfer is a thing, then this entire project is doomed, because you cannot get the story out of just recording the connectome. Most of the research right now is focused on reconstructing the connectome as if it were circuitry, and hoping that we can get at the functionality of the information processing, and deduce the specificity of the particular brain, what it has learned, from the connections between neurons.
[54:54] What if it turns out this doesn't matter? What if you just need connections that are dense enough, so basically a stochastic lattice that is somewhat randomly wired, and what matters is what the neurons are doing with the information that they're getting through this ether, through this lattice? This changes the entire way in which we need to look at things.
[55:12] And if this possibility exists, even if this possibility is just 1%, but there is some experiment that points in this direction, it is ridiculous to not pursue this with high pressure and focus on it and support research that goes in this direction. Basically, what's useful is not so much answering questions in science, it's discovering questions, it's discovering new uncertainty. Reducing the uncertainty is much easier than discovering new areas where you thought that you were certain
[55:42] but that allow you to get new insights. And it seems to me that a lot of neuroscience is stuck,
[55:48] that it does not produce results that seem to accumulate in an obvious way towards a theory of how the brain processes information. So the neuroscientists don't deliver input to the AI researchers, and the transformer is not the result of reading a lot of neuroscience. It's really mostly the result of people thinking about the statistics of data processing.
[56:14] And it would be great if we would focus on ideas that are promising and new and that have the power to shake existing paradigms. Yeah, this is, you know, this is so important. And it's not just neuroscience; in developmental biology, we have exactly the same thing. And I'll just give you two very simple examples of it, and I tell the students when I give talks, I say, isn't it amazing that in your whole...
[56:42] So here's a couple of examples. One example is that
[56:55] of trophic memory in deer. So there are species of deer that regenerate every year, you know, the whole... so they make this antler rack on their heads, the whole thing falls off, and then it regrows the next year. So these two guys, the Bubeniks, a father-and-son team, did these experiments for 40 years, and I actually have all these antlers in my lab now, because when the younger one retired, I asked him and he sent me all these antlers. The idea is this:
[57:22] What you can do is you take a knife and somewhere in this branch structure you make a wound and the bone will heal and you get a little callus and that's it for that year. Then the whole thing drops off and then next year it starts to grow and it will make an ectopic tine, an ectopic branch at the point where you injured it last year. This goes on for five or six years and then eventually it goes away and you get a normal rack again.
[57:50] The amazing thing about it is that the standard models for patterning for morphogenesis are these kind of gene regulatory networks and genetic kinds of biochemical gradients and so on. If you try to come up with a model for this, so for encoding
[58:11] an arbitrary point within a branch structure, which your cells at the scalp have to remember for months after the whole thing has dropped off, and then not only remember it but implement it, so that when the bone starts to grow, something says: oh yes, that's where you start another tine growing, exactly here. Trying to make a model of this using the standard tools of the field is
[58:33] just incredibly difficult. And there are other examples of this, but this kind of non-genetic memory is just very difficult to explain with standard models. The other thing, which I think is an even bigger scandal, is the whole situation with planaria.
[58:50] Planaria, some species of planaria, the way they reproduce is they tear themselves in half, and each half regenerates the missing piece. And now you've got two; that's how they reproduce. So if you're going to do that, what you end up avoiding is Weismann's barrier, this idea that when we get mutations in our body, our children don't inherit those mutations, right? So this means that any mutation that doesn't kill the stem cell in the body gets amplified as that cell contributes to regrowing the worm. So as a result of this, for 400 million years, these planaria have accumulated mutations.
[59:19] Their genomes are an incredible mess. Their cells are basically mixoploid, meaning they're like a tumor. Every cell has a different number of chromosomes, potentially. They just look horrible. As an end result, you've got an animal that is immortal, incredibly good at regenerating with 100% fidelity.
[59:38] And very resistant to cancer. Now, all of this is the exact opposite of the message you get from a typical course through biology, which says: what is the genome for? The genome is for setting your body structure; if you mess with the genome, that information goes away, you get aging, you get cancer.
[59:57] Right. Why does the animal with the worst genome have the best anatomical fidelity? And I think we actually, as of a few months ago, have some insight into this, but it's been bugging me for years. And this is the kind of thing that nobody ever talks about, because it goes against the general assumption of what genomes actually do and what they're for. And this complete lack of
[60:21] correlation, in fact an anti-correlation, between genome quality and the incredible ability of this animal to have a healthy anatomy... Yeah, what is that insight that you mentioned you acquired a few months ago? It's preliminary. So, okay, in the name of, you know, throwing out kind of new, unproven ideas, right, this is just my conjecture. We've done some computational modeling of it, which initially... this was with
[60:50] a very clever student I work with named Lakshwin, who did some models with me, and I initially thought it was a bug, and then I realized that no, actually, this is the feature. The idea is this. Imagine... so, we've been working for a long time on a concept of competency among embryonic parts, and what this means is basically the idea that there are homeostatic feedback loops
[61:18] among various cells and tissues and organs that attempt to reach specific outcomes in anatomical morphospace despite various perturbations.
[61:28] The idea is that if you have a tadpole and you do something to it, whether by a mutation or by a drug or something, you do something to it where the eye is a little off kilter or the mouth is a little off. All of these organs pretty much know where they're supposed to be. They will try to minimize distance from other landmarks and they will remodel and eventually you get a normal frog so that they will sort of
[61:50] recover the correct anatomy despite starting off in the wrong position, or even things like changes in the number of cells or the size of cells; they're really good at getting their job done despite various changes, right? So okay, they have these competencies to optimize specific things, like their position and structure and things like that. So that's competency. Now, here's the interesting thing.
[62:16] Imagine that you have a species that has some degree of that competency, and an individual of that species comes up for selection. Fitness is high; it looks pretty good. But here's the problem: selection doesn't know whether the fitness is high because its genome was amazing, or because the genome was actually so-so but the competency sort of made up for it, and now everything got back to where it needed to go.
[62:43] So what the competency apparently does is shield information from evolution about the actual genome. It makes it harder to pick the best genomes because your individuals that perform well don't necessarily have the best genomes. What they do have is competency. So what happens in our simulations is that if you start off with even a little bit of that competency, evolution loses some power in selecting the best genomes
[63:12] but where all the work tends to happen is increasing the competency. So then the competency goes up. So the cells are even better at, and the tissues are even better at getting the job done despite the bad genome. That makes it even worse. That makes it even harder for evolution to see the best genomes, which relieves some of the pressure on having a good genome, but it basically puts all the pressure on being really competent. So basically what happens is that
[63:40] The genetic fitness basically levels out at a really suboptimal level and in fact the pressure is off of it so it's tolerant to all kinds of craziness.
[63:51] But the competency and the mechanisms of competency get pushed up really high. So in many animals, and there are other factors that sort of push against this ratchet, but it becomes a positive feedback loop. It becomes a ratchet for optimal performance despite a suboptimal genome. And so in some animals, this sort of evens out at a particular point. But I think what happened in planaria is that this whole process ran away to its ultimate conclusion. The ultimate conclusion is
[64:21] The competency algorithm became so good that basically whatever the genome is, it's really good at creating and maintaining a proper worm because it is already being evolved in the presence of a genome whose quality we cannot control. So in computer science speak,
[64:37] It's kind of like... Steve Frank put me on to this analogy. It's kind of like what happens in RAID arrays. When you have a nice RAID array, where the software makes sure that you don't lose any data, the pressure is off to have really, really high-quality media, and so now you can tolerate media with lots of mistakes, because the software takes care of it in the RAID;
[64:59] the architecture takes care of it. So basically what happens is you've got this animal where that runaway feedback loop went so far that the algorithm is amazing, and it's been evolved specifically for the ability to do what it needs to do even though the hardware is kind of crap. And it's incredibly tolerant. So this has a number of implications that, to my knowledge, have never been explained before. For example,
[65:27] In every other kind of animal, you can call a stock center and you can get mutants. So you can get mice with kinky tails, you can get flies with red eyes, you can get chickens without toes, and humans come with various albinisms and things. There are always mutants that you can get. Planaria,
[65:50] There are no abnormal lines of planaria anywhere; the only exception is our two-headed line, and that one's not genetic, that one's bioelectric. So isn't it amazing that, despite 120 years of experiments with planaria, nobody has isolated
[66:10] a line of planaria that is anything other than a perfect planarian? And I think this is why: I think it's because they have actually been selected for being able to do what they need to do despite the fact that the hardware is just very junky. So that's my current take on it. And really,
[66:30] It puts more emphasis on the algorithm and the decision making among that cellular collective of what are we going to build and what's the algorithm for making sure that we're all working to build the correct thing.
[66:44] If you translate this idea into computer science, a way to look at it is imagine that you find some computers that have hard disks that are very, very noisy and where the hard disk basically makes lots and lots of mistakes in encoding things and bits often flip and so on.
[67:02] and you will find that these computers still work and they work in pretty much the same way as the other computers that you have. And there is an orthodox sect of computer scientists that thinks it is necessary that every bit on the hard disk is completely reliable or reliable to such a degree that you only have a mistake once every 100 trillion copies
[67:25] And you can have an error correction code running on the hard disk at the low level that corrects this. And after some point, it doesn't become efficient anymore. So you need to have reliable hard disks to be able to have computers that work like this. But how would these other computers work? And it basically means that you create a virtual structure on top of the noisy structure that is correcting for whatever degree of uncertainty you have or the degree of randomness that gets injected into your substrate.
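Bach's picture of a reliable virtual layer built on top of an unreliable medium can be sketched as triple-redundant storage with majority-vote reads, a toy model in the spirit of RAID rather than any specific RAID level (the 10% noise figure is invented for illustration):

```python
import random

def write_bit(bit, copies=3):
    """Store a bit redundantly on a 'noisy medium' (each copy may flip)."""
    NOISE = 0.1  # per-copy flip probability of the unreliable substrate
    return [bit ^ (random.random() < NOISE) for _ in range(copies)]

def read_bit(stored):
    """Majority vote: the virtual layer corrects substrate errors."""
    return int(sum(stored) > len(stored) / 2)

random.seed(0)
data = [random.randint(0, 1) for _ in range(10_000)]
recovered = [read_bit(write_bit(b)) for b in data]
errors = sum(a != b for a, b in zip(data, recovered))
# A triple-redundant read fails only when 2+ of 3 copies flip,
# roughly 3 * 0.1**2 ~ 3% instead of the raw 10%.
print(errors / len(data))
```

The point of the sketch is Levin's: once the virtual layer exists, selection pressure on the quality of the raw medium relaxes.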
[67:53] Dave Ackley has a very nice metaphor for this. You know him, maybe? Yeah, I know him. Yeah, he's, I think, a beautiful artist who explores complexity by tinkering with computational models. I really find his work very inspiring. And he has this idea of best-effort computing. So in his view, our own nervous system is a best-effort computer. It's one that does not rely on the neurons around you working perfectly.
[68:20] But they make an effort to be better than random. And then you stack the probabilities empirically, by having a system that evolves to measure, in effect, the unreliability of its components, and then stacks the probabilities until you get the system to be deterministic enough to do what you want to do with it. And so,
[68:45] If you have a system that is, as in the planaria, inherently very noisy, where the genome is an unreliable witness of what should be done in the body, you just need to interpret it in a way that stacks the probabilities, that is evaluating things with much more error tolerance.
[69:02] And maybe this is always the case. Maybe there is a continuum. Maybe not. It's also possible that there is some kind of phase shift where you switch from organisms with reliable genomes to organisms with noisy genomes. And you basically use a completely different way to construct the organism as a result. But it's a very interesting hypothesis then to see if this is a radical thing or a gradual thing that happens in all organisms to some degree.
[69:27] What I also like about this description that you give of how the organism emerges: it maps in some sense also onto how perception works in our own mind. At the moment, machine learning is mostly focused on recognizing images or individual frames, and you feed in information frame by frame, and the information is actually disconnected. A system like DALL-E 2 is trained by giving it several hundred million images.
[69:57] And they are disconnected. They are not adjacent images in the space of images. And we probably could not learn, given 600 million disconnected images in a dark room and only looking at these, to infer the structure of the world from them. Whereas DALL-E can, which gives testament to the power of the statistical methods and hardware that we have, which far surpasses, I think, the combined power and reliability of brains, which probably would not be able to integrate so much information over such a big distance.
[70:24] For us, the world is learnable because adjacent frames are correlated. Basically, information gets preserved in the world through time, and we only need to learn the way in which the information gets transmogrified. And these transmogrifications of information mean that we have a dynamic world in which the static image is an exception. The identity function is a special case of how the universe changes. And we mostly learn change.
[70:49] I just got visited by my cat. And my cat has difficulty recognizing static objects compared to moving objects; it's much, much easier for it to see a moving ball than a ball that is lying still. And that's because it's much easier to segment it out of the environment when it moves. So the task of learning in a moving environment, a dynamic environment, is much easier, because it imposes constraints on the world.
[71:11] And so how do we represent a moving world compared to a static world? The semantics of features changes. An object is basically composed of features that can be objects themselves. And the scene is a decomposition of all the features that we see into a complete set of objects that explain the entirety of the scene.
[71:32] The interaction between them and causality is the interaction between objects, right? And in a static image, these objects don't do anything. They don't interact with each other. They just stand in some kind of relationship that you need to infer, which is super difficult because you only have this static snapshot.
[71:48] And so the features are classifiers that tell you whether a feature is a hand or a foot or a pen or a sun or a flashlight or whatever, and how they relate to the larger scene in which, again, you have a static relationship in which you need to classify the object based on the features that contribute to them.
[72:06] And you need to find some kind of description where you interpret features, which are usually ambiguous and could be many different things depending on the context in which you interpret them, into one optimal global configuration, right? But if the scene is moving, this changes a little bit. What happens now is that the features become operators. They're no longer just classifiers; they tell you how your internal state needs to change, how your world needs to change, how your simulation of the universe in your mind needs to change to track the sensory patterns.
[72:34] So a feature now is a change operator, a transformation. And the feature is in some sense a controller that tells you how the bits are moving in your local model of the universe. And they organize in a hierarchy of controllers.
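The shift Bach describes, from features as classifiers of a static scene to features as operators that update a running world model, can be caricatured in a few lines (the names and the toy state here are illustrative, not any real vision API):

```python
# A classifier maps an observation to a static label.
def classify(patch):
    return "ball" if patch["round"] else "background"

# An operator maps a world-model state to the next state: it encodes change.
def moving_right(state, dx=1.0):
    return {**state, "x": state["x"] + dx}

# Static scene: infer a label once.
label = classify({"round": True})          # "ball"

# Dynamic scene: the recognized feature *updates* the internal simulation,
# and such operators can stack into a hierarchy of controllers.
state = {"x": 0.0}
for _ in range(3):
    state = moving_right(state)            # track the ball across frames
print(label, state["x"])                   # ball 3.0
```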
[72:51] That's incredibly interesting because
[73:17] You know, as soon as you started saying that, I was starting to think about the virtualization that this enables, right? So the earlier part of what you were saying, the virtualization of
[73:30] the information, that allows you to deal with unreliable hardware and everything. The bioelectric circuits that we deal with are a great candidate for that, because actually we see exactly that. We see a bioelectric pattern that is very resistant to changes in the details and makes sure that everybody does the right thing under a wide range of
[73:51] different defects and so on. But even more than that, the other thing that you were just emphasizing is the fact that we learn the delta, right, and that we're looking for change.
[74:02] Very interesting. If you pivot the whole thing from the temporal domain to the spatial domain, so in development, when we look at these bioelectric patterns, now these patterns are across space, not across time. So unlike in neuroscience where everything is kind of in the temporal domain for neurons, these things, these are static voltage patterns across tissue, right, across the whole thing.
[74:26] So for the longest time, we asked this question, how are these read out? How do cells actually read these? Because one possibility early, this was a very early hypothesis 20 years ago, was that maybe the local voltage tells every cell what to be. So it's like a paint by numbers kind of thing. And each voltage value corresponds to some kind of outcome. That turned out to be false. What we did find is that, and we have computational models of how this works now,
[74:56] What is read out is the delta, the difference between regions. It doesn't care, nobody cares about what the absolute voltage is, what is read out in terms of outcomes for downstream cell behavior, gene expression, all that. What is actually read out is the voltage difference between two adjacent domains.
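That readout rule, outcomes keyed to voltage differences between adjacent domains rather than absolute values, can be sketched as follows; the voltage numbers are invented, and the point is only that a uniform shift of the whole tissue is invisible to a delta-based readout:

```python
def delta_readout(voltages):
    """Read out only the differences between adjacent spatial domains (mV)."""
    return [b - a for a, b in zip(voltages, voltages[1:])]

tissue      = [-50, -20, -20, -60]        # resting potential per domain
depolarized = [v + 15 for v in tissue]    # shift every domain uniformly

# Every absolute voltage changed, but no adjacent delta did,
# so downstream behavior keyed to deltas sees the same pattern.
print(delta_readout(tissue))              # [30, 0, -40]
print(delta_readout(depolarized))         # [30, 0, -40]
```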
[75:15] So that is exactly what it's doing, just in the spatial domain: it only keys off of the delta. And what is learned from that is exactly, just as you were saying, it modifies the controller for what's downstream of that. And there may be multiple ones that are sort of moving around.
[75:34] It's a very compelling picture, actually, and a way to look at some of the simulations that we've been doing about how the bioelectric data are interpreted by the rest of the cells. You know, it's very interesting. Could we take a couple-minute break? Yeah, sure. Okay, I got a new coffee. All right. Speaking of coffee, a brief note from our sponsor.
[75:56] Coffee helps me work, it helps me fast from carbs, it's become one of the best parts of my day consistently. That's why I'm delighted that we're collaborating with Trade Coffee. They partner with top independent roasters to freshly roast and send the finest coffee in the country directly to your home on your preferred schedule. This matters to me as I work from home.
[76:15] Their team of experts does all the work, testing hundreds of disparate coffees to land on a final curated collection of 450 exceptional coffees. I chose these three, and the team at Trade Coffee worked to create a special lineup for Theories of Everything, for the TOE audience, based on some questions they asked me, such as how much caffeine do I enjoy, what's the bitterness ratio, etc. You can get that lineup, or if that's not, let's say, your cup of coffee,
[76:41] Then you can take your own quiz on their website to find a set that matches your specific profile. If you'd like to support small businesses and brew the best cup of coffee you've ever made at home, then it's time to try Trade Coffee. Right now, Trade is offering our listeners $30 off your first order plus free shipping at www.drinktrade.com slash everything. That's www.drinktrade.com slash everything for $30 off.
[77:08] So Professor Levin used the word competence earlier, and I'd like you to define that. Yeah. In order to define it, I want to put out two concepts for this. One idea is that, to me, and this goes back to what we were talking about before as the engineering stance on things, I
[77:30] think that useful cognitive claims such as something, you know, when you say this system has whatever or it can whatever, right, as far as various types of cognitive capacities, I think those kinds of claims are really engineering claims.
[77:45] That is, when you tell me that something is competent at a particular level, maybe, right? So you can think about the Wiener and Rosenblueth scale of cognition that goes from simple passive materials, and then reflexes, and then all the way up to kind of second-order metacognition and all that. When you tell me that something is on that ladder and where it is, what you're really telling me is,
[78:09] if I want to predict its behavior, or I want to use it in an engineering context, or I want to interact with it or relate to it in some way, this is what I can expect. Right? So that's what you're really telling me. So all of these terms,
[78:21] what they really are, are engineering protocols. So if you tell me that something has the capacity to do associative learning or whatever, what you're telling me is that, hey, you can do something more with this than you could with a mechanical clock. You can provide certain types of stimuli or experiences and you can expect it to do this or that afterwards.
[78:43] Or if you tell me that something is a homeostat, that means that, hey, I can count on it to keep some variable at a particular range without having to be there myself to control it all the way. It has a certain autonomy. Now, how much, right? And if you tell me that something is really intelligent and it can do X, Y, Z, then I know that, okay, you're telling me that it has even more autonomous behavior in certain contexts. So all of these terms, to me, what they really are, they're not, and that has an important implication. The implication is that they're
[79:13] observer dependent, that you've picked some kind of problem space, you've picked some kind of perspective, and from that problem space and that perspective, you're telling me that given certain goal states, this system has that much competency to pursue those goal states. And different observers can have different views on this for any given system. So for example,
[79:35] somebody might look at a brain, let's say a human brain, and say, well, I'm pretty sure this is a paperweight, so it's really pretty much just competent at going down gravitational gradients, so all it can do is hold down paper, that's it. And somebody else will look at it and say, you missed the whole point. This thing has competencies in behavioral space and linguistic space, right? So these are all empirically testable engineering claims about what you can expect the system to do. So when I say competency, what I mean is,
[80:03] we specify a space, a problem space, and at the time when we were talking about this, the problem space I was talking about was the anatomical morphospace. That was the space we were talking about. So the space of possible anatomical configurations, and specifically navigating that morphospace. So you start off
[80:21] as an egg, or you start off as a damaged limb or whatever, and you navigate that morphospace into the correct structure. So when I say competency, I mean you have the ability to deploy certain kinds of tricks to navigate that morphospace with some level of
[80:38] performance that I can count on. And so the competency might be really low or it might be really high, and I would have to make specific claims about what I mean. Here's an example of a competency, and there are many. You know, if you just think about the behavioral science of navigation, there are many competencies you can think about: does it know ahead of time where it's going, does it have a memory of where it's been, or is a very simple sort of reflex arc all it has? Or,
[81:04] Here's one example of a pretty cool competency that a lot of biological systems have. If we take some cells that are in the tail of a tadpole and we
[81:17] modify their ion channels such that they now acquire the goal of navigating to an eye fate in this morphospace, meaning that they're going to make an eye. These things, in fact, will create an eye, and they'll make an eye in the tail, on the gut, wherever you want.
[81:36] But one of the amazing aspects is if I only modify a few cells, not enough to make an actual eye, just a handful of cells. And we've done this and you can see this work.
[81:50] One of the competencies they have is to recruit local neighbors, that were themselves not in any way manipulated, to help them achieve that goal. It's a little bit like in an ant colony, right? In ants and termites there's this idea of recruitment, where individuals can recruit others. And talk about a flexible collective intelligence: this is it. You've re-specified the goal for that set of cells, but one of the things that they do without us
[82:16] telling them how to do it or having to micromanage it, they already have the competency to recruit as many cells as they need to get the job done. So that's a very nice for an engineer, that's a very nice competency because it means that I don't need to worry about taking care of getting exactly the right number of cells.
[82:33] If I'm a little bit over, that's fine. If I'm way under, also fine. The system has that competency of recruiting other cells to get the job done. So that's what I meant. So to me, to make any kind of a cognitive claim, you have to specify the problem space. You have to specify the goal towards which it's expressing competencies. And then you can make a claim about, well, how competent is it to get to that goal? And somebody, I wish I could remember who it was, but somebody made this really nice analogy about kind of the ends of that spectrum.
[83:03] They said two magnets try to get together and Romeo and Juliet try to get together. But the degree of flexible problem solving that you can expect out of those two systems is incredibly different. And within that range, there are all kinds of in-between systems that may be better or worse and may deploy different kinds of strategies. Can they avoid local optima? Can they have a memory of where they've been? Can they look further than their local environment? A million different things.
[83:28] So the way in which you use the word competency could be treated as the capacity of a system for adaptive control.
[83:52] One issue that I have with the notion of goals and goal-directedness is that sometimes you only have a tendency in a system to go in a certain direction. And so it's directed, but the goal is something that can be emergent. Sometimes it's not; sometimes there is an explicit representation in the system of a discrete event, or a class of events, that it is associated with.
[84:15] This is
[84:35] tendency in our behavior. We could also say we have the goal of finding food, but that is a rationalization that is maybe stretching things sometimes. So sometimes a better distinction for me is going from a simple controller to an agent. And I try to, because we are very good at discovering agency in the world, what does it actually mean when we discover agency and when we discover our own agency and start to amplify it.
[85:03] by making models of who we are and how we deal with the world and with others and so on. The minimal definition of an agent that I found is: it's a controller for future states. The thermostat doesn't have a goal by itself, right? It just has a target value and a sensor that tells it its deviation from the target value, and when that exceeds a certain threshold, the heating is turned on. And if it goes below a certain threshold, the heating is turned off again, and this is it.
[85:30] So the thermostat is not an agent. It only reacts to the present frame. It's only a reactive system.
[85:37] Whereas an agent is proactive, which means that it's trying to not just minimize the current deviation from the target value, but the integral over a time span, the future deviation. So it builds an expectation about how an action is going to change this trajectory of the universe. And over that trajectory, it tries to figure out some measure of how big the compound target deviation is going to be.
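The distinction can be made concrete as two controllers over the same toy room: a thermostat that reacts to the present deviation, and an agent that rolls a model of the world forward and picks the action minimizing the summed future deviation. This is a minimal sketch with invented dynamics (a constant cold drift and a fixed heater output), not a claim about any real control architecture:

```python
def simulate(temp, action, steps, drift=-0.5, heat=0.8):
    """Roll a world model forward under a constant action; return the trajectory."""
    traj = []
    for _ in range(steps):
        temp += drift + (heat if action == "on" else 0.0)
        traj.append(temp)
    return traj

def reactive(temp, target=20.0):
    """Thermostat: looks only at the current deviation from the set point."""
    return "on" if temp < target else "off"

def proactive(temp, target=20.0, horizon=5):
    """Agent: picks the action minimizing the integral of future deviation
    over a counterfactual rollout of the world model."""
    def cost(action):
        return sum(abs(t - target) for t in simulate(temp, action, horizon))
    return min(["on", "off"], key=cost)

# At exactly the set point, the thermostat switches off even though the room
# is drifting colder; the proactive controller anticipates the drift.
print(reactive(20.0), proactive(20.0))    # off on
```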
[86:06] And so as a result, you get a branching universe, and some of the branches in this universe depend on actions that are available to you, and that translates into decisions that you can make that move you into more or less preferable world states. And suddenly you have a system with emergent beliefs, desires, and intentions. But to make that happen, to move from a controller to agency,
[86:30] an agent really just being a controller with an integrated set-point generator and the ability to control future states, that requires that you can make models that are counterfactual. Because the future universe doesn't exist right now, you need to create a counterfactual universe, a model of the future universe, maybe even a model of the past universe, that allows you to reason about possible future universes and so on. And to make these counterfactual causal models of the universe,
[86:59] you need to have a Turing machine. So without a computer, without something that is Turing complete, that insulates you from the causal structure of your substrate, that allows you to build representations regardless of what the universe says right now around you: you need to have that machine. And the simplest system in nature that has a Turing machine integrated is the cell. So it's very difficult to find a system in nature that is an agent that is not made from cells,
[87:30] as a result. Maybe there are other systems in nature that are able to compute things and make models, but I'm not aware of any. So the simplest one that I know that can do this reliably is the cell, or an arrangement of cells. And that can possess agency, which is an interesting thing that explains this coincidence that living things are agents and vice versa,
[87:55] that the agents that we discover are mostly living things, or robots that have computers built into them, or virtual robots that rely on computation. So the ability to make models of the future is the prerequisite for agency. And to make arbitrary models, which means structures that embody causal simulations of some sort, that requires computation.
[88:22] Yeah, I'm on board with that ladder, that taxonomy of goals and so on. One interesting thing about goals, and as you say, some are emergent and some are not: there's an interesting planarian version of this, which is this.
[88:44] We made this hypothesis about, so within planaria, you chop it up into pieces and every piece regenerates exactly the right rest of the worm, right? So if you chop it into pieces, each piece will have one head, one tail.
[89:17] And then, of course, what happens is it stops when it reaches a correct planarian. And so we started to think that there are a couple of possibilities. One possibility is that this is a purely emergent process, and that the goal of rebuilding a head is an emergent thing that comes about as a consequence of other things.
[89:47] Or could there be an actual explicit representation of what a correct planarian is, that serves as an explicitly encoded set point for these cells to follow? And because it's a cellular collective communicating electrically, we thought, well, maybe what it's doing is basically storing a memory, like you would in a neural circuit store a memory, of what it should be. So we started looking for this, and this is what we found. And this is, I think,
[89:47] One important type of goal in a goal seeking system is a goal that you can rewrite without changing the hardware and the system will now pursue that goal instead of something else. In a purely emergent system, that doesn't work, right? If you have a cellular automaton or a fractal or something that does some kind of complex thing, if you want to change what that complex thing is, you have to figure out how to change the local rules. That's very hard in most cases. But what we found in planaria is that we can literally
[90:17] Using a voltage reporter die, we can look at the worm and we can see now the pattern, and it's a distributed pattern, but we can see the pattern that tells this animal how many heads it's supposed to have.
[90:29] And what you can do is you can go in and, using a brief transient manipulation of the ion channels with ion channel drugs (and we have a computational model that tells you what those drugs should be), briefly change the electrical state of the circuit. But the circuit is amazing: once you've changed that state, it holds. So by default, in a standard planarian, it always says one head.
[90:55] But it's kind of like a flip-flop in that when you temporarily shift it, it holds and you can push it to a state that says two heads. So now something very interesting happens. Two interesting things. One is that if you take those worms and you cut those into pieces,
[91:09] you get two-headed worms, even though the genetics, the hardware, is all wild type. There's nothing wrong with the hardware. All the proteins are the same, all the genetics is the same, but the electric circuit now says make two heads instead of one. And so this is, in an interesting way, an explicit goal, because you can rewrite it: much like with your thermostat, there's an interface for changing what the goal state is, and then you don't even need to know how the rest of the thermostat works. As long as you know how to modify that interface, the system takes care of the rest.
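The flip-flop behavior described here, a transient perturbation rewriting a set point that then holds on its own, is the defining property of a bistable system. A toy dynamical sketch (the double-well dynamics, threshold, and perturbation size are invented for illustration):

```python
def settle(v, steps=100):
    """Relax a 'circuit' with two stable states (0 and 1) to equilibrium."""
    for _ in range(steps):
        v += 0.5 * v * (1 - v) * (v - 0.5)   # double-well dynamics: 0.5 is the tipping point
    return round(v)

def regenerate(stored_pattern):
    """Cut pieces consult the stored pattern, not the current anatomy."""
    return "two-headed worm" if stored_pattern == 1 else "one-headed worm"

state = 0.0                    # default pattern memory: "one head"
state = settle(state)          # stays 0 without perturbation

state = settle(state + 0.7)    # brief ionic perturbation past the threshold
# The perturbation is gone, but the circuit holds the new state,
# and every future cut now builds to the rewritten set point.
print(state, regenerate(state))    # 1 two-headed worm
```

The design point is that the stored value is maintained by the circuit's own dynamics, so a transient input permanently rewrites the target without touching the "hardware."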
[91:37] The other interesting thing is, and I love what you said about the counterfactuals, what you can do is you can change that electrical pattern in an intact worm and not cut it for a long time. And if you do that, when you look at that pattern, that is a counterfactual pattern because that two-headed pattern is not a readout of the current state. It says two heads, but the animal only has one head. It's a normal planarian. So that pattern memory is not a readout of what the animal is doing right now.
[92:07] It is a representation of what the animal will do in the future if it happens to get injured. And you may never cut it, or you may cut it, but if you do, then the cells consult the pattern and build a two-headed worm, and then it becomes the current state. But until then, it's this weird, primitive counterfactual system, because the body of a planarian is able to store at least two different
[92:34] representations, probably many more, but we found two so far, of what a correct planarian should look like. It can have a memory of a one-headed planarian or a memory of a two-headed planarian, and both of those can live in exactly the same hardware and exactly the same body. The other kind of cool thing about this, and I'll just mention this even though, disclaimer, this is not published yet, so take all this with a grain of salt, but the latest thing you can do is
[93:03] you can actually treat it with some of the same compounds that are used in neuroscience, in humans and in rats, as memory blockers. So things that block recall or memory consolidation. And when you do that, you can make the animal forget how many heads it's supposed to have, and they basically turn into a featureless circle; you can just wipe the pattern memory completely. Were they using exactly the same techniques you would use in a rat or a human?
[93:29] They just forget what to do; they fail to break symmetry and they just become a circle. So yeah, I think what you were saying is right on, with this ability to store counterfactual states that are not true now but may represent aspects of the future. I think that's a very important capacity. Another important notion is constraint satisfaction. A constraint is a rule that tells you whether two things are compatible or not,
[93:59] and the constraint is satisfied if they are compatible. So you basically have a number of conditions that you establish by measuring them somehow, for instance, whether you have a head or multiple heads, and you try to find a solution where you can end up with exactly one head. And if you end up with exactly one head based on the starting state, then you have managed to find a way to satisfy your constraints.
[94:23] And so in a sense, what you call a competency is the ability of a system to take a region of the state space of the universe, basically some local region of possible states that it can be in, and move that region to a smaller region that is acceptable. So there is a region of the universe's state space where you have only one head.
[94:49] And there's a larger region where you don't have any head at all, which is the starting state of your organism. And then you try to get from A to B; you get from this larger region to the one in which you want to be. Of course, if you have one head, you want to stay in the region in which you have one head, which is usually much easier. But the ability to basically condense the space, to bridge over many regions into the target region,
[95:13] is what this competency comes down to. The system basically has an emergent wanting to go to this region, and it's trying to move there. And so there are constraints at the level of the substrate that are battling with the functional constraints that the organism wants to realize to fulfill its function. And sometimes you cannot satisfy this, and you end up with two heads, because you don't know which one to get rid of or how to digest one of the heads and so on, and you end up with some Siamese twin.
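Bach's framing, competency as moving the system from a broad region of state space into the small region where the constraints are satisfied, can be sketched as a trivial constraint-satisfaction filter (the states and the one-head/one-tail rule are a toy encoding of the planarian example):

```python
from itertools import product

# State: how many heads and tails a regenerating fragment ends up with.
# Constraint: a "correct planarian" has exactly one head and one tail.
def satisfied(state):
    heads, tails = state
    return heads == 1 and tails == 1

# The broad starting region: every anatomy the fragment could reach.
possible = set(product(range(3), range(3)))

# Competency condenses the broad region into the small acceptable one.
acceptable = {s for s in possible if satisfied(s)}
print(len(possible), "->", acceptable)    # 9 -> {(1, 1)}
```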
[95:43] And so this is an interesting constraint that you have to solve for when you are dealing with reality, and how you battle with the substrate until you get to the functional solution that you evolved for. Yeah, that's interesting. I mean, we've also found, so we look at exactly this kind of navigation in morphospace, how you get from here to there, and what paths are possible to get from here to there, and so on.
[96:13] One of the things that we found is that
[96:15] There are regions of that space that belong to other species, and you can push a planarian with a standard wild-type genome into the goal state of a completely different species. So we can get them to grow a different head. So there's a species that normally has a triangular head; you can make it grow a round head like a different species, or a flat head, or whatever. So those are about 100 to 150 million years of evolutionary distance, and you can do it
[96:45] you know, within a few days just by perturbing that electrical circuit so that it lands in the wrong space. And then outside of that there are regions that don't belong to planaria at all. So planaria are normally nice and flat. We've made planarians that are
[97:01] They look like a cylinder, like a ski cap, you know; they become like a hemisphere. Or really weird ones that are spiky; they're like a ball with spikes on it. There are all kinds of other regions in that space that you can push them to.
[97:15] And so those are new; those are not species that they diverged from. No one's ever sighted, to my knowledge, there are no such species. And we've done this in frog, too. You can push tadpoles to make their tails look like those of other species, or you can make,
[97:34] yeah, that's a whole interesting thing for evolution anyway, right? One species' birth defect is a perfectly reasonable different species. So we can make tadpoles with a rounded tail, which for a Xenopus tadpole is a terrible tail, but for a zebrafish, that's exactly the right tail. So you can sort of imagine evolution manipulating the different
[97:56] information processing, whether by electrical circuits or other machinery, that helps the system explore that morphospace and start to move away from whatever standard attractor you usually land on; that's what speciation is. How does this relate to intelligence? Well, intelligence is the ability to make models, usually in the service of control. At least, that's the way I would
[98:25] explain intelligence. There are other definitions, but it's the simplest one that I found. It also accounts for the fact that many intelligent people are not very good at getting things done. Intelligence and goal rationality are somewhat orthogonal. Excessive intelligence is often a prosthesis for bad regulation. Have you read The Intelligence Trap? No. Okay, the author makes a similar case, and he's coming on shortly.
[98:55] Essentially saying that there are certain traps that people with high IQs fall into that are not beneficial for them as biological beings. They're mainly cognitive biases. For instance, take one of the biases: say you're either liberal-biased or conservative-biased, and you're given a test with some data that, on the surface, shows that gun control prevents gun violence.
[99:18] Well, the liberals are more likely to say, yes, this data does show that. But if you're conservative, you're more likely to find, oh, actually, the subtleties in the data show that gun control increases gun violence. And then they thought, OK, well, let's just switch this to make it such that the superficial data suggests that gun control increases violence, and you need to look at the data carefully to show that it actually prevents violence. Well, the conservatives in that case would be quicker to say, oh, look, gun control increases violence. And the liberals would find the loophole.
[99:47] Well, that's one of the reasons why I don't mind interviewing people who are biased, because to me, they're more able to find a justification for something that may be true, while I and others, well, we all have our own biases, are so inclined in some other direction that we're just blind to it. But anyway, the point is to affirm what you're saying, Joscha. Okay, so I know Michael has a hard cut-off at 2 p.m. So I want to ask the question about AGI, that is, Artificial General Intelligence. It seems as though we're far away, or that
[100:17] First of all, I don't know how far we are from AGI. It could be that the existing paradigms are sufficient to brute-force it.
[100:48] But we don't know that yet, so we are going to find out in the next few months. But it could also be that we need to rewrite the stack, to build systems that work in real time, that are entangled with the environment, and that can build shared representations with the environment. And there are actually a number of questions that I'd like to ask Michael.
[101:12] I noticed that Michael is wisely reluctant to use certain words like consciousness a lot. And it's because a lot of people are very opinionated about what these concepts mean and you first have to deal with these opinions before you come down to saying, oh, here I have the following proposal for implementing reflexive attention as a tool to form coherence and a representation. And this leads to the same phenomena as what you call consciousness.
[101:36] Right, so that is a detailed discussion. Maybe you don't want to have that discussion in every forum and rather than having this discussion, you may be looking at how to create coherence using a reflexive attention process that makes a real-time model of what it's attending to and the fact that it's attending to it so it remains coherent but for itself. So this is a concrete thing, but I wonder how to implement this in a self-organized fashion if the substrate that you have are individual agents.
[102:06] There is a similarity here between societies and brains and social networks. That is, if you have self-interested agents in a way that try to survive and that get their rewards from other agents that are similar to them structurally, and they have the capacity to learn to some degree, and that capacity is
[102:32] sufficient so they can, in the aggregate, learn arbitrary programs, arbitrary computable functions. And it's sufficient enough so they can converge on the functions that they need to as a group reap rewards that apply to the whole group because they have a shared destiny, like the poor little cells that are locked in the same skull and they're all going to die together if they fuck up. So they have to get along, they have to form an organization
[103:00] that is distributing rewards among each other. And this gives us a search space for possible systems that can exist. And the search space is mostly given, I think, by the minimal agent that is able to learn how to distribute rewards efficiently while doing something useful, using these rewards to change how you do something useful. So you have an emergent form of governance in these systems.
[103:25] It's not some centralized control that is imposed on the system from the outside, as in existing machine learning and AI approaches. It is only an emergent pattern in the interactions between the individual small units, small reinforcement learning agents.
[103:40] And this control architecture leads to hierarchical government. It's not fully decentralized in any way. There are centralized structures that distribute rewards, for instance via the dopaminergic system, in a very centralized top-down manner. And that's because every regulation has an optimal layer where it needs to take place. Some stuff needs to be decided very high up. Some stuff needs to be optimally regulated very low down, depending on the incentives. Game theoretically, a government is an agent that imposes an offset on your payoff metrics to
[104:10] make your Nash equilibrium compatible with the globally best outcome. To do this, you need to have agents that are sensitive to rewards. It's super interesting to think about these reward infrastructures. Elon Musk has bought Twitter, I think, because he has realized that Twitter is the network among all the social networks that is closest to a global brain. It's totally mind blowing to realize that he basically trades a bunch of wealthy stock for the opportunity to become Pope.
[104:40] Pope of a religion that has more active participants than Catholicism, even daily practicing people who enter this church and think together. And it's a thing that is completely incoherent at this point, almost completely incoherent. There are bubbles of sentience, but for the most part, this thing is just screeching at itself. And now there is the question, can we fix the incentives of Twitter to turn it into a global brain? And Elon Musk is global-brain-pilled; he believes that this is the case.
[105:07] And that's the experiment that he's trying to do, which makes me super excited, right? This might fail. There's a very big chance that it fails.
[105:14] But there is also the chance that we get the global brain, that we get emergent collective intelligence that is working in real time using the internet in a way that didn't exist before. So it's a super fascinating thing that might happen here. And it's fascinating that very few people are seeing that Elon Musk is crazy enough to spend 44 billion dollars on that experiment just because he can, has nothing else to do, and thinks it's meaningful to do it, more meaningful than having so much money in the bank.
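Joscha's game-theoretic framing of government, an agent that imposes an offset on the payoff matrix so that the Nash equilibrium coincides with the globally best outcome, can be illustrated with a toy sketch. The game (a prisoner's dilemma) and the fine are invented numbers for illustration, not anything stated in the conversation:

```python
from itertools import product

def pure_nash(payoffs, actions=("C", "D")):
    """Pure-strategy Nash equilibria of a two-player game.
    payoffs[(a, b)] = (row player's payoff, column player's payoff)."""
    eqs = []
    for a, b in product(actions, repeat=2):
        # An equilibrium: neither player gains by unilaterally deviating.
        row_ok = all(payoffs[(a, b)][0] >= payoffs[(x, b)][0] for x in actions)
        col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, y)][1] for y in actions)
        if row_ok and col_ok:
            eqs.append((a, b))
    return eqs

# Prisoner's dilemma: defecting dominates, so the only equilibrium is mutual
# defection, even though mutual cooperation is the globally best outcome.
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
print(pure_nash(pd))        # [('D', 'D')]

# A "government" adds an offset: a fine of 3 for defecting. The game is
# otherwise untouched, but the equilibrium shifts to the cooperative outcome.
fine = 3
governed = {(a, b): (u - fine * (a == "D"), v - fine * (b == "D"))
            for (a, b), (u, v) in pd.items()}
print(pure_nash(governed))  # [('C', 'C')]
```

The offset only works because the players are sensitive to rewards, which is exactly the condition named above.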
[105:42] And so this makes me interested in this test bed for rules. And this is something that translates into the way in which society is organized, because social media is not different from society, not separate from it. The problem of governing social media is exactly the same as governing a society. You need the right form of government, you need a legal system. Ultimately, you need representation and all these issues, right? It's not just the moderation team.
[106:08] And the same thing is also true for the brain. What is the government of the brain that emerges, in what Gerald Edelman calls neural Darwinism, among different forms of organization in the mind, until you have a model of a self-organizing agent that discovers that what it's computing is driving the behavior of an agent in the real world, and it discovers a first-person perspective, and so on? How does that work? How can we get a system that is looking for the right incentive architecture? And that is basically the main topic where I think that
[106:38] Michael's research is pointing, which from my perspective is super interesting. We have this overlap between looking at cells and looking at the world of humans and animals and stuff in general.
[106:53] Yeah, super interesting. So Chris Fields and I are working on a framework to understand where collective agents first come from. How do they organize themselves?
[107:12] And we've got a model already about this idea of rewards and rewarding other cells with neurotransmitters and things like this to keep copies of themselves nearby because they're the most predictable. So this idea of reducing surprise, well, what's the least surprising thing? It's a copy of yourself. And so you can sort of, Chris calls it the imperial model of multicellularity. But one thing to really think about here is
[107:39] Imagine an embryo. This is an amniote embryo, let's say a human or a bird or something like that. And what you have there is a flat disk of 10,000 to 50,000 cells. And when people look at it and you say, what is that? They say it's an embryo, one embryo. Well, the reason it's one embryo is that under normal conditions, what's going to happen is that in this disk,
[108:03] one cell does the symmetry breaking. One cell is going to decide that it's the organizer. It's going to do local activation, long-range inhibition. It's going to tell all the other cells, you're not the organizer, I'm the organizer. And as a result, you get one special point that begins the process that's going to walk through this morphospace and create a particular large-scale structure with two eyes and four legs and whatever else it's going to have.
[108:29] But here's the interesting thing: those cells, that's not really one embryo, that's a weird kind of Freudian ocean of potentiality. What I mean by that is, and I did this as a grad student, you can take a needle and put a little scratch through that blastoderm,
[108:46] What will happen is the cells on either side of that scratch don't feel each other, they don't hear each other's signals, so that symmetry breaking process will happen twice, once on each end, and then when it heals together, what you end up with is two conjoined twins because each side organized an embryo and now you've got two conjoined twins. Now, many interesting things happen there.
[109:08] One is that every cell is some other cell's external environment. So in order to make an embryo, you have to self-organize a system that puts an arbitrary boundary between itself and the outside world. You have to decide where you end and the world begins. And it's not given to you, you know, somehow,
[109:27] from outside; every biological system has to figure this out for itself. Unlike modern robotics or whatever, where it's very clear: here's where you are, here's where the world is, these are your effectors, these are your sensors, here's the boundary of the outside world. Living things don't have any of that. They have to figure all of this out from scratch.
[109:43] The benefit of having to figure it out from scratch is that you are then compatible with all kinds of weird initial conditions. For example, if I separate you in half, you can make twins. You don't have a total failure because you now have half the number of cells. You can make twins, you can make triplets, probably many more than that. So if you look at that blastoderm and you ask how many individuals are there,
[110:10] you actually don't know: it could be zero, it could be one, it could be some small number of individuals. That process of autopoiesis has to happen. And here are a number of things that are uniquely biological that I think relate to the kind of flexibility and plasticity that you need for AGI.
[110:31] In whatever space, it doesn't have to be the same space that we work in, but your boundaries are not set for you by an outside creator. You have to figure out where your boundaries are, where the outside world is. So you make hypotheses about where you end and where the world begins. You don't actually know what your structure is, kind of like Bongard's robots from 2006, which didn't know their structure and had to make hypotheses about, well, do I have wheels? Do I have legs? What do I have? And then
[110:55] make a model based on, basically, babbling, right, like the way that babies babble. So you have to make a model of where the boundaries are; you have to make a model of what your structure is. You are energy-limited, which most AI and robotics nowadays are not. When you're energy- and time-limited, it means that you cannot
[111:15] pay attention to everything. You are forced to coarse-grain in some way, lose a lot of information, and compress it down. So you have to choose a lens, a coarse-graining lens, on the world and figure out how you're going to represent things.
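The forced coarse-graining Levin describes can be shown in a couple of lines. Block-averaging is just one possible "lens", and the signal here is made up: the compression keeps the slow structure and irreversibly discards the fast detail.

```python
def coarse_grain(signal, block):
    """Average non-overlapping blocks: a cheap lens that keeps the trend
    and throws away within-block detail (the compression is lossy)."""
    return [sum(signal[i:i + block]) / block
            for i in range(0, len(signal) - block + 1, block)]

fine = [0, 1, 0, 1, 10, 11, 10, 11]   # fast wiggle riding on a slow jump
print(coarse_grain(fine, 4))          # [0.5, 10.5]: the jump survives, the wiggle is gone
```

There is no way back from `[0.5, 10.5]` to the original eight samples; choosing the block size is choosing which questions the system can still answer.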
[111:29] And there are many more things that we could talk about, but all of these things are self-constructions from the very beginning. And then you start to act in various spaces, which again are not predefined for you. You have to solve problems that are metabolic, physiological, anatomical, maybe behavioral if you have muscles.
[111:52] But nobody's defining the space for you. For example, if you're a bacterium, Chris Fields points this out, if you're a bacterium and you're in some sort of chemical gradient, you want to increase the amount of sugar in your environment, you could act in three-dimensional space by physically swimming up the gradient, or you can act in transcriptional space by turning on other genes that are better at converting whatever sugar happens to be around, and that solves your metabolic problem instead of... So you have these hybrid problem spaces.
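Chris Fields' bacterium example, the same metabolic problem solved either by moving in three-dimensional space or by moving in transcriptional space, can be sketched as a choice between two action spaces. The gradient shape, enzyme efficiencies, and costs below are invented numbers for illustration:

```python
def best_action(x, sugar, eff, alt_eff, swim_cost, switch_cost):
    """Net gain of one step in physical space (swim up the gradient) versus
    one step in transcriptional space (switch to a better-matched enzyme),
    relative to staying put with the current enzyme."""
    baseline = sugar(x) * eff
    swim_gain = sugar(x + 1) * eff - swim_cost - baseline
    switch_gain = sugar(x) * alt_eff - switch_cost - baseline
    return "swim" if swim_gain >= switch_gain else "switch genes"

sugar = lambda x: float(min(x, 3))  # sugar rises along the gradient, then saturates

# Low on the gradient, moving pays; at the plateau, retooling metabolism pays.
print(best_action(0, sugar, eff=0.5, alt_eff=0.9, swim_cost=0.3, switch_cost=0.3))  # swim
print(best_action(3, sugar, eff=0.5, alt_eff=0.9, swim_cost=0.3, switch_cost=0.3))  # switch genes
```

The point of the hybrid space is that both moves are compared in a single currency (net sugar), even though one happens in geometry and the other in gene expression.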
[112:22] All of this, I think, contributes in a strong sense to all the things that we were just talking about: the fact that everything in biology is self-constructed from the beginning. You don't know ahead of time. When you're a new creature born into the world, and we have many examples of this kind of stuff, you don't know how many cells you have or how big your cells are. You can't count on any of the priors.
[112:43] You have this weird thing that evolution makes these machines that don't take the past history too seriously. It doesn't overtrain on them. It makes problem-solving machines that use whatever hardware you have. This is why we can make weird chimeras and cyborgs, and you can mix and match biology in every way.
[113:05] with other living things or with nonliving things, because all of this is interoperable, because it does not make assumptions about what you have to have; it tries to solve whatever problem is given, it plays the hand that it's dealt. And that assumption, that you cannot trust what you come into the world with, that you cannot assume that the hardware is what it is, gives rise to a lot of that intelligence, I think, and a lot of that plasticity.
[113:32] So if you translate this into necessary and sufficient conditions, what seems to be necessary for the emergence of general intelligence in a bunch of cells or units is that each of them is a small agent, which means it's able to behave with an expectation of minimizing future target
[113:54] value deviations, and that learns which configurations of the environment signal anticipated reward. Next, these units need to be not just agents; they need to be connected to each other. And they need to get rewards, or proxy rewards, something that allows them to anticipate whether the organism is going to feed them in the future, from other units that are also adaptive. So you need multiple message types and the ability to recognize and send them with a certain degree of reliability.
[114:24] What else do you need? You need enough of them, of course. What's not clear to me is how deterministic the units need to be. How much memory do they need; how much state can they store? How deep in time does their recollection need to go, and how far forward in time do they need to be able to form expectations? So we'd have to see how large this activation front, this shape of the distribution, is that they can learn
[114:53] and have to learn to make this whole thing happen. And so basically the conditions that are necessary are relatively simple. If you just wait for long enough and get such a system to percolate, I imagine that compound agency will at some level emerge in the system, just in a competition of possibilities, in the same way as emergent agency has emerged on Twitter, in a way, with
[115:19] the world.
[115:39] going to organize groups of people into behavioral things. It's really interesting to look at Twitter as something like a mind at some level, right? It's working slower, but it would probably be possible to make a simulation of these dynamics in a more abstract way and to use this for arbitrary problem solving. And so what would an experiment look like in which we start with these necessary conditions and narrow down the sufficient conditions?
[116:09] Yeah, right on. And that's, I mean, yeah, we're doing some of that stuff, some of that kind of modeling. I apologize, I've got to run here. Thank you both for coming out for this. I appreciate it. Thank you so much. And thank you for bringing us together. A great, great conversation. I really enjoyed it. So, yeah, thank you. Likewise, I enjoyed it very much. Thank you, Curt. Thank you so much, Curt. Thanks, Joscha.
[116:31] The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked on that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc.
[116:52] It shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well. If you'd like to support more conversations like this, then do consider visiting theories of everything dot org. Again, it's support from the sponsors and you that allow me to work on toe full time. You get early access to ad free audio episodes there as well. Every dollar helps far more than you may think. Either way, your viewership is generosity enough. Thank you.
View Full JSON Data (Word-Level Timestamps)
{
  "source": "transcribe.metaboat.io",
  "workspace_id": "AXs1igz",
  "job_seq": 9299,
  "audio_duration_seconds": 7078.95,
  "completed_at": "2025-12-01T01:23:34Z",
  "segments": [
    {
      "end_time": 26.203,
      "index": 0,
      "start_time": 0.009,
      "text": " The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region."
    },
    {
      "end_time": 53.234,
      "index": 1,
      "start_time": 26.203,
      "text": " I'm particularly liking their new insider feature was just launched this month it gives you gives me a front row access to the economist internal editorial debates where senior editors argue through the news with world leaders and policy makers and twice weekly long format shows basically an extremely high quality podcast whether it's scientific innovation or shifting global politics the economist provides comprehensive coverage beyond headlines."
    },
    {
      "end_time": 64.514,
      "index": 2,
      "start_time": 53.558,
      "text": " As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount."
    },
    {
      "end_time": 88.422,
      "index": 3,
      "start_time": 66.254,
      "text": " Michael Levin's work on regulating intractable pattern formation in living systems has made him one of the most compelling biologists of our time. In translation, this means that his team is sussing out how to develop limbs, regenerate limbs, how to generate minds, and even life extension by manipulating electric signals rather than genetics or epigenetics. His work is something that I consider to be worthy of a Nobel Prize, and I don't think I've said that about anyone on the podcast."
    },
    {
      "end_time": 116.698,
      "index": 4,
      "start_time": 88.422,
      "text": " Michael Evans' previous podcast, On Toe, is in the description. That's a solo episode with him, where we go into a two-hour deep dive, as well as there's a theolo-cution, so that is him and another guest, just like today, except between Karl Friston and Chris Fields on consciousness. Yoshua Bach is widely considered to be the pinnacle of an AI researcher, dealing with emotion, modeling, and multi-agent systems. A large focus of Bach's is to build a model of the mind from strong AI. Speaking of minds, Bach is one of the most inventive minds in the field of computer science."
    },
    {
      "end_time": 135.23,
      "index": 5,
      "start_time": 116.698,
      "text": " biology has much to teach us about artificial intelligence and vice versa this discussion between two brilliant researchers is something that i'm extremely"
    },
    {
      "end_time": 163.422,
      "index": 6,
      "start_time": 135.981,
      "text": " Lucky, blessed, fortunate to be a part of, as well as us as collective as an audience that are fortunate enough to witness. Approximately 30 minutes into the conversation, you'll see two sponsors. They are Masterworks and Roman. I implore you not to skip it as firstly watching it supports Toe directly. And then secondly, they're fascinating companies in and of themselves. Additionally, you'll hear from one more Trade Coffee around the one and a half hour mark. Thank you and enjoy this theolocution between Joschabach and Michael Levin."
    },
    {
      "end_time": 189.258,
      "index": 7,
      "start_time": 163.899,
      "text": " Welcome both Professor Michael Levin and Joscha Bach. It's an honor to have you on the Toll Podcast again, both of you and then together right now. Thank you. It's great to be here. Likewise. I enjoy very much being here and look forward to this conversation. I look forward to it as well. So we'll start off with the question of what is it that you Michael find most interesting about Joscha's work? And then Joscha will go for you toward Michael. Yeah."
    },
    {
      "end_time": 215.555,
      "index": 8,
      "start_time": 189.77,
      "text": " I really enjoy the breadth. So I've been looking, I think I've probably read almost everything on your on your website, you know, the short kind of blog pieces and everything. And, yeah, I'm a big fan of the breadth of tackling a lot of the different issues that you do with respect to, you know, computation and cognition and AI and, you know, and ethics and everything. I really like that aspect of it. And Yoshua?"
    },
    {
      "end_time": 242.005,
      "index": 9,
      "start_time": 216.613,
      "text": " Yeah, my apologies. My blog is not up to date. I haven't done any updates for a few years now on it, I think. So, of course, I'm still in the process of progressing and having new ideas. And the ideas that I had in recent years, I have a great overlap with a lot of the things that you are working on and"
    },
    {
      "end_time": 269.428,
      "index": 10,
      "start_time": 242.602,
      "text": " When I listened to your Lex podcast last night, there were many thoughts that you had that I had stumbled on that I've never heard from anybody else. And so I found this very fascinating and thought, maybe let's look at some of these thoughts first and then go from there and expand beyond those ideas. But I, for instance, found after thinking about how sales work,"
    },
    {
      "end_time": 295.759,
      "index": 11,
      "start_time": 270.162,
      "text": " Kind of obvious, but missed by most people in neuroscience or in science in general, is that every cell has the ability to send multiple message types and receive multiple message types and do this conditionally and learn under which conditions to do that and to modulate this. Also, every cell is an individual reinforcement learning agent. Single-celled animal that tries to survive by cooperating with its environment gets most of its rewards from its environment."
    },
    {
      "end_time": 319.872,
      "index": 12,
      "start_time": 296.459,
      "text": " And as a result, this means that every cell can in principle function like a neuron. It can fulfill the same learning and information processing tasks as a neuron. The only difference that exists with respect to neurons or the main difference is that they cannot do this over very long distances because they are mostly connected only to cells that are directly adjacent."
    },
    {
      "end_time": 349.48,
      "index": 13,
      "start_time": 320.316,
      "text": " Of course, neurons also only communicate to adjacent cells, but the adjacency of neurons is such that they have axons, parts of the cell that reach very far through the organism. So in some sense, a neuron is a telegraph cell that uses very specific messages that are encoded in a way, like more signals in extremely short high energy bursts that allow to send messages over very long distances very quickly to move the muscles of an animal at the limit of what physics allows."
    },
    {
      "end_time": 368.609,
      "index": 14,
      "start_time": 350.009,
      "text": " So it can compete with other animals in search for food. And in order to make that happen, it also needs to have a model of the world that gets updated at this higher rate. So there is going to be an information processing system that is duplicating basically this cellular brain that is made from all the other cells in the body of the organism."
    },
    {
      "end_time": 398.763,
      "index": 15,
      "start_time": 368.933,
      "text": " and at some point these two systems get decoupled. They have their own codes, their own language, so to speak, but it still makes sense, I guess, to see the brain as a telegraphic extension of the community of cells in the body. And for me this insight that I stumbled on just because means and motive that evolution would equip cells with doing that information processing if the organism lives long enough and if the cells share a common genetic destiny so they can get attuned to each other in an organism."
    },
    {
      "end_time": 423.387,
      "index": 16,
      "start_time": 398.763,
      "text": " that basically every organism has the potential to become intelligent and if it gets old enough to process enough data to get to a very high degree of understanding of its environment in principle. So of course a normal house plant is not going to get very old compared to us because its information processing is so much slower so they're not going to be very smart."
    },
    {
      "end_time": 451.408,
      "index": 17,
      "start_time": 424.138,
      "text": " But at the level of ecosystems, it's conceivable that there is quite considerable intelligence. And then I stumbled on this notion that our ancestors thought that one day in fairyland equals seven years in human land, which is told in the old myth. And also at some point I revised my notion of what the spirit is. For instance, a spirit is an old word for the operating system for an autonomous robot."
    },
    {
      "end_time": 476.766,
      "index": 18,
      "start_time": 452.585,
      "text": " When this word was invented, the only autonomous robots that were known were people and plants and animals and nation states and ecosystems. There were no robots built by people yet, but there was this pattern of control in it that people could observe that was not directly tied to the hardware that was realized by the hardware, but disembodied in a way."
    },
    {
      "end_time": 504.155,
      "index": 19,
      "start_time": 477.227,
      "text": " And this notion of service is something that we lost after the enlightenment, when we tried to deal with the wrong Christian metaphysics and superstition that came with it and threw out a lot of babies with the bathwater. And suddenly we basically lost a lot of concepts, especially this concept of software that existed before in a way, this software being a control pattern or a pattern of causal structure that exists at a certain level of coarse graining."
    },
    {
      "end_time": 527.654,
      "index": 20,
      "start_time": 504.548,
      "text": " as some type of very, very specific physical law that exists by looking at reality from a certain angle. And what I liked about your work is that you systematically have focused on this direction of what a cell can do, that a cell is an agent, and that levels of agency emerge in the interaction between cells."
    },
    {
      "end_time": 557.039,
      "index": 21,
      "start_time": 527.654,
      "text": " You use a very clear language and clear concepts, and you obviously are driven by questions that you want to answer, which is unusual in science, I found. Most of our contemporaries in science get broken, if it doesn't happen earlier during the PhD, into people who apply methods in teams, instead of people who join academia because they think it's the most valuable thing they can do with their lives to pursue questions that they're interested in, want to make progress on."
    },
    {
      "end_time": 581.493,
      "index": 22,
      "start_time": 558.916,
      "text": " All right, Michael, there's plenty to respond to. Yeah, yeah, lots of lots of ideas. Yeah, I think I think it's very, your point is very interesting about, you know, what what what it really what really fundamentally is the difference between neurons and other cells. Of course, evolutionarily, they're reusing machinery that has been around for a very long time since the time of bacteria, basically, right? So our multicellular or unicellular ancestors"
    },
    {
      "end_time": 607.142,
      "index": 23,
      "start_time": 581.493,
      "text": " had a lot of the same machinery and even I mean, of course, the axons are very can be very long. But but there are sort of intermediate structures, right? There are tunneling nanotubes and things that allow cells to connect to maybe five or 10 diameter cell diameters away, right? So so not terribly long, but but also not immediate neighbors necessarily. So that kind of architecture has been around for a while. And people like girls so well look at"
    },
    {
      "end_time": 625.179,
      "index": 24,
      "start_time": 608.114,
      "text": " very brain-like electrical signaling in bacterial colonies. So, you know, I think evolution began to reuse this toolkit, specifically of using this kind of communication to scale up the computational and"
    },
    {
      "end_time": 646.698,
      "index": 25,
      "start_time": 625.811,
      "text": " other kinds of tricks, a really long time ago. And I like to imagine that if somebody had come to the people who were inventing connectionism and the first perceptrons and neural networks and so on, if somebody had come to them and said, oh, by the way, sorry, we're with the biologists, we got it wrong: the thinking isn't in the brain, it's in the liver."
    },
    {
      "end_time": 666.527,
      "index": 26,
      "start_time": 647.005,
      "text": " And so then the question is, what would they do, right? Would they have changed anything about what they were doing and said, ah, now we have to rethink our model? Or would they have said, fine, who cares, it's exactly the same model, everything works just as well? So I often think about that question: what exactly do we mean by neurons? And isn't it interesting that"
    },
    {
      "end_time": 696.681,
      "index": 27,
      "start_time": 666.92,
      "text": " we are able to steal most of the tools, the concepts, the frameworks, the math from neuroscience and apply it to problems in other spaces. So not movement in three-dimensional space with muscles, but, for example, movement through a morphospace, right? Anatomical morphospace. The techniques can't tell the difference. We use all the same stuff: optogenetics, neurotransmitter signaling, we model active inference and we see perceptual bistability, you name it. We take concepts from"
    },
    {
      "end_time": 724.531,
      "index": 28,
      "start_time": 697.227,
      "text": " from neuroscience, and we apply them elsewhere in the body. And generally speaking, everything works exactly the same. And that shows us, I think, what you were saying: there's this really interesting kind of symmetry between these. A lot of the distinctions that we've been making, in terms of having different departments and different PhD programs and other things that say this is neuroscience, this is developmental biology, a lot of these things are just not"
    },
    {
      "end_time": 747.022,
      "index": 29,
      "start_time": 724.991,
      "text": " not as firm distinctions as we used to think. I suspect that people who insist on strong disciplinary boundaries do this out of a protective impulse. And what I noticed by studying many disciplines when I was young is that the methodologies are so incompatible across fields"
    },
    {
      "end_time": 775.93,
      "index": 30,
      "start_time": 747.022,
      "text": " that when I was studying philosophy or psychology, I felt that computer scientists would be laughing about the methods that each of these fields uses to justify what it's doing. And this, I think, is indicative of a defect, because if you take science into the current regime of regulating it entirely by peer review, there is no external authority. Even the grant authorities are mostly filled with people who have been trained in the sciences"
    },
    {
      "end_time": 804.684,
      "index": 31,
      "start_time": 776.169,
      "text": " in existing paradigms and are then funding the continuation of those paradigms from the outside. This meta-paradigmatic thinking does not really exist that much in a peer-reviewed paradigm. And ultimately, when you do peer review for a couple of generations, it also means that if your peers deteriorate, there is nothing that pulls your science back. And what I miss specifically in a lot of the ways in which neuroscience is done is what you call the engineering stance."
    },
    {
      "end_time": 830.981,
      "index": 32,
      "start_time": 805.794,
      "text": " And this engineering stance is very powerful, and you get it automatically when you're a computer scientist, because you don't really care what language it's written in. What you care about is what causal pattern is realized: how can this be realized, how could I do it, how would I do it, how can evolution do it, what means are at its disposal? And this determines the search space for the things that I'm looking for. But this requires that I think in causal systems."
    },
    {
      "end_time": 851.391,
      "index": 33,
      "start_time": 831.22,
      "text": " And this thinking in causal systems is possible for a computer scientist not to do, but it is unusual outside of computer science. And once you realize that, it's very weird. And suddenly you have notions that try to replace causal structure with, say, evidence. And then you notice that"
    },
    {
      "end_time": 880.282,
      "index": 34,
      "start_time": 851.681,
      "text": " For instance, evidence-based medicine is not about probabilities of how something is realized and must work. You see people on a cruise ship getting infected over distances and you think, oh, this must be airborne. But no, there is no controlled study, so there is no evidence that it's airborne. And when you look at disciplines from the outside, like in this case the medical profession, or medical messaging and decision making, I get terrified, because it directly affects us."
    },
    {
      "end_time": 899.48,
      "index": 35,
      "start_time": 880.742,
      "text": " And neuroscience, of course, is more theoretical for the most part, but there must be a reason why it is for the most part atheoretical, why there is no causal model that clinicians can use to explain what is happening in the syndromes that people are exhibiting."
    },
    {
      "end_time": 914.394,
      "index": 36,
      "start_time": 899.77,
      "text": " I noticed this when I go to a doctor: even at a reputable institution like Stanford, most of the neurologists that I'm talking to are, at some level, dualists,"
    },
    {
      "end_time": 933.251,
      "index": 37,
      "start_time": 915.043,
      "text": " in that they don't have a causal model of the way in which the brain is realizing things. There are a lot of studies which discover that very simple mechanisms, like the ability of human beings to use grammatical structure, are actually reflected in the brain. This is so amazing. Who would have thought?"
    },
    {
      "end_time": 964.087,
      "index": 38,
      "start_time": 935.725,
      "text": " Yeah, but the developments in computer science have led us down a completely different track. The perceptron is vaguely inspired by what the brain might be doing, but I think it's really a toy model, a caricature of what cells are doing. Not in the sense that it's inferior; it's amazing what you can brute-force with modern perceptron variations. The current machine learning systems are mind-blowing in what they can do,"
    },
    {
      "end_time": 988.677,
      "index": 39,
      "start_time": 964.514,
      "text": " but they don't do it like biological organisms at all. It's very different. Cells do not form chains in which they weight sums of real numbers. There is something going on that is roughly similar to it, but it's a self-organizing system that designs itself from the inside out, not by a machine learning principle that is applied from the outside and updates weights after reading and comparing them,"
    },
    {
      "end_time": 1007.756,
      "index": 40,
      "start_time": 988.831,
      "text": " computing gradients through the system. So this perspective of local self-organization by reinforcement agents that try to trade rewards with each other, that is a perspective that I find totally fascinating. And I wish this had come from neuroscience into computer science, but it hasn't."
    },
    {
      "end_time": 1038.114,
      "index": 41,
      "start_time": 1008.439,
      "text": " There are some people who have thought about these ideas to some degree, but there's been very little cross-pollination. And I think all this talk of neuroscience influencing computer science is mostly wishful thinking. Yeah, and I find this, what you were saying about the different disciplines, it's kind of amazing how, when I give a talk, I can always tell which department I'm in by which part of the talk makes people uncomfortable and upset."
    },
    {
      "end_time": 1056.237,
      "index": 42,
      "start_time": 1038.302,
      "text": " And it's always different depending on which department it is, right? There are things you can say in one department that are completely obvious, and you say the same thing in another group of people and they throw tomatoes. For instance, in a neuroscience department, I could say:"
    },
    {
      "end_time": 1076.561,
      "index": 43,
      "start_time": 1056.869,
      "text": " Information can be processed without changes in gene expression. You don't need changes in gene expression to process information because the processing inside a neural network runs on the physics of action potentials. So you can do all kinds of interesting information processing and you don't need transcriptional or genetic change for that."
    },
    {
      "end_time": 1106.561,
      "index": 44,
      "start_time": 1076.561,
      "text": " If I say the same thing"
    },
    {
      "end_time": 1122.534,
      "index": 45,
      "start_time": 1106.561,
      "text": " in a molecular cell biology department, the reaction is: what do you mean? How can a collection of cells remember a spatial pattern? Whereas in neuroscience or in an engineering department: yeah, of course. Of course they have electrical circuits that remember patterns and can do pattern completion and things like that."
    },
    {
      "end_time": 1149.872,
      "index": 46,
      "start_time": 1122.534,
      "text": " So, you know, views of causality, lots of things like that that are very obvious to one group of people are completely taboo elsewhere. And as Joscha just said, it impacts everything. It impacts education, it impacts grant reviews, because when these kinds of interdisciplinary grants come up,"
    },
    {
      "end_time": 1178.609,
      "index": 47,
      "start_time": 1150.418,
      "text": " the study sections have a really hard time finding people who can actually review them. What often happens is you'll get some kind of computational biology proposal, and you'll have some people on the panel who are biologists and some who are the computational folks. And it's very hard to get people who can appreciate both sides of it and understand what's happening together, right? So they will each critique a certain part of it, and of the other part they say, I don't know what this is."
    },
    {
      "end_time": 1187.483,
      "index": 48,
      "start_time": 1179.121,
      "text": " And as a result, grants like that tend not to have a champion, you know, one person who can say: no, I get the whole thing, and I think it's really good, or not."
    },
    {
      "end_time": 1216.869,
      "index": 49,
      "start_time": 1188.336,
      "text": " So yeah, it's even to the point where I'm often asked, when people want to list me somewhere, they'll say: so what are you? What's your field? And I never know how to answer that question. To this day, it's been 30 years, and I still don't know how to answer it. I can't boil it down to one; it just wouldn't make any sense to name any of the traditional fields. So what do you say, Joscha, when someone asks you what field you're in?"
    },
    {
      "end_time": 1244.002,
      "index": 50,
      "start_time": 1217.91,
      "text": " It depends on who's asking. So, for instance, I found it quite useful to sometimes say: sorry, I'm not a philosopher, but this; or: I'm not that interested in machine learning. And I did publish papers in philosophy and in machine learning, but neither is my specialty in the sense that I need to identify with it."
    },
    {
      "end_time": 1271.578,
      "index": 51,
      "start_time": 1244.002,
      "text": " And in some sense, I guess that these categories are important when you try to write a grant proposal or when you try to find a job in a particular institution and they need to fill a position. But for me, it's more what questions am I interested in? What is the thing that I want to make progress on? Or what is the thing that I want to build right now? And I guess that in terms of the intersection, I'm a cognitive scientist."
    },
    {
      "end_time": 1300.947,
      "index": 52,
      "start_time": 1273.507,
      "text": " So I was asking Michael, prior to you joining, Joscha: why is it, Michael, that you do podcasts? And if I understand correctly, part of the reason was that you think out loud and you like to hear the other person's thoughts, take notes, and it spurs your own. Firstly, Michael, you can correct me if that's incorrect. And then secondly, Joscha, I'm curious for your answer to the same question: what is it that you get out of doing podcasts, other than, say, some marketing if you were promoting something, which I don't imagine you currently are?"
    },
    {
      "end_time": 1315.828,
      "index": 53,
      "start_time": 1301.937,
      "text": " No, I'm not marketing anything. What I like about podcasts is the ability to publish something in a format that is engaging to people who actually care about it."
    },
    {
      "end_time": 1345.128,
      "index": 54,
      "start_time": 1316.869,
      "text": " I like this informal way of holding on to some ideas, and I also like conversations as a medium to develop thought. It's this space in which we can reflect on each other, look into each other's minds, interact with the ideas of others in real time. The production format of a podcast creates a certain focus in the conversation that can be useful, and it's a pleasant kind of tension that forces you to stay on task."
    },
    {
      "end_time": 1373.422,
      "index": 55,
      "start_time": 1345.657,
      "text": " And I also found that it's generally useful to some people. The feedback that I get is that people tell me: I had this really important question, and this allowed me to make progress on it, and I feel much better now about these questions. Or: this clarified something for me that has plagued me for years and put me on track to solving it. Or: this has inspired the following work."
    },
    {
      "end_time": 1400.043,
      "index": 56,
      "start_time": 1373.695,
      "text": " So it's a form of publishing ideas and getting them into circulation in our global hive mind that is very informal in a way, but it's not useless. And it also relieves me, in this instance at least, of the work of cutting, editing and so on. But anyway, I'm very grateful that you provide the service of curating our conversation and putting it in a form that is useful to other people."
    },
    {
      "end_time": 1425.247,
      "index": 57,
      "start_time": 1401.237,
      "text": " Yeah, there's something, well, two things I was thinking of. One is that I have conversations with people all day long about these issues, right? People in my lab, collaborators, whatever. And of course the vast majority of those conversations are not recorded, and they just sort of disappear into the ether. I take something away from it, and the other person takes something away from it. But I've often thought: isn't it a shame that all of this"
    },
    {
      "end_time": 1455.009,
      "index": 58,
      "start_time": 1425.247,
      "text": " just kind of disappears? It would be amazing to have a record of it. Of course, not every conversation is gold, but a lot of them are useful and interesting, and there are plenty of people who could be interested and could benefit from them. So I really like this aspect: that we can have conversations and then they're sort of canned, and they're out there for people who are interested. The other aspect of it, which I don't really understand but is kind of neat,"
    },
    {
      "end_time": 1459.77,
      "index": 59,
      "start_time": 1455.009,
      "text": " is that when somebody asks me to pre-record a talk,"
    },
    {
      "end_time": 1489.872,
      "index": 60,
      "start_time": 1460.299,
      "text": " it takes a crazy amount of time, because I keep stopping and realizing I could have said that better, let me start from the beginning. It's an incredible ordeal. Whereas something like this, that's real time, I'm sure has just as many mistakes and things that I would rather have fixed later, but you can't do that, right? You just sort of go with it, and that's it, and then it's done and you can move on. So I like that real-time aspect, because it helps you get the ideas out without getting hung up trying to redo things 50 times."
    },
    {
      "end_time": 1512.585,
      "index": 61,
      "start_time": 1491.459,
      "text": " Yeah, it's a format that allows tentativeness. We have a culture in the sciences that requires us to publish the things that we can hope to prove, and to make the best proof that we can. But with anything complicated, especially when we take our engineering stance, we often cannot prove how things work."
    },
    {
      "end_time": 1528.473,
      "index": 62,
      "start_time": 1513.439,
      "text": " Instead our answers are in the realm of the possible and we need to discuss the possibilities and there is value in understanding these possibilities to direct our future experiments and the practical work that we do to see what's actually the case."
    },
    {
      "end_time": 1553.797,
      "index": 63,
      "start_time": 1528.746,
      "text": " And we don't really have a publication format for that. We don't get neuroscientists to publish their ideas on how the mind works, because nobody has a theory that they can prove. And as a result, there is basically a vacuum where theories should be. The theory building happens informally, in conversations that basically require personal contact, which became a big issue once conferences went virtual, because that contact diminished."
    },
    {
      "end_time": 1583.746,
      "index": 64,
      "start_time": 1554.241,
      "text": " And you get a lot of important ideas by reading the publications and so on. But this what-could-be, connecting the dots, possibilities, ideas that might be proven wrong later that we exchange just in the status of ideas: that is something that has a good place in a podcast. Now, is the podcast, not this TOE podcast, but podcasts in general, something new? For instance, I was thinking about this. Well, podcasts go back a while, and Rogan invented this long-form format, or popularized it. However,"
    },
    {
      "end_time": 1607.329,
      "index": 65,
      "start_time": 1584.411,
      "text": " on television there are interviews. There's Oprah, and those are long, one hour; there's 60 Minutes. And then back in the 90s there was essentially a podcast: a three-and-a-half-hour conversation, like Charlie Rose. Three and a half hours, with Freeman Dyson, Daniel Dennett, Stephen Jay Gould, Rupert Sheldrake, all of those in the same one."
    },
    {
      "end_time": 1634.258,
      "index": 66,
      "start_time": 1607.875,
      "text": " I think it's like blogging. Blogging is also not new."
    },
    {
      "end_time": 1650.162,
      "index": 67,
      "start_time": 1634.616,
      "text": " Being able to write text that you publish and people can follow what you are writing and so on, did exist in some sense before, but the internet made it possible to publish this for everyone. You don't need a publisher anymore."
    },
    {
      "end_time": 1678.114,
      "index": 68,
      "start_time": 1650.162,
      "text": " and you don't need a TV studio anymore. You don't need a broadcast station that records your talk show and sends it to an audience. There is no competition with all the other talk shows, because there are no limitations on how many people can broadcast at the same time. And this allows an enormous diversity of thoughts and small productions that are done at a very low cost, lowering the threshold for putting something out there and seeing what happens."
    },
    {
      "end_time": 1689.241,
      "index": 69,
      "start_time": 1678.114,
      "text": " So in this sense, the ecosystem that emerged is new, because the variable that changed is the cost of producing a talk show. Right. Michael, do you agree?"
    },
    {
      "end_time": 1716.032,
      "index": 70,
      "start_time": 1689.633,
      "text": " Yeah, I mean, yes, all of that, and also just the fact that, as you just said, these kinds of long-form things were fairly rare. Most of the time, if you're going to be in one of the traditional media, they tell you: okay, you've got three minutes, we're going to cut all this stuff and boil it down to three minutes. And this is often incredibly frustrating. And I understand, I mean, we're drowning in information. And so"
    },
    {
      "end_time": 1744.599,
      "index": 71,
      "start_time": 1716.032,
      "text": " there is obviously a place for very short statements on things, but the kind of stuff that we're talking about cannot be boiled down to TV sound bites; it just can't. And so the ability to have these long-form things, so that anybody who wants to really dig in can hear what the actual thought is, as opposed to something that's been boiled into a very short statement, I think is invaluable. Just being able to have it out there for people to find."
    },
    {
      "end_time": 1762.176,
      "index": 72,
      "start_time": 1745.009,
      "text": " Now a brief note from two sponsors."
    },
    {
      "end_time": 1778.285,
      "index": 73,
      "start_time": 1762.176,
      "text": " Roman offers clinically proven medication to help treat hair loss. All of course from the comfort and privacy of your own home. Roman offers prescription medication and over the counter treatments. They also offer specially formulated shampoos and conditioners with ingredients that fortify and moisturize the hair to look fuller"
    },
    {
      "end_time": 1803.609,
      "index": 74,
      "start_time": 1778.285,
      "text": " Research shows that 80% of men who used prescription hair loss medication had no further hair loss after two years. Roman is licensed, and the whole process is straightforward and discreet. Plans start at $20 a month. Currently, Roman has a special offer for TOE listeners, that is, you. Use the link ro.co/curt to get 20% off your first order. Again, that's ro.co/curt."
    },
    {
      "end_time": 1820.811,
      "index": 75,
      "start_time": 1803.609,
      "text": " The link is in the description, and you get 20% off. As the TOE project grows, we get plenty of sponsors coming, and I thought, you know, this one is a fascinating company. Our new sponsor is Masterworks. Masterworks is the only platform that allows you to invest in multi-million dollar works of art by Picasso, Banksy, and more."
    },
    {
      "end_time": 1839.326,
      "index": 76,
      "start_time": 1820.811,
      "text": " Masterworks is giving you access to invest in fine art, which is usually only accessible to multimillionaires or billionaires. The art that you see hanging in museums can now be partially owned by you. The inventive part is that you don't need to know the details of art or investing. Masterworks makes the whole process straightforward with a clean interface and exceptional customer service."
    },
    {
      "end_time": 1864.974,
      "index": 77,
      "start_time": 1839.326,
      "text": " They're innovating as more traditional investments suffer. Last month, we verified a sale which had a 21.5% return. So for instance, if you were to put $10,000 in, you would now have roughly $12,000. Welcome to our new sponsor Masterworks; the link to them is in the description. Just so you know, there's in fact a waitlist to join their platform right now. However, TOE listeners can skip the waitlist by visiting the link in the description, as well as by using the promo code theories of everything. Again, the link is in the description."
    },
    {
      "end_time": 1890.794,
      "index": 78,
      "start_time": 1864.974,
      "text": " What's some stance of yours, some belief, that has changed most drastically in the past few years, let's say three? It could be anything from the abstruse and academic to the more colloquial, like: I didn't realize the value of children, or I overvalued children and now I'm stuck with them, like, geez, that was a mistake. Yeah. So something where I changed my mind was RNA-based memory transfer."
    },
    {
      "end_time": 1909.599,
      "index": 79,
      "start_time": 1891.817,
      "text": " And I think it's a super interesting idea in this context, because it's close to stuff that Michael has been working on and is interested in. There have been some experiments in the Soviet Union, I think in the 70s, where scientists took planaria"
    },
    {
      "end_time": 1939.838,
      "index": 80,
      "start_time": 1910.23,
      "text": " trained them to learn something; I think they learned to be afraid of electric shocks and things like that. And then they put their brains into a blender, extracted the RNA, injected other planaria with it, and these other planaria had learned it too. I learned about this as a kid when, in the 1980s, I read Soviet science fiction literature; I grew up in Eastern Germany. And the evil scientist harvested the brains of geniuses"
    },
    {
      "end_time": 1953.387,
      "index": 81,
      "start_time": 1940.213,
      "text": " and injected himself with RNA extracted from these brains, and thereby acquired their skills. And even though I'm pretty sure this probably doesn't work if you do it at this level,"
    },
    {
      "end_time": 1972.807,
      "index": 82,
      "start_time": 1953.712,
      "text": " this was inspired by the original research. I later heard nothing more about it, and so I dismissed it, like similar things I read in Sputnik and other Russian publications, which create their own mythological universe about ball lightning that is"
    },
    {
      "end_time": 2001.869,
      "index": 83,
      "start_time": 1972.807,
      "text": " agentic and possibly sentient and so on. I dismissed this all as basically another universe, another digest culture producing its own ideas that later get dissolved once science advances, because everybody knows it's synapses, the connections between neurons, that matter. The RNA is not that important for the information processing; it might change some state, but you cannot learn something by extracting RNA and injecting it into the next organism, because how would that work if learning is done in the synapses?"
    },
    {
      "end_time": 2027.5,
      "index": 84,
      "start_time": 2002.5,
      "text": " And then recently there were some papers which replicated the original research, and it has been replicated from time to time in different types of organisms, but to my knowledge not in macaques, or even mice. So it's not clear if their brains work according to the same principles as planaria. But planaria are not"
    },
    {
      "end_time": 2055.708,
      "index": 85,
      "start_time": 2027.824,
      "text": " extremely simple organisms with only a handful of neurons; they are something intermediate. So their brain architecture is different from ours, and the functioning principles of their neurons might be slightly different, but it's worth following this idea and going down that rabbit hole. And then I looked from my computer science engineering perspective, and I realized that there were always things about the synaptic story that I found confusing, because they're very difficult to implement."
    },
    {
      "end_time": 2082.432,
      "index": 86,
      "start_time": 2056.63,
      "text": " For instance, weight sharing. As a computer scientist, I require weight sharing; I don't know how to get around it. Suppose I want to entrain some computational primitives in a local area of my brain, for instance the ability to rotate something: rotation is an operator that I apply to a pattern, allowing this pattern to be represented in a slightly different way, to have the object rotated a few degrees."
    },
    {
      "end_time": 2101.22,
      "index": 87,
      "start_time": 2082.807,
      "text": " But an object doesn't consist of a single point, it consists of many features that all need to get the same rotation applied to them using the same mathematical primitives. So how do you implement the same operator across an entire brain area? Do you make many copies of the same pattern?"
    },
    {
      "end_time": 2130.538,
      "index": 88,
      "start_time": 2101.22,
      "text": " And computer scientists solve that with convolutional neural networks, which basically use the same weights again and again in different areas, only training them once and making them available everywhere. And that would be very difficult to implement in synapses. Maybe there are ways, but it's not straightforward. Another thing is if we see how training works in babies, they learn something and then they get rid of the surplus synapses. Initially, they have much more connectivity than they need."
    },
    {
      "end_time": 2151.903,
      "index": 89,
      "start_time": 2131.032,
      "text": " After they've trained, they optimize the way the wiring works by discarding the things they don't need to compute what they want to compute. So it's like culling the synapses; it does not freeze or etch the learning into the brain, but it optimizes the energy usage of the brain. Another issue is that"
    },
    {
      "end_time": 2173.78,
      "index": 90,
      "start_time": 2152.295,
      "text": " patterns of activation are not completely stable in the brain. In the cortex, if you look, you find that they might be moving the next day, or even rotating a little bit, which is also difficult to do with synapses: you cannot read out the weights and copy them somewhere else in an easy, straightforward fashion. And another issue is defragmentation. If you learn, for instance, your body map into a brain area,"
    },
    {
      "end_time": 2204.258,
      "index": 91,
      "start_time": 2174.411,
      "text": " and then somebody changes your body map, because you have an accident and lose a finger, or somebody gives you an artificial limb and you start to integrate it into your body map: how do you shift all the representations around? How do you make space for something else and move it? Also, initially, when you set up your maps by Hebbian learning, how do you make sure that the neighborhoods are always correct and you don't need to realign anything? And I guess you need some kind of realignment. And all these things seem to be possible when you switch to a different paradigm."
    },
    {
      "end_time": 2229.07,
      "index": 92,
      "start_time": 2205.06,
      "text": " And so if you take this RNA base series seriously, go down this rabbit hole, what you get is the neurons are not learning a local function over its neighbors, but they are learning how to respond to the shape of an incoming activation front, a spatial temporal pattern in their neighborhood. And they are densely enough connected, so the neighborhood is just a space around them."
    },
    {
      "end_time": 2258.217,
      "index": 93,
      "start_time": 2229.514,
      "text": " And in this space, they basically interpret this according to a certain topology to say this is maybe a convolution that gives me two and a half D or it gives me two D or one D or whatever. The type of function is that they want to compute and they learn how to fire in response to those patterns and thereby modulate the patterns when they're passed on. So the neurons act something like self-modulating ether, so which wavefronts propagate."
    },
    {
      "end_time": 2272.807,
      "index": 94,
      "start_time": 2258.643,
      "text": " that perform the computations. And they store the responses to the distributions of incoming signals possibly in RNA. So you have little mixtapes, little tape fragments that they store in a summa."
    },
    {
      "end_time": 2300.299,
      "index": 95,
      "start_time": 2273.268,
      "text": " and that it can make more of very cheaply and easily. If they are successful mixtapes and they're useful computational primitives that they discovered, they can distribute this to other neurons through the entire cortex. So neurons of the same type will gain the knowledge to apply the same computational primitives. And that is something I don't know if the brain is doing that and the human brain is using these principles or if it's using them a lot and how important this is and how many other mechanisms exist."
    },
    {
      "end_time": 2326.715,
      "index": 96,
      "start_time": 2300.572,
      "text": " But it's a mechanism that we haven't, to my knowledge, tried very much in AI and computer science. And it would work. There is something that is a very close analog that is a neural cellular automaton. So basically, instead of learning weight shifts or weight changes between adjacent neurons, what you learn is global functions that tell neurons on how to respond to patterns in the neighborhood."
    },
    {
      "end_time": 2357.688,
      "index": 97,
      "start_time": 2327.722,
      "text": " And these functions are the same for every point in your matrix. And you can learn arbitrary functions in this way. And what's nice about is that you only need to learn computational primitives once. Our current neural networks need to learn the same linear algebra over and over again in many different corners of the neural network, because you need vector algebra for many kinds of operations that we perform, for instance, operations in space, where we shift things around or rotate them."
    },
    {
      "end_time": 2386.067,
      "index": 98,
      "start_time": 2358.285,
      "text": " And if they could exchange these useful operations with each other and just apply an operator whenever the environment dictates that this would be a good idea to try to apply this operator right now in this context, that could speed up learning, that could make training much more sample efficient. And so it's something super interesting to try. And this is one of the rabbit holes I recently fell down, where I changed my thinking based on some experiment from neuroscience."
    },
    {
      "end_time": 2394.343,
      "index": 99,
      "start_time": 2386.391,
      "text": " that doesn't have very big impact for the mainstream of neuroscience, but that I found reflected in Michael's work with Planaria."
    },
    {
      "end_time": 2425.213,
      "index": 100,
      "start_time": 2396.203,
      "text": " Yeah, that's super interesting stuff. I can sprinkle a few details onto this. So the original finding in planaria was a guy named James McConnell at Michigan, actually in the US. And then that was in the 60s, the early 60s. And then there was some really interesting Russian work that picked it up after that. We reproduced some of it recently using modern quantitative automation and things like this."
    },
    {
      "end_time": 2450.623,
      "index": 101,
      "start_time": 2425.213,
      "text": " One of the really cool aspects of this, and there's a whole community, by the way, with people like Randy Gallistil and Sam Gershman and, of course, Glantzman, David Glantzman. That story of memory in the precise details of the synapses, that story is really starting to crack, actually, for a number of reasons. One of the cool things that was done in the Russian work, and it was also done later on by"
    },
    {
      "end_time": 2455.52,
      "index": 102,
      "start_time": 2450.93,
      "text": " Doug Blackiston who's in my lab now as a staff scientist and other people is this."
    },
    {
      "end_time": 2482.466,
      "index": 103,
      "start_time": 2455.913,
      "text": " you can certain certain animals that go through larval stages right so you can taste so what the russians were using um beetle beetle larvae and uh and doug and other people used uh used used moths and butterflies so what happens is you train you train the larva right so so so here you've got a butterfly a caterpillar so so this caterpillar lives in a two-dimensional world it's a soft-bodied robot it lives in a two-dimensional world that eats leaves and so on right and so you train this thing for a particular task"
    },
    {
      "end_time": 2505.998,
      "index": 104,
      "start_time": 2482.466,
      "text": " Well during metamorphosis it needs to become a moth or butterfly which it lives in a three-dimensional world plus it's a hard-bodied creature so the controller is completely different right for running this for running a caterpillar versus a butterfly so so during that process what happens is the brain is basically dissolved so most of the connections are broken most of the cells are gone they die you you put together a brand new brain itself assembles"
    },
    {
      "end_time": 2530.879,
      "index": 105,
      "start_time": 2506.527,
      "text": " And you can ask all sorts of interesting philosophical questions of what it's like to be a creature whose brain is undergoing this massive change. But the information remains. And so one can ask, okay, this is, you know, certainly for computer science, it's amazing to have a memory medium that can survive this radical remodeling and reconstruction. And there's the RNA story, but also,"
    },
    {
      "end_time": 2549.548,
      "index": 106,
      "start_time": 2531.476,
      "text": " You had mentioned, you know, does this work for mammals? So there was a guy in the 70s and 80s, there was a guy named George Ungar who did tons of, he's got tons of papers, he reproduced it in rats. So his was Fear of the Dark and he actually, by establishing this essay and then"
    },
    {
      "end_time": 2570.162,
      "index": 107,
      "start_time": 2549.548,
      "text": " You know fractionating their brains and and and extracting this this activity now he thought it was a peptide not not RNA so he he ended up with a with a thing called scotaphobin which turns out to be I think an eight mer peptide or something and the claim was that you can transfer this scotaphobia you can synthesize it and then transfer it from brain to brain."
    },
    {
      "end_time": 2599.292,
      "index": 108,
      "start_time": 2570.572,
      "text": " And that's, and that's, you know, that's, that's, that's what he thought it was. And then of course, I think David Glantzman favors RNA again. But yeah, I agree with you. I think, I think that's a, that's a, that's a super important story of how it is that this kind of information can survive. Uh, just, just massive remodeling of the, of the cognitive substrate in planaria. What we, what we did in planaria, you know, they, they have a true centralized brain. They have all the same neurotransmitters that we have. They're not a simple, a simple organism."
    },
    {
      "end_time": 2623.217,
      "index": 109,
      "start_time": 2599.923,
      "text": " What we did was McConnell's first experiments, which is to train them on something and we train them to recognize a laser etched kind of bumpy pattern on the bottom of the dish and to recognize that that's where their food was going to be found. So they made this association between this pattern and getting food. And then we cut their heads off and we took the tails and the tails sit there for 10 days doing nothing. And then eventually they grow a new brain."
    },
    {
      "end_time": 2631.476,
      "index": 110,
      "start_time": 2623.677,
      "text": " And what happens is that information is then imprinted onto the new brain and then you can recover behavioral evidence that they remember the information."
    },
    {
      "end_time": 2655.196,
      "index": 111,
      "start_time": 2632.261,
      "text": " So that's pretty cool too because it suggests that we don't know if the information is everywhere or if it's in other places in the peripheral nervous system or in the nerve core that we don't know where it is yet. But it's clear that it can move around, that the information can move around in the body because it can be in the posterior half and then imprinted onto the brain which actually drives all the behaviors."
    },
    {
      "end_time": 2678.848,
      "index": 112,
      "start_time": 2655.93,
      "text": " So thinking about that, I totally agree that this is a really important rabbit hole for asking. But there's an interesting puzzle here, which is this. It's one thing to remember things that are evolutionarily adaptive, like fear of the dark and things like this. But imagine, and this hasn't really been done well, but imagine for a moment"
    },
    {
      "end_time": 2704.787,
      "index": 113,
      "start_time": 2679.189,
      "text": " if we could train them to something that is completely novel, let's say, let's say we train them. Three yellow life flashes means take a step to your left, otherwise you get shocked, something like that. And let's say they learn to do it. We haven't done this yet. But let's say, let's say this could work. One of the big puzzles is going to be when you extract whatever it is that you extract, let's say it's RNA or protein, whatever it is, you stick it into the brain of a recipient host,"
    },
    {
      "end_time": 2710.06,
      "index": 114,
      "start_time": 2705.333,
      "text": " And in order for that memory to transfer, one of the things that the host has to be able to do is has to be able to decode it."
    },
    {
      "end_time": 2740.35,
      "index": 115,
      "start_time": 2710.35,
      "text": " In order to decode it, it's one thing if we share the same codebook, and by evolution, we could have the same codebook for things that come up all the time, like fear of the dark, fear, things like that. But how would the recipient look at a weird, some kind of crazy hairpin RNA structure and analyze it and be like, oh, yes, that's three light flashes, and then step to the left, I see. So you would need to be able to interpret somehow this structure and convert it back to the behavior."
    },
    {
      "end_time": 2769.957,
      "index": 116,
      "start_time": 2740.35,
      "text": " And for behaviors that are truly arbitrary, that might be, I don't know actually how that would work. And so I think the frontier of this field is going to be to have a really convincing demonstration of a transfer of a memory that doesn't have a plausible pre-existing shared evolutionary decoding, because otherwise you have a real puzzle as to how the decoding is going to work. And then even without the transfer, you can also think of it a different way,"
    },
    {
      "end_time": 2797.278,
      "index": 117,
      "start_time": 2770.316,
      "text": " Every memory is like a message is like basically a transplanted message from your past self to your future self, meaning that you still have to decode your memories, whatever your memories are, in an important sense, you have to, you know, those n grams, you have to decode them somehow. So that that whole issue of encoding and decoding, whatever the substrate of memory is, is, you know, maybe one of the most important questions there are. One of the ways we can think about these n grams, I think that there are"
    },
    {
      "end_time": 2822.227,
      "index": 118,
      "start_time": 2797.619,
      "text": " Priors that condition what kinds of features are being spawned in which context. For instance, when we see a new scene, the way that perception seems to be working is that we spawn lots and lots of feature controllers that then organize into objects that are controlled at the level of the scene. And this is basically a game engine that is forming in our brain."
    },
    {
      "end_time": 2850.316,
      "index": 119,
      "start_time": 2822.227,
      "text": " that is creating a population of interacting objects that are tuned to track our perceptual data at the lowest levels. So all the patterns that we get from our retina and so on are samples, noisy samples that are difficult to interpret, but we are matching them into these hierarchies of features that are translated into objects that assign every feature to exactly one object and every pixel, so to speak, to exactly one."
    },
    {
      "end_time": 2874.667,
      "index": 120,
      "start_time": 2850.316,
      "text": " except in the case of transparency and use this to interpret the scene that is happening in front of us. And when we are in the dark, what happens is that we spawn lots of object controllers without being able to disprove them, because there is no data that forces us to reject them. And if you have a vivid imagination, especially as a child, you will fill this darkness automatically with lots of objects, many of which will be scary."
    },
    {
      "end_time": 2891.391,
      "index": 121,
      "start_time": 2875.572,
      "text": " And so I think that lots of the fear of the dark doesn't need a lot of encoding in our brain. It is just an artifact of the fact that there are scary things in the world which we can learn to represent at an early age and that we cannot just prove them that they just will just spawn."
    },
    {
      "end_time": 2921.357,
      "index": 122,
      "start_time": 2892.585,
      "text": " I remember this vividly as a child that whenever I had to go into the dark basement to get some food in our house in the countryside, that this darkness automatically filled with all sorts of shapes and things and possibilities. And it took me later to learn that you need to be much more afraid of the ghosts that can hide in the light. So what would be the implications of if you were able to transfer memory for something that's"
    },
    {
      "end_time": 2928.797,
      "index": 123,
      "start_time": 2922.056,
      "text": " not trivial so nothing that's like an archetype of fear of the dark between uh... mammal like rats"
    },
    {
      "end_time": 2960.282,
      "index": 124,
      "start_time": 2930.333,
      "text": " And when I say transfer memory, I mean, in this way that you blend up the brain. And also, can you explain what's meant by, I think I understand what it means to blend the brain of a planaria, but I don't think that's the same process that's going on in rats. Maybe it is. Well, Ungar did exactly the same thing. He would train rats for particular tasks. He would extract the brain, literally liquefy it to extract the chemical contents. He would then either inject the whole extract or a filtered extract where you would divide it up. You'd set fractionate it. So here's the RNAs, here's the proteins."
    },
    {
      "end_time": 2979.514,
      "index": 125,
      "start_time": 2960.282,
      "text": " here you know other things uh and and then he would inject that liquid directly into the brains of recipient rats so you know when you do that you lose you lose spatial structure on the input because you just blended your brain whatever spatial structure there was you just destroyed it also on the recipient"
    },
    {
      "end_time": 2998.831,
      "index": 126,
      "start_time": 2979.514,
      "text": " You just inject it. You're not finding that particular place where you're going to stick. You just inject this thing right in the middle of the brain. Who knows where it goes, where the fluid goes. There's no spatial specificity there whatsoever. If that works, what you're counting on is the ability of"
    },
    {
      "end_time": 3020.845,
      "index": 127,
      "start_time": 2999.189,
      "text": " the brain to take"
    },
    {
      "end_time": 3045.998,
      "index": 128,
      "start_time": 3021.374,
      "text": " you're basically asking the cells to take it up almost as as a primitive animal would with taste or touch you right that that's kind of distributed all over the body and you can sort of pick it up anywhere and then you have to process this information. So so so you've got those issues right off the bat right that you've destroyed the incoming spatial structure you you can't really count on where it's going to land in the brain and and then the third thing is as you just mentioned is the idea that"
    },
    {
      "end_time": 3056.63,
      "index": 129,
      "start_time": 3046.613,
      "text": " especially if we start with information"
    },
    {
      "end_time": 3086.305,
      "index": 130,
      "start_time": 3057.108,
      "text": " you know, kind of invented, you know, the three light flashes means move to your left. I mean, there's never there's never been an evolutionary reason to have that encoded. Like, as you just said, having a fear of the dark is absolutely a natural kind of thing that showed that that you can expect. But and then there are many other things like that. But but something something as as contrived as you know, three light flashes, and then you move to your left, there's no reason to think that we have a built in way to recognize that. So when you as a recipient brain or handed this weird"
    },
    {
      "end_time": 3116.408,
      "index": 131,
      "start_time": 3086.613,
      "text": " a molecule with a particular structure or a set of molecules, being able to analyze that, having the cells in your brain or other parts of the body actually, that could analyze that and recover that original information would be extremely puzzling. I actually don't know how that would work. And I'm a big fan of unlikely sounding experiments that have implications if they would work. So this is something that I think should absolutely be done. And at some point we'll do it, but we haven't done it yet."
    },
    {
      "end_time": 3134.241,
      "index": 132,
      "start_time": 3116.92,
      "text": " So how far did the research in my school? What is the complexity of things that could be transmitted via this route? I don't remember everything that he did. The vast majority of he did not go"
    },
    {
      "end_time": 3163.029,
      "index": 133,
      "start_time": 3134.923,
      "text": " far to test all the complexities what he tried to do was because as you can imagine he faced incredible opposition right so so so everybody you know sort of wanted to critique this thing so he spent all of his time on he picked one simple assay which was the sphere of the dart thing and then he just he just bashed it for for 20 years to just finally try to kind of crack that into the into the paradigm he did not as far as i know do lots of different assays to try and make it more complex"
    },
    {
      "end_time": 3188.251,
      "index": 134,
      "start_time": 3163.029,
      "text": " I think it's very ripe for investigation. This is the kind of... Did anyone else build upon his work? Not that I know. I mean, David Glansman is the best modern person who works on this, right? So he does a plesia and he does RNA. So he favors RNA. There's a little bit of work from Oded Rahavi in Israel with C. elegans. He's kind of looking into that."
    },
    {
      "end_time": 3215.862,
      "index": 135,
      "start_time": 3188.541,
      "text": " There's related work that has to do with cryogenics, which is this idea that if memories are a particular kind of dynamic electrical state, then some sort of cryogenic freezing is probably going to disrupt that. Whereas if it's a stable molecule, then it should survive. So again, I think there are people interested in that aspect of it, but I'm not sure they've done anything with it."
    },
    {
      "end_time": 3239.155,
      "index": 136,
      "start_time": 3216.561,
      "text": " There's also Gaurav Venkataraman. I think he is at Berkeley. He told me that he has been working on this for several years, but he said it's sociologically tricky. And that's to me fascinating that we should care about that. What does he mean by that? What do you care about? What stupid people think?"
    },
    {
      "end_time": 3267.09,
      "index": 137,
      "start_time": 3239.838,
      "text": " If this possibility exists that this works, the upside is so big that it's criminal to not research this. I think it's a disaster that you can read introductory textbooks on neuroscience and never ever hear about any of these experiments. Everybody who gets the introductory stuff on neuroscience only knows about information stored in the connectome. And this leads to, for instance, the Blue Brain Project,"
    },
    {
      "end_time": 3294.326,
      "index": 138,
      "start_time": 3267.415,
      "text": " If RNA-based memory transfer is a thing, then this entire project is doomed. Because you cannot get the story out of just recording the connectome. Most of the research right now is focused on reconstructing the connectome as it was circuitry and hoping that we can get the functionality of information processing and deduce the specificity of the particular brain, what it has learned from the connections between neurons."
    },
    {
      "end_time": 3312.892,
      "index": 139,
      "start_time": 3294.326,
      "text": " What if it turns out this doesn't matter? What if you just need connections that are dense enough and so basically stochastic lattice that is somewhat randomly wired and what matters is what the new ones are doing with the information that they're getting through this ether through this lattice. This changes the entire way in which we need to look at things."
    },
    {
      "end_time": 3341.92,
      "index": 140,
      "start_time": 3312.892,
      "text": " And if this possibility exists, and if this possibility is just 1%, but there is some experiment that points in this direction, it is ridiculous to not pursue this with high pressure and focus on it and support research that goes in this direction. Basically, what's useful is not so much answering questions in science, it's discovering questions, it's discovering new uncertainty. Reducing the uncertainty is much easier than discovering new areas of where you thought that you were certain."
    },
    {
      "end_time": 3348.541,
      "index": 141,
      "start_time": 3342.534,
      "text": " but that allow you to get new insights. And it seems to me that a lot of neuroscience is stuck."
    },
    {
      "end_time": 3374.053,
      "index": 142,
      "start_time": 3348.814,
      "text": " that does not produce results that seem to accumulate in an obvious way towards a theory on how the brain processes information. So the neuroscientists don't deliver input to the researchers and the transformer is not the result of reading a lot of neuroscience. It's really mostly the result of people's thinking about statistics of data processing."
    },
    {
      "end_time": 3401.63,
      "index": 143,
      "start_time": 3374.053,
      "text": " And it would be great if we would focus on ideas that are promising and new and that have the power to shake existing paradigms. Yeah, this is, you know, this is this is so important. And it's not just neuroscience in developmental biology, we have exactly the same thing. And I'll just give you two very simple examples of it, where and I tell the students when I give talks to students, I say, Isn't it amazing that that in your whole"
    },
    {
      "end_time": 3415.009,
      "index": 144,
      "start_time": 3402.159,
      "text": " So here's a couple of examples. One example is that"
    },
    {
      "end_time": 3441.937,
      "index": 145,
      "start_time": 3415.196,
      "text": " as of trophic memory in deer. So there are species of deer that every year they regenerate, you know, the whole, so they make this antler rack on their heads, the whole thing falls off and then it regrows the next year. So these two guys, Bobenak, which are a father and son team that did these experiments for 40 years, and actually have all these antlers in my lab now, because when the younger one retired, I asked him, he sent me all these things, all these antlers. The idea is this,"
    },
    {
      "end_time": 3469.718,
      "index": 146,
      "start_time": 3442.312,
      "text": " What you can do is you take a knife and somewhere in this branch structure you make a wound and the bone will heal and you get a little callus and that's it for that year. Then the whole thing drops off and then next year it starts to grow and it will make an ectopic tine, an ectopic branch at the point where you injured it last year. This goes on for five or six years and then eventually it goes away and you get a normal rack again."
    },
    {
      "end_time": 3490.998,
      "index": 147,
      "start_time": 3470.213,
      "text": " The amazing thing about it is that the standard models for patterning for morphogenesis are these kind of gene regulatory networks and genetic kinds of biochemical gradients and so on. If you try to come up with a model for this, so for encoding"
    },
    {
      "end_time": 3513.695,
      "index": 148,
      "start_time": 3491.527,
      "text": " an arbitrary point within a branch structure that your cells at the scalp have to remember for months after the whole thing is dropped off and then not only remember it but then implement it so that when the bone starts to grow something says oh yes that's the you know the start another another time growing to your left exactly exactly here right coming trying to try to make a model of this using the standard tools of the field is"
    },
    {
      "end_time": 3529.531,
      "index": 149,
      "start_time": 3513.695,
      "text": " uh just just incredibly difficult and this is that that's and there are other examples of this but this kind of non non-genetic memory that's just very difficult to explain with standard models the other thing which is any i think an even bigger scandal is the whole situation with planaria um"
    },
    {
      "end_time": 3559.753,
      "index": 150,
      "start_time": 3530.265,
      "text": " planaria, some species of planaria, the way they reproduce is they tear themselves in half, each half regenerates the missing piece. And now you've got two, that's how they reproduce. So if you're going to do that, what you end up avoiding is Weisman's barrier, this idea that when we get mutations in our body, our children don't inherit those mutations, right? So this means that any mutation that doesn't kill the stem cell in the body gets amplified as that cell contributes to regrowing the worm. So as a result of this, for 400 million years, these planaria have accumulated mutations."
    },
    {
      "end_time": 3578.524,
      "index": 151,
      "start_time": 3559.957,
      "text": " Their genomes are an incredible mess. Their cells are basically mixoploid, meaning they're like a tumor. Every cell has a different number of chromosomes, potentially. They just look horrible. As an end result, you've got an animal that is immortal, incredibly good at regenerating with 100% fidelity."
    },
    {
      "end_time": 3597.483,
      "index": 152,
      "start_time": 3578.524,
      "text": " And very resistant to cancer now this is all of this is the exact opposite of the message you get from from from a typical course through through biology which is says that what is the genome for the genome is for setting your body structure if you mess with the genome that information goes away you get you get aging you get cancer."
    },
    {
      "end_time": 3621.305,
      "index": 153,
      "start_time": 3597.483,
      "text": " Right. Why does the animal with the worst genome have the best anatomical fidelity? I mean, that's just, and I think we actually, as of a few months ago, we actually, I think, have some insight into this, but it's been bugging me for years. And this is the kind of thing that nobody ever talks about because it goes against the general assumption of what genomes actually do and what they're for. And this complete lack of"
    },
    {
      "end_time": 3649.753,
      "index": 154,
      "start_time": 3621.305,
      "text": " correlation between the genome in fact an anti correlation between the genome quality and the incredible ability of this of this animal to uh to to to to have a healthy anatomy yeah what is that insight that you mentioned you acquired a few months ago preliminary so so okay uh in the in the name of uh you know throwing out uh kind of new unproven ideas right so this is you know this this is this is just my my conjecture we've we've done some we've done some computational modeling of it which which i initially this was a um"
    },
    {
      "end_time": 3678.234,
      "index": 155,
      "start_time": 3650.111,
      "text": " a very clever student that I work with the name Lakshwin who I did did some models with me and I initially thought it was a bug and then I realized that no actually this is this is this is the feature the idea is this imagine so we've been working for a long time on a concept of competency among embryonic parts and what this means is basically the idea that there are there are homeostatic feedback loops"
    },
    {
      "end_time": 3688.302,
      "index": 156,
      "start_time": 3678.541,
      "text": " Among various cells and tissues and organs that attempt to reach specific outcomes in anatomical morphing space despite various perturbations."
    },
    {
      "end_time": 3710.265,
      "index": 157,
      "start_time": 3688.541,
      "text": " The idea is that if you have a tadpole and you do something to it, whether by a mutation or by a drug or something, you do something to it where the eye is a little off kilter or the mouth is a little off. All of these organs pretty much know where they're supposed to be. They will try to minimize distance from other landmarks and they will remodel and eventually you get a normal frog so that they will sort of"
    },
    {
      "end_time": 3735.486,
      "index": 158,
      "start_time": 3710.265,
      "text": " recover the correct anatomy, despite starting off in the wrong position, or even things like changes in the number of cells or the size of cells, they're really good at getting their job done, despite various changes, right? So okay, so they have these competencies to optimize specific things like like their position and structure and things like that. So, so that's it. So that's competency. Now, here's, here's the interesting thing."
    },
    {
      "end_time": 3763.012,
      "index": 159,
      "start_time": 3736.084,
      "text": " Imagine that you have a species that has some degree of that competency and so you've got an individual of that species comes up for selection, fitness is high, looks pretty good but here's the problem, selection doesn't know whether the fitness is high because his genome was amazing or the fitness is high because the genome was actually so-so but the competency sort of made up for it and now everything kind of got back to where it needs to go."
    },
    {
      "end_time": 3792.125,
      "index": 160,
      "start_time": 3763.012,
      "text": " So what the competency apparently does is shield information from evolution about the actual genome. It makes it harder to pick the best genomes because your individuals that perform well don't necessarily have the best genomes. What they do have is competency. So what happens in our simulations is that if you start off with even a little bit of that competency, evolution loses some power in selecting the best genomes"
    },
    {
      "end_time": 3819.633,
      "index": 161,
      "start_time": 3792.125,
      "text": " but where all the work tends to happen is increasing the competency. So then the competency goes up. So the cells are even better at, and the tissues are even better at getting the job done despite the bad genome. That makes it even worse. That makes it even harder for evolution to see the best genomes, which relieves some of the pressure on having a good genome, but it basically puts all the pressure on being really competent. So basically what happens is that"
    },
    {
      "end_time": 3830.828,
      "index": 162,
      "start_time": 3820.145,
      "text": " The genetic fitness basically levels out at a really suboptimal level and in fact the pressure is off of it so it's tolerant to all kinds of craziness."
    },
    {
      "end_time": 3860.367,
      "index": 163,
      "start_time": 3831.34,
      "text": " But the competency and the mechanisms of competency get pushed up really high. So in many animals, and there are other factors that sort of push against this ratchet, but it becomes a positive feedback loop. It becomes a ratchet for optimal performance despite a suboptimal genome. And so in some animals, this sort of evens out at a particular point. But I think what happened in planaria is that this whole process ran away to its ultimate conclusion. The ultimate conclusion is"
    },
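The runaway described above can be sketched in a toy simulation (my own illustration, not the Levin lab's actual model; the fitness function, noise level, and all numbers are invented for the example): selection sees only the competency-compensated outcome plus some noise, so as competency rises, selection loses its ability to tell good genomes from mediocre ones.

```python
import random

rng = random.Random(0)

def observed_fitness(genome_quality, competency):
    # Competency closes part of the gap between what the genome alone
    # would produce and a perfect anatomy; selection also sees noise.
    compensated = genome_quality + competency * (1.0 - genome_quality)
    return compensated + rng.gauss(0.0, 0.1)

def mean_survivor_genome(genomes, competency):
    # Selection keeps the top half by *observed* fitness, never by genome.
    ranked = sorted(genomes,
                    key=lambda g: observed_fitness(g, competency),
                    reverse=True)
    kept = ranked[: len(ranked) // 2]
    return sum(kept) / len(kept)

genomes = [rng.random() for _ in range(2000)]

# With no competency, survivors really do carry better genomes; with high
# competency, observed fitness is compressed toward the optimum, the noise
# swamps the remaining differences, and selection can no longer distinguish
# good genomes from mediocre ones.
naked = mean_survivor_genome(genomes, competency=0.0)
shielded = mean_survivor_genome(genomes, competency=0.9)
```

In this sketch the average genome quality of the survivors drops sharply once competency is high, which is the shielding effect described above.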
    {
      "end_time": 3877.466,
      "index": 164,
      "start_time": 3861.203,
      "text": " The competency algorithm became so good that basically whatever the genome is, it's really good at creating and maintaining a proper worm because it is already being evolved in the presence of a genome whose quality we cannot control. So in computer science speak,"
    },
    {
      "end_time": 3899.224,
      "index": 165,
      "start_time": 3877.466,
      "text": " It's kind of like i'm in steve frank put me on to this analogy it's kinda like what happens in rate arrays when you have a nice rate array where the software make sure that you don't lose any data the pressure is off to have really really high quality media and so now you can tolerate you can tolerate media with lots of mistakes because the because the software takes care of it in the in the rate and the end of."
    },
    {
      "end_time": 3926.903,
      "index": 166,
      "start_time": 3899.65,
      "text": " the architecture takes care of it. So basically what happens is you've got this animal where that runaway feedback loop went so far that the algorithm is amazing and it's been evolved specifically for the ability to do what it needs to do, even though the hardware is kind of crap. And it's incredibly tolerant. So this has a number of implications that, to my knowledge, have never been explained before. For example,"
    },
    {
      "end_time": 3950.043,
      "index": 167,
      "start_time": 3927.483,
      "text": " In every other kind of animal, you can call a stock center and you can get mutants. So you can get mice with kinky tails, you can get flies with red eyes, and you can get chickens without toes, and you can get humans come with various albinos and things. There's always mutants that you can get. Planaria,"
    },
    {
      "end_time": 3970.384,
      "index": 168,
      "start_time": 3950.452,
      "text": " There are no abnormal lines of planaria anywhere except for the only exception is our two-headed line and that one's not genetic, that one's bioelectric. So isn't it amazing that nobody has been able, despite 120 years of experiments with planaria, nobody has isolated"
    },
    {
      "end_time": 3990.043,
      "index": 169,
      "start_time": 3970.384,
      "text": " a line of plan area that is anything other than a perfect plan area and i think this is why i think it's because they have been actually selected for being able to do what they need to do despite the fact that the that the that the hardware is just very junky and so so that's my that's my current that's my current current take on it and and and really"
    },
    {
      "end_time": 4003.712,
      "index": 170,
      "start_time": 3990.555,
      "text": " It puts more emphasis on the algorithm and the decision making among that cellular collective of what are we going to build and what's the algorithm for making sure that we're all working to build the correct thing."
    },
    {
      "end_time": 4022.585,
      "index": 171,
      "start_time": 4004.633,
      "text": " If you translate this idea into computer science, a way to look at it is imagine that you find some computers that have hard disks that are very, very noisy and where the hard disk basically makes lots and lots of mistakes in encoding things and bits often flip and so on."
    },
    {
      "end_time": 4045.128,
      "index": 172,
      "start_time": 4022.585,
      "text": " and you will find that these computers still work and they work in pretty much the same way as the other computers that you have. And there is an orthodox sect of computer scientists that thinks it is necessary that every bit on the hard disk is completely reliable or reliable to such a degree that you only have a mistake once every 100 trillion copies"
    },
    {
      "end_time": 4073.234,
      "index": 173,
      "start_time": 4045.128,
      "text": " And you can have an error correction code running on the hard disk at the low level that corrects this. And after some point, it doesn't become efficient anymore. So you need to have reliable hard disks to be able to have computers that work like this. But how would these other computers work? And it basically means that you create a virtual structure on top of the noisy structure that is correcting for whatever degree of uncertainty you have or the degree of randomness that gets injected into your substrate."
    },
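A minimal sketch of that virtual structure, in the spirit of the RAID analogy above (a hypothetical illustration; the function names and error rates are invented): each logical bit is mirrored across several unreliable physical copies, and a majority vote at read time corrects most flips.

```python
import random

def write(bits, copies):
    # Mirror each logical bit across several unreliable cells,
    # the way RAID mirrors data across disks.
    return [[b] * copies for b in bits]

def corrupt(storage, flip_prob, rng):
    # The noisy substrate: every physical copy flips independently.
    return [[c ^ (rng.random() < flip_prob) for c in group]
            for group in storage]

def read(storage):
    # The virtual layer: a majority vote per group recovers the logical
    # bit whenever fewer than half of its copies flipped.
    return [int(sum(group) * 2 > len(group)) for group in storage]

rng = random.Random(1)
data = [rng.randint(0, 1) for _ in range(500)]

raw = corrupt(write(data, copies=1), 0.1, rng)    # no redundancy
coded = corrupt(write(data, copies=5), 0.1, rng)  # five-way mirror

raw_errors = sum(a != b for a, b in zip(data, read(raw)))
coded_errors = sum(a != b for a, b in zip(data, read(coded)))
```

Five-way mirroring takes the per-bit error rate from roughly 10% down to well under 1%, which is the same move being described for the planarian body: invest in the correcting layer rather than in flawless media.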
    {
      "end_time": 4100.64,
      "index": 174,
      "start_time": 4073.695,
      "text": " David, Dave Eckley has a very nice metaphor for this. You know him, maybe? Yeah, I know him. Yeah, he's a, I think, beautiful artist who explores complexity by tinkering with computational models. I really find his work very inspiring. And he has this idea of best effort computing. So in his view, our own nervous system is the best effort computer. It's one that does not rely on the other neurons around you working perfectly."
    },
    {
      "end_time": 4124.462,
      "index": 175,
      "start_time": 4100.64,
      "text": " But make an effort to be better than random. And then you stack the probabilities empirically by having a system that evolves to measure, in effect, the unreliability of its components, and then stack the probabilities until you get the system to be deterministic enough to do what you're doing with what to do with it. And so you"
    },
    {
      "end_time": 4142.244,
      "index": 176,
      "start_time": 4125.009,
      "text": " If you have a system that is, as in the planaria, inherently very noisy, where the genome is an unreliable witness of what should be done in the body, you just need to interpret it in a way that stacks the probabilities, that is evaluating things with much more error tolerance."
    },
    {
      "end_time": 4167.568,
      "index": 177,
      "start_time": 4142.244,
      "text": " And maybe this is always the case. Maybe there is a continuum. Maybe not. It's also possible that there is some kind of phase shift where you switch from organisms with reliable genomes to organisms with noisy genomes. And you basically use a completely different way to construct the organism as a result. But it's a very interesting hypothesis then to see if this is a radical thing or a gradual thing that happens in all organisms to some degree."
    },
    {
      "end_time": 4196.749,
      "index": 178,
      "start_time": 4167.568,
      "text": " What I also like about this description that you give about how the organism emerges, it maps in some sense also in how perception works in our own mind. At the moment, machine learning is mostly focused on recognizing images or individual frames and you feed in information frame by frame and the information is actually disconnected. A system like Dali2 is trained by giving it several hundreds of millions of images."
    },
    {
      "end_time": 4223.985,
      "index": 179,
      "start_time": 4197.432,
      "text": " And they are disconnected. They are not adjacent images in the space of images. And maybe could not probably learn from giving 600 million images in a dark room and only looking at this introduced the structure of the world from this. Whereas Dali can, which gives testament to the power of our statistical methods and hardware that we have, that far surpasses, I think, the combined power and reliability of brains, which probably would not be able to integrate so much information over such a big distance."
    },
    {
      "end_time": 4248.712,
      "index": 180,
      "start_time": 4224.582,
      "text": " For us, the world is learnable because its adjacent frames are correlated. Basically, information gets preserved in the world through time. And we only need to learn the way in which the information gets transmogrified. And these transmogrifications of information means that we have a dynamic world in which the static image is an exception. The identity function is a special case of how the universe changes. And we mostly learn change."
    },
    {
      "end_time": 4271.152,
      "index": 181,
      "start_time": 4249.224,
      "text": " I just got visited by my cat. And my cat has difficulty to recognize static objects compared to moving objects, where it's much, much easier to see a moving ball than a ball that is lying still. And it's because it's much easier to segment it out the environment when it moves. So the task of learning on a moving environment, a dynamic environment is much easier because it imposes constraints on the world."
    },
    {
      "end_time": 4292.415,
      "index": 182,
      "start_time": 4271.596,
      "text": " And so how do we represent a moving world compared to a static world? The semantics of features changes. An object is basically composed of features that can be objects themselves. And the scene is a decomposition of all the features that we see into a complete set of objects that explain the entirety of the scene."
    },
    {
      "end_time": 4307.739,
      "index": 183,
      "start_time": 4292.415,
      "text": " The interaction between them and causality is the interaction between objects, right? And in a static image, these objects don't do anything. They don't interact with each other. They just stand in some kind of relationship that you need to infer, which is super difficult because you only have this static snapshot."
    },
    {
      "end_time": 4326.988,
      "index": 184,
      "start_time": 4308.148,
      "text": " And so the features are classifiers that tell you whether a feature is a hand or a foot or a pen or a sun or a flashlight or whatever, and how they relate to the larger scene in which, again, you have a static relationship in which you need to classify the object based on the features that contribute to them."
    },
    {
      "end_time": 4354.514,
      "index": 185,
      "start_time": 4326.988,
      "text": " And you need to find some kind of description where you interpret features which are usually ambiguous and could be many different things depending on the context in which you interpret them into one optimal global configuration, right? But if the scene is moving, this changes a little bit. What happens now is that the features become operators. They're no longer classifiers that tell you how your internal state needs to change, how your world needs to change, how your simulation of the universe in your mind needs to change to track the sensory patterns."
    },
    {
      "end_time": 4370.111,
      "index": 186,
      "start_time": 4354.957,
      "text": " So a feature now is a change operator, a transformation. And the feature is in some sense a controller that tells you how the bits are moving in your local model of the universe. And they organize in a hierarchy of controllers."
    },
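One way to make that distinction concrete (a toy illustration of my own; the names and the one-dimensional world are invented): a classifier-style feature is a static predicate over a frame, while an operator-style feature is a transformation applied to the internal world model, so that the model keeps tracking the moving scene.

```python
def detect_ball(frame):
    # Classifier-style feature: a static predicate over the current frame.
    return "ball" if 1 in frame else "empty"

def moving_right(model):
    # Operator-style feature: a change operator on the world model.
    # "The ball is moving right" is encoded as a transformation, not a label.
    return model[-1:] + model[:-1]

# Internal model of a tiny one-dimensional world with one object.
model = [1, 0, 0, 0]

# Tracking a dynamic scene means repeatedly applying the operators the
# features name, instead of re-classifying disconnected snapshots.
predicted = moving_right(moving_right(model))
```

Stacking such operators is what a hierarchy of controllers amounts to: higher-level features name compound transformations of the lower-level model.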
    {
      "end_time": 4396.988,
      "index": 187,
      "start_time": 4371.152,
      "text": " That's incredibly interesting because"
    },
    {
      "end_time": 4409.189,
      "index": 188,
      "start_time": 4397.534,
      "text": " You know as soon as you started saying that I was starting to think that the virtualization that enables right so the earlier part of which was saying the virtualization of"
    },
    {
      "end_time": 4431.954,
      "index": 189,
      "start_time": 4410.469,
      "text": " the information that allows you to deal with unreliable hardware and everything. The bioelectric circuits that we deal with are a great candidate for that because actually we see exactly that. We see a bioelectric pattern that is very resistant to changes in the details and make sure that everybody does the right thing under a wide range of"
    },
    {
      "end_time": 4441.749,
      "index": 190,
      "start_time": 4431.954,
      "text": " Did you know different defects and so on but but but even more than that the other thing that you were just emphasizing this the fact that we learn the delta right and that and that we're looking for change."
    },
    {
      "end_time": 4466.032,
      "index": 191,
      "start_time": 4442.244,
      "text": " Very interesting. If you pivot the whole thing from the temporal domain to the spatial domain, so in development, when we look at these bioelectric patterns, now these patterns are across space, not across time. So unlike in neuroscience where everything is kind of in the temporal domain for neurons, these things, these are static voltage patterns across tissue, right, across the whole thing."
    },
    {
      "end_time": 4495.759,
      "index": 192,
      "start_time": 4466.032,
      "text": " So for the longest time, we asked this question, how are these read out? How do cells actually read these? Because one possibility early, this was a very early hypothesis 20 years ago, was that maybe the local voltage tells every cell what to be. So it's like a paint by numbers kind of thing. And each voltage value corresponds to some kind of outcome. That turned out to be false. What we did find is that, and we have computational models of how this works now,"
    },
    {
      "end_time": 4515.077,
      "index": 193,
      "start_time": 4496.596,
      "text": " What is read out is the delta, the difference between regions. It doesn't care, nobody cares about what the absolute voltage is, what is read out in terms of outcomes for downstream cell behavior, gene expression, all that. What is actually read out is the voltage difference between two adjacent domains."
    },
    {
      "end_time": 4533.968,
      "index": 194,
      "start_time": 4515.077,
      "text": " So that is exactly actually what it's doing, just in the spatial domain, it only keys off of the delta. And what is in what is learned from that is exactly just as you as you were saying, it modifies the controller for what's downstream of that. And there may be multiple ones that are sort of moving around and Cohen having I mean, it's a very"
    },
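The spatial-delta readout can be sketched like this (a hypothetical illustration, not the lab's computational model; the threshold and the outcome labels are invented): downstream behavior keys off the voltage difference between adjacent domains, so shifting every cell by the same offset leaves the readout unchanged.

```python
def read_out(voltages_mv, threshold=20.0):
    # Cells key off the voltage *difference* between adjacent tissue
    # domains, not the absolute voltage in any single domain.
    outcomes = []
    for left, right in zip(voltages_mv, voltages_mv[1:]):
        delta = right - left
        if delta > threshold:
            outcomes.append("depolarized-boundary")
        elif delta < -threshold:
            outcomes.append("hyperpolarized-boundary")
        else:
            outcomes.append("no-boundary")
    return outcomes

pattern = [-50.0, -20.0, -20.0, -60.0]   # resting potentials per domain
shifted = [v + 15.0 for v in pattern]    # a uniform global shift

boundaries = read_out(pattern)
```

Because only adjacent differences are compared, `read_out(shifted)` gives the identical answer, which is the sense in which nobody cares about the absolute voltage.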
    {
      "end_time": 4555.435,
      "index": 195,
      "start_time": 4534.548,
      "text": " It's a very compelling picture, actually, and way to look at some of the some of the simulations that that we've been doing about how the bioelectric data are interpreted by the rest of the cells. You know, it's very interesting. Could we take a couple of couple minute break? Yeah, sure. Okay, I got a new coffee. All right. Speaking of coffee, a brief note from our sponsor."
    },
    {
      "end_time": 4575.265,
      "index": 196,
      "start_time": 4556.118,
      "text": " Coffee helps me work, it helps me fast from carbs, it's become one of the best parts of my day consistently. That's why I'm delighted that we're collaborating with Trade Coffee. They partner with top independent roasters to freshly roast and send the finest coffee in the country directly to your home on your preferred schedule. This matters to me as I work from home."
    },
    {
      "end_time": 4601.834,
      "index": 197,
      "start_time": 4575.265,
      "text": " Their team of experts do all the work testing hundreds of disparate coffees to land on a final curated collection of 450 exceptional coffees. I chose these three and the team at Trade Coffee worked to create a special lineup for theories of everything for the Toh audience based on some questions they asked me such as how much caffeine do I enjoy and what's the bitterness ratio etc. You can get that lineup or if that's not let's say your cup of coffee"
    },
    {
      "end_time": 4628.046,
      "index": 198,
      "start_time": 4601.834,
      "text": " Then you can take your own quiz on their website to find a set that matches your specific profile. If you'd like to support small businesses and brew the best cup of coffee you've ever made at home, then it's time to try Trade Coffee. Right now, Trade is offering our listeners $30 off your first order plus free shipping at www.drinktrade.com slash everything. That's www.drinktrade.com slash everything for $30 off."
    },
    {
      "end_time": 4650.213,
      "index": 199,
      "start_time": 4628.507,
      "text": " So Professor Levin used the word competence earlier, and I'd like you to define that. Yeah. In order to define it, I want to put out two concepts to this. One idea is that, to me, and this goes back to what we were talking about before as the engineering stance on things, I"
    },
    {
      "end_time": 4665.435,
      "index": 200,
      "start_time": 4650.93,
      "text": " think that useful cognitive claims such as something, you know, when you say this system has whatever or it can whatever, right, as far as various types of cognitive capacities, I think those kinds of claims are really engineering claims."
    },
    {
      "end_time": 4689.36,
      "index": 201,
      "start_time": 4665.776,
      "text": " That is, when you tell me that something is competent at a particular level, maybe, right? So you can think about like Wiener and Rosenbluth scale of cognition that goes from simple, you know, simple passive materials and then reflexes and then all the way up to kind of second order metacognition and all that. When you tell me that something is on that ladder and where it is, what you're really telling me is,"
    },
    {
      "end_time": 4701.442,
      "index": 202,
      "start_time": 4689.36,
      "text": " if i want to predict its behavior or i want to use it in an engineering context or i want to interact with it or relate to it in some way this is what i can expect so right so that's what you're really telling me so all of these terms"
    },
    {
      "end_time": 4723.336,
      "index": 203,
      "start_time": 4701.937,
      "text": " what they really are, are engineering protocols. So if you tell me that something has the capacity to do associative learning or whatever, what you're telling me is that, hey, you can do something more with this than you could with a mechanical clock. You can provide certain types of stimuli or experiences and you can expect it to do this or that afterwards."
    },
    {
      "end_time": 4752.875,
      "index": 204,
      "start_time": 4723.336,
      "text": " Or if you tell me that something is a homeostat, that means that, hey, I can count on it to keep some variable at a particular range without having to be there myself to control it all the way. It has a certain autonomy. Now, how much, right? And if you tell me that something is really intelligent and it can do X, Y, Z, then I know that, okay, you're telling me that it has even more autonomous behavior in certain contexts. So all of these terms, to me, what they really are, they're not, and that has an important implication. The implication is that they're"
    },
    {
      "end_time": 4775.299,
      "index": 205,
      "start_time": 4753.319,
      "text": " observer dependent, that you've picked some kind of problem space, you've picked some kind of perspective, and from that problem space and that perspective, you're telling me that given certain goal states, this system has that much competency to pursue those goal states. And different observers can have different views on this for any given system. So for example,"
    },
    {
      "end_time": 4803.302,
      "index": 206,
      "start_time": 4775.299,
      "text": " is somebody might look at a brain like let's say a human brain and say well i'm pretty sure the only thing this this is a paperweight so it's really pretty much just competent in going down gravitational gradient so all i can do is hold down paper that that's it and somebody else will look at and say you missed the whole point you missed the whole point this thing has competencies in behavioral space and linguistic space right so these are all um empirically testable uh engineering claims about what you can expect the system to do so when i say competency what i mean is"
    },
    {
      "end_time": 4821.544,
      "index": 207,
      "start_time": 4803.916,
      "text": " We specify a space, a problem space, and at the time when we were talking about this, the problem space that I was talking about was the anatomical morphous space. That was the space we were talking about. So the space of possible anatomical configurations and specifically navigating that morphous space. So you start off"
    },
    {
      "end_time": 4838.456,
      "index": 208,
      "start_time": 4821.544,
      "text": " As an egg or you start off as a damaged limb or whatever and you navigate that more for space into the correct structure. So when I say competency, I mean you have the ability to deploy certain kinds of tricks to navigate that more for space with some level of"
    },
    {
      "end_time": 4864.48,
      "index": 209,
      "start_time": 4838.456,
      "text": " Performance that i can count on and so the competency might be really low or it might be really high and i would have to make specific claims about what i mean here's an example of a copy and there are many you know if you just think about the behavioral science of navigation there are many competencies you can think about does it you know does it does it know ahead of time where it's going does it have a memory of where it's been or is it a very simple you know sort of reflex arc is all it has or"
    },
    {
      "end_time": 4877.193,
      "index": 210,
      "start_time": 4864.48,
      "text": " Here's one example of a pretty cool competency that a lot of biological systems have. If we take some cells that are in the tail of a tadpole and we"
    },
    {
      "end_time": 4896.067,
      "index": 211,
      "start_time": 4877.705,
      "text": " Give them a particular we modify their ion channels with such that they now acquire the goal of navigating to an eye fate in more in this more of a space, meaning that they're going to make an eye. These things, in fact, will create an eye and they'll make an eye in the tail on the gut wherever you want."
    },
    {
      "end_time": 4909.462,
      "index": 212,
      "start_time": 4896.596,
      "text": " But one of the amazing aspects is if I only modify a few cells, not enough to make an actual eye, just a handful of cells. And we've done this and you can see this work."
    },
    {
      "end_time": 4935.794,
      "index": 213,
      "start_time": 4910.282,
      "text": " One of the competencies they have is to recruit local neighbors that were themselves not in any way manipulated to help them achieve that goal. It's a little bit like in an ant colony, right? There's this idea of recruitment in ants and termites is an idea of recruitment where individuals can recruit others and talk about a flexible collective intelligence. This is it. You've re-specified the goal for that set of cells, but one of the things that they do without us"
    },
    {
      "end_time": 4953.422,
      "index": 214,
      "start_time": 4936.152,
      "text": " telling them how to do it or having to micromanage it, they already have the competency to recruit as many cells as they need to get the job done. So that's a very nice for an engineer, that's a very nice competency because it means that I don't need to worry about taking care of getting exactly the right number of cells."
    },
    {
      "end_time": 4983.763,
      "index": 215,
      "start_time": 4953.831,
      "text": " If I'm a little bit over, that's fine. If I'm way under, also fine. The system has that competency of recruiting other cells to get the job done. So that's what I meant. So to me, to make any kind of a cognitive claim, you have to specify the problem space. You have to specify the goal towards which it's expressing competencies. And then you can make a claim about, well, how competent is it to get to that goal? And somebody, I wish I could remember who it was, but somebody made this really nice analogy about kind of the ends of that spectrum."
    },
    {
      "end_time": 5008.558,
      "index": 216,
      "start_time": 4983.985,
      "text": " They said two magnets try to get together and Romeo and Juliet try to get together. But the degree of flexible problem solving that you can expect out of those two systems is incredibly different. And within that range, there are all kinds of in-between systems that may be better or worse and may deploy different kinds of strategies. Can they avoid local optima? Can they have a memory of where they've been? Can they look further than their local environment? A million different things."
    },
    {
      "end_time": 5031.578,
      "index": 217,
      "start_time": 5008.746,
      "text": " So the way in which you use the word competency could be treated as the capacity of a system for adaptive control."
    },
    {
      "end_time": 5055.043,
      "index": 218,
      "start_time": 5032.415,
      "text": " One issue that I have with the notion of goals and goal directedness is that sometimes you only have a tendency in a system to go in a certain direction. And so it's directed, but the goal is something that can be emergent. Sometimes it's not sometimes there is an explicit representation the system of a discrete event that is associated or a class of events."
    },
    {
      "end_time": 5075.486,
      "index": 219,
      "start_time": 5055.401,
      "text": " This is"
    },
    {
      "end_time": 5102.466,
      "index": 220,
      "start_time": 5075.947,
      "text": " tendency in our behavior. We could also say we have the goal of finding food, but that is a rationalization that is maybe stretching things sometimes. So sometimes a better distinction for me is going from a simple controller to an agent. And I try to, because we are very good at discovering agency in the world, what does it actually mean when we discover agency and when we discover our own agency and start to amplify it."
    },
    {
      "end_time": 5130.282,
      "index": 221,
      "start_time": 5103.268,
      "text": " by making models of who we are and how we deal with the world and with others and so on. The minimal definition of agent that I found, it's a controller for future states. The thermostat doesn't have a goal by itself, right? It just has a target value and a sensor that tells its deviation from the target value and when that exceeds a certain threshold, the heating is turned on. And if it goes below a certain threshold, the heating is turned off again and this is it."
    },
    {
      "end_time": 5136.425,
      "index": 222,
      "start_time": 5130.367,
      "text": " So the thermostat is not an agent. It only reacts to the present frame. It's only a reactive system."
    },
    {
      "end_time": 5166.015,
      "index": 223,
      "start_time": 5137.193,
      "text": " Whereas an agent is proactive, which means that it's trying to not just minimize the current deviation from the target value, but the integral over a time span, the future deviation. So it builds an expectation about how an action is going to change this trajectory of the universe. And over that trajectory, it tries to figure out some measure of how big the compound target deviation is going to be."
    },
    {
      "end_time": 5189.497,
      "index": 224,
      "start_time": 5166.357,
      "text": " And so as a result, you get a branching universe and the branches in this universe, some of these branches depend on actions that are available to you and that translate into decisions that you can make that move you into more or less preferable wealth states. And suddenly you have a system with emergent beliefs, desires and intentions. But to make that happen, to move from a controller to agency,"
    },
    {
      "end_time": 5219.77,
      "index": 225,
      "start_time": 5190.043,
      "text": " agent just really being a controller with an integrated set point generator and the ability to control future states that requires that you can make models that are counterfactual. So because the future universe doesn't exist right now, you need to create a counterfactual universe, the future model of the future universe, maybe even a model of the past universe that allows you to reason about possible future universes and so on. And to make these counterfactual causal models of the universe,"
    },
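The thermostat-versus-agent distinction can be sketched as a toy control problem (my own illustration; the heater lag, the horizon, and all numbers are invented): the reactive controller minimizes only the present deviation, while the proactive agent rolls a model of the world forward over a branching future and picks the action minimizing the summed future deviation from the set point.

```python
TARGET = 20.0
LAG = 2  # heat commanded now only arrives two ticks later

def step(state, heat_on):
    # Known world model: the room loses one degree per tick, and heater
    # output travels through a short pipeline before it arrives.
    temp, pipeline = state
    arriving, *rest = pipeline
    return (temp - 1.0 + arriving,
            tuple(rest) + ((3.0 if heat_on else 0.0),))

def reactive(state):
    # Thermostat: reacts only to the present deviation, ignoring the lag.
    return state[0] < TARGET

def proactive(state, horizon=6):
    # Agent: simulates both branches of the future at every step and picks
    # the action minimizing the integral of deviations from the set point.
    def cost(s, h):
        if h == 0:
            return 0.0
        return min(abs(step(s, a)[0] - TARGET) + cost(step(s, a), h - 1)
                   for a in (False, True))
    return min((False, True),
               key=lambda a: abs(step(state, a)[0] - TARGET)
                             + cost(step(state, a), horizon - 1))

def total_deviation(policy, ticks=24):
    state = (TARGET, (0.0,) * LAG)
    total = 0.0
    for _ in range(ticks):
        state = step(state, policy(state))
        total += abs(state[0] - TARGET)
    return total

reactive_cost = total_deviation(reactive)
proactive_cost = total_deviation(proactive)
```

Because the heater lags, the purely reactive policy overshoots in both directions, while the model-based agent preheats and accumulates less total deviation: having a counterfactual model of the future is exactly what the extra performance buys.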
    {
      "end_time": 5249.787,
      "index": 226,
      "start_time": 5219.957,
      "text": " You need to have a Turing machine. So without a computer, without something that is Turing complete, that insulates you from the causal structure of your substrate, that allows you to build representations regardless of what the universe says right now around you. You need to have that machine. And the simplest system in nature that has Turing machine integrated is the cell. So it's very difficult to find a system in nature that is an agent, that is not made from cells."
    },
    {
      "end_time": 5274.974,
      "index": 227,
      "start_time": 5250.213,
      "text": " as a result. Maybe there are systems in nature that are able to compute things and make models, but I'm not aware of any. So the simplest one that I know that can do this reliably is the sales or arrangement of sales. And that can possess agency, which is an interesting thing that explains this coincidence that living things are agents and vice versa."
    },
    {
      "end_time": 5301.152,
      "index": 228,
      "start_time": 5275.435,
      "text": " that the agent that we discover are mostly living things, or there are robots that have computers built into them, or virtual robots that rely on computation. So the ability to make models of the future is the prerequisite for agency. And to make arbitrary models, which means structures that embody causal simulations of some sort, that requires computation."
    },
    {
      "end_time": 5323.473,
      "index": 229,
      "start_time": 5302.841,
      "text": " Yeah, I'm on board with that ladder, that taxonomy of goals and so on. One interesting thing about goals, and as you say, some are emergent and some are not, there's an interesting planarian version of this, which is this. We"
    },
    {
      "end_time": 5337.261,
      "index": 230,
      "start_time": 5324.411,
      "text": " We made this hypothesis about, so within planaria, you chop it up into pieces and every piece regenerates exactly the right rest of the worm, right? So if you chop it into pieces, each piece will have one head, one tail."
    },
    {
      "end_time": 5357.312,
      "index": 231,
      "start_time": 5337.637,
      "text": " And then, of course, what happens is it stops when it reaches a correct planarian, then it stops. And so we started to think that there are a couple of possibilities. One possibility is that this is a purely emergent process and that the goal of rebuilding ahead is an emergent thing that comes about as a consequence of other things."
    },
    {
      "end_time": 5386.869,
      "index": 232,
      "start_time": 5357.671,
      "text": " or could there be a an actual explicit representation of what a correct planarian is that serves as a set point as an encoded as an explicitly encoded set point for these cells to follow and and because it's a cellular collective we were communicating electrically we thought well maybe maybe what it's doing is basically storing a memory of what did you like you would in in a neural circuit you're storing a memory of what it should be so we started looking for this and this is what we found and this is this is kind of i think"
    },
    {
      "end_time": 5417.005,
      "index": 233,
      "start_time": 5387.363,
      "text": " One important type of goal in a goal seeking system is a goal that you can rewrite without changing the hardware and the system will now pursue that goal instead of something else. In a purely emergent system, that doesn't work, right? If you have a cellular automaton or a fractal or something that does some kind of complex thing, if you want to change what that complex thing is, you have to figure out how to change the local rules. That's very hard in most cases. But what we found in planaria is that we can literally"
    },
    {
      "end_time": 5429.309,
      "index": 234,
      "start_time": 5417.517,
      "text": " Using a voltage reporter die, we can look at the worm and we can see now the pattern, and it's a distributed pattern, but we can see the pattern that tells this animal how many heads it's supposed to have."
    },
    {
      "end_time": 5454.77,
      "index": 235,
      "start_time": 5429.599,
      "text": " And what you can do is you can go in and using a brief transient manipulation of the ion channels with drugs, with ion channel drugs that, and we have a computational model that tells you what those drugs should be, that briefly changes the electrical state of the circuit. But the circuit is amazing. Once you've changed that state, it holds. So by default, in a standard planarian, it always says one head."
    },
    {
      "end_time": 5469.394,
      "index": 236,
      "start_time": 5455.043,
      "text": " But it's kind of like a flip-flop in that when you temporarily shift it, it holds and you can push it to a state that says two heads. So now something very interesting happens. Two interesting things. One is that if you take those worms and you cut those into pieces,"
    },
    {
      "end_time": 5496.681,
      "index": 237,
      "start_time": 5469.821,
      "text": " you get two headed worms, even though the genetics are, the hardware is all wild type. There's nothing wrong with the hardware. All the proteins are the same. All the genetics is the same, but the electric circuit now says make two heads instead of one. And so this is in an interesting way. It is an explicit goal because you can rewrite it because much like with your thermostat, there's an interface for changing what the goal state is. And then you don't even need to know how the rest of the thermostat works. As long as you know how to use your, how to modify that interface, the system takes care of the rest."
    },
    {
      "end_time": 5527.022,
      "index": 238,
      "start_time": 5497.346,
      "text": " The other interesting thing is, and I love what you said about the counterfactuals, what you can do is you can change that electrical pattern in an intact worm and not cut it for a long time. And if you do that, when you look at that pattern, that is a counterfactual pattern because that two-headed pattern is not a readout of the current state. It says two heads, but the animal only has one head. It's a normal planarian. So that pattern memory is not a readout of what the animal is doing right now."
    },
    {
      "end_time": 5553.524,
      "index": 239,
      "start_time": 5527.022,
      "text": " It is a representation of what the animal will do in the future if it happens to get injured and you may never cut it or you may cut it but if you do then the pattern becomes then the cells consult the pattern and build a two-headed worm and then it becomes a you know the current state but until then it's this weird like primitive it's a primitive counterfactual system because it's able to a body of a planarian is able to store at least two different"
    },
    {
      "end_time": 5583.217,
      "index": 240,
      "start_time": 5554.121,
      "text": " representations of what a, probably many more, but we found two so far, what a correct planarian should look like. It can have a memory of a one-headed planarian or a memory of a two-headed planarian, and both of those can live in exactly the same hardware and exactly the same body. The other kind of cool thing about this, and I'll just mention this even though this is, you know, disclaimer, this is not published yet, so this is, you know, take all this with a grain of salt, but the latest thing you can do is"
    },
    {
      "end_time": 5609.48,
      "index": 241,
      "start_time": 5583.473,
      "text": " You can actually treat it with some of the same compounds that are used in neuroscience in humans and in rats as memory blockers. So things that block recall or memory consolidation. And when you do that, you can make the animal forget how many heads it's supposed to have. And then they basically turn into a featureless circle when you can just wipe the pattern memory completely. Were they using exactly the same techniques you would use in a rat or a human?"
    },
    {
      "end_time": 5639.548,
      "index": 242,
      "start_time": 5609.957,
      "text": " They just forget what to do when they turn into they fail to break symmetry and they just become a circle. So yeah, I think what you were saying is right on with this ability to store counterfactual states that are not true now, but may represent aspects of the future. I think that's a very important capacity. Another important notion is a constraint satisfaction. A constraint is a rule that tells you whether two things are compatible or not."
    },
    {
      "end_time": 5662.824,
      "index": 243,
      "start_time": 5639.804,
      "text": " and the constraint is satisfied if they are compatible. So you basically have a number of conditions that you establish by measuring them somehow, for instance, whether you have a head or multiple heads, and you try to find a solution where you can end up with exactly one head. And if you end up with exactly one head based on the starting state, then you have managed to find a way to satisfy your constraints."
    },
    {
      "end_time": 5688.695,
      "index": 244,
      "start_time": 5663.353,
      "text": " And so in a sense, what you call a competency is the ability of a system to take a region of the states of the space of the universe from basically some local region of possible state that he was can be in and move that region to a smaller region that is acceptable. So there is a region on the universe state space, but you have only one head."
    },
    {
      "end_time": 5713.473,
      "index": 245,
      "start_time": 5689.411,
      "text": " And there's a larger region where you don't have any head at all, but the starting state of your organism. And then you try to get from A to B. So you get from this larger region to the one in which you want to be. Of course, if you have one head, you want to stay in the region in which you have one head, which of course is usually much easier. But the ability to basically to condense the space to get to bridge over many regions into the target region."
    },
    {
      "end_time": 5742.21,
      "index": 246,
      "start_time": 5713.78,
      "text": " is what comes down to this, what this competency is. The system basically has an emergent wanting to go in this region and it's trying to move there. And so there are constraints at the level of the substrate that are battling with the functional constraints that the organism wants to realize to fulfill its function. And sometimes you cannot satisfy this and you end up with two heads because you don't know which one you get rid of or how to digest one of the heads and so on. And you end up with some Siamese twin."
    },
    {
      "end_time": 5772.841,
      "index": 247,
      "start_time": 5743.319,
      "text": " And so this is an interesting constraint that you have to solve for when you are dealing with reality and how you battle with the substrate until you get to the functional solution that you evolve for. Yeah, that's interesting. I mean, we've also found that there are, so we look at exactly this, the navigation, this kind of navigation and more of a space, how you get from here to there and what paths are possible to get from here to there and so on."
    },
    {
      "end_time": 5775.111,
      "index": 248,
      "start_time": 5773.285,
      "text": " One of the things that we found is that"
    },
    {
      "end_time": 5805.009,
      "index": 249,
      "start_time": 5775.657,
      "text": " There are regions of that space that belong to other species and you can push a planarian with a standard wild type genome into the gold state of a completely different species. So we can get them to grow a head. So there's a species that normally has a triangular head. You can make it grow a round head like a different species or a flat head or whatever. So those are about 100 to 150 million years of evolutionary distance and you can do it"
    },
    {
      "end_time": 5821.391,
      "index": 250,
      "start_time": 5805.009,
      "text": " you know, within a few days just by perturbing that electrical circuit so that it lands in the wrong space. And then outside of that there are regions that don't belong to planaria at all. So planaria are normally nice and flat. We've made planarians that are"
    },
    {
      "end_time": 5835.367,
      "index": 251,
      "start_time": 5821.92,
      "text": " They look like they are cylinder like a like a ski cap, you know, they become like a like a like a hemisphere or really weird ones that are spiky. They're they're like a ball with spikes on it. There are all kinds of other regions in that space that you can that you can push them to."
    },
    {
      "end_time": 5854.343,
      "index": 252,
      "start_time": 5835.725,
      "text": " and so those are new those are not species that they diverge from those are no one's ever cited to my knowledge that's yes there's no such there are no such species it's easier to you and we've done this in frog too you can you can push tadpoles to make it to look like those of other species or you can make"
    },
    {
      "end_time": 5876.8,
      "index": 253,
      "start_time": 5854.343,
      "text": " Yeah, that's a whole interesting thing for evolution anyway, right? One species birth defect is a perfectly reasonable different species. So we can make tadpoles with a rounded tail, which for a Xenopus tadpole is a terrible tail, but for a zebrafish, that's exactly the right tail. So you can sort of imagine evolution manipulating the different"
    },
    {
      "end_time": 5905.196,
      "index": 254,
      "start_time": 5876.988,
      "text": " information processing, whether by the by electrical circuits or other other machinery that help the system explore that more of a space and, you know, start to start to start to move away from from whatever the you know, that speciation is moving away from your standard attractor that you're usually land on. How does this relate to intelligence? Well, intelligence is the ability to make models and usually in the service of control, at least that's the way I would"
    },
    {
      "end_time": 5934.667,
      "index": 255,
      "start_time": 5905.589,
      "text": " Explain intelligence. There are other definitions, but it's the simplest one that I found. It also accounts for the fact that many intelligent people are not very good at getting things done. Intelligence and goal rationality are somewhat orthogonal. Excessive intelligence is often a prosthesis for bad regulation. Have you read The Intelligence Trap? No. Okay, the author makes a similar case and he's coming on shortly."
    },
    {
      "end_time": 5958.285,
      "index": 256,
      "start_time": 5935.196,
      "text": " Essentially saying that there are certain traps that people with high IQs have that are not beneficial for them as biological beings. They're mainly cognitive biases. So for instance, it's extremely interesting. So let's just give one of the biases to say you're either liberal biased or you're conservative biased. And then you were to give a test where there's some data that says that on the surface, it shows that the data shows that gun control prevents gun violence."
    },
    {
      "end_time": 5986.971,
      "index": 257,
      "start_time": 5958.643,
      "text": " Well, the liberals are more likely to say, yes, this data does show that. But if you're conservative, you're more likely to find, oh, actually, the subtleties in the data show that gun control increases gun violence. And then they thought, OK, well, let's just switch this to make it such that the superficial data suggests that gun control increases violence. You need to look at the data carefully to show that it actually prevents violence. Well, the conservatives in that case would be more quickly to say, oh, look, the gun control increases violence. And the liberals would find the loophole."
    },
    {
      "end_time": 6017.022,
      "index": 258,
      "start_time": 5987.449,
      "text": " Well, that's one of the reasons why I don't mind interviewing people who are biased, because to me, they're more able to find a justification for something that may be true, but I and others are so, well, we all have our own biases, we're so inclined in some other direction that we just were blind to it. But anyway, the point is to affirm what you're saying, Yoshua. Okay, so I know Michael has a hard cut off at 2pm. So I want to ask the question for AGI, that is Artificial General Intelligence. It seems as though we're far away or that"
    },
    {
      "end_time": 6047.261,
      "index": 259,
      "start_time": 6017.483,
      "text": " First of all, I don't know how far we are for AGI. It could be that the existing paradigms are sufficient to brute force it."
    },
    {
      "end_time": 6071.698,
      "index": 260,
      "start_time": 6048.114,
      "text": " But we don't know that yet, so we are going to find out in the next few months. But it could also be that we need to rewrite the stack to build systems that work in real time, that are entangled with the environment, that can build shared representations with the environment, and that we need to rewrite the stack. And there are actually a number of questions that I'd like to ask Michael."
    },
    {
      "end_time": 6096.118,
      "index": 261,
      "start_time": 6072.125,
      "text": " I noticed that Michael is wisely reluctant to use certain words like consciousness a lot. And it's because a lot of people are very opinionated about what these concepts mean and you first have to deal with these opinions before you come down to saying, oh, here I have the following proposal for implementing reflexive attention as a tool to form coherence and a representation. And this leads to the same phenomena as what you call consciousness."
    },
    {
      "end_time": 6125.759,
      "index": 262,
      "start_time": 6096.715,
      "text": " Right, so that is a detailed discussion. Maybe you don't want to have that discussion in every forum and rather than having this discussion, you may be looking at how to create coherence using a reflexive attention process that makes a real-time model of what it's attending to and the fact that it's attending to it so it remains coherent but for itself. So this is a concrete thing, but I wonder how to implement this in a self-organized fashion if the substrate that you have are individual agents."
    },
    {
      "end_time": 6152.278,
      "index": 263,
      "start_time": 6126.254,
      "text": " There is a similarity here between societies and brains and social networks. That is, if you have self-interested agents in a way that try to survive and that get their rewards from other agents that are similar to them structurally, and they have the capacity to learn to some degree, and that capacity is"
    },
    {
      "end_time": 6180.435,
      "index": 264,
      "start_time": 6152.671,
      "text": " sufficient so they can, in the aggregate, learn arbitrary programs, arbitrary computable functions. And it's sufficient enough so they can converge on the functions that they need to as a group reap rewards that apply to the whole group because they have a shared destiny, like the poor little cells that are locked in the same skull and they're all going to die together if they fuck up. So they have to get along, they have to form an organization"
    },
    {
      "end_time": 6205.52,
      "index": 265,
      "start_time": 6180.776,
      "text": " that is distributing rewards among each other. And this gives us a search space for possible systems that can exist. And the search space is mostly given, I think, by the minimal agent that is able to learn how to distribute rewards efficiently while doing something useful, using these rewards to change how you do something useful. So you have an emergent form of governance in these systems."
    },
    {
      "end_time": 6220.162,
      "index": 266,
      "start_time": 6205.879,
      "text": " It's not some centralized control that is imposed on the system from the outside as an existing machine learning approaches and AI approaches. But this only is an emergent pattern in the interactions between the individual small units, small reinforcement learning agents."
    },
    {
      "end_time": 6250.282,
      "index": 267,
      "start_time": 6220.896,
      "text": " And this control architecture leads to hierarchical government. It's not fully decentralized in any way. There are centralized structures that distribute rewards, for instance via the dopaminergic system, in a very centralized top-down manner. And that's because every regulation has an optimal layer where it needs to take place. Some stuff needs to be decided very high up. Some stuff needs to be optimally regulated very low down, depending on the incentives. Game theoretically, a government is an agent that imposes an offset on your payoff metrics to"
    },
    {
      "end_time": 6279.889,
      "index": 268,
      "start_time": 6250.691,
      "text": " make your Nash equilibrium compatible with the globally best outcome. To do this, you need to have agents that are sensitive to rewards. It's super interesting to think about these reward infrastructures. Elon Musk has bought Twitter, I think, because he has realized that Twitter is the network among all the social networks that is closest to a global brain. It's totally mind blowing to realize that he basically trades a bunch of wealthy stock for the opportunity to become Pope."
    },
    {
      "end_time": 6307.534,
      "index": 269,
      "start_time": 6280.265,
      "text": " Pope of a religion that has more active participants than Catholicism, even daily practicing people who enter this church and think together. And it's the thing that is completely incoherent at this point, almost completely incoherent. There are bubbles of sentience, but for the most part, this thing is just screeching at itself. And now there is the question, can we fix the incentives of Twitter to turn it into a global brain? And Elon Musk is global brain built. He believes that this is the case."
    },
    {
      "end_time": 6314.206,
      "index": 270,
      "start_time": 6307.722,
      "text": " And that's the experiment that he's trying to do, which makes me super excited, right? This might fail. There's a very big chance that it fails."
    },
    {
      "end_time": 6341.903,
      "index": 271,
      "start_time": 6314.667,
      "text": " But there is also the chance that we get the global brain, that we get emerging collective intelligence that is working in real time using the internet in a way that didn't exist before. So super fascinating thing that might happen here. And it's fascinating that very few people are seeing that Elon Musk is crazy enough to spend 44 billion dollars on that experiment just because he can and has nothing else to do and thinks it's meaningful to do it more meaningful than having so much money in the bank."
    },
    {
      "end_time": 6367.824,
      "index": 272,
      "start_time": 6342.449,
      "text": " And so this makes me interested in this test bed for rules. And this is something that translates into the way in which society is organized, because social media is not different from society, not separate from it. Problem of governing social media is exactly the same thing as governing a society. You need a right form of government, you need a legal system. Ultimately, you need representation and all these issues, right? It's not just the moderation team"
    },
    {
      "end_time": 6398.08,
      "index": 273,
      "start_time": 6368.217,
      "text": " And the same thing is also true for the brain. What is the government of the brain that emerges in what Gary Edelman calls neural Darwinism among different forms of organization in the mind until you have a model of a self organizing agent that discovers that what it's computing is driving the behavior of an agent in the real world and it's covers a first person perspective and so on. How does that work? How can we get a system that is looking for the right incentive architecture? And that is basically the main topic where I think that"
    },
    {
      "end_time": 6412.193,
      "index": 274,
      "start_time": 6398.592,
      "text": " where Michael's research is pointing from my perspective that is super interesting. We have this overlap between looking at cells and looking at the world of humans and animals and stuff in general."
    },
    {
      "end_time": 6432.858,
      "index": 275,
      "start_time": 6413.865,
      "text": " Yeah, super interesting. So Chris Fields and I are working on a framework to understand where collective agents first come from. How do they organize themselves?"
    },
    {
      "end_time": 6459.309,
      "index": 276,
      "start_time": 6432.858,
      "text": " And we've got a model already about this idea of rewards and rewarding other cells with neurotransmitters and things like this to keep copies of themselves nearby because they're the most predictable. So this idea of reducing surprise, well, what's the least surprising thing? It's a copy of yourself. And so you can sort of, Chris calls it the imperial model of multicellularity. But one thing to really think about here is"
    },
    {
      "end_time": 6483.302,
      "index": 277,
      "start_time": 6459.804,
      "text": " Imagine an embryo. This is an amniote embryo, let's say a human or a bird or something like that. And what you have there is you have a flat disk of 10,000, 50,000 cells. And when people look at it, you say, what is that? They say it's an embryo, one embryo. Well, the reason it's one embryo is that under normal conditions, what's going to happen is that in this disk,"
    },
    {
      "end_time": 6508.916,
      "index": 278,
      "start_time": 6483.643,
      "text": " One cell is symmetry breaking. One cell is going to decide that it's the organizer. It's going to do local activation, long range inhibition. It's going to tell all the other cells, you're not the organizer, I'm the organizer. And as a result, you get one special point that begins the process that's going to walk through this memorphous space and create a particular large scale structure with two eyes and four legs and whatever else it's going to have."
    },
    {
      "end_time": 6526.237,
      "index": 279,
      "start_time": 6509.36,
      "text": " But here's the interesting thing, those cells, that's not really one embryo, that's a weird kind of Freudian ocean of potentiality. What I mean by that is if you take, and I did this as a grad student, you can take a needle and you can put a little scratch through that blastoderm, put a little scratch through it,"
    },
    {
      "end_time": 6548.131,
      "index": 280,
      "start_time": 6526.647,
      "text": " What will happen is the cells on either side of that scratch don't feel each other, they don't hear each other's signals, so that symmetry breaking process will happen twice, once on each end, and then when it heals together, what you end up with is two conjoined twins because each side organized an embryo and now you've got two conjoined twins. Now, many interesting things happen there."
    },
    {
      "end_time": 6567.346,
      "index": 281,
      "start_time": 6548.626,
      "text": " One is that every cell is some other cells and external environment. So in order to make an embryo, you have to self organize a system that puts an arbitrary boundary between itself and the outside world. You have to decide where do I end and the world begins. And it's not given to you, you know, somehow,"
    },
    {
      "end_time": 6583.285,
      "index": 282,
      "start_time": 6567.346,
      "text": " From outside for a biological system, every biological system has to figure this out for itself. Unlike modern robotics or whatever where it's very clear. Here's where you are. Here's where the world is. These are your effectors. These are your sensors. Here's the boundary of the outside world. Living things don't have any of that. They have to figure out all of this out from scratch."
    },
    {
      "end_time": 6610.435,
      "index": 283,
      "start_time": 6583.285,
      "text": " The benefit to being able to figure it out from scratch, having to figure out from scratch, is that you are then compatible with all kinds of weird initial conditions. For example, if I separate you in half, you can make twins. You don't have a total failure because now you have half the number of cells. You can make twins, you can make triplets, probably many more than that. So if you ask the question, you look at that blasted room and you ask how many individuals are there,"
    },
    {
      "end_time": 6630.333,
      "index": 284,
      "start_time": 6610.435,
      "text": " You actually don't know it could be zero, it could be one, it could be some small number of individuals. That process of auto polices has to happen. And here are a number of things that are uniquely biological that I think relate to the kind of flexibility plasticity that you need for AGI."
    },
    {
      "end_time": 6655.452,
      "index": 285,
      "start_time": 6631.015,
      "text": " In whatever space, it doesn't have to be the same space that we work in, but your boundaries are not set for you by an outside creator. You have to figure out where your boundaries are, where is the outside world. So you make hypotheses about where you end and where the world begins. You don't actually know what your structure is, kind of like Vanguard's robots from 2006 where they didn't know their structure and they had to make hypotheses about, well, do I have wheels? Do I have legs? What do I have? And then"
    },
    {
      "end_time": 6675.845,
      "index": 286,
      "start_time": 6655.452,
      "text": " Make a model based on basically babbling right like the way that babies babble so so you have to figure out you have to make a model of where the boundaries you have to make a model of what your structure is. You are energy limited which most of the most AI and robotics nowadays are not when your energy and time limited it means that you cannot."
    },
    {
      "end_time": 6688.541,
      "index": 287,
      "start_time": 6675.845,
      "text": " Pay attention to everything you are forced to course grain in some way and lose a lot of information and compress it down so you have to you have to choose a lens a course screening lens on the world and figure out how you're going to represent things."
    },
    {
      "end_time": 6711.63,
      "index": 288,
      "start_time": 6689.172,
      "text": " And all of this has to, and there are many more things that we could talk about, but all of these things are self-constructions from the very beginning. And then you start to act in various spaces, which again are not predefined for you. You have to solve problems that are metabolic, physiological, anatomical, maybe behavioral if you have muscles."
    },
    {
      "end_time": 6742.125,
      "index": 289,
      "start_time": 6712.125,
      "text": " But nobody's defining the space for you. For example, if you're a bacterium, Chris Fields points this out, if you're a bacterium and you're in some sort of chemical gradient, you want to increase the amount of sugar in your environment, you could act in three-dimensional space by physically swimming up the gradient, or you can act in transcriptional space by turning on other genes that are better at converting whatever sugar happens to be around, and that solves your metabolic problem instead of... So you have these hybrid problem spaces."
    },
    {
      "end_time": 6763.831,
      "index": 290,
      "start_time": 6742.585,
      "text": " All of this, I think what contributes in a strong sense to all the things that we were just talking about is the fact that everything in biology is self-constructed from the beginning. You can't rely on, you don't know ahead of time. When you're a new creature born into the world, and we have many examples of this kind of stuff, you don't know how many cells you have, how big your cells are. You can't count on any of the priors."
    },
    {
      "end_time": 6784.77,
      "index": 291,
      "start_time": 6763.831,
      "text": " You have this weird thing that evolution makes these machines that don't take the past history too seriously. It doesn't overtrain on them. It makes problem-solving machines that use whatever hardware you have. This is why we can make weird chimeras and cyborgs, and you can mix and match biology in every way."
    },
    {
      "end_time": 6812.295,
      "index": 292,
      "start_time": 6785.299,
      "text": " with other living things or with nonliving things, because all of this is interoperable, because it does not make assumptions about what you have to have, it tries to solve whatever problem is given, it plays the hands that it's dealt. And that results in that assumption that you cannot trust what you come into the world with, you cannot assume that the hardware is what it is, gives rise to a lot of that intelligence, I think, and a lot of that plasticity."
    },
    {
      "end_time": 6833.763,
      "index": 293,
      "start_time": 6812.739,
      "text": " So if you translate this into necessary and sufficient conditions, what seems to be necessary for the emergence of general intelligence in a bunch of cells or units is that each of them is a small agent, which means it's able to behave with an expectation of minimizing future target"
    },
    {
      "end_time": 6861.8,
      "index": 294,
      "start_time": 6834.002,
      "text": " value deviations, and that learns which configurations of its environment signal anticipated reward. Next, these units need to be not just agents, they need to be connected to each other. And they need to get rewards, or proxy rewards, something that allows them to anticipate whether the organism is going to feed them in the future, from other units that are also adaptive. So you need multiple message types and the ability to recognize and send them with a certain degree of reliability."
    },
    {
      "end_time": 6893.097,
      "index": 295,
      "start_time": 6864.036,
      "text": " What else do you need? You need enough of them, of course. What's not clear to me is how deterministic the units need to be, how much memory they need to have, how much state they can store, how deep in time their recollection needs to go, and how far forward in time they need to be able to form expectations. That is, how large is this activation front, this shape of the distribution, that they can learn"
    },
    {
      "end_time": 6919.326,
      "index": 296,
      "start_time": 6893.797,
      "text": " and have to learn to make this whole thing happen. And so basically the conditions that are necessary are relatively simple. If you just wait for long enough and get such a system to percolate, I imagine that the compound agency will at some level emerge on the system, just in a competition of possibilities, in the same way as emergent agency has emerged on Twitter, in a way, with"
    },
    {
      "end_time": 6939.019,
      "index": 297,
      "start_time": 6919.684,
      "text": " the world."
    },
    {
      "end_time": 6967.637,
      "index": 298,
      "start_time": 6939.616,
      "text": " going to organize groups of people into behavioral things. It's really interesting to look at Twitter as something like a mind at some level, right? It's working slower, but it would probably be possible to make a simulation of these dynamics in a more abstract way and to use this for arbitrary problem solving. And so what would an experiment look like in which we start with these necessary conditions and narrow down the sufficient conditions?"
    },
    {
      "end_time": 6990.913,
      "index": 299,
      "start_time": 6969.326,
      "text": " Yeah, right on. And that's, I mean, yeah, we're doing some of that stuff, some of that kind of modeling. I apologize, I've got to run here. Thank you both for coming out for this. I appreciate it. Thank you so much. And thank you for bringing us together. So a great, great conversation. I really enjoyed it. So, yeah, thank you. Likewise, I enjoyed it very much. Thank you, Curt. Thank you so much, Curt. Thanks, Joscha."
    },
    {
      "end_time": 7012.278,
      "index": 300,
      "start_time": 6991.391,
      "text": " The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked on that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc."
    },
    {
      "end_time": 7039.206,
      "index": 301,
      "start_time": 7012.278,
      "text": " It shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well. If you'd like to support more conversations like this, then do consider visiting theories of everything dot org. Again, it's support from the sponsors and you that allow me to work on toe full time. You get early access to ad free audio episodes there as well. Every dollar helps far more than you may think. Either way, your viewership is generosity enough. Thank you."
    },
    {
      "end_time": 7054.531,
      "index": 302,
      "start_time": 7048.882,
      "text": " Think Verizon, the best 5G network is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store."
    }
  ]
}
