Michael Levin Λ Anil Seth: Your Brain Isn’t a Computer and That Changes Everything
September 22, 2025 • 1:12:33
Transcript
165 sentences • 11,098 words
The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's not just science: they analyze culture, finance, economics, business, and international affairs across every region.
I'm particularly liking their new Insider feature, which was just launched this month. It gives me front-row access to The Economist's internal editorial debates, where senior editors argue through the news with world leaders and policymakers, plus twice-weekly long-format shows, basically an extremely high-quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines.
As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
I make the additional really weird claim that I don't think algorithms capture everything we need to know about life. We've kind of forgotten that the idea of the brain as a computer is a metaphor and not the thing itself. There's no bright line between what it does and what it is. That would not be what I would have predicted.
This is a monumental theolocution. For the first time ever, Professor Anil Seth and Professor Michael Levin are conversing and performing research in real time and we get to be the flies on the wall. Anil says that the brain as a computer metaphor has blinded us for decades. You can't extract the software from the substrate.
This means that silicon consciousness may be impossible, and not because machines lack dualistic souls. But wait, Michael disagrees. He thinks that machines may be able to access the same platonic space that biological systems tap into.
The magic isn't restricted to carbon. Both professors are now building and studying xenobots together. These are living robots made from skin cells that self-organize, exhibiting behaviors evolution never programmed. Do they dream? Do they have preferences? Are they conscious? On this episode of Theories of Everything, we explore their radical collaboration, including questions like how split-brain patients may prove consciousness fragments and multiplies, and the terrifying possibility
that large language models are doing something entirely different from what their output suggests: tasks no programmer asked for, that no steps in the code demand, but perhaps where that quote-unquote magic lies. Remember to hit that subscribe button if you like videos exploring fundamental reality.
All right, we're going to talk about aliens. We're going to talk about cyborgs, modules in the brain, split-hemisphere patients, if I'm not mistaken, and unconscious processing. We're going to get to all of that. To set the stage, I'd like to know what's exciting you both research-wise currently, something you're pursuing. So, Anil, why don't we start with you, please? Well, thanks, Curt. Two things, I guess. One thing that seems to be exciting a lot of people these days, which is the possibility of AI being conscious.
And whether it's something that AI systems can have or whether, as I tend to think, it's something more bound up with our nature as living creatures. And the other thing that's exciting me actually just came to mind in your little list of topics there: the question of islands of consciousness. So there's a lot of work on things like split-brain patients, patients with brain damage and so on. But a question that me and a couple of colleagues, Tim Bayne and Marcello Massimini, have been wondering about is
whether other isolated neural systems might have conscious experiences. One candidate for this is called hemispherotomy, which is a kind of neurosurgical operation where you have bits of the brain detached, disconnected from all other parts of the brain, but you still have neural activity. These parts of the brain are still part of the living organism.
Are they islands of awareness? So we've been exploring that theoretically and, very recently, with some evidence from brain imaging of people following this neurosurgical operation. Michael? Yeah, so a couple of things on the experimental front.
I'm really excited about some novel systems that we're setting up as compositional agents. So putting together different living and non-living components, using AI and other interfaces to allow them to not just communicate with each other but, we hope, form a kind of collective intelligence. And then we can ask some interesting questions about what kind of inner perspective this new intelligence might have.
Just in general, you know, complementing the work from before around distributing, let's say separating out, the different pieces of the brain, as Anil was just saying. The flip side of that is putting together new kinds of beings that haven't existed before and asking what their behavioral competencies are, what their capacities are, what their goals are, their preferences, what they pay attention to, these kinds of things.
And just in general, really digging into this idea of, for lack of a better word, intrinsic motivations, and asking in novel creatures that don't have the benefit of a lengthy evolutionary history that presumably set some of their cognitive properties, where do these things come from? And how do we predict them? How do we recognize them? I should have said, of course, one of the things that's really exciting to me is stuff that Mike and I have been talking about together.
For some of the systems that he's building, some of the things he's actually talking about: do they sort of self-organize in ways which seem to obey the laws of psychophysics and other sorts of situations where we might attribute things like intrinsic motivation to evolved systems? This is a big question about, you know, laws of perception: are they adapted to specific environmental situations, or are they somehow
Tell us about some of these experiments.
They haven't been done yet. The logic is to take some simple observations of phenomena that are very widespread in perception across many evolved species, whether it's a human being, a mouse,
I don't know, probably a bacterium or something like that. So there are things like Weber's law, for example. So the idea that the perceived intensity of a stimulus scales lawfully with its actual magnitude. Now, this is something that seems very, very general. Is this something that we can look for evidence for in some of these completely
novel systems of the kind that Mike is generating? So that'd be one example, and there are a whole bunch of other examples. Can we find things like susceptibility to visual illusions, or things like that, in these systems?
These are kind of very general perceptual and learning phenomena that we might be able to examine, asking whether they happen in these systems which don't have this straightforward evolutionary trajectory. I think that's the basic project. Yeah, exactly. And these comprise xenobots, anthrobots, and even weirder constructs that we can start to put together by
building technological interfaces between radically different kinds of beings that allow them to, sort of like an artificial corpus callosum that takes two different things and tries to bind them into one novel collective thing, and seeing whether some of their properties and their behavioral competencies match the things that people have been studying, as Anil said, in psychophysics and, you know, all the things out of a behavioral handbook, basically. Yeah. And this kind of relates, you know,
for the following reasons. Anil and I tend to think differently about this, but we both do it. And so these are ways of just looking at what's the dynamic and functional potential that's sort of intrinsic to the stuff we're made of, that provides the basis for our cognitive, our perceptual, ultimately our conscious abilities and properties. So these experiments are a way of getting at that: what's just there that evolution can then make use of?
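The psychophysics test the two describe, checking whether a novel system shows Weber's-law-like scaling, could be prototyped on a simulated observer. Everything in this sketch (the log-response observer, the 75% threshold criterion, the coarse search) is an illustrative assumption, not their actual protocol:

```python
import math
import random

random.seed(0)  # deterministic toy run

def perceived(intensity, noise_sd=0.05):
    """Toy observer: internal response is log(intensity) plus Gaussian noise
    (the classic Weber-Fechner assumption; purely illustrative)."""
    return math.log(intensity) + random.gauss(0, noise_sd)

def discrimination_rate(base, delta, trials=2000):
    """Fraction of trials on which base+delta is judged more intense than base."""
    hits = sum(perceived(base + delta) > perceived(base) for _ in range(trials))
    return hits / trials

def jnd(base):
    """Just-noticeable difference: smallest delta detected on ~75% of trials,
    found by a coarse multiplicative search."""
    delta = base * 0.001
    while discrimination_rate(base, delta) < 0.75:
        delta *= 1.1
    return delta

# Weber's law predicts that jnd(base) / base stays roughly constant
# across very different baseline intensities.
ratios = [jnd(b) / b for b in (1.0, 10.0, 100.0)]
print(ratios)
```

The same readout (is the just-noticeable difference proportional to baseline?) could in principle be applied to any system that produces discriminations, evolved or not, which is the point of the experiment they sketch.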
Anil, can you make the elevator pitch for people who are already familiar with the argument that, look, the processing that's going on in our brain is just processing; it could potentially be translated to a computer, and if consciousness is similarly information processing, then we have something that's quote-unquote substrate independent. So you're making the claim that it's not so clear; maybe there is a dependence on the substrate. Can you make that case? And then also, Michael, I know that you have several questions you'd like to ask Anil, so feel free to jump in at any point.
I'll try and make the point. That's the case I'm trying to make. It's quite tricky because it goes against such a deeply embedded assumption that the brain is basically a computer made of meat and the things that it does, the only things that it does that are relevant for things like cognition and consciousness are computations, are forms of information processing. If you start from that perspective,
it leads you to this idea that there is this substrate independency. To unpack that: it just means that the stuff we're made of doesn't really matter, it's the computations that matter, and if another substrate runs the same kind of computations, then fine. These two sorts of ideas go together, because one of the whole
Motivations for a computational view is substrate independency. Turing's formulation of computation is formulated in terms of it being independent of any particular material.
The elevator pitch, really, is that we've kind of forgotten that the idea of the brain as a computer is a metaphor and not the thing itself. It's a sort of marriage of mathematical convenience. And the closer you look at real biological systems, as Mike's work beautifully exemplifies, the less this idea of substrate independency makes any real sense. There's no bright line
in a brain, or a biological system in general, between what you might call the mindware and the wetware, between what it does and what it is. And if there's no clear way to separate, in a system, what it does from what it is, then it's very much less clear that one should think that computation is all that matters. Because for computation to be all that matters, you kind of have to have this sharp separation between
software and hardware, between what it does and what it is. And if you can't do that, there's less reason to think that computation is what matters; and if there's less reason to think that, then there's equally less reason to think that you could implement what matters, in a substrate-independent way, on something else. You can of course still use computers to simulate a brain in whatever level of detail you want, but that's
That's neither here nor there. It's a very useful thing to do. We both do this. We do this all the time. But you can simulate anything using a computer. That's one reason computers are great. But that doesn't mean you will instantiate the phenomenon. You only do that if computation really is all that matters. And I think that's very much up for... I think it's been a very deeply held assumption, but I think it's likely wrong.
Yeah, I mean, I agree with everything that Anil said, but I take it in a slightly different direction. So I think it's critical to remember that, yeah, everything we think about as computation is a metaphor. It's a formal model. And so we have to ask ourselves: what does this model help us do, and what is it hiding? In other words, what is it preventing us from seeing? And I agree that this metaphor, the computational paradigm and the notion of algorithms and so on, does not capture everything we need to know, and need to use technologically, about life. But I make the additional, really weird claim that I don't think it captures everything we need to know about machines either. In other words, we tend to think, at least the people I meet tend to think, that we have this set of metaphors that are for machines and their algorithms,
and they don't really apply to biology. Certainly people say, well, they don't apply to me: I'm creative, and whatever else. But there is a corner of the universe that is boring and mechanical; it only does what the algorithm says it should do, and for those kinds of things, these metaphors are perfect, they capture everything there is to know. So I agree with Anil on the first part, but I doubt the second part. I think that a lot of what we have
in our theories of computation is a pretty reasonable theory of what I call the front end. I think most of what we deal with are actually thin clients in a certain sense. They're interfaces to something much deeper, which we can call the platonic space. I don't love the name, but I say it that way because then at least the mathematicians know what I'm talking about. But I think that even, and we have some work already published and more work coming soon in the next few months on this, showing that
yeah, the standard way of looking at algorithms doesn't even tell the full story of so-called machines. And so whatever it is, and I have guesses, but of course we don't know, whatever it is that allows mind to come through biological interfaces and not be captured by these formal models,
I think it comes through these other systems that we call machines, and certainly cyborgs and hybrids, also. So I think they get some of the magic too. It's not going to be like us; it's going to be different. But I don't think they escape these ingressions either. This is why I find Mike's work so interesting: it's provocative in this direction. And I think he summarizes what I said very well, which is that
we underestimate the richness of biological systems if we force them into what's often called the machine metaphor, by which we really mean that all that matters is this sort of Turing computation, algorithm thing. But I think it is equally true that we limit our imagination about what machines might do by doing this. And there's a whole kind of alternative history of AI which was really grounded in 20th-century cybernetics,
much more to do with dynamical systems, attractors, feedback systems, all things you can still simulate computationally, but which are fundamentally not arising from the algorithmic way of thinking about things. There are also really interesting mathematical properties, like emergence and so on, which I think can both help us understand, but also might be design principles
for machines of various kinds which, again, don't really fit into an algorithmic view of things. And Mike's work beautifully shows that even something we think of as paradigmatically algorithmic, correct me if I'm wrong, like bubble sort. So this is an algorithm that anyone in computer science learns when one learns to code, to sort things into a particular order, and it has really interesting emergent properties that other things can be built on top of.
So yeah, for me it's like there's this nice iterative back and forth, where we can learn to think of both biology and machines differently. And of course that might give us richer metaphors, through which we can use one lens to understand the other.
Would you say then that we have the idea of machine, and that a Turing machine is a strict subset of that idea of machine? I mean, a Turing machine is an abstraction, right? Turing machines were never sort of supposed to exist, you know, as things. They've got infinite tape and things like that. So you've got, you know, a Turing machine. The idea is that, in one sense, you're mapping a bunch of numbers onto another bunch of numbers.
And then the universal Turing machine does this through this moving head and an infinite tape. It was never really supposed to exist as a physical machine. I think that's where part of the problem has sort of come from. But an algorithm in that sense, yeah, I think that's a subset. When you realize a Turing machine, that's a subset of possible machines.
Yes, when you realize a Turing machine, it will be a subset of all possible machines just because it's a particular Turing machine. But when you realize a universal Turing machine as well, that's also a subset of possible machines. So if you don't mind spelling out for the audience the idea of hypercomputation: would you then say that biological creatures or cells or what have you are doing something that is hypercomputational? And feel free to take this in a different direction, Michael, as well, if you like.
Would you tell me what you're thinking of when you use the word hypercomputation? I've heard it used to imply different things. So if something can solve the halting problem, it would be an example of a hypercomputer, something that can decide problems that a Turing machine or a universal Turing machine can't. Right. So sort of super Turing in some sense. That's one way in which machines can be non
Turing, or escape the Turing world. But I think there are many other systems that are just not captured by this framework. They don't have to be based on the halting problem. Strictly, anything that is stochastic, anything that is continuous, is beyond this world of strict universal Turing machines. There are all kinds of extensions that try
to go beyond that, but there are also functions that things do that necessarily involve particular material substrates. So take something like metabolism. Metabolism is not mapping some range of numbers, whether they're continuous or random, onto other numbers. It involves actual transformation of a particular kind of substance into another kind of substance. That's just
non-Turing in what is a fairly trivial way, and that kind of thing might be very important for particular classes of machines or systems, whether they're biological or not. So I think there are different spaces of what you might call non-Turing processes. Only some of these are the hypercomputation, halting-problem-solving kinds of things, where you might say you've got some sort of fancy quantum stuff going on.
But opinions differ about that, right? I mean, there are some people that would say that, unless you're talking about hypercomputation in the sense you've mentioned, everything else is sort of a relatively feasible extension of Turing as is. So there's definitely debate in that area.
I would go in a slightly different direction and emphasize something that does not lean on quantum mechanics, does not lean on stochasticity, and does not lean on hyper-Turing computation
or anything like that. And also, let's even step back from the conventional move that living things are so complex that you can always find more mechanisms and so on. I want to look at an extremely minimal model, and the reason that we chose this was precisely because it's such a minimal model. I wanted to sort of maximize the shock value
of this thing for our intuitions. And this is the work that my student Taining Zhang and Adam Goldstein and I did on sorting algorithms, which is what Anil mentioned. And there are a couple more things like it coming in the next few months. The sorting algorithms are things like bubble sort, selection sort, these kinds of things. CS students in CS 101 have been studying these for, I don't know, 60 years probably.
No one noticed what we noticed because the assumption has always been this thing does what we asked it to do.
And a lot of what I'm trying to emphasize is specifically running against that assumption that, yeah, it sorts the numbers all right. But if you back off from this assumption that all it does is what the steps of the algorithm ask it to do, then you find some new things. And computer scientists are well aware of emergent complexity, emergent unpredictability, cellular automata do all kinds of funky things, and some of the rules are chaotic and all this kind of stuff.
That's not what I'm talking about. I'm not talking about emergent complexity, unpredictability, or even perverse instantiation, which A-life people find all the time. I'm talking about things that any behavioral scientist would recognize as within their domain if you didn't tell them that this came from a deterministic algorithm. And so I can go into details if you want, but a couple of things are salient here.
What these algorithms are also doing, while they're sorting your numbers, are a couple of interesting, I call them side quests, because there are no steps in the algorithm asking them to do this. In fact, if you tried to write an algorithm to force them to do it, it would be a whole bunch of extra work, which is actually quite interesting, because I think we're getting free compute here. That's a whole other thing, and it's a nice testable prediction because it's so weird and unexpected.
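As a toy of the stance Michael is describing, watching what an algorithm does beyond its output, here is a plain bubble sort instrumented to record its whole trajectory, so intermediate "behavior" can be measured. This is just a sketch of the idea, not the analysis from Levin's sorting paper:

```python
def bubble_sort_trace(arr):
    """Bubble sort instrumented to record every intermediate state,
    so the process itself can be studied, not just the final output."""
    a = list(arr)
    states = [list(a)]
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                states.append(list(a))
    return a, states

def sortedness(a):
    """Fraction of adjacent pairs already in order: one simple
    'behavioral' readout of an intermediate state."""
    return sum(a[k] <= a[k + 1] for k in range(len(a) - 1)) / (len(a) - 1)

final, states = bubble_sort_trace([5, 1, 4, 2, 8, 3])
print(final)  # → [1, 2, 3, 4, 5, 8]  (the output everyone looks at)
print([round(sortedness(s), 2) for s in states])  # the trajectory in between
```

The point of the instrumentation: nothing in the six-line sort asks for any particular trajectory, so readouts like `sortedness` over time are exactly the kind of "between the steps" observable a behavioral scientist could study.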
They are doing some other things that are not directly related to what you've asked them to do. That's really important, because it bears on these language models, for example.
People tend to assume that the thing that the language model talks about is some kind of clue as to its inner nature, right? And people say, well, you know, my GPT said to me that it was conscious, or wasn't conscious, or whatever. My point is that the thing you force it to do may have zero to do with what's actually going on. Now, in biological systems, that's not true, because evolution, I think, works really hard to make sure that the signs and the communications that we make
are related to our internal states and things like that. So in biology those things are tied closely together. I think with these systems we've disconnected that, and what we are now making are things that
look like they're talking, and whatever, and they are. But I'm not sure any of those things are at all a guide to what's going on inside. And if a dumb bubble sort, which is six lines of code, fully deterministic, nowhere to hide, if that thing is doing things that we did not expect and did not ask it to do, and by "ask" I mean there are no steps in the algorithm to do what it's doing,
then who knows what these language models are doing, but I'm pretty sure that just watching the language output is not a really good guide to what's happening. I think we have to go back to the very beginning and we have to apply the kinds of things that Anil was talking about, which is basic behavioral testing in various spaces. I think our imagination is really poor at this. I think we have to just be really creative as far as asking, what is this thing actually doing?
Specifically in the spaces between the algorithm. Because, and this is a crazy kind of analogy that I came up with the other day, you know this notion of steganography? In steganography you take, let's say, a piece of data. Let's say it's an image, a JPEG, and it looks like whatever it looks like.
There are bits within that image that, if you were to change them, it wouldn't look any different, right? There are some degrees of freedom in there; you can move things around and the image would still look the same. And so what people do is they hide information in there. Maybe it's your signature, that you are the one who took the picture, or
maybe it's a code, because you're a spy. Whatever it is, you hide information in there. But the iron rule is you can't mess up the primary picture. You can sneak stuff into the degrees of freedom, but you can't mess with the primary data pattern, because then it will be obvious that something's there. I kind of have a feeling that this is what's going on, not just with computer algorithms, but with everything. There is the primary thing it's supposed to do,
and anything else that it gets to do has to be compatible with that primary thing. It isn't magic: you can't break the laws of physics, you can't go against the algorithm, right? You're not doing things that the algorithm forbids. But it turns out, I think, that there are these weird
empty spaces between the algorithm where you can do things. And, I mean, doesn't that describe to some extent our existential situation? You have a certain bit of time in this world, you have to be consistent with the laws of physics, and eventually your physical body gets ground down to entropy and whatever. But until then, you can do some cool stuff that isn't forbidden by the laws of physics, nor is it prescribed by those laws, I don't think. And so this is what I think is really interesting about these things. And the algorithm itself, to the extent that
it has to follow the algorithm, that limits what else it can do. In a sense, what it's doing is in spite of the algorithm, not because of it. And so I agree with Anil here: I'm not a computationalist. I don't think anything is conscious because of the algorithm. If anything, I think the mental properties it has are in spite of the thing we force it to do.
I'll stop here, but one thing which I'm most proud of in that paper, that I think was kind of cool, is that we figured out a way to let off the pressure on the algorithm a little bit, to see what would happen. Now, how would you do that? It has to follow the algorithm; how could you possibly let off the pressure? What we did was we allowed duplicate numbers in the sort. You still have all the fives ending up before the sixes, and those before the sevens and so on, but how you arrange those is now
not really constrained by the algorithm. You don't change the algorithm; you just allow multiple repeats within the array. And what did we see? We saw that the crazy thing it was doing, which I call clustering, I could tell you what that is, but it doesn't matter, went up higher than when we didn't let it do that. And so I really think, and this comes back to the AI thing, I really think that it's a lot like raising kids in the following sense. To the extent that you force them to do specific things,
you squelch down on the intrinsic motivation. A kid that's forced to be in a class all day, you're not going to get to see what else he would be doing otherwise; maybe he's out playing soccer, who knows what it would be. And so to the extent that we force these things to do specific things, we are reducing what else they might do. And what we need to develop is the tools to detect and to facilitate this intrinsic motivation. And then you get into alignment and all of that.
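Michael's steganography analogy from a moment ago can be made concrete with a least-significant-bit sketch: the "primary picture" (here, just a list of pixel-like byte values) is essentially untouched, while a message rides in the leftover degrees of freedom. This is a toy illustration, not a robust steganography scheme:

```python
def hide(pixels, message):
    """Hide message bytes in the least-significant bits of pixel values.
    The 'primary picture' barely changes: each pixel moves by at most 1."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("not enough pixels to hold the message")
    out = list(pixels)
    for k, bit in enumerate(bits):
        out[k] = (out[k] & ~1) | bit  # overwrite only the lowest bit
    return out

def reveal(pixels, n_bytes):
    """Read the hidden message back out of the least-significant bits."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(sum(bits[b * 8 + i] << i for i in range(8)) for b in range(n_bytes))

pixels = list(range(200))  # stand-in for image data
stego = hide(pixels, b"hi")
print(reveal(stego, 2))  # → b'hi'
print(max(abs(a - b) for a, b in zip(pixels, stego)))  # → 1
```

The "iron rule" from the analogy is visible in the code: the primary data pattern is preserved to within one gray level per pixel, and everything extra lives in the bits the primary task does not constrain.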
This reminds me a lot of when I was doing my postdoc 20 years ago, being told by my mentor at the time, Gerald Edelman, about the distinction between redundancy and degeneracy. I think this is very apposite here. So, you know, engineering people often talk about having redundancy within a system. So if a system is designed to do something, to follow some steps in an algorithm,
well, then you might want multiple copies, in case something goes wrong, so you have a backup. But the backup is doing the same thing; it's redundant in that sense. Biological systems don't seem to be like that. They exhibit degeneracy rather than redundancy. That is, they may have multiple ways of doing the same thing in context A, but in context B, these multiple ways of doing the same thing now do different things. So this is hinting at the same thing:
that although it looks like they're doing the same thing, there are actually some spaces in between somewhere, which you won't see unless you look in different contexts; otherwise you'll only see the same process, which might look like an algorithm. And it's that degeneracy
that gives biological systems their kind of open-endedness, their ability to adapt to novel situations and so on. It might be related to what Mike is calling intrinsic motivation: that you have to have some kind of degeneracy, rather than redundancy, in systems. What's interesting to me is that people, with the exception of a few diehard reductionist materialists or whatever, are generally pretty willing to grant living things that.
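The redundancy/degeneracy contrast Anil draws can be shown in a few lines. The two "units" below are hypothetical stand-ins: in context A they are interchangeable (redundancy), while in context B the very same units diverge (degeneracy):

```python
def unit_a(x, context):
    # In context "A" both units implement the same mapping.
    return x * 2 if context == "A" else x + 10

def unit_b(x, context):
    # Identical to unit_a in context "A", different in context "B".
    return x * 2 if context == "A" else x * x

# Context A: the units look like redundant copies of each other.
print(unit_a(3, "A"), unit_b(3, "A"))  # → 6 6

# Context B: the same units now do different things: degeneracy.
print(unit_a(3, "B"), unit_b(3, "B"))  # → 13 9
```

As Anil notes, an observer who only ever tests context A would conclude the system is redundant; the hidden behavioral repertoire only shows up under a change of context.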
Right. And they're okay with saying that living things, especially brainy living things, get to do some of those things. But what I'm now finding is that people get very upset when I suggest that the same thing might be true all the way down. It seems to be very important that we have this distinction: no, that's the dead matter and the mere machines; we are special, we can do this thing. And my point is not, I'm not trying to mechanize
living things. I'm going in the opposite direction. I'm saying there's not less mind than you think there is; I think there's more. But actually, especially people, you know, kind of organicist thinkers who really resist the mechanization of life and all that stuff, they get really upset at this last part, because,
I suppose, we're not as special if it goes all the way down. I'm not sure. I think, you know, there's some kind of scarcity mindset: that there's just not enough mind for all of us. Maybe. I think it might be that there's still this worry that, even if you take, say, bubble sort again: bubble sort is still implemented on standard computers, right? So one way of
potentially misunderstanding what you're saying is that you're then basically allowing computational functionalism in by the back door again, in some ways, by saying: look, you know, an algorithm like bubble sort actually has all the things that you need, or it has so much more going on than one might think. So let's not be too quick to rule out substrate-independent algorithms as sufficient for
other things that might otherwise seem hard to explain. Well, I think you're right, and I think people could, but that would be a misinterpretation of what I'm saying. I am not saying that it's doing that because of the algorithm, right? So the standard computationalist theory is: you are conscious because your algorithm is doing workspace theory, or whatever it's doing, right? That's why you are conscious. I'm saying the exact opposite. I'm saying that
even in something as stripped down and forced into a stupid algorithm as this, there are still spaces through which whatever this is, you know, whatever this magic is that we're talking about, is able to squeeze in even there. There are minimal versions of it that will shine through even there. And if you provide a different interface, and I don't want to just say more complex, because I don't think it's just complexity, maybe it's materials, maybe it's some other stuff,
But if you provide better interfaces, such as living materials, well then, sure, you'll get way more. But this stuff seeps into even the most constrained systems, I think. So let's get to aliens, Michael. I don't know what to say. People email me sometimes asking to talk to my alien handlers. There's that. But I don't know anything about aliens other than to say that
It seems implausible to me, not being an expert on exobiology or whatever, that the only kind of life or cognition is the kind we're familiar with here. I expect that elsewhere in the universe there will be extremely alien forms of mind, not carbon-based, and it can get even weirder than that, not the kinds of things that we're used to here. I think our imagination is terrible for that kind of thing. I mean, sci-fi does okay sometimes, but anything that's tied to the specifics of life on Earth is, I think, almost certainly too narrow as a criterion for these kinds of things. I always go back to the Fermi paradox: where is everybody? Which always worries me, because it just sort of suggests something to me.
I think it's also very implausible that we're the only example of life, but then the evidence for intelligent life that has been able to broadcast structured energy out into the universe seems lacking. Where the hell is everybody? So one conclusion from this is that life might be very prevalent, certainly not only here, but that it is quite difficult to get life to the stage where it lasts long enough to persist and become cognitively sophisticated. I have no idea, and I find that existentially concerning, and just a great shaker of the snow globe for reminding us that we really need to take care of our own planet and civilization first, because it might not be very common to get to the kinds of things we are, even if
it's exotic in a different way somewhere else. I think the universe is much more likely to be filled with grey goo than, you know, Mike Levins with eight legs in octopode form. So, Anil, if I were to take your cells and put them into a dish, some would form xenobots, some would die, most probably would die, and some may just wander about, or what have you.
Have you become multiple agents at that point or were you always multiple agents pretending to be one? I don't think pretending to be one. I think it's an excellent question.
Whether you can have multiple coarse-grainings of agency simultaneously is, I think, quite interesting. I don't see why not, in a sense. I think there can be sub-organismic levels of agency in my constituents, but there's something sort of enslaving about these finer grains of description in things like organisms: the parts pull together as a whole in a way that doesn't happen if you dissociate me into my constituent cells. So I don't see a contradiction between cells having agency and an organism having agency and a society having agency.
and perhaps a global society having some kind of agency, these things can all coexist and have a reality simultaneously, but they will affect each other. So agency at a macro level will probably constrain the agency that's available at the micro levels. And you have a book on consciousness, which I'll place on screen and a link in the description right now. So you've probably heard of the identity theory of consciousness.
My understanding is that it just says mental states are simply the same as physical states: they're not caused by them, they're not emergent from them, they are just identical to them. What do you make of that? I'm curious for both of you. Why don't you go first, Anil? I think things like identity theory are more metaphysical positions than actual theories.
For me, I like to wear metaphysics lightly, if at all. I don't think you get very far. To say that a mental state, or a conscious state, is identical to a physical state, I mean, who knows? In some sense it might be trivially true; in another sense it might be absolutely, completely wrong. But what I do think is that it doesn't give you anything in particular to do or anywhere to go. So instead of arguing about whether theories like that are correct or incorrect, I prefer to ask whether they're useful or not useful. And I don't think identity theory is that useful. I'm broadly a pragmatic materialist, which is to say that I'm pretty convinced that conscious states have something to do with physical stuff.
And we certainly know empirically there are correlations and causal relations: if you do something to the brain, something will happen in conscious experience, at least in human beings. Who knows, maybe consciousness is more general than biological systems. But I think pragmatic materialism is a productively useful stance, and we can go about the business of trying to explain properties of consciousness in terms of properties of biological systems.
And we'll see how far we get. And this depends. Then we have to face the question of what are the properties of biological systems that give us explanatory predictive grip on properties of consciousness. For a bunch of people, the assumption is it's just the computations to bring us back to the early part of the conversation. But there could be many other things that actually give us explanatory and predictive grip about consciousness that aren't the computations.
That's the view that I'm interested in exploring, and we'll see whether it's useful or not. Yeah, I agree with that. I think it's less a theory than it is a linguistic claim; you're just saying something about the definitions. I find it kind of unhelpful. It's a little bit like saying of airline ticket prices: what are those? Well, let's associate them with some physical states. And what explains them? Well, the constants at the beginning of the Big Bang plus some randomness. In a certain sense, kind of. In another sense, how much insight are you going to get into why these prices are going up or down if you have this view? I think probably zero. And so, like Anil, I'm interested in metaphors, and I think that all these things are metaphors, but I'm interested in metaphors that help us discover new things.
I don't see how equating them linguistically with physical states does the trick. I don't think that works in biology for the cognitive, non-consciousness-specific things, and I don't see it helping here either.
Mike, you said you had some questions about split hemisphere patients for Anil. Well, it's not so much specifically about split hemisphere patients, but it's the thing I brought up in email. I was listening to a talk, I forget whose talk it was, and somebody was saying, look, there are all these unconscious processes during reading, during driving, whatever. And I was just curious what you think about that, because it seems to me critical to ask: conscious to whom? In other words, they might well be unconscious to the main left hemisphere, or whatever it is that's verbally reporting, "Insane, wow, I drove all the way from home to my office and I wasn't conscious of any of that." And so you say, okay, great, there's this unconscious processing. But well, it's not conscious to you.
But neither are my conscious states conscious to you. So how do we know that all of these things aren't conscious to the subsystems of the brain and mind that execute them? How do we know they don't have an experience they can't verbalize? So I was just curious about that, because it seems like it's just a foregone assumption, and it seems like really begging the question if we just assume that because you don't feel them, they don't have experiences. And the reason it's of interest to me is that that's what people say about our body organs too, right? My claim is that, for the exact same four or five reasons that we give each other the benefit of the doubt about consciousness, you should take that seriously about your various body organs. And people say, well, I don't feel my liver being conscious. Of course, you don't feel me being conscious either. So I was just curious: what do you think about that?
Yeah, just so people listening know, we'd started this nice dialogue by email a couple of days ago. I think it raises some really important questions about how we use the words. Unfortunately, I do think it's a little bit linguistic here. We talk about the conscious and the unconscious, and of course they mean different things in different contexts. So when it comes to, let's say, split hemisphere patients, the intuition is that there are two separate conscious agents, only one of which has the ability to behaviorally report through language what it's experiencing. But it's partly because each hemisphere has kind of the full complement of resources that one might think of as necessary that this becomes a plausible position. Then there are other uses of conscious versus unconscious; there's a whole history of them.
A lot of the history of consciousness science is trying to contrast conscious with unconscious perception. So, you know, you'll show an image and somebody will say, yeah, I see it. And then you mask it in some way, manipulate it in some way, and people say, no, I didn't see it, but you can still see parts of the brain responding. And the logic is: the contrast you get between when something was consciously seen and when the same image or the same sound was not consciously experienced, if you look at that difference in the brain, that difference has to do with consciousness. That's the whole strategy of looking for the neural correlates of consciousness. But then you might ask, well, how do you know that the unconscious perception was in fact unconscious? It may have just been unconscious to the subject as a whole, but there may have been an inaccessible conscious experience happening. So I think this is logically perfectly possible, but then how do you link that to anything beyond a brute correlation? You have to come up with some theoretical reason, and that will depend on your theory. A theory like global workspace might say, okay, look, the reason the conscious perception was reportably conscious was because it engaged the global workspace, and the theory is that things are conscious in virtue of accessing this global workspace. So you have some theoretical reason for saying that the unconscious is in fact unconscious. But of course then you risk a little bit of circularity, right? Your evidence for global workspace is based on the theoretical explanation that makes one conscious and the other not conscious. So you have to have multiple sources of evidence. All this to say,
It's a very good question, and it came up in the thing I mentioned right at the beginning. We have these hemispherotomy patients in whom parts of the brain are completely disconnected. So they, by definition, can't respond to things; they can't generate any response. They're sort of the opposite of language models in this sense, right? They can't give us any persuasive behavioral evidence because they're not connected to anything. Yet they are part of a brain that was at one point conscious.
And all that's happened, really, in the limit (I mean, they're damaged as well, there are other things going on) is that they've been disconnected. So plausibly, at least for me, they're more likely to be conscious but inaccessible, much more likely a priori than a language model is. And so we have to find indirect ways of trying to assess the likelihood of consciousness in these very disconnected hemispheres. And to cut a long story short, very short, because I know you've got to go in a sec, Mike:
When we look at EEG, and this is work done with colleagues at the University of Milan, it looks like these isolated hemispheres are in states of very, very deep sleep. So we see very prominent slow waves, steep spectral exponents, and so on. But how do we know that that is in fact unconscious? Because there are a few examples in human beings where we actually see slow waves at the same time as consciousness, under DMT, for instance, and things like that.
So it's iterative. It's very hard to be definitive, and it's an excellent question. I think we won't know until we start looking at systems radically different from a psychology undergraduate looking at a monitor, which we still do, and that's very useful, but we have to look at these other things as well. We don't really know what assumptions we're making when we interpret the data. If you just look for the car keys where the light is, you might miss the bigger picture.
Now Mike, before you get going, suppose I gave you both unlimited resources to design some experiment. What would you create?
Well, fundamentally, I think we need a closed-loop environment in which to exercise all kinds of systems, the xenobots and Anthrobots are just the beginning, there are so much weirder things we're looking into, such that we might be able to recognize new kinds of cognitive preferences, goals, and competencies to which we're otherwise blind. I mean, you could imagine
making this thing enormously rich and complex. Well, obviously what Mike described would be fun to do too, but other than that, if you think about where progress in the adjacent possible might be most rapid: what we've lacked in neuroscience is the ability to look at high resolution in time and space, across much of the brain at the same time, measuring from many neurons simultaneously, in systems that we know are conscious, higher primates and other things. And there are just massive advances now, I think, in invasive neurophysiology and in different kinds of neuroimaging methods, optogenetics being one of them.
But I think really doubling down on manipulation and recording, at high spatiotemporal resolution and coverage simultaneously, coupled with the development of new mathematical tools to understand these kinds of complex data sets. That's where I'd go. Lots to do there. Many of the people who watch this podcast are specialists in computer science, math, physics, philosophy.
I was going to ask just about advice for researchers, but you can frame it as advice to everyone. What advice do you have?
You know, I think for students it's super important to curate your curiosity. I mean, I started with this very general curiosity about consciousness, but then I think it was important to allow that curiosity to find other branches that end up coming together in different ways. I got very interested in other things too, in cybernetics, in things that at the time didn't seem to have
much to do with this big question. But I think one way to carve out a successful career is to put different pieces together, to gain skills, both techniques and methodologies but also conceptual toolboxes, that you can then reassemble in ways that other people might not have had the opportunity to. So really there are two interconnected things: don't lose sight of the big picture of what you want to do, but also be flexible, try to develop curiosity about adjacent things that might come in handy, and learn to do stuff. I think
many advances in science have come about through advances in methods first, and if we learn methods, we will learn the right questions to ask. And I think that's maybe the thing I'm still trying to learn as a researcher, which is the thing I find really hard: finding the right questions, not finding the answers to the questions that you have.
That for me is still the real struggle. Can you give an example of a method, for instance, that you wish you had learned earlier in your career, or just a general example of something that would be beneficial to a student? And then, on asking questions: an example of something where you were pursuing the answer but realized there was a better question to ask.
So I'll try to give examples that connect both of these things. An example of something I wish I had gained some expertise in earlier is psychophysics. This is the standard experimental thing; I caricatured it a bit earlier, undergraduates sitting in front of a monitor pressing buttons and so on. But the methods of psychophysics are probably the longest-established experimental methods for studying consciousness.
How do we interpret data from people pushing buttons when you show them things? It's very simple, but there's a huge amount of literature that goes back to the 19th century. And I made, I think, a ton of mistakes, and certainly had a ton of inefficiencies, kind of improvising my way through this literature and through my own work, because I hadn't gained the skills early enough. So that's one example,
something I wish I'd done differently. I think it would have allowed me to ask better questions experimentally. The thing that went well was that I picked up, trained myself in, and then asked other people to help me learn information theory and Granger causality modeling. This is a mathematical framework for understanding information flow and causal interactions between nodes of a network in complex systems. At the time I encountered these methods, in the early 2000s, they were primarily still used in economics and econometrics, not in neuroscience. There were a couple of papers basically saying, hold on a minute, we might be able to apply these methods in neuroscience. And I just got curious about that, not because I thought there was a big clue to consciousness there.
But I thought, hold on, that's really interesting. People tend to look at coherence or mutual information or correlation between brain regions, but those don't capture directed, causal information flow: arrows, lines with arrowheads that don't necessarily go both ways. So I was lucky to know people who could help me learn this stuff, and it's become quite a strong part of
what I've done over the years, now working with mathematicians who know this stuff much better than me. We've done a lot in applying these methods in neuroscience and in giving people the tools to apply them for themselves. And it's also fed back into other questions to ask; this is the other example, so different questions, right? One question that I've been asking for years, and I think it's getting somewhere: the toolbox of information theory and Granger causality has actually turned out to be very useful in figuring out how to come up with measures of emergence, measures that allow us to ask questions about emergence in a more quantitative and operational way.
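The core idea of Granger causality that Seth describes, whether one signal's past improves prediction of another beyond that signal's own past, can be sketched in a few lines. This is an editorial illustration, not his lab's actual pipeline: a plain least-squares autoregression with made-up variable names and a toy driven system.

```python
# Minimal pairwise Granger causality sketch: does x's past help predict y
# beyond y's own past? Fit two autoregressive models by least squares and
# compare residual variances. Illustrative only, not a production estimator.
import numpy as np

def lagged(v, lags):
    # Rows are time points; columns are v[t-1], ..., v[t-lags].
    n = len(v)
    return np.column_stack([v[lags - k : n - k] for k in range(1, lags + 1)])

def granger(x, y, lags=2):
    target = y[lags:]
    restricted = np.column_stack([np.ones(len(target)), lagged(y, lags)])
    full = np.column_stack([restricted, lagged(x, lags)])
    def resid_var(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ beta)
    # Log ratio of residual variances: > 0 means x's past adds information.
    return np.log(resid_var(restricted) / resid_var(full))

# Toy system in which x drives y with a one-step delay.
rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.standard_normal()

print(granger(x, y))  # large: x's past helps predict y
print(granger(y, x))  # near zero: y's past does not help predict x
```

The asymmetry between the two directions is the "arrow" Seth mentions; real analyses add model-order selection, multivariate conditioning, and significance testing on top of this basic restricted-versus-full comparison.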
I remember you and a few other people had a paper on this within the past two years or so, correct? That's right. I've been working actually with two different groups of people on two different approaches. The main one is with my colleague Lionel Barnett, who I've worked with for many years now, who's a mathematician. I actually wrote a paper on this 15 years ago using Granger causality to measure emergence and I was very pleased with myself at the time. I thought this is great. Here's this concept and here's a way to
Implement it mathematically and it kind of got a bit of attention but not much and then Lionel pointed out to me that it was basically flawed in all sorts of ways and came up with a related idea that does something much more rigorously and it's a slightly different thing and we're still working on it to figure out how to extend it but
It's mathematically a much more serious enterprise now, but what it does basically, it says, okay, you've got a complex system. An example that's often used is you have a flock of birds. There may be birds flying around in the sky and sometimes that looks like they're flocking and other times it doesn't. Can you quantify that? And of course you could say, well, it's in the eye of the observer. Fine, it's in the eye of the observer, but so
Basically, you know, so is everything, really; there's still a difference between a flock and a non-flock. And if we can quantify that and generalize it, maybe there's something about neurons that has an essence of this flockiness, but not in three-dimensional space; in some other dynamical space, some other dimensional space. And the approach to this that Lionel and I took
was to come up with a measure we call dynamical independence, which is when a zoomed out level of description, a coarse graining as physicists like to say, a higher level of description of a system, if its evolution over time is statistically independent of what its constituent parts are doing, then it in some sense has a life of its own.
then it is in some sense emergent, dynamically independent. And it turns out that the utility of this approach is that we can apply it in a purely data-driven way without making any presuppositions of saying, oh yeah, there's a flock, is it emergent? We can just identify potential emergent properties in a system and see how they look in different states.
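The idea of a coarse-graining with "a life of its own" can be illustrated with a toy linear system. This is my simplification in the spirit of what's described, not Seth and Barnett's actual dynamical independence measure: a macro variable counts as (approximately) dynamically independent when its own past predicts its future as well as the full micro-level past does.

```python
# Toy illustration of a dynamically independent coarse-graining (an
# editorial simplification, not the published measure). Two micro
# variables evolve as x[t+1] = A @ x[t] + noise; the macro is m = x1 + x2.
import numpy as np

def micro_gain(A, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros((n, 2))
    for t in range(n - 1):
        x[t + 1] = A @ x[t] + rng.standard_normal(2)
    m = x[:, 0] + x[:, 1]
    target = m[1:]
    macro_past = np.column_stack([np.ones(n - 1), m[:-1]])
    full_past = np.column_stack([macro_past, x[:-1]])
    def resid_var(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ beta)
    # > 0 means the micro past carries extra information about the macro future.
    return np.log(resid_var(macro_past) / resid_var(full_past))

# Equal column sums: m[t+1] = 0.9 * m[t] + noise, so the macro runs on
# "a life of its own" and the micro details add (almost) nothing.
A_independent = np.array([[0.5, 0.3], [0.4, 0.6]])
# Unequal column sums: the macro future depends on how activity is split
# between the parts, so this coarse-graining is not autonomous.
A_entangled = np.array([[0.9, 0.0], [0.0, 0.2]])

print(micro_gain(A_independent))  # close to zero
print(micro_gain(A_entangled))    # clearly positive
```

The data-driven version discussed in the conversation effectively searches over candidate coarse-grainings for ones where this kind of micro-level gain vanishes, rather than assuming the macro variable in advance.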
And where we're at right now, for me, is a hugely exciting thing, actually. Often people say, well, conscious states are emergent from their neural underpinnings; the conscious brain is in some sense more than the sum of its parts. That all sounds very nice, and I'm sure I've said stuff like this many times before. But now, with the tools that Lionel developed and applied with a PhD student of ours, Thomas Andrion, who's also working with others in Paris,
we find something quite different, actually: when the brain is in a conscious, wakeful state, there's less prominence of these dynamically independent coarse-grainings than when the brain is unconscious under anesthesia. The slogan would be: not what we were expecting. Emergence is lower in consciousness than in unconsciousness. That would not be what I would have predicted a few years ago, or even two years ago or one year ago. But when we operationalize emergence this specific way, with this specific data, that's what we find. And that raises other interesting questions, and I think this is the beauty of actually
operationalizing these things, making them quantitative, because now we have another set of questions. Maybe it's because in the conscious state, when you don't have emergence in the way we're quantifying it, what you actually have is something called scale integration: what's happening at the macro level and what's happening at the micro level are much more interdependent; there's much less separation of scales. And this takes us right back to what we were talking about with Mike, and indeed the whole idea of conscious AI. I said right at the top that in brains it seems harder to separate what they do from what they are. In a sense, this is a way of quantifying that hardness, and it seems that when the brain is conscious, it's even harder to separate what it does from what it is: you have this deeper integration of scales, vertically, not across time or across space but across levels of
description of the system. And so for me, this is opening up a whole range of questions that haven't really been asked; certainly I haven't asked them before. It's a different way of looking at a system like this, and it all turns on having this mathematical method available. And for me, that goes right back to the serendipity of being curious about Granger causality 20 years ago. Hmm.
There's some research that says that when one takes psychedelics, it probably depends on the psychedelic, that the brain is less active even though your conscious experience is greater somehow. Is this related to that or have you not studied emergence when it comes to the brain under psychedelics?
It's a little related. We don't have the license to collect our own data under psychedelics, but we've collaborated with people like Robin Carhart-Harris and others who have. We have not yet applied this same measure that I was just talking about to the psychedelics data, though it's very much on the cards; there's no reason we can't.
What we have done is we've applied other measures that have often been used in things like sleep and anesthesia as well that measure what we call signal diversity. And the story here is that when you lose consciousness, your brain activity seems to become more predictable. So the repertoire of states that it inhabits is lower.
When we applied this, this was now nearly
eight or nine years ago, to data from psilocybin and LSD, we found the opposite: brain activity became even less predictable. More diverse, more different patterns, less compressible, higher levels of complexity. So that's one clue, but to me it's still very preliminary. This method of measuring signal diversity is quite precarious: if you do it a different way, you tend to get different results. And there are other things we looked for that we didn't find in the psychedelics data set. I was expecting to see, for instance, much greater information flow from the front of the brain to the back; I thought that might explain things.
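The "signal diversity" measures Seth refers to are typically Lempel-Ziv-style compressibility scores computed on binarized brain signals. Here is a toy version: an LZ78-style phrase count rather than the exact LZ76 algorithm usually used in this literature, applied to synthetic stand-in signals rather than EEG.

```python
# Toy "signal diversity" in the spirit of Lempel-Ziv complexity measures on
# binarized EEG/MEG. Simplified LZ78-style phrase counting, not the exact
# published algorithm; the signals here are synthetic stand-ins.
import numpy as np

def binarize(signal):
    # Threshold at the median, a common choice for these measures.
    med = np.median(signal)
    return "".join("1" if s > med else "0" for s in signal)

def lz_phrases(bits):
    # LZ78-style parse: each phrase is a previously unseen string built by
    # extending a seen one. More phrases = less compressible = more diverse.
    seen, w, count = set(), "", 0
    for ch in bits:
        w += ch
        if w not in seen:
            seen.add(w)
            w = ""
            count += 1
    return count + (1 if w else 0)

rng = np.random.default_rng(0)
n = 4000
t = np.arange(n)
regular = np.sin(2 * np.pi * t / 400)  # highly predictable oscillation
noisy = rng.standard_normal(n)         # maximally diverse signal

print(lz_phrases(binarize(regular)))
print(lz_phrases(binarize(noisy)))  # higher: noise is less compressible
```

On this toy, the predictable oscillation (the analogue of deep-sleep or anesthesia slow waves) yields far fewer phrases than noise, which mirrors the direction of the finding described: lower diversity when activity is more predictable, higher diversity under psychedelics.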
Now, lastly, speaking of surprise minimization,
What else has surprised you lately in consciousness research? What has surprised me? I mean, we can put it in a sidebar. I think the thing that surprised everybody, this is only tangentially related, is how simultaneously impressive and unimpressive language models are. Okay.
They're really very different from how I thought they would be. They can do a lot more, but they also have sort of still bizarre failure modes and so on. So I just would not have expected the trajectory of language models to be as salient as it has been. That's certainly been a big surprise. What else has been surprising? I don't know.
It's a really good question. I'm not sure that anything massively stands out to me. I'm sure something will come to mind as soon as we finish this conversation. As it does. There have been other things which have turned out kind of in ways that one might have expected. There was this huge adversarial collaboration between integrated information theory and global workspace theory. This big effort to compare these two big theories of consciousness.
And of course that's turning out that there's evidence for and against both and there's no decisive blow against either. And that's probably exactly what one would have expected, though there's still a lot of interesting and surprising things there in the details. But yeah, I don't know. There's lots of things that are
I would say small-scale surprising, like, oh, I didn't expect that experiment to go this way or that way, but I can't think of anything massively surprising. The AI thing is sort of dominating my surprise-minimization landscape at the moment. Thank you both for spending so much time with me and the audience. Thank you so much. Yeah, much appreciated. Thank you. Thank you, Mike. See you both. Yeah, see you. Hi there.
Curt here. If you'd like more content from Theories of Everything and the very best listening experience, then be sure to check out my Substack at curtjaimungal.org. Some of the top perks are that every week you get brand new episodes ahead of time. You also get bonus written content exclusively for our members. That's C-U-R-T. You can also just search my name and the word Substack on Google. Since I started that Substack, it somehow already became number two in the science category. Now, Substack, for those who are unfamiliar, is like a newsletter, one that's beautifully formatted, with zero spam.
This is the best place to follow the content of this channel that isn't anywhere else. It's not on YouTube, it's not on Patreon; it's exclusive to the Substack. It's free. There are ways for you to support me on Substack if you want, and you'll get special bonuses if you do. Several people ask me, hey Curt, you've spoken to so many people in the field of theoretical physics, philosophy, consciousness. What are your thoughts, man?
Well, while I remain impartial in interviews, this Substack is a way to peer into my present deliberations on these topics. And it's the perfect way to support me directly: curtjaimungal.org, or search Curt Jaimungal Substack on Google. Oh, and I've received several messages, emails, and comments from professors and researchers saying that they recommend Theories of Everything to their students.
That's fantastic. If you're a professor or a lecturer, or what have you, and there's a particular standout episode that students or your friends can benefit from, please do share. And of course, a huge thank you to our advertising sponsor, The Economist. Visit economist.com slash toe to get a massive discount on their annual subscription. I subscribe to The Economist and you'll love it as well.
TOE is actually the only podcast that they currently partner with, so it's a huge honor for me. And for you, you're getting an exclusive discount. That's economist.com slash toe. And finally, you should know this podcast is on iTunes, it's on Spotify, it's on all the audio platforms. All you have to do is type in Theories of Everything and you'll find it.
I know my last name is complicated, so maybe you don't want to type in Jaimungal, but you can type in Theories of Everything and you'll find it. Personally, I gain from rewatching lectures and podcasts, and I've also read in the comments that TOE listeners gain from replaying. So how about you relisten on one of those platforms, like iTunes, Spotify, Google Podcasts, whatever podcast catcher you use? I'm there with you. Thank you for listening.
},
{
"end_time": 200.947,
"index": 8,
"start_time": 171.834,
"text": " All right, we're going to talk about aliens. We're going to talk about cyborgs, modules in the brain, split hemisphere patients, if I'm not mistaken, at unconscious processing. We're going to get to all of that. To set the stage, I'd like to know what's exciting you both research-wise currently, something you're pursuing. So, Anil, why don't we start with you, please? Well, thanks, Joe. Two things, I guess. One thing that seems to be exciting a lot of people these days, which is the possibility of AI being conscious."
},
{
"end_time": 230.998,
"index": 9,
"start_time": 201.305,
"text": " And whether it's something that AI systems can have or whether, as I tend to think, that it's something more bound up with our nature as living creatures. And the other thing that's exciting me actually just came to mind in your little list of topics there is the question of islands of consciousness. So there's a lot of work on things like split brain patients, patients with brain damage and so on. But a question that me and a couple of colleagues, Tim Mayne, Marcella Massimini have been wondering is,"
},
{
"end_time": 253.575,
"index": 10,
"start_time": 231.988,
"text": " Can other isolated neural systems that may have conscious experiences and one candidate for this is called hemispherotomy, which is kind of neurosurgical operation where you have bits of the brain detached, disconnected from all other parts of the brain, but you still have neural activity. These parts of the brain is still part of the living organism."
},
{
"end_time": 272.739,
"index": 11,
"start_time": 253.899,
"text": " Are they islands of a web so we've been exploring that theoretically and very recently with some evidence from brain imaging of people following this neurosurgical operation. Michael. Yeah, so so a couple of things on the experimental front."
},
{
"end_time": 300.247,
"index": 12,
"start_time": 273.166,
"text": " I'm really excited about some novel systems that we're setting up as compositional agents. So putting together different living and non-living components using AI and other interfaces to allow them to not just communicate with each other, but we hope form a kind of collective intelligence. And then we can ask some interesting questions about what kind of inner perspective this new intelligence might be."
},
{
"end_time": 328.797,
"index": 13,
"start_time": 300.247,
"text": " Just in general, you know, complimenting the work with them before around distributing and let's say separating out the different pieces like it was just saying of the brain and so on. The flip side of that, which is putting together new kinds of beings that haven't existed before and asking what their behavioral compliments are, what their capacities are, what their goals are, their preferences, what do they pay attention to, these kinds of things."
},
{
"end_time": 357.739,
"index": 14,
"start_time": 329.394,
"text": " And just in general, really digging into this idea of, for lack of a better word, intrinsic motivations, and asking in novel creatures that don't have the benefit of a lengthy evolutionary history that presumably set some of their cognitive properties, where do these things come from? And how do we predict them? How do we recognize them? I should have said, of course, one of the things that's really exciting to me is stuff that Mike and I have been talking about together."
},
{
"end_time": 385.418,
"index": 15,
"start_time": 358.336,
"text": " For some of the systems that he's building, some of the things he's actually talking about, do they sort of self-organize in ways which seem to abate the laws of psychophysics and other other sorts of situations where we might attribute things like intrinsic motivation to evolve systems? This is a big question about, you know, are laws of perception, are they adapted to specific environmental situations or are they somehow"
},
{
"end_time": 400.35,
"index": 16,
"start_time": 386.067,
"text": " Tell us about some of these experiments."
},
{
"end_time": 418.677,
"index": 17,
"start_time": 401.732,
"text": " We haven't been done yet. The logic is to take some simple observations of phenomena that are very widespread in perception across many evolved species, whether it's a human being, a mouse,"
},
{
"end_time": 447.261,
"index": 18,
"start_time": 418.916,
"text": " I don't know, probably a bacterium or something like that. So there are things like where they're all affecting as well. So the idea that the perceived intensity of the stimulus scales, moderately with its actual magnitude. Now, this is something that seems very, very general. Is this something that we can look for evidence for in some of these completely"
},
{
"end_time": 461.51,
"index": 19,
"start_time": 447.568,
"text": " How should we contact systems of the kind that Mike is generating and so that'd be one example that a whole bunch of other examples we have can we find things like susceptibility to visual illusions or things like that in these systems."
},
{
"end_time": 489.428,
"index": 20,
"start_time": 462.278,
"text": " kind of very general perceptual and learning phenomena that we might be able to examine whether they happen in these systems which don't have this straightforward evolutionary trajectory. I think that's the basic project. Yeah, exactly. And these comprise xenobots, anthropobots, and even weirder constructs that we can start to put together by"
},
{
"end_time": 519.326,
"index": 21,
"start_time": 489.855,
"text": " You building technological interfaces between radically different kinds of beings that allow them to, you know, sort of like a, like a, like an artificial corpus colostrum that takes two different things and tries to bind them into, into one novel collective thing and seeing, and seeing whether some of their properties and their behavioral competencies match the things that people have been studying as, as a Neil said, psychophysics and, you know, all the things out of a behavioral pan book, basically. Yeah. And this kind of relates this, you know, this,"
},
{
"end_time": 540.913,
"index": 22,
"start_time": 519.77,
"text": " I'm for the following reasons."
},
{
"end_time": 569.138,
"index": 23,
"start_time": 541.459,
"text": " And I tend to think differently. We both do it. And so these are ways of just looking at what's the dynamic and functional potential that's sort of intrinsic to the stuff we're made of. That provides the basis for our cognitive, our perceptual, ultimately our conscious abilities and properties. So these are experiments of way of getting at that. What's just there that evolution can then make use of?"
},
{
"end_time": 600.401,
"index": 24,
"start_time": 570.418,
"text": " Anil, can you make the elevator pitch for people who are already familiar with the argument that, look, the processing that's going on in our brain is just processing. It could potentially be translated to a computer. If consciousness is similarly information processing, then we have something that's quote unquote substrate independent. So you're making the claim that it's not so clear. Maybe there is a dependence to the substrate. Can you make that case? And then also, Michael, I know that you have several questions you'd like to ask Anil and feel free at any point to."
},
{
"end_time": 628.933,
"index": 25,
"start_time": 602.056,
"text": " I'll try and make the point. That's the case I'm trying to make. It's quite tricky because it goes against such a deeply embedded assumption that the brain is basically a computer made of meat and the things that it does, the only things that it does that are relevant for things like cognition and consciousness are computations, are forms of information processing. If you start from that perspective,"
},
{
"end_time": 649.804,
"index": 26,
"start_time": 629.616,
"text": " It leads you to this idea that there is this substrate independency and what that means to unpack that, that just means that the stuff we're made of doesn't really matter, it's the computations that matter and if it's a substrate kind of computations then fine. These two sort of ideas go together because one of the whole"
},
{
"end_time": 661.118,
"index": 27,
"start_time": 650.179,
"text": " Motivations for a computational view is substrate independency. Turing's formulation of computation is formulated in terms of it being independent of any particular material."
},
{
"end_time": 689.155,
"index": 28,
"start_time": 661.442,
"text": " The anodized pitch really is that we've kind of forgotten that the idea of the brain as a computer is a metaphor and not the thing itself. It's a sort of marriage of mathematical convenience. And the closer you look at real biological systems, as Mike's work beautifully exemplifies, the less that this idea of substrate independency makes any real sense. There's no bright line"
},
{
"end_time": 714.343,
"index": 29,
"start_time": 689.462,
"text": " In a brain or biological system in general between what you might call the mind and the wet web between what it does and what it is. And if there's no clear way to separate in a system, what it does from what it is, then it's very, very much less clear that one should think that computation is all that matters. Because for computation to be all that matters, you kind of have to have the sharp separation between"
},
{
"end_time": 739.838,
"index": 30,
"start_time": 714.77,
"text": " software and hardware but what it does and what it is and you can't do that and there's less reason to think the computation is what matters and there's less reason to think that then there's equally less reason to think that you could implement what matters in a substrate independent way on something else. You can of course still use computers to simulate a brain in whatever level of detail you want but that's"
},
{
"end_time": 764.206,
"index": 31,
"start_time": 740.094,
"text": " That's neither here nor there. It's a very useful thing to do. We both do this. We do this all the time. But you can simulate anything using a computer. That's one reason computers are great. But that doesn't mean you will instantiate the phenomenon. You only do that if computation really is all that matters. And I think that's very much up for... I think it's been a very deeply held assumption, but I think it's likely wrong."
},
{
"end_time": 795.162,
"index": 32,
"start_time": 765.486,
"text": " Yeah, I mean, I agree with everything that Neil said, but I take it in a slightly different direction. So I think it's critical to remember that, yeah, everything we think about as computation is a metaphor. It's a formal model. And so we have to ask ourselves, what does this model help us do and what is it hiding? In other words, what is it preventing us from seeing? And I agree that this metaphor does not capture everything that we need to know about and we need to"
},
{
"end_time": 824.377,
"index": 33,
"start_time": 795.708,
"text": " to use to technology and so on about life. I think the computational paradigm and the notion of algorithms and so on does not capture everything we need to know about life. But I make the additional really weird claim is that I don't think it captures everything we need to know about machines either. In other words, we tend to think, at least the people I meet tend to think that we have this set of metaphors that are for machines and their algorithms,"
},
{
"end_time": 849.172,
"index": 34,
"start_time": 824.753,
"text": " And they don't really apply to biology. Certainly people say, well, they don't apply to me. I'm creative and whatever else they are. But there is a corner of the universe that is boring, mechanical. It only does what the algorithm says it should do. And for those kinds of things, these metaphors are perfect. They capture everything there is to know. So I agree with Anil on the first part, but I doubt the second part. I think that a lot of what we have"
},
{
"end_time": 878.541,
"index": 35,
"start_time": 849.684,
"text": " in our theories of computation is a pretty reasonable theory of what I call the front end. I think most of what we deal with are actually thin clients in a certain sense. They're interfaces to something much deeper, which we can call the platonic space. I don't love the name, but I say it that way because then at least the mathematicians know what I'm talking about. But I think that even, and we have some work already published and more work coming soon in the next few months on this, showing that"
},
{
"end_time": 899.65,
"index": 36,
"start_time": 878.933,
"text": " Yeah, the standard way of looking at algorithms don't even tell the story of so-called machines. And so whatever it is, and I have guesses, but of course we don't know, whatever it is that allows mind to come through biological interfaces and not be captured by these formal models,"
},
{
"end_time": 925.469,
"index": 37,
"start_time": 899.65,
"text": " I think these other systems that we call machines and certainly cyborgs and hybrids also so i think they get some of the magic to not gonna be like i like us it's going to be different but i don't think they escape these ingressions either. This is why i think i find lights work so interesting because it's provocative in this direction i think i think he's. Please you know i've always think he has summarizes what i said very well is is that."
},
{
"end_time": 954.599,
"index": 38,
"start_time": 925.998,
"text": " We underestimate the richness of biological systems if you force them into what's often called the machine metaphor by which we really mean that all that matters is this sort of Turing computation algorithm thing. But I think it's, it is equally true that we limit our imagination about what machines might do as well by doing this. And there's a whole kind of alternative history of AI, which was really grounded in 20th century cybernetics."
},
{
"end_time": 983.166,
"index": 39,
"start_time": 955.06,
"text": " much more to do about dynamical systems, attractors, feedback system, all things you can still simulate computationally, which are fundamentally not arising from the algorithmic way of thinking about things. There are also really interesting mathematical properties like emergence and so on, which I think apply, can both help us understand, but also might be design principles"
},
{
"end_time": 1013.217,
"index": 40,
"start_time": 983.541,
"text": " for machines of various kinds, which, again, don't really fit into an algorithmic view of things. So it might as well beautifully show that even something we think of as phenomenically algorithmic, like correcting if I'm wrong, might be the bubble sort of stuff. So this is an algorithm that anyone in computer science, when at one learns to code, to sort things into a particular order, has really interesting emerging properties that could be, that other things can be built on top of."
},
{
"end_time": 1035.742,
"index": 41,
"start_time": 1013.626,
"text": " So yeah, I don't think, I think it's for me, it's like there's this, there's this nice iterative back and forth where we can learn to think of both biology and machines differently. And of course that might give us richer metaphors through which we can use one lens to understand the other."
},
{
"end_time": 1065.367,
"index": 42,
"start_time": 1037.398,
"text": " Would you say then that we have the idea of machine and that a Turing machine is a strict subset of that idea of machine? I mean, a Turing machine is an abstraction, right? Turing machines were never sort of supposed to exist, you know, as things. They're infinite tape and things like that. So you've got, you know, a Turing machine. The idea is you, in one sense, you're mapping a bunch of numbers onto another bunch of numbers."
},
{
"end_time": 1088.131,
"index": 43,
"start_time": 1065.623,
"text": " And then the universal Turing machine does this through this moving head and an infinite title. It was never really supposed to exist as a physical machine. I think that's where part of the problem has sort of come from. But an algorithm in that sense, yeah, I think that's a subset. When you realize a Turing machine, that's a subset of possible machines."
},
{
"end_time": 1118.046,
"index": 44,
"start_time": 1088.558,
"text": " Yes, when you realize a Turing machine, it will be a subset of all possible machines just because it's a particular Turing machine. But when you realize a universal Turing machine as well, that's also in a subset of possible machines. So if you don't mind spelling out to the audience, the idea of hypercomputation, and would you then say that biological creatures or cells or what have you are doing something that is hypercomputational? And feel free to take this in a different direction, Michael, as well, if you like."
},
{
"end_time": 1147.756,
"index": 45,
"start_time": 1118.831,
"text": " Would you tell me what you're thinking of when you use the word hypercomputation? I've heard it used to imply different things. So if something can solve the halting problem, it would be an example of a hypercomputer, something that can decide problems that a Turing machine or a universal Turing machine can't. Right. So sort of super Turing in some sense. That's one way in which machines can be non"
},
{
"end_time": 1176.766,
"index": 46,
"start_time": 1148.08,
"text": " Turing or state Turing world. But I think there are many other systems that are just not captured by this way. They don't have to be based on the halting problem. Strictly anything that is stochastic, anything that is continuous is beyond this world of strict universal Turing machines. There are all kinds of extensions that try"
},
{
"end_time": 1206.135,
"index": 47,
"start_time": 1177.09,
"text": " Try to go back, but there are also functions that things do that necessarily involve particular material substrates. So take something like metabolism and metabolism is not mapping some range of numbers, whether they're continuous or random to another number. It involves actual transformation of a particular kind of substance into another kind of substance. That's just."
},
{
"end_time": 1234.411,
"index": 48,
"start_time": 1206.63,
"text": " non-turing and what is a fairly trivial way that that kind of thing might be very important for particular classes of machines or systems whether they're biological or not. So I think there's different spaces of what you might call non-turing processes. Only some of these are these kinds of high-term computation whole thing problem solving things where you might say you've got some sort of fancy quantum stuff going on."
},
{
"end_time": 1259.019,
"index": 49,
"start_time": 1235.606,
"text": " But I think things differ about that, things differ, right? I mean, there are some people that would say that they'll actually, unless you're talking about this, this, like a computation, the sense you've mentioned that everything else is sort of a relatively feasible extension of, of Turing as is. So there's, there's definitely debate in that area."
},
{
"end_time": 1288.507,
"index": 50,
"start_time": 1259.582,
"text": " I would go in a slightly different direction and emphasize something that does not lean on quantum mechanics, does not lean on stochasticity, and does not lean on hyper-touring"
},
{
"end_time": 1312.807,
"index": 51,
"start_time": 1289.548,
"text": " or anything like that. And also let's even step back from the conventional living things are so complex that you can always find more mechanisms and so on. I want to look at an extremely minimal model and the reason that we chose this was precisely because it's such a minimal model. I wanted to sort of maximize the shock value"
},
{
"end_time": 1338.37,
"index": 52,
"start_time": 1312.807,
"text": " of this thing for our intuitions. And this is the work that my student Teining Zhang and Adam Goldstein and I did on sorting algorithms, which is what Anil mentioned. And there's a couple more things like it coming in the next few months. The sorting algorithm is, it's like bubble sort, selection sort, these kinds of things. CS students in CS 101 have been studying these for, I don't know, 60 years probably."
},
{
"end_time": 1348.933,
"index": 53,
"start_time": 1338.848,
"text": " No one noticed what we noticed because the assumption has always been this thing does what we asked it to do."
},
{
"end_time": 1378.029,
"index": 54,
"start_time": 1349.497,
"text": " And a lot of what I'm trying to emphasize is specifically running against that assumption that, yeah, it sorts the numbers all right. But if you back off from this assumption that all it does is what the steps of the algorithm ask it to do, then you find some new things. And computer scientists are well aware of emergent complexity, emergent unpredictability, cellular automata do all kinds of funky things, and some of the rules are chaotic and all this kind of stuff."
},
{
"end_time": 1403.148,
"index": 55,
"start_time": 1378.558,
"text": " That's not what I'm talking about. I'm not talking about emerging complexity, unpredictability, or even perverse instantiation, which should be a life people find all the time. I'm talking about things that any behavioral scientist would recognize as within their domain if you didn't tell them that this came from a deterministic algorithm. And so I can go into details if you want, but a couple of things are sailing here."
},
{
"end_time": 1434.002,
"index": 56,
"start_time": 1404.036,
"text": " What these algorithms are also doing while they're sorting your numbers are also a couple of interesting, I call them side quests because we didn't, there's nothing, there are no steps in the algorithm asking them to do this. In fact, if you try to write an algorithm to force them to do it, it would be a whole bunch of extra work, which is actually quite interesting because I think we're getting free compute here. That's a whole other thing that I think is a very testable. It's a crazy and it's a nice testable prediction because it's so weird and unexpected."
},
{
"end_time": 1447.961,
"index": 57,
"start_time": 1434.735,
"text": " They are doing some other things that are not directly related to what you've asked them to do. That's really important because it means that these language models, for example, when we say nowadays a lot of people think language models,"
},
{
"end_time": 1474.855,
"index": 58,
"start_time": 1448.404,
"text": " We, people tend to assume that the thing that the language model talks about is some kind of clue as to its inner nature, right? And people say, well, you know, my GPT said to me that it was conscious or wasn't conscious or whatever. My, my point is the thing you force it to do may have zero to do with what's actually going on now in biologicals. That's not, that's not true because evolution, I think works really hard to make sure that the signs that we, and the communications that we do,"
},
{
"end_time": 1485.759,
"index": 59,
"start_time": 1474.855,
"text": " Are related to our interstate and things like that so in biology those things are tied closely together i think we've we've on and we've disconnected and what we are now making our things that."
},
{
"end_time": 1509.104,
"index": 60,
"start_time": 1486.391,
"text": " What's that that look like they're talking and whatever and then they are but i'm not sure any of those things are at all a guy to what's going on inside and if i if a dumb bubble store which is six lines of code fully deterministic nowhere to hide six lines of code if that thing is doing things that we did not expect and we did not ask it to do and what i ask i mean there are no steps in the algorithm to do what it's doing."
},
{
"end_time": 1538.677,
"index": 61,
"start_time": 1509.718,
"text": " then who knows what these language models are doing, but I'm pretty sure that just watching the language output is not a really good guide to what's happening. I think we have to go back to the very beginning and we have to apply the kinds of things that Anil was talking about, which is basic behavioral testing in various spaces. I think our imagination is really poor at this. I think we have to just be really creative as far as asking, what is this thing actually doing?"
},
{
"end_time": 1559.565,
"index": 62,
"start_time": 1539.019,
"text": " Specifically in the spaces between the algorithm because because the thing is It has to it's a little bit like this is this is a crazy Kind of a crazy analogy that I came up with the other day, you know, this this notion of steganography So when steganography you take and let's say a piece of data, let's say it's an image It's a jpeg and it looks like whatever it looks like"
},
{
"end_time": 1575.469,
"index": 63,
"start_time": 1560.06,
"text": " there are bits within that image that if you were to change those bits it wouldn't look any different right there's some there's some degrees of freedom in there that you can move things around and the image would still look the same and so what people do is they hide information in there and maybe it's your signature that you are the one who took the picture or maybe"
},
{
"end_time": 1601.254,
"index": 64,
"start_time": 1575.469,
"text": " Maybe it's a code because you're a spy. Whatever it is, you hide information in there. But the iron rule is you can't mess up the primary picture. You can sneak stuff into the degrees of freedom, but you can't mess with the primary data pattern because then it will be obvious that something's there. I kind of have a feeling that this is what's going on, not just with computer algorithms, but with everything. There is the primary thing it's supposed to do."
},
{
"end_time": 1617.09,
"index": 65,
"start_time": 1601.596,
"text": " and anything else that it gets to do has to be compatible with that primary thing it isn't magic you can't break the laws of physics you can't go against the algorithm right you're not doing things that the algorithm forbids but there's a but there but it turns out i think that there's this that there are these weird like"
},
{
"end_time": 1647.005,
"index": 66,
"start_time": 1617.363,
"text": " empty spaces between the algorithm where you can do, and I mean, doesn't that describe to some extent our existential, you know, you have a certain bit of time in this world, you have to be consistent with those laws of physics, eventually, your physical body gets ground down to, you know, to entropy and whatever. But until then, you can do some cool stuff that isn't forbidden by the laws of physics, nor is it prescribed by those laws, I don't think. And so you get this, you get you get this is this is what I think is really interesting about these things. And the algorithm itself, to the extent that"
},
{
"end_time": 1664.855,
"index": 67,
"start_time": 1647.381,
"text": " it has to do the algorithm that limits what else it can do in a sense, what it's doing is in spite of the algorithm, not because of it. And so I agree with Anil here, I'm not a competition. I don't think anything is conscious because of the algorithm. If anything, I think the mental properties it has is in spite of the thing we force it to do. And so"
},
{
"end_time": 1694.36,
"index": 68,
"start_time": 1665.333,
"text": " I'll stop here. One thing which I'm most proud of in that paper that I think was kind of cool is that we figured out a way to let off the pressure on the algorithm a little bit to see what would happen. And the way you do that now, how would you do that? It has to follow the algorithm. How could you possibly let off the pressure? What we did was we allowed duplicate numbers in the sort. And what that allows you to do is you still have all the fives that will have to end up before the sixes and that has to go before the sevens and so on. But how you arrange those is now"
},
{
"end_time": 1722.858,
"index": 69,
"start_time": 1694.65,
"text": " not really constrained by the algorithm. You don't change the algorithm, you just allow multiple repeats within the, and what did we see? We saw that the crazy thing it was doing, which I call clustering, I could tell you what that is, but it doesn't matter. It went up higher than when we didn't let it do that. And so I really think, and this comes back to the AI thing, I really think that it's a lot like raising kids in the following sense. To the extent that you force them to do specific things,"
},
{
"end_time": 1751.288,
"index": 70,
"start_time": 1723.336,
"text": " You squelch down on the intrinsic motivation. Some kid that's forced to be in a class all day, you're not going to get to see what else he would be doing otherwise. Maybe it's out playing soccer, who knows what it would be. And so to the extent that we force these things to do specific things, we are actually not going, we're reducing what else they might do. And that's what we need to develop is the tools to detect and to facilitate this intrinsic motivation. And then you get into alignment and all of that."
},
{
"end_time": 1777.637,
"index": 71,
"start_time": 1752.432,
"text": " There reminds me a lot of when I was doing my postdoc 20 years ago, coming across while being told by my mentor at the time, Gerald Edelman, about the distinction between redundancy and degeneracy. I think this is very apposite here. So, you know, engineering people often talk about having redundancy within a system. So if a system is designed to do something, to follow some steps in an algorithm,"
},
{
"end_time": 1806.766,
"index": 72,
"start_time": 1778.148,
"text": " Well, then you might want multiple copies in case something goes wrong, you have a backup. But the backup is doing the same thing. It's redundant in that sense. Biological systems don't seem to be like that. They exhibit degeneracy rather than redundancy. That is, they may have multiple ways of doing the same thing in context A, but in context B, these multiple ways of doing the same thing now do different things. So this is hinting at the same thing that"
},
{
"end_time": 1826.613,
"index": 73,
"start_time": 1807.244,
"text": " that although it looks like they're doing the same thing, there's actually some spaces in between somewhere that you won't see unless you look in different contexts; otherwise you'll only see the same process that might look like an algorithm. And it's that degeneracy"
},
{
"end_time": 1854.838,
"index": 74,
"start_time": 1827.005,
"text": " that gives biological systems their kind of open-endedness, their ability to adapt to novel situations and so on. It might be related to what Mike is calling an intrinsic motivation, that you have to have some kind of degeneracy rather than redundancy in systems. I mean, what's interesting to me is that people are often, with the exception of a few, like, diehard reductionist materialists, whatever, people are generally pretty willing to grant living things that."
},
{
"end_time": 1883.029,
"index": 75,
"start_time": 1855.23,
"text": " Right. And they're okay with saying that, you know, living things, especially brainy living things, get to do some of those things. But what I'm now finding is that people get very upset when I suggest that the same thing might be true all the way down. It seems to be very important that we have this distinction: no, that's the dead matter and the mere machines; we are special, we can do this thing. And my point is not, I'm not trying to mechanize"
},
{
"end_time": 1905.196,
"index": 76,
"start_time": 1883.029,
"text": " living things; I'm going in the opposite direction. I'm saying there's not less mind than you think there is; I think there's more. But actually, people, you know, kind of organicist thinkers who really resist the mechanization of life and all that stuff, they get really upset at this last part, because"
},
{
"end_time": 1932.466,
"index": 77,
"start_time": 1905.418,
"text": " I suppose we're not as special, if it goes all the way down. Like, I'm not sure. I think, you know, there's some kind of scarcity mindset, that there's just not enough mind for all of us. Maybe it might be that there's still this worry that, even if you take, say, bubble sort again, bubble sort is still implemented on standard computers, right? So one way of"
},
{
"end_time": 1960.145,
"index": 78,
"start_time": 1933.131,
"text": " potentially misunderstanding what you're saying is you're then basically allowing computational functionalism by the back door again in some ways by saying, look, you know, an algorithm like bubble sort has actually all the things that you need, or it has so much more going on than what one might think. So let's not be too quick to rule out substrate independent algorithms as sufficient for"
},
{
"end_time": 1987.773,
"index": 79,
"start_time": 1960.998,
"text": " other things that might seem otherwise hard to explain. Well, I think you're right, and I think people could, but that would be a misinterpretation of what I'm saying. I am not saying that it's doing that because of the algorithm, right? So the standard computationalist theory is: you are conscious because your algorithm is doing workspace theory, or whatever it's doing, right? That's why you are conscious. I'm saying the exact opposite. I'm saying that"
},
{
"end_time": 2015.725,
"index": 80,
"start_time": 1988.336,
"text": " even in something as stripped down and forced into the stupid algorithm, there are still spaces there through which, you know, whatever this magic is that we're talking about is able to squeeze in. There are minimal versions of it that will shine through even there. And if you provide a different interface, and I don't want to just say more complex, because I don't think it's just complexity; maybe it's materials, maybe it's some other stuff,"
},
{
"end_time": 2042.824,
"index": 81,
"start_time": 2016.203,
"text": " But if you provide better interfaces, such as living materials, well then, sure, you'll get way more. But this stuff seeps into even the most constrained systems, I think. So let's get to aliens, Michael. I don't know what to say. People email me sometimes asking to talk to my alien handlers. There's that. But I don't know anything about aliens other than to say that"
},
{
"end_time": 2068.712,
"index": 82,
"start_time": 2043.148,
"text": " It seems implausible to me, not being an expert on the exobiology or whatever, it seems implausible to me that the only kind of life is the life that we're familiar with here or cognition. I expect that elsewhere in the universe there will be extremely alien forms of mind that are not carbon, I can get even weirder, but not the kinds of things that we're used to here."
},
{
"end_time": 2095.691,
"index": 83,
"start_time": 2068.712,
"text": " I think our imagination is terrible for that kind of thing. I mean, sci-fi does okay sometimes, but, yeah, anything that's tied to the specifics of life on Earth I think is almost certainly too narrow as a criterion for these kinds of things. I always go back to the Fermi paradox, like, you know, where is everybody? Which always worries me, because it just sort of suggests to me that"
},
{
"end_time": 2126.34,
"index": 84,
"start_time": 2096.425,
"text": " I think it's also very implausible that we're the only example of life, but then the evidence for intelligent life that has been able to broadcast structured energy out into the universe seems lacking. Where the hell is everybody? So, of course, a conclusion from this is that life might be very prevalent, in many places, certainly not only here, but that it is quite difficult to"
},
{
"end_time": 2154.991,
"index": 85,
"start_time": 2126.732,
"text": " get life to the stage where it lasts long enough to persist and become cognitively sophisticated. I have no idea, and I find that existentially concerning, and just a great sort of shaker of the snow globe for reminding us that we really need to take care of our own planet and civilization first, because it might not be very common to get to the kinds of things we are, even if"
},
{
"end_time": 2180.452,
"index": 86,
"start_time": 2156.084,
"text": " it's exotic in a different way somewhere else. I think the universe is much more likely to be filled with grey goo than, you know, Mike Levins with eight legs and octopode form. So, Anil, if I was to take your cells and put them into a dish, some would form xenobots, some would die, most probably would die, and some may just wander about or what have you."
},
{
"end_time": 2195.128,
"index": 87,
"start_time": 2180.862,
"text": " Have you become multiple agents at that point or were you always multiple agents pretending to be one? I don't think pretending to be one. I think it's an excellent question."
},
{
"end_time": 2223.712,
"index": 88,
"start_time": 2196.817,
"text": " Whether you can have multiple kinds of coarse-graining of agency simultaneously I think is quite interesting. I don't see why not, in a sense. I think there can be sub-organismic levels of agency in my constituents, but there's something sort of enslaving about these finer grains of description in things like"
},
{
"end_time": 2242.995,
"index": 89,
"start_time": 2224.036,
"text": " organisms: the parts pull together as a whole in a way that doesn't happen if you dissociate me into my constituent cells. So I don't see a contradiction between cells having agency and an organism having agency and a society having agency,"
},
{
"end_time": 2270.367,
"index": 90,
"start_time": 2243.524,
"text": " and perhaps a global society having some kind of agency, these things can all coexist and have a reality simultaneously, but they will affect each other. So agency at a macro level will probably constrain the agency that's available at the micro levels. And you have a book on consciousness, which I'll place on screen and a link in the description right now. So you've probably heard of the identity theory of consciousness."
},
{
"end_time": 2294.343,
"index": 91,
"start_time": 2270.555,
"text": " My understanding is that it just says mental states are simply the same as physical states; they're not caused by them, they're not emergent from them, they are just identical to them. What do you make of that? I'm curious, for both of you. What do I make of the theory? I think things like identity theory are more metaphysical positions than actual theories, and"
},
{
"end_time": 2320.367,
"index": 92,
"start_time": 2294.65,
"text": " For me, I like to wear metaphysics lightly, if at all. I don't think you get very far. To say that a mental state, or a conscious state, is identical to a physical state, I mean, who knows? In some sense, it might be trivially true. In another sense, it might be absolutely completely wrong. But what I do think is"
},
{
"end_time": 2349.855,
"index": 93,
"start_time": 2320.845,
"text": " it doesn't give you anything in particular to do or anywhere to go. So instead of sort of arguing about whether theories like that are correct or incorrect, I prefer to ask whether they're useful or not useful. And I don't think identity theory is that useful. I'm broadly a pragmatic materialist, which is to say that I'm pretty convinced that conscious states have something to do with physical stuff."
},
{
"end_time": 2378.985,
"index": 94,
"start_time": 2350.794,
"text": " And we certainly know empirically there are correlations and causal relations: if you do something to the brain, something will happen in conscious experience, at least in human beings. Who knows, maybe consciousness is more general than biological systems. But I think pragmatic materialism is a productively useful stance, and we can go about the business of trying to explain properties of consciousness in terms of properties of biological systems"
},
{
"end_time": 2409.087,
"index": 95,
"start_time": 2379.701,
"text": " And we'll see how far we get. Then we have to face the question of what are the properties of biological systems that give us explanatory, predictive grip on properties of consciousness. For a bunch of people, the assumption is it's just the computations, to bring us back to the early part of the conversation. But there could be many other things that actually give us explanatory and predictive grip on consciousness that aren't the computations."
},
{
"end_time": 2435.503,
"index": 96,
"start_time": 2409.462,
"text": " That's the view that I'm interested in exploring, and we'll see whether it's useful or not. Yeah, I agree with that. I think it's less a theory than it is a linguistic claim. You're just saying something about the definitions. I find it kind of unhelpful. It's a little bit like saying that airline ticket prices,"
},
{
"end_time": 2465.606,
"index": 97,
"start_time": 2435.674,
"text": " What are those? Well, let's associate them with some physical states and what explains them? Well, the constants at the beginning of the Big Bang plus some randomness. In a certain sense, kind of. In another sense, how much insight are you going to get as far as why these prices are going up or down if you have this view? I think probably zero. And so, like Anil, I'm interested in metaphors and I think that all these things are metaphors, but I'm interested in metaphors that help us discover new things."
},
{
"end_time": 2481.954,
"index": 98,
"start_time": 2466.135,
"text": " I don't see how equating them linguistically with physical states is doing the trick. I don't think that works in biology for cognitive, non-consciousness-specific things, and I don't see it helping here either."
},
{
"end_time": 2541.015,
"index": 100,
"start_time": 2511.698,
"text": " Mike, you said you had some questions about split hemisphere patients for Anil. Well, okay, it's not so much specifically about split hemisphere patients, but I guess it's the thing I brought up in email. I was listening to a talk, I forget whose talk it is, and somebody was saying, look, there are all these unconscious processes during reading, during driving, whatever. And I was just curious what you think about that, because it seems to me critical to say,"
},
{
"end_time": 2559.957,
"index": 101,
"start_time": 2541.391,
"text": " conscious to whom? In other words, they might well be unconscious to the main left hemisphere, whatever it is that's verbally reporting and saying, wow, I drove all the way from home to my office and I wasn't conscious of any of that. And so you say, okay, great, there's this, like, unconscious processing. But, well, it's not conscious to you."
},
{
"end_time": 2583.592,
"index": 102,
"start_time": 2559.957,
"text": " But neither are my conscious states conscious to you. So how do we know that all of these things aren't conscious to the subsystems of the brain and mind that execute them? How do we know they don't have an experience they can't verbalize? So I was just curious about that, because it seems like it's just a foregone assumption. And it seems like really begging the question if we just assume that because you don't feel them,"
},
{
"end_time": 2606.135,
"index": 103,
"start_time": 2584.138,
"text": " they don't have experiences. And the reason it's of interest to me is that that's what people say about our body organs too, right? So my claim is that for the exact same four or five reasons that we give each other the benefit of the doubt about consciousness, you should take that seriously about your various body organs. And people say, well, I don't feel my liver being conscious. Of course, you don't feel me being conscious either. So I was just curious: what do you think about that?"
},
{
"end_time": 2635.623,
"index": 104,
"start_time": 2606.271,
"text": " Yeah, just for the people listening, we'd started this nice dialogue by email a couple of days ago. So I think it raises some really important questions about how we use the words. Unfortunately, I do think it's a little bit linguistic here. We talk about the conscious and the unconscious, and of course they mean different things in different contexts. So when it comes to, let's say, split hemisphere patients, the intuition is there are two separate"
},
{
"end_time": 2664.991,
"index": 105,
"start_time": 2635.998,
"text": " conscious agents; it's just that only one of them has the ability to behaviorally report through language what it's experiencing. But it's partly because each hemisphere has kind of the full complement of resources that one might think of as necessary that this becomes a sort of plausible position. Then there are other uses of conscious versus unconscious. There's a whole history of them."
},
{
"end_time": 2691.135,
"index": 106,
"start_time": 2665.759,
"text": " A lot of the history of consciousness science is trying to contrast conscious from unconscious perception. So, you know, you'll show an image and somebody will say, yeah, I see it. And then you mask it in some way, manipulate it in some way, and people say, no, I didn't see it, but you can still see parts of the brain responding. And the logic is, well, the contrast that you get there, between when something was consciously seen and when the same image or the same sound was not consciously experienced,"
},
{
"end_time": 2720.52,
"index": 107,
"start_time": 2691.698,
"text": " if you look at the difference in the brain, that difference has to do with consciousness. That's the whole strategy of looking for the neural correlates of consciousness. But then you might ask, well, how do you know that the unconscious perception was in fact unconscious? It may have just been unconscious to the subject as a whole, but there may have been an inaccessible conscious experience happening. So I think this is logically perfectly possible, but then you have this whole, well, how do you know?"
},
{
"end_time": 2739.855,
"index": 108,
"start_time": 2721.015,
"text": " You then have to link that not only to a brute correlation; you have to come up with some theoretical reason, and that will depend on your theory. A theory like global workspace might say, okay, look, the reason that the conscious perception was reportably conscious was because it engaged the global workspace, and the theory is that"
},
{
"end_time": 2766.937,
"index": 109,
"start_time": 2740.947,
"text": " things are conscious in virtue of accessing this global workspace. So you have some sort of theoretical reason for saying that the unconscious is in fact unconscious. But of course then you risk a little bit of circularity, right? Your evidence for global workspace is based on the theoretical explanation that makes one conscious and the other not conscious. So you have to have multiple sources of evidence. All this to say,"
},
{
"end_time": 2796.63,
"index": 110,
"start_time": 2767.295,
"text": " it's a very good question, and it came up in the thing I mentioned right at the beginning. We have these hemispherotomy patients whose parts of their brain are completely disconnected. So they, by definition, can't respond to things. They can't generate any response. They're sort of the opposite of language models in this sense, right? They can't give us any persuasive behavioral evidence, because they're not connected to anything. Yet they are part of a brain that was at one point conscious."
},
{
"end_time": 2826.51,
"index": 111,
"start_time": 2797.432,
"text": " And all that's happened, really, in the limit (they're damaged as well, I mean, there are other things going on), is they've been disconnected. So plausibly, at least for me, they're more likely to be conscious but inaccessible; much more likely, a priori, than a language model is to be. And so we have to find indirect ways of trying to assess the likelihood of consciousness in these very disconnected hemispheres. And to cut a long story short, very short, because I know you've got to go in a sec, Mike:"
},
{
"end_time": 2854.326,
"index": 112,
"start_time": 2826.903,
"text": " When we look at EEG, and this is work done with colleagues at the University of Milan, it looks like these isolated hemispheres are in states of very, very deep sleep. So we see slow waves, very prominent slow waves, steep spectral exponents and so on. But how do we know that that is in fact unconscious? Because there are a few examples in human beings where we actually see slow waves at the same time as consciousness, in DMT, for instance, and things like that."
},
{
"end_time": 2879.428,
"index": 113,
"start_time": 2854.565,
"text": " So it's iterative. It's very hard to be definitive, and it's an excellent question. I think we don't know until we start looking at systems radically different from a psychology undergraduate looking at a monitor, which we still do, and that's very useful, but we have to look at these other things as well. We don't really know what assumptions we're making when we interpret the data. If you just look for the car keys where the light is, you might miss the bigger picture."
},
{
"end_time": 2890.247,
"index": 114,
"start_time": 2880.708,
"text": " Now Mike, before you get going, suppose I gave you both unlimited resources to design some experiment. What would you create?"
},
{
"end_time": 2922.995,
"index": 115,
"start_time": 2893.951,
"text": " Well, fundamentally, I think we need an environment, a closed-loop environment, in which to exercise all kinds of, well, the Xenobots and Anthrobots are just the beginning, there are so much weirder things that we're looking into, such that we might be able to recognize new kinds of cognitive preferences, goals, competencies, whatever, to which we're otherwise blind. I mean, you could imagine"
},
{
"end_time": 2951.681,
"index": 116,
"start_time": 2922.995,
"text": " making this thing enormously rich and complex. Well, I mean, obviously what Mike described would be a fun one to do, but, you know, other than that, I think if you think about where the adjacent possible progress might be most rapid: what we've lacked in neuroscience is the ability to look at high resolution in time and space, and"
},
{
"end_time": 2976.647,
"index": 117,
"start_time": 2952.346,
"text": " across much of the brain at the same time, measuring from many neurons in time and space at the same time, in systems that we know are conscious, or very likely are: primates and other things. And there are just massive advances now, I think, in invasive neurophysiology, in different kinds of neuroimaging methods, optogenetics being one of them,"
},
{
"end_time": 3007.142,
"index": 118,
"start_time": 2977.176,
"text": " But I think really doubling down on manipulation and recording at high spatiotemporal resolution and coverage simultaneously, coupled with the development of new mathematical tools to understand these kinds of complex data sets. That's where I'd go. Lots to do there. Many of the people who watch this podcast are specialists in computer science, math, physics, philosophy."
},
{
"end_time": 3037.005,
"index": 119,
"start_time": 3007.602,
"text": " I was going to ask just about advice for researchers, but you can frame it as advice to everyone. What advice do you have?"
},
{
"end_time": 3067.5,
"index": 120,
"start_time": 3038.029,
"text": " You know, I think for students it's super important to kind of curate your curiosity. I mean, I started with this very general curiosity about consciousness, but then I think it was important to allow that curiosity to find other branches that then end up coming together in different ways. You know, I got very interested in other things too, in cybernetics, in things that at the time didn't seem to have"
},
{
"end_time": 3092.176,
"index": 121,
"start_time": 3068.404,
"text": " much to do with this big question. But I think one way to carve out a successful career is to put different pieces together, to gain skills that are both techniques and methodologies, but also conceptual toolboxes,"
},
{
"end_time": 3121.254,
"index": 122,
"start_time": 3092.551,
"text": " that you then can reassemble in different ways that other people might not have had the opportunity to. So really there are two interconnected things: don't lose sight of the big picture of what you want to do, but also be flexible and try and develop curiosity in adjacent things that might come in handy. And also, learn to do stuff. I think"
},
{
"end_time": 3149.787,
"index": 123,
"start_time": 3121.954,
"text": " Many advances in science have come about through advances in methods first. And if we learn methods, we will learn the right questions to ask. And I think that's maybe the thing that I'm still trying to learn to do as a researcher, which is the thing I find really hard. It's finding the right questions, not finding the answers to the questions that you have."
},
{
"end_time": 3179.206,
"index": 124,
"start_time": 3150.486,
"text": " That for me is still the real struggle. Can you give an example of a method that you wish you had, for instance, or that you wish you had learned earlier in your career, or just a general example of something that would be beneficial to a student? And then, you also mentioned asking questions. So an example there would be: what's something where you were pursuing the answer, but you realized there should have been a better question?"
},
{
"end_time": 3207.773,
"index": 125,
"start_time": 3180.179,
"text": " So I'll try and give examples that connect both of these things. An example of something that I wish I had gained some expertise in earlier is psychophysics. Now, this is the standard experimental thing. I caricatured it a bit earlier: undergraduates sitting in front of a monitor pressing buttons and so on. But the methods of psychophysics are probably the longest-established experimental methods of studying consciousness."
},
{
"end_time": 3237.927,
"index": 126,
"start_time": 3208.353,
"text": " How do we interpret data from people pushing buttons when you show them things? I mean, it's very simple, but there's a huge amount of literature there that goes back to the 19th century. And, you know, I made, I think, a ton of mistakes, and certainly had a ton of inefficiencies, kind of improvising my way through this literature and through my own work, because I hadn't gained the skills early enough. So that's one example,"
},
{
"end_time": 3267.227,
"index": 127,
"start_time": 3238.268,
"text": " something I wish I'd done differently. I think it would have allowed me to ask better questions experimentally. The thing that went well was I picked up, I learned, trained myself, and then asked other people to help me learn information theory and Granger causality modeling. This is a mathematical sort of framework for understanding"
},
{
"end_time": 3297.654,
"index": 128,
"start_time": 3267.841,
"text": " information flow and causal interactions between nodes of a network in complex systems. These methods were, certainly at the time in the early 2000s when I encountered them, primarily used in economics, in econometrics, not in neuroscience. There were a couple of papers basically saying, hold on a minute, we might be able to apply these methods in neuroscience. And I just got curious about that, not because I thought there was a big clue to consciousness there."
},
{
"end_time": 3325.998,
"index": 129,
"start_time": 3298.114,
"text": " But I thought, hold on, that's really interesting. You know, people look at coherence or mutual information or correlation between brain regions, but might we not be interested in causal information flow, you know, arrows, lines with arrows that are not going in both directions? So I was lucky to know people who could help me learn this stuff. And it's become quite a strong part of"
},
{
"end_time": 3352.449,
"index": 130,
"start_time": 3326.749,
"text": " what I've done over the years, now working with mathematicians who know this stuff much better than me, but we've done a lot in applying these methods in neuroscience and giving people the tools to apply them for themselves. And it's also fed back into other questions to ask. This is the other example, so, different questions, right? So one question that I've been asking for years, and I think it's"
},
{
"end_time": 3382.619,
"index": 131,
"start_time": 3352.756,
"text": " Getting some"
},
{
"end_time": 3399.616,
"index": 132,
"start_time": 3383.166,
"text": " toolbox of information theory and Granger causality has actually turned out to be very useful in figuring out how to do this: to come up with measures of emergence that allow us to ask questions about emergence in a more quantitative and operational way."
},
{
"end_time": 3430.469,
"index": 133,
"start_time": 3400.503,
"text": " I remember you and a few other people had a paper on this within the past two years or so, correct? That's right. I've been working actually with two different groups of people on two different approaches. The main one is with my colleague Lionel Barnett, who I've worked with for many years now, who's a mathematician. I actually wrote a paper on this 15 years ago using Granger causality to measure emergence and I was very pleased with myself at the time. I thought this is great. Here's this concept and here's a way to"
},
{
"end_time": 3454.411,
"index": 134,
"start_time": 3430.691,
"text": " implement it mathematically. And it kind of got a bit of attention, but not much. And then Lionel pointed out to me that it was basically flawed in all sorts of ways, and came up with a related idea that does something much more rigorously. It's a slightly different thing, and we're still working on it to figure out how to extend it, but"
},
{
"end_time": 3480.316,
"index": 135,
"start_time": 3455.077,
"text": " it's mathematically a much more serious enterprise now. What it does, basically, is it says, okay, you've got a complex system. An example that's often used is a flock of birds. There may be birds flying around in the sky, and sometimes it looks like they're flocking and other times it doesn't. Can you quantify that? And of course you could say, well, it's in the eye of the observer. Fine, it's in the eye of the observer, but so"
},
{
"end_time": 3508.08,
"index": 136,
"start_time": 3480.691,
"text": " basically, you know, so is everything, really. There's still a difference between a flock and a non-flock, and if we can quantify that, then we can generalize it. So, you know, maybe there's something about neurons that has an essence of this flockiness, but maybe not now in space, in three dimensions, but in some other dynamical space, in some other dimensional space. And the approach to this that Lionel and I took"
},
{
"end_time": 3536.544,
"index": 137,
"start_time": 3508.251,
"text": " was to come up with a measure we call dynamical independence, which is when a zoomed out level of description, a coarse graining as physicists like to say, a higher level of description of a system, if its evolution over time is statistically independent of what its constituent parts are doing, then it in some sense has a life of its own."
},
{
"end_time": 3563.882,
"index": 138,
"start_time": 3537.159,
"text": " then it is in some sense emergent, dynamically independent. And it turns out that the utility of this approach is that we can apply it in a purely data-driven way without making any presuppositions of saying, oh yeah, there's a flock, is it emergent? We can just identify potential emergent properties in a system and see how they look in different states."
},
{
"end_time": 3593.746,
"index": 139,
"start_time": 3564.991,
"text": " And just to say where we're at right now: for me this is a hugely exciting thing, actually. Often people say, well, conscious states are emergent from their neural underpinnings; the conscious brain is in some sense more than the sum of its parts. That all sounds very nice, and I'm sure I've said stuff like this many times before. But now, with the tools that Lionel developed and applied with a PhD student of ours, who's also working with others in Paris, Thomas Andrion,"
},
{
"end_time": 3623.268,
"index": 140,
"start_time": 3595.077,
"text": " we find something quite different, actually, which is that when the brain is in a conscious, wakeful state, there's less prominence of these so-called dynamically independent coarse-grainings than when the brain is unconscious under anesthesia. The slogan would be, a little bit not what we were expecting: emergence is lower in consciousness"
},
{
"end_time": 3648.524,
"index": 141,
"start_time": 3623.609,
"text": " than in unconsciousness. That would not be what I would have predicted a few years ago, or even two years ago; one year ago, I'm not sure. But when we operationalize emergence in this specific way, and with this specific data, that's what we find. But then that raises other interesting questions, and I think this is the beauty of actually"
},
{
"end_time": 3673.234,
"index": 142,
"start_time": 3648.899,
"text": " operationalizing these things, making them quantitative, because now we have another set of questions. Like, ah, maybe this is because in the conscious state, when you don't have emergence in the way we're quantifying it, what you actually have is something like scale integration, where what's happening at the macro and what's happening at the micro are much more interdependent. There's much"
},
{
"end_time": 3702.568,
"index": 143,
"start_time": 3673.729,
"text": " less separation of scales. And this takes us right back to what we were talking about with Mike, and indeed the whole idea of conscious AI. I said right at the top that in brains it seems to be harder to separate what they do from what they are. In a sense, this is a way of quantifying that hardness, and it seems that when the brain is conscious, it's even harder to separate what it does from what it is. You have this deeper integration of scales, vertically: not across time or across space, but across levels of"
},
{
"end_time": 3728.08,
"index": 144,
"start_time": 3703.131,
"text": " description of the system. And so for me, this is opening up a whole range of questions that haven't really been asked, certainly that I haven't asked before. It's a different way of looking at a system like this, and it all turns on having this mathematical method available. And for me, that goes right back to the serendipity of being curious about Granger causality 20 years ago. Hmm."
},
{
"end_time": 3747.415,
"index": 145,
"start_time": 3728.865,
"text": " There's some research saying that when one takes psychedelics, though it probably depends on the psychedelic, the brain is less active even though your conscious experience is somehow greater. Is this related to that, or have you not studied emergence in the brain under psychedelics?"
},
{
"end_time": 3777.944,
"index": 146,
"start_time": 3748.131,
"text": " It's a little related. We've done a little bit in collaboration: we don't have the license to collect our own data under psychedelics, but we've collaborated with people like Robin Carhart-Harris and others who have. We have not yet applied this same measure that I was just talking about to the psychedelics data, though it's very much on the cards, and there's no reason we can't."
},
{
"end_time": 3804.462,
"index": 147,
"start_time": 3778.507,
"text": " What we have done is we've applied other measures that have often been used in things like sleep and anesthesia as well that measure what we call signal diversity. And the story here is that when you lose consciousness, your brain activity seems to become more predictable. So the repertoire of states that it inhabits is lower."
},
{
"end_time": 3825.674,
"index": 148,
"start_time": 3804.855,
"text": " Running a business comes with a lot of what ifs."
},
{
"end_time": 3854.36,
"index": 149,
"start_time": 3826.015,
"text": " But luckily, there's a simple answer to them: Shopify. It's the commerce platform behind millions of businesses, including Thrive Cosmetics and Momofuku, and it'll help you with everything you need. From website design and marketing to boosting sales and expanding operations, Shopify can get the job done and make your dream a reality. Turn those what-ifs into why-nots and sign up for your $1-per-month trial at Shopify.com. When we applied this, this was now nearly"
},
{
"end_time": 3882.329,
"index": 150,
"start_time": 3854.889,
"text": " eight or nine years ago, to data from psilocybin and LSD, we found the opposite: the brain activity became even less predictable. So more diverse, more different patterns, less compressible, higher levels of complexity. That's one clue, but to me it's still very preliminary. This method of measuring signal diversity is quite"
},
{
"end_time": 3905.179,
"index": 151,
"start_time": 3883.285,
"text": " precarious: if you do it a different way, you tend to get different results. But there are other things we looked for that we didn't find in the psychedelics data set. I was expecting to see, for instance, much greater information flow from the front of the brain to the back. I thought, you know, that that might explain it."
},
{
"end_time": 3934.855,
"index": 152,
"start_time": 3905.64,
"text": " Now, lastly, speaking of surprise minimization,"
},
{
"end_time": 3958.268,
"index": 153,
"start_time": 3935.623,
"text": " What else has surprised you lately in consciousness research? What has surprised me? I mean, we can put it in a sidebar. I think the thing that surprised everybody, this is only tangentially related, is how simultaneously impressive and unimpressive language models are. Okay."
},
{
"end_time": 3988.2,
"index": 154,
"start_time": 3958.695,
"text": " They're really very different from how I thought they would be. They can do a lot more, but they also have sort of still bizarre failure modes and so on. So I just would not have expected the trajectory of language models to be as salient as it has been. That's certainly been a big surprise. What else has been surprising? I don't know."
},
{
"end_time": 4018.968,
"index": 155,
"start_time": 3990.23,
"text": " It's a really good question. I'm not sure that anything massively stands out to me. I'm sure something will come to mind as soon as we finish this conversation. As it does. There have been other things which have turned out kind of in ways that one might have expected. There was this huge adversarial collaboration between integrated information theory and global workspace theory. This big effort to compare these two big theories of consciousness."
},
{
"end_time": 4043.916,
"index": 156,
"start_time": 4019.616,
"text": " And of course it's turning out that there's evidence for and against both, and there's no decisive blow against either. And that's probably exactly what one would have expected, though there's still a lot of interesting and surprising things there in the details. But yeah, I don't know. There's lots of things that are,"
},
{
"end_time": 4071.254,
"index": 157,
"start_time": 4045.555,
"text": " I would say, small-scale surprising, like, oh, I didn't expect that experiment to go this way or that way, but I can't think of anything massive. The AI thing is sort of dominating my surprise-minimization landscape at the moment. Thank you both for spending so much time with me and the audience. Thank you so much. Yeah, much appreciated. Thank you. Thank you, Mike. See you both. Yeah, see you. Hi there."
},
{
"end_time": 4096.203,
"index": 158,
"start_time": 4071.613,
"text": " Kurt here. If you'd like more content from Theories of Everything and the very best listening experience, then be sure to check out my Substack at kurtjymungle.org. Some of the top perks are that every week you get brand-new episodes ahead of time. You also get bonus written content exclusively for our members. That's C-U-R-T."
},
{
"end_time": 4117.824,
"index": 159,
"start_time": 4096.476,
"text": " You can also just search my name and the word Substack on Google. Since I started that Substack, it somehow already became number two in the science category. Now, Substack, for those who are unfamiliar, is like a newsletter, one that's beautifully formatted, with zero spam."
},
{
"end_time": 4145.282,
"index": 160,
"start_time": 4118.114,
"text": " This is the best place to follow the content of this channel that isn't anywhere else. It's not on YouTube. It's not on Patreon. It's exclusive to the Substack. It's free. There are ways for you to support me on Substack if you want, and you'll get special bonuses if you do. Several people ask me like, hey Kurt, you've spoken to so many people in the field of theoretical physics, of philosophy, of consciousness. What are your thoughts, man?"
},
{
"end_time": 4174.616,
"index": 161,
"start_time": 4145.725,
"text": " Well, while I remain impartial in interviews, this substack is a way to peer into my present deliberations on these topics. And it's the perfect way to support me directly. KurtJaymungle.org or search KurtJaymungle substack on Google. Oh, and I've received several messages, emails and comments from professors and researchers saying that they recommend theories of everything to their students."
},
{
"end_time": 4201.698,
"index": 162,
"start_time": 4174.957,
"text": " That's fantastic. If you're a professor or a lecturer, or what have you, and there's a particular standout episode that your students or your friends can benefit from, please do share. And of course, a huge thank you to our advertising sponsor, The Economist. Visit economist.com slash toe to get a massive discount on their annual subscription. I subscribe to The Economist and you'll love it as well."
},
{
"end_time": 4226.084,
"index": 163,
"start_time": 4202.159,
"text": " TOE is actually the only podcast that they currently partner with, so it's a huge honor for me, and for you, you're getting an exclusive discount. That's economist.com slash toe. And finally, you should know this podcast is on iTunes, it's on Spotify, it's on all the audio platforms. All you have to do is type in Theories of Everything and you'll find it."
},
{
"end_time": 4251.664,
"index": 164,
"start_time": 4226.357,
"text": " I know my last name is complicated, so maybe you don't want to type in Jymungle, but you can type in Theories of Everything and you'll find it. Personally, I gain from rewatching lectures and podcasts. I also read in the comments that TOE listeners gain from replaying. So how about instead you relisten on one of those platforms: iTunes, Spotify, Google Podcasts, whatever podcast catcher you use. I'm there with you. Thank you for listening."
}
]
}