Theories of Everything with Curt Jaimungal

William Hahn: Top AI Scientist Unifies Wolfram, Leibniz, & Consciousness

February 11, 2025 · 1:04:29


Transcript

[0:53] We're here at the MIT Media Lab. I'm here with Professor William Hahn. The links to his work are on screen and in the description, as well as the previous time that we spoke, which was a fantastic podcast and I think people should check it out. Thank you so much. It's really great to speak with you again. I've been looking forward to this for
[1:22] some time now. So we spoke off air yesterday, and the day before you were actually there for Jacob's podcast, which is also on screen. You had some ideas as to how to unify Leibniz with Stephen Wolfram's work. Please talk about that. Yeah. So really the idea is bringing together ideas from consciousness and computation. And so typically when we think of consciousness, we have this kind of light-switch view
[1:51] that it's either on or off. And there's an idea attributed to Leibniz where it's really a hierarchy. It's kind of a ladder of different capabilities. And it revolves around the ability to represent your ideas, to think about your thinking, and then to think about that thinking, and so on, as this sort of hierarchy that goes up. And if we think about the simplest animals,
[2:18] they have the sort of first-order things like sensation and response to stimulus, but they don't really have any way of representing that response. And it seems that by thinking in terms of representation, and then representing your representation, we can have things like language, and then we can build, you know, philosophies on top of that language. And
[2:43] it's a way of modeling the model and a way of talking about the talking. And so we get this more analog, or at least stepwise, approach to consciousness, where it's not just on or off, but it's kind of this discontinuum. And I think this is relevant now because we're in this age where we're building intelligent machines.
[3:08] And I would argue that these machines that we have in the form of these modern AIs are quite intelligent. And the question now is, do they instantiate something like consciousness? And if so, is it like human consciousness? Is it something more like a lower animal where it's just knee-jerk reaction, you put in the prompt and it just spits out the tokens? Or does it have some way of knowing what it knows and thinking about what it's thinking about?
[3:37] And maybe the current versions that we see right now in early 2025 don't. But it seems to me almost inevitable that if we keep going in this direction that we will kind of close the loop on this thing. That the snake is going to bite its tail and we will bootstrap this thing we call consciousness inside these machines. But we might need a kind of graded scale to try to understand that.
[4:06] And then hopefully this will help us understand things like animal consciousness, or even higher levels that human consciousness might achieve. So when someone says that consciousness is a continuum, and it's not a binary on or off, I think you can always translate a continuum to an on or off by saying: is it zero or nonzero? So, for instance,
[4:26] we say that a photon is chargeless. We don't say it has a little charge. Or the electron: we say it's a charged particle, even though its charge is minuscule. Right. So when you say that consciousness is a continuum in its representations of representations, do you believe there are entities or things or objects, or what have you, that are not conscious, or is it just all conscious? Well, I think that's a great question. I think that's at the heart of it: trying to figure out,
[4:54] Does this have that threshold? Do we want to think about things like rocks, for example, as being conscious? Classically, that would have been the pinnacle example of, no, clearly it's not. But I would argue that things like a modern GPU are rock-like objects. They're these semiconductor layers, very delicate, intricate patterns.
[5:19] But essentially what the classical world would say is a fancy crystal of some kind. And now we're instantiating this intelligent, possibly ultimately conscious behavior out of this rock-like object. So the situation is getting more interesting than it was previously. How does this tie into Wolfram? So Wolfram has this fantastic idea of computational equivalence that
[5:50] The idea is that the natural world stumbles upon computers quite often. That the idea of a universal machine that Turing established, which is a machine that can simulate any other machine. It's a machine that runs software. And in that software programming, you can get this universal machine to simulate the behavior of any other machine. And in the early days of computing, it seems like computers were
[6:19] very delicate human artifacts that took a lot of engineering work to put together. But what Wolfram showed with his cellular automata research is that it's relatively easy to stumble upon a system that's complex enough to support universal computation. He has this breakdown of sort of four different classes, using cellular automata as the case study,
[6:48] but he argues that any system that's not obviously simple, any system that's sort of interesting in its dynamics, the majority of the time is going to support universal computation. So, in other words, there are these natural computers
[7:07] all throughout nature, and previously we didn't recognize them or understand them as computational objects. I think that's what's just so interesting about this new era of maybe what we might call philosophy: that we have this new language. We can invoke the metaphor of a computer and software and program and universality, whereas, as we spoke last time, the best minds on the planet didn't have access to those thinking tools.
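The cellular automata being discussed can be sketched concretely. A minimal illustration, assuming the standard elementary-automaton setup (each cell updates from its three-cell neighborhood via an 8-entry rule table; Rule 110 is the rule later proven capable of universal computation):

```python
# Elementary cellular automaton: each cell's next state depends only on itself
# and its two neighbors, looked up in an 8-entry rule table encoded as a byte.
# Rule 110 is the classic "class 4" rule proven universal.

def step(cells, rule=110):
    """Advance one generation of a 1-D binary cellular automaton (wraparound)."""
    n = len(cells)
    out = []
    for i in range(n):
        # Read the neighborhood (left, self, right) as a 3-bit number 0..7,
        # then look up that bit of the rule number.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

def run(width=31, generations=15, rule=110):
    """Run from a single seed cell and return all generations."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = [cells]
    for _ in range(generations):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Watching the printed rows, even this tiny rule produces the intricate, non-repeating structures Wolfram's classification is about.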
[7:35] And so if that's really true, that there are these computer-like objects all over the place, and now we have computer-like objects that might be instantiating consciousness, namely a GPU running a large language model, which obviously would be highly debated, whether or not that's conscious. But if we take that in the abstract sense and extrapolate that out a decade or so, to me it's
[8:00] It's clear that we will have a hard time, or it might be fruitless to argue whether they are or not, because they'll be so similar to the kinds of things that we see in humans, that maybe they're actually all throughout nature. Maybe throughout the universe we should expect to find objects that are universal machines, sort of spontaneous in nature. And we want to think about now what's the
[8:30] chance, essentially, that they have a software program that might be somewhere on this ladder of consciousness that is attributed to Leibniz. Are you referring to computational equivalence? Exactly. Yeah. OK. So in the Leibniz case, you were referencing representations of representations, and you have a hierarchy. Right. In the Wolfram case, it sounds like you have what is not a computer, and then what exhibits computer-like qualities. And then you hit universal computer. Right.
[9:00] But there's nothing above that. Exactly. And that's kind of the point: you smash into this sort of upper ceiling of computation relatively quickly, right? It's sort of a low speed limit, and whatever vehicle you're in quickly hits that speed limit, because that's what's so profound about universality: that's the ceiling. There's nothing more powerful than that. And so if that is relatively easy
[9:25] to create spontaneously in some sense by chance we get a computer-like object, a universal machine, then we want to think about, well, what's the... the analogous part is the software that would go with it, right? So Wolfram suggests that it's relatively easy to find computers in the wild. Well, what's the chance they have any interesting programming to go with that?
[9:48] So is there a similar ceiling with representations? Is it the case that when you hit the fourth representation you've hit them all, and is there a universal representation at level six? So I think you hit the nail on the head with that one, because that's really kind of what I've been thinking about. I like how you phrase it. Are humans already at that universality? Our intellectual sphere, our language, our culture: have we maxed out?
[10:13] Or are we going to transcend this era, reach another level of consciousness, maybe called spirituality, whatever that might be, another layer of evolution, or is this it, right? That we have the ability to represent our representations and we can think about our thinking and then we can think about that and so on. And I don't know, right? I would like to hope that we're not at that universal level yet.
[10:42] I think we mentioned last time on the channel, this sort of path from ape to angel, right? Or from rocks to light, you know, kind of thing. Where are we on that sort of evolutionary trend, in a sense? And I think that we're not at the angel stage, right? We're not quite there yet.
[11:10] And so it reminds me of a quote, I forget who says it, we'll have to look it up, but: we're angels with assholes. We're still these corporeal beings that eat sandwiches and have to go through our daily lives. We're not this ephemeral being that's just pure thought and energy and things. But it seems like the AI is on that trajectory.
[11:32] And I wonder if maybe there's just a discontinuity and as others have suggested, maybe the biological sphere just bootstraps the AI and then that's the next stage of evolution. Because for lack of a better term, they don't have assholes. Is the question of have we hit the upper limit equivalent to the question of are there thoughts that we can't think of?
[11:57] So now what I mean by equivalent to is we can imagine that let's look at this room and there are thoughts everywhere and we have access to them. Okay, cool. Then there are thoughts that are outside this room, we don't have access to them. Then we'd say, okay, we have not hit the upper ceiling. And then we can say that, okay, so in that case, it was if there exists a thought that's outside this room, then we have not hit the upper ceiling. We can also say, if
[12:20] We have not hit the upper ceiling. It implies there is a thought that's outside this room. So I'm not sure if that's the case, because then you would make an equivalence between those two. Right. Well, I do think it's the case there's unthinkable thoughts.
[12:32] And the question is, are we able to think about them in the abstract sense? Maybe not the actual thought, but do we really understand or can we even appreciate that there's things outside of our conceptual and perceptual window? And that's what I've been thinking about a lot recently or trying to and
[12:53] You know, we mentioned Richard Hamming last time, and he talks about this idea. He says there are smells that dogs can smell that we can't smell, and they can hear things we can't hear, and there's lots of things in the animal kingdom that have a larger perceptual window. And he argues: why shouldn't there be thoughts that we can't think?
[13:10] Right, it seems quite clear. And also, even just arithmetically, there are numbers that we can never conceive of. At least we can write them down, like Graham's number, but we can't actually conceive of it. We actually don't understand what it means for a number to be greater than 150 or so.
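As a small illustration of that point (an editor's sketch; the `tower` helper is invented here, not something from the conversation): power towers are trivially easy to write down, and almost immediately impossible to hold in mind. Even a height-5 tower of 2s, a microscopic warm-up compared to Graham's number, already has about twenty thousand digits.

```python
# We can *name* towers of exponents long before we can meaningfully
# conceive of their values. tower(2, 5) = 2**(2**(2**(2**2))) = 2**65536.

def tower(base, height):
    """Right-associated power tower: tower(2, 3) = 2**(2**2) = 16."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

if __name__ == "__main__":
    print(tower(2, 4))              # 65536 -- still graspable
    print(len(str(tower(2, 5))))    # digit count of 2**65536: already ~20,000
```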
[13:26] We can't fit these things into our mind. We can kind of only approximate them. So, to someone listening, they're like, well, what are you talking about? The philosophical term, I believe, is Umwelt. It's quite clear that our Umwelt is bounded and has some overlap with other animals. I mean, that's not quite clear, but it's not obvious either. OK, so then they're thinking, OK, so what's so profound about this? Well, so
[13:54] Thinking about these sort of unthinkable thoughts has led me in a few interesting directions. Why can we not get to that space? I think I mentioned last time the future is not what we think it is. Because if we knew exactly which direction to go, we could just go in that direction. And so there's some kind of
[14:15] barrier. And the question is, is this like a natural barrier? You know, we just don't have enough neurons, we don't have enough cortical layers to represent the representations, and so on, and then there could just be another hierarchy. Or, something I've been thinking about more recently, is that there's a more practical reason why we can't think these thoughts, and it's kind of a sort of an immune system that we mentioned before. That the mind, our mind
[14:45] itself is trying to protect us from a variety of thought patterns that would essentially destabilize both our mental patterns and then ultimately our physical self if the thoughts go into a terrible spiral. So from an evolutionary point of view, there might be this kind of filter, this barrier that evolution has instantiated to prevent us from going nutty, essentially.
[15:14] And that if, from a Darwinian evolution perspective, we just want to kind of populate the planet the best we can, you don't need this kind of intelligence, right? Intelligence could be kind of a Fermi-style filter that's actually preventing a lot of progress. If we think about, you know, the reign the dinosaurs had on the planet, it was 100 million years, 200 million years, and humans haven't had that kind of longevity on the planet yet. And
[15:44] It's hard to imagine us continuing the way we're doing for that much time. But imagine like whales and dolphins. You could easily imagine them doing the same thing they're doing for millions of years into the future and being happy to do it. But this also leads me to think about what thoughts our language supports. And I used to think that English was good enough. But now I'm not so sure. And
[16:12] I think what's really interesting is we've taught these language models not just English, but all of the human languages that we could get our hands on. And I think something that's particularly relevant is the computer languages and the mathematical languages that we've also put into that system. So it has representations that it can map into English, but its mind, so to speak, doesn't really operate in English. It operates in these word vectors that are independent of any particular language.
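The word-vector idea can be sketched with a toy example (the vectors below are invented for illustration, not taken from any real model): words from different surface languages land near each other in a shared space, and distance in that space, not spelling, is what carries meaning.

```python
# Toy "word vectors": each word maps to a point in a shared space.
# Cosine similarity measures how close two meanings are, regardless of
# which human language the surface word came from.
import math

embedding = {
    "dog":    [0.90, 0.10, 0.00],
    "hund":   [0.88, 0.12, 0.05],   # German "dog" lands near English "dog"
    "banana": [0.05, 0.90, 0.20],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

if __name__ == "__main__":
    print(cosine(embedding["dog"], embedding["hund"]))    # close to 1.0
    print(cosine(embedding["dog"], embedding["banana"]))  # much smaller
```

Real models work the same way at far higher dimension, which is why "dog" and "Hund" end up interchangeable inside the network even though they share no letters.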
[17:09] Okay, this is extremely interesting. So I have two thoughts, and you can take both of them if you like, or choose a direction. So when you said that you thought English was enough, are you saying that you thought there was something like a universal language, and English is one of them? Chinese may be another. There may be some languages where they only have a hundred words, and maybe those are insufficient to represent. In other words, some people say there are some words in some languages which you cannot translate: untranslatable words.
[17:37] And technically, they should say they're untranslatable into another single word. And let's pick one untranslatable word. So in Turkish, there's some word for the feeling that you have after you drink coffee with friends, something like that. Let's say that. I just described it there with a set of 10 words or so. And if it was German, you would just smush them all together and make one new word out of it. Right. Right. OK. I think Turkish does that actually as well. OK. So is that what you meant when you were saying you thought English was enough? Right. OK. It was, like you mentioned, this universal software framework
[18:07] that we have this idea of a universal machine. But, you know, there's this idea of a Turing tar pit: just because everything's possible doesn't mean anything's practical, with, say, a Turing machine or some other abstract computer. And so the idea is, what computer languages would be sufficient, right? So, as an analogy, we could say BASIC is not a sufficiently powerful computer language to instantiate the internet or something like that.
[18:33] Or it might be, but it would require orders of magnitude more complexity in the coding. Whereas something like C ended up being sufficient to kind of create the modern software stack. And so I suspect now that English is not quite as universal as I previously thought it was. This is why I've been looking at things like musical languages and conlangs and esoteric languages to get some ideas about where could we put these unthinkable thoughts.
[19:03] Because when you learn those new languages, you can suddenly think in new, maybe more compact ways. You don't have to have a whole sentence; you can have just sort of one neuron. But, like you said, I would argue that there are no untranslatable words. And if there were, you couldn't talk about them at all. You wouldn't even be able to mention that they were untranslatable in some sense. And so that would be in the space of sort of maybe not unthinkable thoughts, but incommunicable thoughts. And one of the ideas I've been thinking about this year is
[19:33] The set of ideas for which you can take it apart, like Legos, right? They have the Lego Lab upstairs. And Legos, you could take apart into little bricks, and like language, I have an idea in my head now, and I'm decomposing it into a serial sequence of symbols, and then you're taking those and you're putting them back together, and I hope that you get the same kind of mental representation. But you could define a set of ideas for which precisely you cannot do that, right? Where if you take any one piece, it's not the same thing anymore.
[20:03] um, that it doesn't go down into a sequence of things. And that might be, uh, you know, related. It reminds me of things like gnosis, right? Knowing from the inside where you can't explain, you know, things like faith and God and all that. And I think a lot in the modern world, people were like, well, explain it to me, you know, explain what you mean by faith and God. And it's like, well, that's not how it works. It's precisely the kind of idea that I can't
[20:28] Let me have fun with this immune analogy. Is it the case then that there would be autoimmune disorders of the mind? The mind sees something that's actually benign or even salubrious or nourishing and thinks that it's not and so it tries to remove it and perhaps it attacks itself in the process?
[20:58] It's a fantastic idea. I'd have to think about that a little more, but I think in some sense, absolutely, that there's going to be, and that might be kind of the basis for mental diversity and mental disorders, if we might call them that. Because now we're better understanding language than ever before. By building these machines that instantiate language, it gives us this petri dish, this microscope.
[21:29] I think there's a lot of mystery around how these language models operate. And I think the real mystery is in the words, in the language. We didn't understand what words were. We didn't really understand what language was. And we thought it was this communication kind of thing. And I've been thinking about it more, with my colleague Dr. Barenholtz, as an organism, right? Something that's alive.
[21:56] and that we need to invoke the framework from biology to think about it not just as communication, but as a, I've referred to it as a parasite in some sense, and not necessarily pejorative, I've referred to it as a divine parasite, but it's something that lives on the brain, but it's not the brain.
[22:22] One of the ways I've been thinking about this is with things like multiple personality syndrome and split personality syndrome that I think those are evidence that when I'm talking to you now, I'm not talking to your brain, right? I'm talking to this entity that lives in your brain. And when we see people with multiple personalities, it's clear that they're instantiating more than one operating system, software, stack,
[22:48] on the same hardware, and they're multiplexed, like we run more than one app on our phone at a time. The brain is rich enough of a substrate, of a backdrop, to have multiple plays unfolding at the same time. And then this leads into kind of the other side of unthinkable thoughts and this immune system concept, in areas of things like mind control, and: is our brain a programmable object?
[24:04] Okay, so
[24:30] when we speak about
[24:52] every single thing... I would assume the brain's quantum, which would, sure, okay, well, whatever, it doesn't matter. We get down to the electrical level, we say patterns of firings. So I imagine the patterns of firings, along with your brain, are the personality, or could be. I don't know. I would say that's at least an ingredient, right? So let me know what you think about the distinction between hardware and software as it relates to the brain. Yeah, so one of the things we spoke about last time is this idea of a virtual machine.
[25:17] And as I mentioned a minute ago, if we go back into neuroscience a hundred and fifty years ago or so, they didn't have the computer metaphor and they didn't have the software metaphor to understand it. Well, now we do, and that's what caused the cognitive revolution: to think, OK, now we know that there's this thing called a program and so on. But I think we need to take it a step further and think about machines that themselves are made out of software. So the example we gave last time that I like to think about is
[25:45] this idea of an emulator, which the listeners who play video games might be aware of. This is a piece of software that you run on a modern computer that simulates an older computer. So if you want to play Super Mario on your MacBook, you can get this program; it doesn't run Mario directly on your Mac chip. The Mac chip simulates, it pretends to be, a Nintendo chip, and then the Nintendo chip naturally can run the Mario software.
[26:16] And what's really interesting is you can do that again. You can have an emulator running on an emulator. You could somehow run an Atari game on a Nintendo. Yeah, I've never thought about that. And so it could be virtual machines all the way down in some sense. And so I think when we try to understand language and the brain and the self and personality and ego and all that kind of idea, we need to consider that there could be multiple layers
[26:42] between the thing doing the talking and the neurons doing the firing. And traditional neuroscience would suggest this kind of a one-to-one mapping, right? And I think the reason why we haven't made as much progress in those spaces is because there's multiple representations.
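The "virtual machines all the way down" picture can be sketched as code (the names and instruction set here are invented for illustration): each layer executes an instruction by translating it into calls on the layer beneath, and the layers stack without the top one knowing, or caring, how deep the stack goes.

```python
# A minimal stack of virtual machines. Each layer exposes the same interface
# (execute an instruction) but works by delegating to the layer below it.

class Hardware:
    """The 'physical' layer: executes primitive arithmetic directly."""
    def execute(self, op, a, b):
        if op == "add":
            return a + b
        if op == "mul":
            return a * b
        raise ValueError(f"unknown primitive: {op}")

class VM:
    """A virtual layer: translates higher-level instructions into ops on
    the layer below, which may itself be another VM."""
    def __init__(self, below):
        self.below = below

    def execute(self, op, a, b):
        if op == "square":
            # Not native to the layer below: compile it into a primitive.
            return self.below.execute("mul", a, a)
        # Everything else just passes through to the next layer down.
        return self.below.execute(op, a, b)

if __name__ == "__main__":
    # Three virtual layers on one 'hardware' layer; the caller can't tell.
    machine = VM(VM(VM(Hardware())))
    print(machine.execute("square", 7, None))
    print(machine.execute("add", 2, 3))
```

The point of the sketch is exactly the one being made about the brain: the entity "doing the talking" at the top of the stack can be several translation layers away from the substrate actually doing the work.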
[27:01] So when one references these turtles, the turtles all the way down, I take it extremely seriously. So when you say it's virtual machines all the way down, do you actually mean that, or do you mean that it's multiple levels of virtual machines, but there's a finite end? Well, that's a good question. So in the brain, I think there could be multiple layers of this virtual machine, more than we think. But it makes me think of a much broader idea in terms of matter, right, that
[27:30] Quantum information theory would suggest that matter itself is an information-like process and that when we get down below the atoms there might be software again and it's sort of this interesting loop where we have information at the bottom which somehow instantiates the atomic reality and then on that we build computers out of the atoms and then we get software again. It would sort of complete the circle.
[27:58] Um, so both in our mind and in the universe, it might be sort of these virtual turtles, right? Which I think is an amazing reference; we're in the MIT Media Lab, where Seymour Papert developed the turtle programming language. Earlier, when you were talking about representations, you're saying, okay, we can represent representations, et cetera. At some point, at the first representation, you're representing something that was presented in order for you to represent it.
[28:28] Do you believe that's the case or also do you believe there that it goes infinitely downward? Or do you believe there's some non-represented substrate that gets represented? I think in the case of humans, there's going to be something like a bottom in some sense, right? We see that with cells and things. But it's not clear where that happens.
[28:53] You know, I think there's all this great work, you've been following it, with Michael Levin, looking at the cell and the electrical activity, and that there's a lot more interesting there that classical biology kind of just glossed over. And now we need to go back and realize, no, there's really interesting software at that layer. And throughout the engineering world, we always build a virtual machine, and a lot of times those are just what we call programming languages,
[29:20] right? So if we think about, like, the electrical activity of the cell, and it has some kind of software, it also will have evolved, it seems likely, some kind of other abstraction to make it operate more efficiently. Just as an analogy: we know about NVIDIA GPUs, and we know about these large language models. Well, we don't really work directly with the GPU; we create abstractions like Python
[29:45] and PyTorch and intermediates. PyTorch is an imaginary machine that has things like vector operators and matrix multiply, that then get mapped to the low-level instructions on the GPU. So in the modern world we never actually talk to the hardware directly; we build a virtual hardware that's much easier to use. And I would imagine that nature and evolution and biology would have done the same thing.
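The abstraction being described, a high-level operator that bottoms out in primitive instructions, can be sketched like this (illustrative Python, not how PyTorch is actually implemented): a matrix multiply expressed purely in terms of two stand-in "hardware" instructions.

```python
# A "virtual" matrix-multiply operator built only out of low-level scalar
# instructions, analogous to how a framework maps high-level ops onto
# GPU primitives. The scalar functions stand in for hardware instructions.

def scalar_mul(a, b):
    return a * b

def scalar_add(a, b):
    return a + b

def matmul(A, B):
    """High-level op expressed only through the scalar 'instructions' above."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0
            for t in range(k):
                acc = scalar_add(acc, scalar_mul(A[i][t], B[t][j]))
            C[i][j] = acc
    return C

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(matmul(A, B))   # [[19, 22], [43, 50]]
```

The caller thinks in terms of matrices; only the bottom layer ever sees an individual add or multiply, which is the division of labor the analogy to biology rests on.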
[30:11] So we have an audience here of Professor Barenholtz, and I'm sure he's champing at the bit and has some questions. Yeah, I can't wait for you to have him on the show. I feel that also; I would like to talk to you at some point. But at least for now, the audience can hear a disembodied voice, or perhaps just my voice reiterating the question. Before we get to the audience questions, tell me more about these unthinkable thoughts. Well, one of the directions I've been thinking about in that is
[30:38] the thoughts that other beings think about, right? And the classic example is: what is it like to be a bat? A classic paper from '74, I think. And, you know, I wonder if it's anything at all to be a bat. And I know Elan has some interesting ideas about
[31:01] I don't know. I think I take a strong approach to this: that maybe it's not like anything to be a bat. There's this idea in psychology called infantile amnesia, and it's that babies don't really have any episodic memory.
[31:29] And I actually encountered this as a child. I remember I was, I don't know, seven or eight or something like that. And I asked my mom about her oldest memory. And I was, I didn't understand at the time how she could remember decades of her childhood. But as an eight-year-old or whatever it was, I couldn't remember being two. Right? And I was saying, well, that's only six years. How is it that after six years, I can't, that's too far back to remember, but my mother could remember back multiple decades? How would that sort of thing work?
[31:56] And I think one of the explanations for infantile amnesia might be this idea of representation. That without language, right, we install language into children between the ages of one and two and so on. And without that framework, it would be nearly impossible to have an episodic memory system without having a labeling system, right? And so when I can say, oh, that nice dinner we had yesterday with Jacob, and have a label,
[32:26] right, and with that label you can reference this experience, and we can say, oh, that was an episode, and I remember that. But how would you refer to it in the space of all possible mental states without a language system? It's like the Dewey Decimal catalog: without having this card, how would I know that's the one? I mean, there would be no meaning to that experience. And so when we think about what it is like to be a bat, I think they might have a
[32:54] fleeting, ephemeral sensory loop that would involve things like pain and pleasure, but it can't represent those things. It can't recall them. It has no way of knowing that it went out and got fruit that morning, because how would it recall that mental state without a language-type thing?
[33:17] Maybe we need to do more research and find out, well, they do have a language type system and they're tokenizing the sonar and things like that. I don't know. But it would seem to me that we should be skeptical that it's like anything to be those systems. And I take it to another extreme and say that it's, you know, what is it like to be a human being? And do we really know or is it just this approximation
[33:44] that our language model can reference different episodes and different states and so on. Because the example I like to give is: the vast majority of our so-called experience is outside this thought window. We don't even know that we have gallbladders, let alone what they do, right? Most people can't name the organs; they don't even know what organs they have, right? So if we don't even have words for them, we don't have representations for them, they don't exist, right?
[34:13] I remember a few years ago a student was saying they wanted to be hollow, and they didn't quite know what that meant at first. And not in, like, the shallow sense; in the sense that they didn't want to have to have all of that complexity, because from their experience point of view, they don't, right? The thing behind your eyes that's talking is, most of the time, unless something goes wrong, completely unaware that we even have all those systems operating in parallel. We're not thinking about our toes, we're not thinking about our ears. And, strangely enough, we're not usually thinking about thinking.
[34:41] I think most of, and this might again be too strong a statement, but it seems like most people, most of the time, including myself, are not thinking about thinking. We might be thinking, but we're not thinking about the thinking. And then we take it a step further and think about that, but that's when you start to get nutty. That's when your mental immune system kicks in and says, it's time for lunch. Do you think that getting nutty, quote unquote, is reaching that universal language model? That's an interesting thing.
[35:10] That touches on this lethal experience, lethal concept idea. As we said last time, if you see the face of God, it'll be the last thing you've ever seen. Can we reach that? Or is that transcendent? Will you just simply die? Or maybe not in the physical sense, but you're not a human anymore. And in thinking these unthinkable thoughts, you have to be willing to go crazy.
[35:36] And if you're afraid of going nutty or afraid of going crazy, you can't get past these barriers that your mind has built. Well, what's the point? Why do you have to be willing to lose your mind? Why would you want to at all? I think one way of defining losing your mind is sort of leaving everyone else behind, is going off
[36:00] in an abstract vector space direction of thought where there's no one else there, or you might be in very rare company. And maybe you could define being human as that overlap, the reason why we can communicate, because we have the shared experience and
[36:20] a sort of common set of beliefs and ideas and languages, and so when you escape that, you're not in the herd anymore. So there's a difference between having thoughts that other people have not had, so you're in uncharted territory, and also just having a wrong representation of reality. Okay, so Gödel, as we talked about in the car ride: it's not just that he was in uncharted territory where you may have some gold that you can then bring back, and you're like Steve Jobs and you've invented the iPhone, supposedly, even though those engineers also
[36:49] helped with that. It's more like, in Gödel's case, he believed the government was poisoning him, or someone was poisoning him. But I think that's a perfect example of somebody who went out into the mines, brought back a gold nugget with his theorems, and lost maybe his humanity, maybe his mind, but became too far away, to where other humans could no longer say, hey, just sit down, have a bowl of soup. He couldn't take a break for lunch.
[37:19] Okay, I would like to talk to you for so much longer, but I know there are some questions here. I'm curious what you think about the software hardware dichotomy and in particular the language in the case of humans and other species that do not have language.
[37:40] Is there a completely different level of abstraction that took place at the symbolic level, because language fundamentally survived, right? Okay, before you answer the question, can you summarize the question? Yeah, so Dr. Bernholz is suggesting that when we get this symbolic language layer, did that create a new layer to reality, right, a new expansion of the universe into this new dimension? And it seems like it did,
[38:10] Because I think that might explain this extreme chasm between humans, human behavior, and then even other obviously intelligent animals, whales and primates and things like that. It seems like we are of a different category. And especially if we go back to the ancient world, it was just sort of patently obvious that humans were on this spiritual realm that was just miles above the plants and animals and the backdrop.
[38:38] And so maybe language did get this universality that we didn't have before; that brains, birds and dolphins and things have one kind of universality on the hierarchy, but this language instantiates a whole other. But maybe it's just a virtual machine that's much more efficient, and that
[39:01] brains are really powerful, but even better is this software brain, via language, that rides on top of brains. And maybe it's still universal, but like a Mac that comes out today versus the Apple II: they're both universal machines, but the Mac can accomplish tasks that seem impossible from the point of view of an Apple II. So do you see that as continuous in some ways, or completely discontinuous? Well, that's a great question.
[39:31] You know, in complex systems, there's this famous thing, I think it was Anderson said, more is different. Right? And the idea is that you get like a phase transition. And the classic example is you have a pot of water on the stove with a digital temperature setting. And if you just click up one degree every few minutes, nothing happens. Nothing changes about the system. And then suddenly you hit this inflection point where you get boiling.
[39:54] Right, so it's clear from a practical point of view that while there is this universality, we do get phase transitions in capabilities, right? So we had cell phones in the 90s, but they didn't do any of the things that our cell phones can do, even though they had universal processors inside them. They hadn't gone through this phase transition of capability. And so it could be that language is like that, that it gives us a whole new
[40:24] set of apps that we can run. But it also reminds me of this idea that Marvin Minsky talks about, the founder of the Media Lab that we're in now. And he talks about a car and when a car is running. And we all know what it means when the car is running, right? And it is sort of this digital thing, right? The car is either running or it's not.
[40:47] Maybe you could get into the weird, you know, kind of in-between states and stuff. But to a first approximation, it's sort of digital. And he argues that consciousness is like that, that it's this thing, but he argues that it's not mysterious, right? It somehow comes out of this dynamic of all of the parts, and all of those things together instantiate this thing called a running car. And as we know from auto mechanics and maintenance, it's subtle and it's,
[41:17] or rather, it's fragile. And if one system in the thing fails, the car won't start. So it does seem kind of like this, and I think we need new language, and we need to explore this analog-digital dichotomy, because it seems like the boiling: as we add more complexity, you get new behaviors emerging. And this is exactly what we've seen with these large language models, with the so-called scaling laws, that we...
[41:47] We took systems that a few generations ago couldn't very well predict the next word. And if they did, it was just word salad. And then suddenly just scaling that system up, just training on a larger data set, and then suddenly it can do arithmetic. And then you scale a little more and it can do algebra, scale a little more, it can do theoretical physics. And what's the limit to that?
[42:10] I think that we're not that far away and we're approximating already not just artificial general intelligence, but artificial superintelligence. And I think that when we start training these things on video, for example, like so the large language model like ChatGPT was trained on Wikipedia and the historical archive of books and Reddit and things like that. But it hasn't watched YouTube yet, right? It hasn't been embodied in a robot where it actually gets to play in a sandbox for three years in a row like a little kid does, right?
[42:39] Little kids will slosh water around for hours on end, just learning how fluids work and what a container does and all that kind of stuff. They don't have any of those tokens, let's call them. And so I think we don't even need a new recipe. I don't think it's going to be mysterious how we're going to get to machines that can make real, genuine advancements in chemistry and physics and astronomy. Just give them a telescope. Give them more data.
[43:07] And give them experience, and I think we'll get those phase transitions to where we have really capable machines. When you talked about animals, and that they do experience suffering: at first it sounded like what you were saying was more the Descartes view, that animals do not have consciousness, that it's purely a human phenomenon, and thus you can torture animals and ignore their screaming because that's just the clanking of a machine. Okay, so you're not saying that.
[43:33] But you were saying that animals don't have a self model. But then do we have a self model that persists, that actually persists? Or is it more of the Buddhist notion where there's some transient self? You know, I think it's clear that they would experience things like suffering, but I don't know. It's not clear that they have the ability to represent it. And so they don't know that's what's happening to them.
[44:01] And while they'll probably, you know, take actions to try to move into a different environment to reduce that, they can't really lament about it, right? They're not going to write poetry and song about it to try to express those internal states, because they're sort of farther down on that ladder. Do you believe that having a self model is also binary, or is it somehow continuous?
[44:30] That's a great question. And the first thing that comes to mind is, do we have one self or do we have many selves? And I'm of the opinion that we all have multiple selves, and that when we get a breakdown in that delicate dance, then multiple personalities can suddenly emerge. But I think in our subconscious we have all of those things. We represent other people
[44:58] in our mind. Something we spoke about last time is that I carry people in me, in some sense. And the idea, again, is related to software: there's the classic expression, the ghost in the machine, but I think we're machines made out of ghosts, essentially. And are they like selves? You know, we kind of have this one primary self, but I have a copy of you in me, right? It's how I'm able to communicate with you and think about you when you're not in my visual field and things like that.
[45:28] It's not as rich as the self that you have, but it's kind of
[45:32] like a hologram: it's a lower resolution, but it somehow captures the whole thing in some weird sense. That's interesting. So when I speak to people on this channel, people ask me, how do I prepare? I prepare for weeks in advance, until I get to the point where I can emulate the person. That's when I say, okay, I've sufficiently prepared: when I can answer almost any question that I can imagine from the point of view of the other person and be correct. And I can test myself by asking,
[46:01] okay, what questions did this person get asked in an interview? You can put that into an LLM and say, don't give me the answers, right? And then I can say, what would they say? What are they likely to say? So, you know, in the large language models, within the neural networks, we have this weight pruning, and we can do a reduced bit depth for each of the weights, right? So you have a low-bit-depth representation of me, essentially, right? But then we need to think of it like an ecosystem.
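The "reduced bit depth for each of the weights" mentioned here is what's usually called quantization in neural-network compression. A minimal sketch of the idea, assuming a simple uniform scheme and a hypothetical `quantize` helper (not any particular library's API):

```python
import numpy as np

def quantize(weights, bits=4):
    """Uniformly quantize an array of weights to the given bit depth.

    A toy illustration of the 'reduced bit depth' idea: each float is
    snapped to one of 2**bits evenly spaced levels between the array's
    min and max values.
    """
    levels = 2 ** bits
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / (levels - 1)
    # Map each weight to its nearest integer bin, then back to a float.
    q = np.round((weights - lo) / scale)
    return q * scale + lo

w = np.array([0.12, -0.53, 0.98, -0.07])
w4 = quantize(w, bits=4)   # coarse: only 16 distinct levels
w8 = quantize(w, bits=8)   # finer: 256 distinct levels
# More bits means a more faithful reconstruction of the originals.
assert np.abs(w8 - w).max() <= np.abs(w4 - w).max()
```

Fewer bits means a smaller, cheaper model at the cost of fidelity, which is the sense of "a low-bit-depth representation of me" above.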
[46:29] Because how many people do you have in your mind? How many people have you interviewed, right? You have all of those ghosts in your machine, essentially. Yeah, speaking of unthinkable thoughts. Well, I had these experiences. I've had an experience, let's just say that, and I'll be somewhat vague about it. And we talked about it off air, so I'll just briefly speak about it where
[46:55] I felt like I was losing my mind
[47:15] Each week I'm interviewing someone who has an entirely different point of view from someone else, and I have to take them on: first emulate them, but also think that they could be correct. Because I can't be dismissive of you or contemptuous of you, and think my model is the right model and I'm only going to entertain your model as a theoretical fantasy, but not actually treat you as possibly having an element of reality.
[47:41] How did you recover from that? Well, in some ways, I'm still recovering. It was
[48:02] quite traumatic. And I do have to distance myself from the thoughts of other people. I don't have to distance myself from arguments, but I can distance myself from the conclusions of other people, especially what they say with confidence. Many of us confuse the two: when someone proclaims something without diffidence, we give it more credence than when someone speaks meekly.
[48:32] And so, almost as if they were just typing their speech, I have to evaluate their sentences as such, and not ask, okay, what are the accoutrements that come along with their speech? Kind of like multiple-choice sentences, where they're just sort of sitting there and you haven't assigned a truth value to any of them. And, just if people are listening, I went through ACT therapy, Acceptance and Commitment Therapy, I believe that's what it stands for. I have an episode on it; I interviewed this lady named Lillian Dindo.
[49:02] And one of the pieces of advice is: if you're encountering something that is triggering, you may actually recoil from it, but you're not supposed to. You're supposed to face it as much as you can, and do so voluntarily. And one of the ways you can do so is, let's say someone said something that is triggering. This is just now a vague example.
[49:24] You can then look at the words and then just read the words and just say these are just words on a paper. They don't influence me. They don't have to influence me. I don't have to buy into it. This is one model of the world. It strikes me as sort of reading the code but not running the program. That's a brilliant way of phrasing it. Yeah.
[49:44] You know, because I had the same, and still have the same, kind of challenges myself, particularly in this kind of research, in thinking about unthinkable thoughts and this concept of lethal text and so on. When I first came across that, just the concept of an idea that would do harm, I was very hesitant for a long time to even share that idea in the abstract sense. Not even any particular idea, but just the meta idea that there are harmful ideas.
[50:15] And I had to, you know, in this act of climbing the mountain of madness, I had to retreat because I felt like I was getting too far away from humanity, from myself, from my past. And when we get to sort of new layers of thought, they are lethal to our previous self, right? Your adulthood is lethal to your childlike self.
[50:46] Right. So why were you afraid of sharing even the notion that there are lethal ideas? Because somewhere in me that idea frightened me. And I was afraid of doing harm to other people. I worried that maybe I would encounter someone who didn't have the right constitution or wasn't in the right place, and that even that idea would,
[51:15] because the idea itself, the meta idea, is enough of a seed to either instantiate a lethal framework or to open your perception, where you start to find them, because I think they're everywhere. Something that we talked about over dinner with Jacob Barandes. So we just recorded an episode with Jacob Barandes, link on screen, obligatory remarks. I recommend you check it out.
[51:38] is that I want to make sure that what I'm doing with this channel is good, that it is not promulgating harm. And so it's extremely tricky, because it sounds like what you're saying is that it's necessary for the creative endeavor to go outside the norm and to allow yourself to indulge in some
[52:05] dosage of madness. But then, at the same time, there is such a thing as madness, right? And that hasn't been separated in this conversation, so I would like you to make that distinction. What comes to mind is the real scary idea: that those that have gone mad are not wrong. We have this tendency, and I think it's an unfortunate framework, that those that suffer from mental illness are broken somehow: there's a chemical imbalance, their brains are wired wrong, or they have a
[52:35] traumatic experience that sort of messed up their software. But maybe that's not the case at all. Maybe they're sort of astronauts that have been to the moon, and the rest of us just haven't been there. And like you were saying before with your guests, you have to assume that they have this valid experience. And if we imagine that we are software, then all experiences are real, right? Because they are
[53:00] They're just virtual machines anyway. So all of our thoughts are made out of just patterns. And so if someone has that experience, it's genuine. It's not a fallacy. It's what they experience. And by them instantiating it, it's real. And so this idea that the mentally ill aren't broken but are just at the edge of evolution or the edge of these unthinkable thoughts is, I think,
[53:30] It's something that scares me. There could also be misattributions. So, for instance, they go to the moon and then they come back and say, I was on a balloon made of cheese. And so we then say there are no balloons made of cheese. However, if they had correctly identified: there's a rock that orbits the earth and we never noticed it, you understand what orbits are and so on, then we would say, oh, that's interesting, can we go look for that now? And we find it. Now, this example is quite...
[54:00] Yeah, I think that's probably the situation that we're in and that when these people go to these places and they come back and they try to use ordinary word vectors, they try to decompose it and serialize it and they model in their mind how your mind is going to respond to their experience and they describe it, it doesn't match.
[54:24] And then we hear words like balloon and cheese where in their mind they're thinking about something much richer, but that's sort of the token that they were able to get out. And then we say, oh, no, that person is nutty because they're talking about balloons of cheese and that's not how we think about it. But if we were to go back millennia and describe the moon in modern terms to the wisest people we could find, they would say that we're nutty.
[54:54] They would say, no, no, no, this is the goddess, and this is Luna, and this is how it works and this is what it means and so on. And so when we describe it as a collection of rocks in an orbit in a gravitational well, that would sound like balloons and cheese to them. So part of me wonders, maybe you felt the same. One of the reasons why I didn't talk about my experience, and I do more and more, but I still rarely do, relatively, compared to how often I have these podcasts or talk to people, is that
[55:25] I'm ashamed of it. And I also thought that it was much more rare than it was. As I speak to people, there are some professors, there's a prominent professor of math who I can tell you about off air, his name is a household name to mathematicians, who told me, I'm so glad you talked about this in the Carl Friston episode, because I was experiencing something like this myself. I think we'll find out it's a much more universal phenomenon. And this
[55:54] This immune system concept applies at the cultural level where we don't share those ideas for fear in some sense, because we know that they can be lethal to relationships, to our standing in society, to our financial well-being. That really exposing that raw self, even though the experiences are genuine,
[56:19] and valid, we don't share that. And I think that's kind of what we need to overcome as a culture. If we want to make it to this next era of evolution and humanity, we have to embrace that. We have to embrace that diversity of thought, and the people that previously were laughed off the stage. Because that's precisely where the interesting ideas are going to come from, right? So, you know, we mentioned the moon, but if we take anything out of the modern world, whether it's,
[56:49] you know, quantum physics or information theory, or even just the idea of software, and we go back, like we said, 100 years, 200 years, 300 years, these ideas were just insane, right? The whole world is just this utterly unimaginable creation compared to what we thought about before. It reminds me of, I've been traveling recently and seeing sort of modern cities up close, and
[57:19] And I think if you took someone from 200 years ago and you brought them into a modern city like Boston, I think they would think we're a million years into the future, not 200 years. And I think it would be overwhelming and unimaginable. They wouldn't really even be able to take it in, all the lights and little computers and all of the plethora of cars and all this kind of stuff that we have in the modern world. I think it would be overwhelming. And I think they would suspect that
[57:47] that they had time traveled a million years, not 200. And so, by the same token, if we think about where the human mental space is going to be in, say, 25 years, I think it's farther out than just the linear 25 years. The Lovecraftian vistas we talked about last time, that are being opened up by these AI tools, are either going to drive us mad or they're going to open up a new renaissance.
[58:17] Before we go: the audience comprises a general audience, but also a large audience of researchers. So now you're speaking to researchers, and also to people who want to go into the fields of physics, mathematics, philosophy, computer science. Yeah. Well, you've got such an amazing audience and community, and I'd really love to know what unthinkable thoughts they're thinking about.
[58:42] You know, put down in the comments and tell us the stories and the experiences that you're having in terms of being off the map, where your GPS doesn't get any signal, kind of thing, and you find yourself either in ecstasy or despair or discovery. And I'd be curious to mine the beautiful minds that you have in your channel and what they think.
[59:07] And I'll also put a link to help, if they require help, for the various countries. I'll look at the top 10 countries and just put whatever is the national hotline or what have you. Will, thank you so much. Thank you, sir. Always a pleasure. Looking forward to speaking with you again. New update: started a Substack. Writings on there are currently about language and ill-defined concepts, as well as some other mathematical details.
[59:35] Several people ask me, hey Curt, you've spoken to so many people in the fields of theoretical physics, philosophy, and consciousness. What are your thoughts? While I remain impartial in interviews, this Substack is a way to peer into my present deliberations on these topics.
[60:02] Also, thank you to our partner, The Economist. Firstly, thank you for watching, thank you for listening. If you haven't subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more people like yourself. Plus, it helps out Curt directly, aka me. I also found out last year that external links count plenty toward the algorithm,
[60:30] which means that whenever you share on Twitter, say on Facebook, or even on Reddit, etc., it shows YouTube, hey, people are talking about this content outside of YouTube, which in turn
[60:41] Thirdly, there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, they disagree respectfully about theories, and build as a community our own TOE. Links to both are in the description. Fourthly, you should know this podcast is on iTunes, it's on Spotify, it's on all of the audio platforms. All you have to do is type in Theories of Everything and you'll find it. Personally, I gain from rewatching lectures and podcasts.
[61:09] I also read in the comments
[61:29] And donating with whatever you like. There's also PayPal, there's also crypto, there's also just joining on YouTube. Again, keep in mind it's support from the sponsors and you that allows me to work on TOE full time. You also get early access to ad-free episodes, whether it's audio or video: audio in the case of Patreon, video in the case of YouTube. For instance, this episode that you're listening to right now was released a few days earlier.
[61:53] Every dollar helps far more than you think. Either way, your viewership is generosity enough. Thank you so much.
View Full JSON Data (Word-Level Timestamps)
{
  "source": "transcribe.metaboat.io",
  "workspace_id": "AXs1igz",
  "job_seq": 3288,
  "audio_duration_seconds": 3719.84,
  "completed_at": "2025-11-30T21:52:35Z",
  "segments": [
    {
      "end_time": 26.203,
      "index": 0,
      "start_time": 0.009,
      "text": " The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region."
    },
    {
      "end_time": 53.234,
      "index": 1,
      "start_time": 26.203,
      "text": " I'm particularly liking their new insider feature was just launched this month it gives you gives me a front row access to the economist internal editorial debates where senior editors argue through the news with world leaders and policy makers and twice weekly long format shows basically an extremely high quality podcast whether it's scientific innovation or shifting global politics the economist provides comprehensive coverage beyond headlines."
    },
    {
      "end_time": 82.073,
      "index": 2,
      "start_time": 53.558,
      "text": " We're here at the MIT Media Lab. I'm here with Professor William Hahn. The links to his work are on screen and in the description as well as the previous time that we spoke, which was a fantastic podcast and I think people should check it out. Thank you so much. It's really great to speak with you again. I've been looking forward to this for"
    },
    {
      "end_time": 110.657,
      "index": 3,
      "start_time": 82.534,
      "text": " Sometime now. So we spoke off air yesterday and the day before you were actually there for the Jacob bar and his podcast, which is also on screen. You had some ideas as to how to unify liveness with Stephen Wolfram's work. Please talk about that. Yeah. So really the idea is bringing together ideas from consciousness and computation. And so typically when we think of consciousness, we have this kind of light switch view."
    },
    {
      "end_time": 138.029,
      "index": 4,
      "start_time": 111.101,
      "text": " that it's either on or off. And there's an idea attributed to Leibniz where it's really a hierarchy. It's kind of a ladder of different capabilities. And it revolves around the ability to represent your ideas, to think about your thinking, and then to think about that thinking and so on as this sort of this hierarchy that goes up. And if we think about the simplest animals, they don't"
    },
    {
      "end_time": 162.79,
      "index": 5,
      "start_time": 138.626,
      "text": " They have the sort of the first order things like sensation and response to stimulus but they don't really have any way of representing that response and it seems that by thinking in terms of representation and then representing your representation where we can have things like language and then we can build you know philosophies on top of that language and it's"
    },
    {
      "end_time": 187.773,
      "index": 6,
      "start_time": 163.183,
      "text": " It's a way of modeling the model and a way of talking about the talking and things. And so we get this more analog or at least a stepwise approach to consciousness where it's not just on or off, but it's kind of this discontinuum. And I think this is relevant now because we're in this age where we're building intelligent machines."
    },
    {
      "end_time": 216.237,
      "index": 7,
      "start_time": 188.37,
      "text": " And I would argue that these machines that we have in the form of these modern AIs are quite intelligent. And the question now is, do they instantiate something like consciousness? And if so, is it like human consciousness? Is it something more like a lower animal where it's just knee-jerk reaction, you put in the prompt and it just spits out the tokens? Or does it have some way of knowing what it knows and thinking about what it's thinking about?"
    },
    {
      "end_time": 246.152,
      "index": 8,
      "start_time": 217.125,
      "text": " And maybe the current versions that we see right now in early 2025 don't. But it seems to me almost inevitable that if we keep going in this direction that we will kind of close the loop on this thing. That the snake is going to bite its tail and we will bootstrap this thing we call consciousness inside these machines. But we might need a kind of graded scale to try to understand that."
    },
    {
      "end_time": 266.084,
      "index": 9,
      "start_time": 246.527,
      "text": " And then hopefully this will help us understand things like animal consciousness or even higher levels that human consciousness might achieve. So when someone says that consciousness is a continuum and it's not a binary on or off, I think you can always translate a continuum to an on or off by saying, is it zero or non zero? So for instance,"
    },
    {
      "end_time": 292.995,
      "index": 10,
      "start_time": 266.425,
      "text": " We say that a photon is chargeless. We don't say it has a little charge or the electron. We say it's a charged particle, even though it's charges minuscule. Right. So when you say that consciousness is a continuum in its representations of representations, do you believe there are entities or things or objects or what have you that are not conscious or is it just all conscious? Well, I think that's that's a great question. I think that's at the heart of it is trying to figure out"
    },
    {
      "end_time": 319.053,
      "index": 11,
      "start_time": 294.343,
      "text": " Does this have that threshold? Do we want to think about things like rocks, for example, as being conscious? Classically, that would have been the pinnacle example of, no, clearly it's not. But I would argue that things like a modern GPU are rock-like objects. They're these semiconductor layers, very delicate, intricate patterns."
    },
    {
      "end_time": 348.831,
      "index": 12,
      "start_time": 319.684,
      "text": " But essentially what the classical world would say is a fancy crystal of some kind. And now we're instantiating this intelligent, possibly ultimately conscious behavior out of this rock-like object. So the situation is getting more interesting than it was previously. How does this tie into Wolfram? So Wolfram has this fantastic idea of computational equivalence that"
    },
    {
      "end_time": 378.677,
      "index": 13,
      "start_time": 350.06,
      "text": " The idea is that the natural world stumbles upon computers quite often. That the idea of a universal machine that Turing established, which is a machine that can simulate any other machine. It's a machine that runs software. And in that software programming, you can get this universal machine to simulate the behavior of any other machine. And in the early days of computing, it seems like computers were"
    },
    {
      "end_time": 407.363,
      "index": 14,
      "start_time": 379.667,
      "text": " very delicate human artifacts that took a lot of engineering work to put together. But what Wolfram showed with his cellular automata research is that it's relatively easy to stumble upon a system that's complex enough to support universal computation. He has this breakdown of sort of four different classes, and he uses automata as the case study,"
    },
    {
      "end_time": 427.415,
      "index": 15,
      "start_time": 408.251,
      "text": " But he argues that any system that's not obviously simple, any system that's sort of interesting in its dynamics, the majority of the time that's going to support universal computation. So in other words that there's these natural computers"
    },
    {
      "end_time": 454.189,
      "index": 16,
      "start_time": 427.773,
      "text": " All throughout nature, and previously we didn't recognize them or understand them as computational objects. I think that's what's just so interesting about this new era of maybe what we might call philosophy: that we have this new language. We can invoke the metaphor of a computer and software and program and universality, whereas, as we spoke about last time, the best minds on the planet didn't have access to those thinking tools."
    },
    {
      "end_time": 480.316,
      "index": 17,
      "start_time": 455.725,
      "text": " And so if that's really true, that there's these computer-like objects all over the place, and now we have computer-like objects that might be instantiating consciousness, namely a GPU running a large language model, which obviously would be highly debated whether that's conscious or not. But if we take that in the abstract sense and extrapolate that out a decade or so, to me it's"
    },
    {
      "end_time": 510.043,
      "index": 18,
      "start_time": 480.742,
      "text": " It's clear that we will have a hard time, or it might be fruitless to argue whether they are or not, because they'll be so similar to the kinds of things that we see in humans, that maybe they're actually all throughout nature. Maybe throughout the universe we should expect to find objects that are universal machines, sort of spontaneous in nature. And we want to think about now what's the"
    },
    {
      "end_time": 539.804,
      "index": 19,
      "start_time": 510.555,
      "text": " the chance essentially that they have a software program that might be somewhere on this ladder of consciousness that is attributed to Leibniz. Are you referring to computational equivalence? Exactly. Yeah. OK. So in the Leibniz case, you were referencing representations of representations and you have a hierarchy. Right. In the Wolfram case, it sounds like you have what is not a computer, and then what exhibits computer-like qualities, and then you hit universal computer. Right."
    },
    {
      "end_time": 565.282,
      "index": 20,
      "start_time": 540.026,
      "text": " But there's nothing above that. Exactly. And that's kind of the point: you smash into this sort of upper ceiling of computation relatively quickly, right? It's sort of a low speed limit, and whatever vehicle you're in quickly hits that speed limit, because that's what's so profound about universality: that's the ceiling. There's nothing more powerful than that. And so if that is relatively easy,"
    },
    {
      "end_time": 587.824,
      "index": 21,
      "start_time": 565.862,
      "text": " to create spontaneously in some sense by chance we get a computer-like object, a universal machine, then we want to think about, well, what's the... the analogous part is the software that would go with it, right? So Wolfram suggests that it's relatively easy to find computers in the wild. Well, what's the chance they have any interesting programming to go with that?"
    },
    {
      "end_time": 613.439,
      "index": 22,
      "start_time": 588.626,
      "text": " So is there a similar ceiling with representations? Is it the case that when you hit the fourth representation you've hit them all, and is there a universal representation at level six? So I think you hit the nail on the head with that one, because that's really kind of what I've been thinking about. I like how you phrase it. Are humans already at that universality? Our intellectual sphere, our language, our culture, have we maxed out?"
    },
    {
      "end_time": 642.312,
      "index": 23,
      "start_time": 613.985,
      "text": " Or are we going to transcend this era, reach another level of consciousness, maybe called spirituality, whatever that might be, another layer of evolution, or is this it, right? That we have the ability to represent our representations and we can think about our thinking and then we can think about that and so on. And I don't know, right? I would like to hope that we're not at that universal level yet."
    },
    {
      "end_time": 667.244,
      "index": 24,
      "start_time": 642.875,
      "text": " I think we mentioned last time on the channel, this sort of path from ape to angel, right? Or from rocks to light, you know, kind of thing. Where are we on that sort of evolutionary trend, in a sense? And I think that we're not at the angel stage, right? We're not quite there yet."
    },
    {
      "end_time": 691.783,
      "index": 25,
      "start_time": 670.879,
      "text": " And so it reminds me of a quote, I forget who says it, we'll have to look it up, but: we're angels with assholes. We're still these corporeal beings that eat sandwiches and have to go through our daily lives. We're not this ephemeral being that's just pure thought and energy and things. But it seems like the AI is on that trajectory."
    },
    {
      "end_time": 717.159,
      "index": 26,
      "start_time": 692.261,
      "text": " And I wonder if maybe there's just a discontinuity and as others have suggested, maybe the biological sphere just bootstraps the AI and then that's the next stage of evolution. Because for lack of a better term, they don't have assholes. Is the question of have we hit the upper limit equivalent to the question of are there thoughts that we can't think of?"
    },
    {
      "end_time": 739.77,
      "index": 27,
      "start_time": 717.534,
      "text": " So now what I mean by equivalent to is we can imagine that let's look at this room and there are thoughts everywhere and we have access to them. Okay, cool. Then there are thoughts that are outside this room, we don't have access to them. Then we'd say, okay, we have not hit the upper ceiling. And then we can say that, okay, so in that case, it was if there exists a thought that's outside this room, then we have not hit the upper ceiling. We can also say, if"
    },
    {
      "end_time": 751.886,
      "index": 28,
      "start_time": 740.299,
      "text": " We have not hit the upper ceiling. It implies there is a thought that's outside this room. So I'm not sure if that's the case, because then you would make an equivalence between those two. Right. Well, I do think it's the case there's unthinkable thoughts."
    },
    {
      "end_time": 771.561,
      "index": 29,
      "start_time": 752.329,
      "text": " And the question is, are we able to think about them in the abstract sense? Maybe not the actual thought, but do we really understand or can we even appreciate that there's things outside of our conceptual and perceptual window? And that's what I've been thinking about a lot recently or trying to and"
    },
    {
      "end_time": 789.718,
      "index": 30,
      "start_time": 773.234,
      "text": " You know, we mentioned last time Richard Hamming and he talks about this idea. He says there's smells that dogs can smell that we can't smell and they can hear things we can't hear and there's lots of things in the animal kingdom that have a larger perceptual window. And he argues, why should we think that there's thoughts we can't think?"
    },
    {
      "end_time": 806.067,
      "index": 31,
      "start_time": 790.009,
      "text": " Right, it seems quite clear. And also, even just arithmetically, there are numbers that we can never conceive of. At least we can write them down, like Graham's number, or we can say Graham's number to the power of Graham's number, but we can't actually conceive of it. We actually don't understand what it means for a number to be greater than 150 or so."
    },
    {
      "end_time": 833.763,
      "index": 32,
      "start_time": 806.442,
      "text": " We can't fit these things into our mind. We can kind of only approximate them. So to someone listening, they're like, well, what are you talking about? It's quite clear. The philosophical term, I believe, is Umwelt. It's quite clear that the Umwelt is bounded and has some overlap with other animals. I mean, maybe it's not quite clear, but it's not obvious. OK, so then they're thinking, OK, so what's so profound about this? Well, so"
    },
    {
      "end_time": 855.094,
      "index": 33,
      "start_time": 834.189,
      "text": " Thinking about these sort of unthinkable thoughts has led me in a few interesting directions. Why can we not get to that space? I think I mentioned last time the future is not what we think it is. Because if we knew exactly which direction to go, we could just go in that direction. And so there's some kind of"
    },
    {
      "end_time": 885.162,
      "index": 34,
      "start_time": 855.316,
      "text": " Barrier, and the question is, is this like a natural barrier? You know, we just don't have enough neurons, we don't have enough cortical layers to represent the representations, and so on, and then there could just be another hierarchy. Or, something I've been thinking about more recently is that there's a more practical reason why we can't think these thoughts, and it's kind of an immune system that we mentioned before. That the mind, it's our mind"
    },
    {
      "end_time": 914.36,
      "index": 35,
      "start_time": 885.367,
      "text": " itself is trying to protect us from a variety of thought patterns that would essentially destabilize both our mental patterns and then ultimately our physical self if the thoughts go into a terrible spiral. So from an evolutionary point of view, there might be this kind of filter, this barrier that evolution has instantiated to prevent us from going nutty, essentially."
    },
    {
      "end_time": 943.66,
      "index": 36,
      "start_time": 914.855,
      "text": " And if, from a Darwinian evolution perspective, we just want to kind of populate the planet the best we can, you don't need this kind of intelligence, right? Intelligence could be kind of a Fermi-style filter that's actually preventing a lot of progress. If we think about, like, you know, the reign the dinosaurs had on the planet, it was 100 million years, 200 million years, and humans haven't had that kind of longevity on the planet yet. And"
    },
    {
      "end_time": 971.647,
      "index": 37,
      "start_time": 944.206,
      "text": " It's hard to imagine us continuing the way we're doing for that much time. But imagine like whales and dolphins. You could easily imagine them doing the same thing they're doing for millions of years into the future and being happy to do it. But this also leads me to think about what thoughts our language supports. And I used to think that English was good enough. But now I'm not so sure. And"
    },
    {
      "end_time": 997.722,
      "index": 38,
      "start_time": 972.585,
      "text": " I think what's really interesting is we've taught these language models, not just English, but all of the human languages that we could get our hands on. And I think something that's particularly relevant is the computer languages and the mathematical languages that we've also put into that system. So it has representations that it can map into English, but its mind, so to speak, doesn't really operate in English. It operates in these dot vectors, these word vectors that are independent of any particular language."
    },
    {
      "end_time": 1057.278,
      "index": 41,
      "start_time": 1029.309,
      "text": " Okay, this is extremely interesting. So I have two thoughts, and you can take both of them if you like, or choose a direction. So when you said that you thought English was enough, are you saying that you thought there was something like the universal language, and English is one of them? Chinese may be another. There may be some languages where they only have a hundred words, and maybe those are insufficient to represent. So in other words, some people say there are some words in some languages which you cannot translate: untranslatable words."
    },
    {
      "end_time": 1087.5,
      "index": 42,
      "start_time": 1057.5,
      "text": " And technically, they should say they're untranslatable into another single word. And let's pick one untranslatable word. So in Turkish, there's some word for the feeling that you have after you drink coffee with friends, something like that. Let's say that. I just described it there with a set of 10 words or so. And if it was German, you would just smush them all together and make one new word out of it. Right. Right. OK. I think Turkish does that actually as well. OK. So is that what you meant when you were saying you thought English was enough? Right. OK. It was, like you mentioned, this universal software framework"
    },
    {
      "end_time": 1112.892,
      "index": 43,
      "start_time": 1087.927,
      "text": " that we have this idea of a universal machine. But you know, there's this idea of a Turing tar pit: just because everything's possible doesn't mean anything's practical with, say, a Turing machine or some other abstract computer. And so the idea is, what computer languages would be sufficient? Right, so as an analogy, we could say BASIC is not a sufficiently powerful computer language to instantiate the internet or something like that"
    },
    {
      "end_time": 1141.681,
      "index": 44,
      "start_time": 1113.148,
      "text": " Or it might be, but it would require orders of magnitude more complexity in the coding. Whereas something like C ended up being sufficient to kind of create the modern software stack. And so I suspect now that English is not quite as universal as I previously thought it was. This is why I've been looking at things like musical languages and conlangs and esoteric languages to get some ideas about where could we put these unthinkable thoughts."
    },
    {
      "end_time": 1171.817,
      "index": 45,
      "start_time": 1143.114,
      "text": " Because when you learn those new languages, you can suddenly think in new, maybe more compact ways. You don't have to have a whole sentence. You can have just sort of one neuron. But like you said, I would argue that there are no untranslatable words. And if there were, you couldn't talk about them at all. You wouldn't even be able to mention that they were untranslatable in some sense. And so that would be in the space of sort of maybe not unthinkable thoughts, but uncommunicatable thoughts. And one of the ideas I've been thinking about this year is"
    },
    {
      "end_time": 1202.619,
      "index": 46,
      "start_time": 1173.473,
      "text": " The set of ideas for which you can take it apart, like Legos, right? They have the Lego Lab upstairs. And Legos, you could take apart into little bricks, and like language, I have an idea in my head now, and I'm decomposing it into a serial sequence of symbols, and then you're taking those and you're putting them back together, and I hope that you get the same kind of mental representation. But you could define a set of ideas for which precisely you cannot do that, right? Where if you take any one piece, it's not the same thing anymore."
    },
    {
      "end_time": 1228.456,
      "index": 47,
      "start_time": 1203.336,
      "text": " um, that it doesn't go down into a sequence of things. And that might be, uh, you know, related. It reminds me of things like gnosis, right? Knowing from the inside where you can't explain, you know, things like faith and God and all that. And I think a lot in the modern world, people were like, well, explain it to me, you know, explain what you mean by faith and God. And it's like, well, that's not how it works. It's precisely the kind of idea that I can't"
    },
    {
      "end_time": 1258.37,
      "index": 48,
      "start_time": 1228.882,
      "text": " Let me have fun with this immune analogy. Is it the case then that there would be autoimmune disorders of the mind? The mind sees something that's actually benign or even salubrious or nourishing and thinks that it's not and so it tries to remove it and perhaps it attacks itself in the process?"
    },
    {
      "end_time": 1287.773,
      "index": 49,
      "start_time": 1258.797,
      "text": " It's a fantastic idea. I'd have to think about that a little more, but I think in some sense, absolutely, that there's going to be, and that might be kind of the basis for mental diversity and mental disorders, if we might call them that. Because now we're better understanding language than ever before. By building these machines that instantiate language, it gives us this petri dish, this microscope."
    },
    {
      "end_time": 1316.288,
      "index": 50,
      "start_time": 1289.224,
      "text": " I think there's a lot of mystery around how these language models operate. And I think the real mystery is in the words, in the language. We didn't understand what words were. We didn't really understand what language was. And we thought it was this communication kind of thing. And I've been thinking about it more with my colleague, Dr. Barenholtz, as an organism, right? Something that's alive."
    },
    {
      "end_time": 1341.817,
      "index": 51,
      "start_time": 1316.578,
      "text": " and that we need to invoke the framework from biology to think about it not just as communication, but as a, I've referred to it as a parasite in some sense, and not necessarily pejorative, I've referred to it as a divine parasite, but it's something that lives on the brain, but it's not the brain."
    },
    {
      "end_time": 1367.858,
      "index": 52,
      "start_time": 1342.415,
      "text": " One of the ways I've been thinking about this is with things like multiple personality syndrome and split personality syndrome that I think those are evidence that when I'm talking to you now, I'm not talking to your brain, right? I'm talking to this entity that lives in your brain. And when we see people with multiple personalities, it's clear that they're instantiating more than one operating system, software, stack,"
    },
    {
      "end_time": 1398.08,
      "index": 53,
      "start_time": 1368.285,
      "text": " On the same hardware, and they're multiplexed, like we run more than one app on our phone at a time. The brain is rich enough of a substrate, of a backdrop, to have multiple plays unfolding at the same time. And then this leads into kind of the other side of unthinkable thoughts and this immune system concept, in areas of things like mind control. Is our brain a programmable object?"
    },
    {
      "end_time": 1423.37,
      "index": 54,
      "start_time": 1398.524,
      "text": " Hi everyone, hope you're enjoying today's episode. If you're hungry for deeper dives into physics, AI, consciousness, philosophy, along with my personal reflections, you'll find it all on my Substack. Subscribers get first access to new episodes, new posts as well, behind-the-scenes insights, and the chance to be a part of a thriving community of like-minded pilgrimers."
    },
    {
      "end_time": 1443.626,
      "index": 55,
      "start_time": 1423.37,
      "text": " By joining, you'll directly be supporting my work and helping keep these conversations at the cutting edge. So click the link on screen here, hit subscribe, and let's keep pushing the boundaries of knowledge together. Thank you and enjoy the show. Just so you know, if you're listening, it's c-u-r-t-j-a-i-m-u-n-g-a-l.org."
    },
    {
      "end_time": 1469.599,
      "index": 56,
      "start_time": 1444.787,
      "text": " Okay, so"
    },
    {
      "end_time": 1492.483,
      "index": 57,
      "start_time": 1470.64,
      "text": " When we speak about"
    },
    {
      "end_time": 1516.408,
      "index": 58,
      "start_time": 1492.483,
      "text": " Every single thing. I would assume the brain is quantum. Which would... Sure. Okay, well, whatever, it doesn't matter. We get down to the electrical level, we say patterns of firings. So I imagine the patterns of firings, along with your brain, are the personality, or could be. I don't know. I would say that's at least an ingredient, right? So let me know what you think about the distinction between hardware and software as it relates to the brain. Yeah, so one of the things we spoke about last time is this idea of a virtual machine"
    },
    {
      "end_time": 1545.145,
      "index": 59,
      "start_time": 1517.056,
      "text": " And as I mentioned a minute ago, if we go back into neuroscience a hundred and fifty years ago or so, they didn't have the computer metaphor and they didn't have the software metaphor to understand it. Well, now we do, and that's what caused the cognitive revolution: to think, OK, now we know that there's this thing called a program, and so on. But I think we need to take it a step further and think about machines that themselves are made out of software. So the example we gave last time that I like to think about is"
    },
    {
      "end_time": 1574.821,
      "index": 60,
      "start_time": 1545.52,
      "text": " This idea of an emulator, which the listeners who play video games might be aware of. This is a piece of software that you run on a modern computer that simulates an older computer. So if you want to play Super Mario on your MacBook, you can get this program that doesn't run Mario directly on your Mac chip. The Mac chip simulates, it pretends to be, a Nintendo chip, and then the Nintendo chip naturally can run the Mario software."
    },
    {
      "end_time": 1602.688,
      "index": 61,
      "start_time": 1576.408,
      "text": " And what's really interesting is you can do that again. You can have an emulator running on an emulator. You could somehow run an Atari game on a Nintendo. Yeah, I've never thought about that. And so it could be virtual machines all the way down in some sense. And so I think when we try to understand language and the brain and the self and personality and ego and all that kind of idea, we need to consider that there could be multiple layers"
    },
    {
      "end_time": 1621.391,
      "index": 62,
      "start_time": 1602.927,
      "text": " between the thing doing the talking and the neurons doing the firing. And traditional neuroscience would suggest this kind of a one-to-one mapping, right? And I think the reason why we haven't made as much progress in those spaces is because there's multiple representations."
    },
    {
      "end_time": 1649.565,
      "index": 63,
      "start_time": 1621.903,
      "text": " So when one references these turtles, the turtles all the way down, I take it extremely seriously. So when you say it's virtual machines all the way down, do you actually mean that or do you mean that it's multiple levels of virtual machines, but there's a finite end? Well, there's a good question. So this in the brain, I think there could be multiple layers of this virtual machine more than we think. But it makes me think of a much broader idea in terms of matter, right, that"
    },
    {
      "end_time": 1678.285,
      "index": 64,
      "start_time": 1650.128,
      "text": " Quantum information theory would suggest that matter itself is an information-like process and that when we get down below the atoms there might be software again and it's sort of this interesting loop where we have information at the bottom which somehow instantiates the atomic reality and then on that we build computers out of the atoms and then we get software again. It would sort of complete the circle."
    },
    {
      "end_time": 1708.404,
      "index": 65,
      "start_time": 1678.797,
      "text": " Um, so both in our mind and in the universe, it might be sort of these virtual turtles, right? Which I think is an amazing reference. We're in the MIT Media Lab, where Seymour Papert developed the turtle programming language. Earlier, when you were talking about representations, you're saying, okay, we can represent representations, et cetera. At some point, at the first representation, you're representing something that was presented in order for you to represent it."
    },
    {
      "end_time": 1733.387,
      "index": 66,
      "start_time": 1708.899,
      "text": " Do you believe that's the case, or do you believe that it goes infinitely downward? Or do you believe there's some non-represented substrate that gets represented? I think in the case of humans, there's going to be something like a bottom in some sense, right? We see that with cells and things. But it's not clear where that happens."
    },
    {
      "end_time": 1759.838,
      "index": 67,
      "start_time": 1733.541,
      "text": " You know, I think there's all this great work, you've been following it, with Michael Levin, looking at the cell and the electrical activity, and that there's a lot more interesting there that the classical biology kind of just glazed over, and now we need to go back and realize, no, there's really interesting software at that layer. And throughout the engineering world, we always build a virtual machine, and a lot of times those are just what we call programming languages."
    },
    {
      "end_time": 1785.316,
      "index": 68,
      "start_time": 1760.35,
      "text": " Right, so if we think about, like, the electrical activity of the cell, and it has some kind of software, it also will have evolved, it seems likely, some kind of other abstraction to make it operate more efficiently. Just as an analogy: we know about NVIDIA GPUs and we know about these large language models. Well, we don't really work directly with the GPU. We create abstractions like Python"
    },
    {
      "end_time": 1810.179,
      "index": 69,
      "start_time": 1785.572,
      "text": " and PyTorch as intermediates. PyTorch is an imaginary machine that has things like vector operators and matrix multiply that then get mapped to the low-level instructions on the GPU. So in the modern world, we never actually talk to the hardware directly. We build a virtual hardware that's much easier to use, and I would imagine that nature and evolution and biology would have done the same thing."
    },
    {
      "end_time": 1837.534,
      "index": 70,
      "start_time": 1811.118,
      "text": " So we have an audience here of Professor Barenholtz, and I'm sure he's champing at the bit and has some questions. Yeah, I can't wait for you to have them on the show. I feel that also; I would like to talk to you at some point. But at least for now, the audience can hear a disembodied voice, or perhaps just my voice reiterating the question. Before we get to the audience questions, tell me more about these unthinkable thoughts. Well, one of the directions I've been thinking about in that is"
    },
    {
      "end_time": 1859.923,
      "index": 71,
      "start_time": 1838.729,
      "text": " The thoughts that other beings think about, right? And the classic example is, what is it like to be a bat? A classic paper from 74, I think. And, you know, I wonder if it's anything at all to be a bat. And I know Elon has some interesting ideas about"
    },
    {
      "end_time": 1888.49,
      "index": 72,
      "start_time": 1861.288,
      "text": " I don't know. I think I take a strong approach to this that maybe it's not like anything to be a bat. There's this idea in psychology of what's called infant amnesia and it's that babies don't really have any episodic memory."
    },
    {
      "end_time": 1916.237,
      "index": 73,
      "start_time": 1889.104,
      "text": " And I actually encountered this as a child. I remember I was, I don't know, seven or eight or something like that. And I asked my mom about her oldest memory. And I was, I didn't understand at the time how she could remember decades of her childhood. But as an eight-year-old or whatever it was, I couldn't remember being two. Right? And I was saying, well, that's only six years. How is it that after six years, I can't, that's too far back to remember, but my mother could remember back multiple decades? How would that sort of thing work?"
    },
    {
      "end_time": 1945.913,
      "index": 74,
      "start_time": 1916.937,
      "text": " And I think one of the explanations to infant amnesia might be this idea of representation. That without language, right, we install language into children between the ages of one and two and so on. And without that framework, it would be nearly impossible to have an episodic memory system without having a labeling system, right? And so when I can say, oh, that nice dinner we had yesterday with Jacob, and have a label,"
    },
    {
      "end_time": 1974.394,
      "index": 75,
      "start_time": 1946.442,
      "text": " right, and with that label you can reference this experience, and we can say, oh, that was an episode, and I remember that. But how would you refer to it in the space of all possible mental states without a language system? It's like the Dewey Decimal catalog: without having this card, how do I know that's the one I mean? There would be no meaning to that experience. And so when we think about what it is like to be a bat, I think they might have a"
    },
    {
      "end_time": 1997.125,
      "index": 76,
      "start_time": 1974.991,
      "text": " This fleeting ephemeral sensory loop that it would involve things like pain and pleasure, but it can't represent those things. It can't recall them. It has no way of knowing that it went out and got fruit that morning because how would it recall that mental state without a language type thing?"
    },
    {
      "end_time": 2024.053,
      "index": 77,
      "start_time": 1997.619,
      "text": " Maybe we need to do more research and find out, well, they do have a language type system and they're tokenizing the sonar and things like that. I don't know. But it would seem to me that we should be skeptical that it's like anything to be those systems. And I take it to another extreme and say that it's, you know, what is it like to be a human being? And do we really know or is it just this approximation"
    },
    {
      "end_time": 2052.346,
      "index": 78,
      "start_time": 2024.923,
      "text": " that our language model can reference different episodes and different states and so on. Because the example I like to give is: the vast majority of our so-called experience is outside this thought window. We don't even know that we have a gallbladder, let alone what it does, right? Most people can't name the organs; they don't even know what organs they have, right? So if we don't even have words for them, we don't have representations for them, they don't exist, right?"
    },
    {
      "end_time": 2081.118,
      "index": 79,
      "start_time": 2053.319,
      "text": " I remember a few years ago a student was saying they wanted to be hollow and they didn't quite know what that meant at first and not in like the the shallow sense in the sense of like they didn't want to have to have all of that complexity because from their experience point of view they don't right the thing behind your eyes that's talking doesn't most the time unless something goes wrong completely unaware that we even have all those systems operating in parallel we're not thinking about our toes we're not thinking about our ears we're not and and and strangely enough we're not usually thinking about thinking"
    },
    {
      "end_time": 2109.701,
      "index": 80,
      "start_time": 2081.869,
      "text": " I think most of, and this might again be a too strong a statement, but it seems like most people, most of the time, including myself, are not thinking about thinking. We might be thinking, but we're not thinking about the thinking. And then we take it a step further and think about that, but that's when you start to get nutty. That's when your mental immune system kicks in and says, it's time for lunch. Do you think that getting nutty, quote unquote, is reaching that universal language model? That's an interesting thing."
    },
    {
      "end_time": 2135.282,
      "index": 81,
      "start_time": 2110.572,
      "text": " That touches on this lethal experience, lethal concept idea. As we said last time, if you see the face of God, it'll be the last thing you've ever seen. Can we reach that? Or is that transcendent? Will you just simply die? Or maybe not in the physical sense, but you're not a human anymore. And in thinking these unthinkable thoughts, you have to be willing to go crazy."
    },
    {
      "end_time": 2159.735,
      "index": 82,
      "start_time": 2136.237,
      "text": " And if you're afraid of going nutty or afraid of going crazy, you can't get past these barriers that your mind has built. Well, what's the point? Why do you have to be willing to go to lose your mind? Why would you want to at all? I think one way of defining losing your mind is sort of leaving everyone else behind, is going off"
    },
    {
      "end_time": 2180.538,
      "index": 83,
      "start_time": 2160.009,
      "text": " In an abstract vector space direction of thought that there's no there's no one there. There's no one else there or you might be in very rare company. And maybe you could define being human is that that overlap the reason why we can communicate and you know because we have the shared experience and."
    },
    {
      "end_time": 2209.241,
      "index": 84,
      "start_time": 2180.776,
      "text": " a sort of a common set of beliefs and ideas and languages and so when you escape that you're not in the herd anymore. So there's a difference between having thoughts that other people have not had so you're in uncharted territory and also just having a wrong representation of reality. Okay so girdle as we talked about in the car ride it's not as if he was in uncharted territory where you may have some gold that you can then bring back and you're like Steve Jobs and you've invented the iPhone supposedly even those engineers also."
    },
    {
      "end_time": 2238.66,
      "index": 85,
      "start_time": 2209.241,
      "text": " Help with that it's more like in girdle's case he believe the government was poisoning him or someone was poisoning him but i think that's a perfect example of somebody. Who went out into the mines brought back a gold nugget with his with his theorems and lost his. Maybe his humanity and what they lost his mind but became. Too far away to where other humans could no longer you know hey just sit down have a bowl of soup was not just about this he couldn't take a break for lunch."
    },
    {
      "end_time": 2259.718,
      "index": 86,
      "start_time": 2239.548,
      "text": " Okay, I would like to talk to you for so much longer, but I know there are some questions here. I'm curious what you think about the software hardware dichotomy and in particular the language in the case of humans and other species that do not have language."
    },
    {
      "end_time": 2290.247,
      "index": 87,
      "start_time": 2260.503,
      "text": " Is there is there a completely different level of abstraction that took place in at the symbolic level because language is fundamentally survived, right? Okay, before you answer the question, can you summarize the question? Yeah, so Dr. Bernholz is suggesting that when we get this symbolic language layer did that create a new A new layer to reality right a new expansion of the universe into this new dimension and It seems like it did"
    },
    {
      "end_time": 2317.705,
      "index": 88,
      "start_time": 2290.606,
      "text": " Because I think that might explain this extreme chasm between humans, human behavior, and then even other obviously intelligent animals, whales and primates and things like that. It seems like we are of a different category. And especially if we go back to the ancient world, it was just sort of patently obvious that humans were on this spiritual realm that was just miles above the plants and animals and the backdrop."
    },
    {
      "end_time": 2340.879,
      "index": 89,
      "start_time": 2318.353,
      "text": " And so maybe language did get this universality that we didn't have before, that brains, birds and dolphins and things have one kind of universality on the hierarchy, but this language instantiates a whole other. But maybe it's just a virtual machine that's much more efficient and that"
    },
    {
      "end_time": 2369.172,
      "index": 90,
      "start_time": 2341.169,
      "text": " Brains are really powerful, but even better is this software brain via language that rides on top of brains. And that just, maybe it's still universal, but like a Mac comes out today versus the Apple II, they're both universal machines, but they can accomplish more tasks than the others. It seems impossible from the point of view of like an Apple II. So do you see that as continuous in some ways or completely discontinuous? Well, that's a great question."
    },
    {
      "end_time": 2393.66,
      "index": 91,
      "start_time": 2371.203,
      "text": " You know, in complex systems, there's this famous thing, I think it was Anderson said, more is different. Right? And the idea is that you get like a phase transition. And the classic example is you have a pot of water on the stove with a digital temperature setting. And if you just click up one degree every few minutes, nothing happens. Nothing changes about the system. And then suddenly you hit this inflection point where you get boiling."
    },
    {
      "end_time": 2422.978,
      "index": 92,
      "start_time": 2394.36,
      "text": " Right, so it's clear that we see that from a practical point of view, that while there is this universality, we do get phase transitions and kind of capabilities, right? So we had cell phones in the 90s, but they didn't do any of the things that our cell phones can do, even though they had universal processors inside them. But they couldn't, they hadn't gone through this phase transition of capability. And so it could be that languages like that, that it gives us a whole new"
    },
    {
      "end_time": 2447.193,
      "index": 93,
      "start_time": 2424.292,
      "text": " set of apps that we can run. But it also reminds me of this idea that Marvin Minsky talks about, the founder of the Media Lab that we're in now. And he talks about a car and when a car is running. And we all know what it means when the car is running, right? And it is sort of this digital thing, right? The car is either running or it's not."
    },
    {
      "end_time": 2476.544,
      "index": 94,
      "start_time": 2447.91,
      "text": " Maybe you could get into the weird, you know, kind of states and stuff. But the first approximation, it's sort of this digital. And he argues that with consciousness is like that, that it's this thing, but he argues that it's not mysterious, right? It somehow comes out of this dynamic of all of the parts and all of those things together, instantiate this thing called a running car. And as we know from auto mechanics and maintenance and stuff, it's subtle and it's"
    },
    {
      "end_time": 2506.596,
      "index": 95,
      "start_time": 2477.346,
      "text": " Or rather, it's fragile. And if you have one system in the thing, the car won't start. Or the car, you know... So, it does seem kind of like this, and I think we need new language and we need to explore this analog-digital dichotomy, because it seems like the boiling, that as we add more complexity, you get new behaviors emerging. And this is exactly what we've seen with these large language models, with the so-called scaling laws, that we..."
    },
    {
      "end_time": 2529.957,
      "index": 96,
      "start_time": 2507.09,
      "text": " We took systems that a few generations ago couldn't very well predict the next word. And if they did, it was just word salad. And then suddenly just scaling that system up, just training on a larger data set, and then suddenly it can do arithmetic. And then you scale a little more and it can do algebra, scale a little more, it can do theoretical physics. And what's the limit to that?"
    },
    {
      "end_time": 2559.138,
      "index": 97,
      "start_time": 2530.316,
      "text": " I think that we're not that far away and we're approximating already not just artificial general intelligence, but artificial superintelligence. And I think that when we start training these things on video, for example, like so the large language model like ChatGPT was trained on Wikipedia and the historical archive of books and Reddit and things like that. But it hasn't watched YouTube yet, right? It hasn't been embodied in a robot where it actually gets to play in a sandbox for three years in a row like a little kid does, right?"
    },
    {
      "end_time": 2587.295,
      "index": 98,
      "start_time": 2559.428,
      "text": " Little kids will slosh water around for hours on end, just learning how fluids work and what a container does and all that kind of stuff. They don't have any of those tokens, let's call them. And so I think we don't even need a new recipe. I don't think it's going to be mysterious how we're going to get to machines that can make real, genuine advancements in chemistry and physics and astronomy. Just give them a telescope. Give them more data."
    },
    {
      "end_time": 2613.012,
      "index": 99,
      "start_time": 2587.944,
      "text": " And give them experience and I think we'll get those phase transitions to where we have really capable machines. When you talked about animals and that they do experience suffering because at first it sounded like what you're saying was more of the Descartes. Well, animals do not have consciousness. That's purely a human phenomena and thus you can torture your animals and don't listen to their screaming because that's just the clanking of a machine. Okay, so you're not saying that."
    },
    {
      "end_time": 2640.828,
      "index": 100,
      "start_time": 2613.336,
      "text": " But you were saying that animals don't have a self model. But then do we have a self model that persists, that actually persists? Or is it more of the Buddhist notion where there's some transient self? You know, I think it's clear that they would experience things like suffering, but I don't know. It's not clear that they have the ability to represent it. And so they don't know that's what's happening to them."
    },
    {
      "end_time": 2669.65,
      "index": 101,
      "start_time": 2641.954,
      "text": " And while they'll probably, you know, take actions to try to move into a different environment to reduce that, they can't really like lament about it, right? They're not going to write the poetry about it and song and things like that to try to express those internal states because they're sort of farther down on that ladder. Do you believe that having a self model is also this binary, having a self model, or is it also somehow continuous?"
    },
    {
      "end_time": 2697.398,
      "index": 102,
      "start_time": 2670.384,
      "text": " That's a great question. And the first thing that comes to mind is, do we have one self or do we have many selves? And I'm of the opinion that we all have sort of multiple selves and that when we get to break down in that delicate dance, then we can get multiple personalities that suddenly emerge and things. But I think in our subconscious, we have all of those things. We represent other people."
    },
    {
      "end_time": 2727.739,
      "index": 103,
      "start_time": 2698.097,
      "text": " In our in our mind, something we spoke about last time is I have I carry people in me in some sense. And the idea that in again, this is related to software, you know, there's the classic expression, the ghost in the machine. But I think we're machines made out of ghosts, essentially. And are they? Are they like selves? You know, we kind of have this one primary self, but I have a copy of you in me, right? It's how I'm able to communicate with you and think about you when you're not in my visual field and things like that."
    },
    {
      "end_time": 2731.288,
      "index": 104,
      "start_time": 2728.183,
      "text": " uh... it's not as rich as the self that you have but it's kind of"
    },
    {
      "end_time": 2761.015,
      "index": 105,
      "start_time": 2732.073,
      "text": " Like a hologram, it's a lower resolution, but it somehow captures the whole thing in some weird sense. That's interesting. So when I speak to people on this channel, people ask me, how do I prepare? So I prepare for weeks in advance and until I get to the point where I can emulate the person. And that's when I say, OK, I've sufficiently prepared when I can imagine almost any question that I when I can answer almost any question that I can imagine from the point of view of the other person and be correct. And I can test myself by"
    },
    {
      "end_time": 2788.575,
      "index": 106,
      "start_time": 2761.015,
      "text": " Okay, what questions did this person get asked in an interview? You can put that into an LLM and say don't give me the answers, right? And then I can say what would they say? What are they likely to say? So, you know like the in the large language models within the neural networks we have like this weight pruning and we can do like a lower reduced a bit depth for each of the weights, right? So you have a low bit depth representation of me essentially, right? But then we need to think of it like an ecosystem."
    },
    {
      "end_time": 2814.974,
      "index": 107,
      "start_time": 2789.206,
      "text": " Because how many people do you have in your mind? How many people have you interviewed, right? You have all of those ghosts in your machine, essentially. Yeah, speaking of unthinkable thoughts. Well, I had these experiences. I've had an experience, let's just say that, and I'll be somewhat vague about it. And we talked about it off air, so I'll just briefly speak about it where"
    },
    {
      "end_time": 2835.06,
      "index": 108,
      "start_time": 2815.913,
      "text": " I felt like I was losing my mind"
    },
    {
      "end_time": 2861.084,
      "index": 109,
      "start_time": 2835.486,
      "text": " Each week I'm interviewing someone who has an entirely different point of view as to someone else and I have to take them on, first emulate them but also think that they could be correct because I can't be dismissive of you or contemptuous of you and think my model is the right model and I'm only going to entertain your model as a theoretical fantasy but not actually treat you as you possibly have an element of reality."
    },
    {
      "end_time": 2882.483,
      "index": 110,
      "start_time": 2861.8,
      "text": " How did you recover from that? Well, in some ways, I'm still recovering. It was"
    },
    {
      "end_time": 2911.357,
      "index": 111,
      "start_time": 2882.739,
      "text": " Quite traumatic and I do have to distance myself from the thoughts of other people and arguments. I don't have to distance myself from arguments, but I can distance myself from conclusions of other people, especially what they have severed. What they say with confidence and many people, we confuse when they speak proclaiming something without diffidence with, okay, we give it more credence than when someone's speaking meekly."
    },
    {
      "end_time": 2942.073,
      "index": 112,
      "start_time": 2912.176,
      "text": " And so I have to almost as if they were just typing their their speech. I have to evaluate their questions as such and not say, OK, well, what are the accoutrements that come along with their speech? Kind of like a multiple choice sentences where they're just sort of sitting there and you haven't assigned a true statement to any of them. And one of the just if people are listening, I went through act therapy. So acceptance commitment therapy, I believe that's what it stands for. And I have an episode on it. I interviewed this lady named Lillian Dindo."
    },
    {
      "end_time": 2964.36,
      "index": 113,
      "start_time": 2942.756,
      "text": " And one of the pieces of advice is if you're encountering something that is triggering, you can actually recoil from it. People will say, no, no, no, don't face it. You're supposed to face it as much as you can and do so voluntarily. And one of the ways you can do so is let's say someone said something that is triggering. This is just now a vague example."
    },
    {
      "end_time": 2983.183,
      "index": 114,
      "start_time": 2964.889,
      "text": " You can then look at the words and then just read the words and just say these are just words on a paper. They don't influence me. They don't have to influence me. I don't have to buy into it. This is one model of the world. It strikes me as sort of reading the code but not running the program. That's a brilliant way of phrasing it. Yeah."
    },
    {
      "end_time": 3014.292,
      "index": 115,
      "start_time": 2984.804,
      "text": " You know, because I had the same and still have the same kind of challenges myself, particularly in this kind of research, in thinking about unthinkable thoughts and this concept of lethal text and so on. When I first came across that, just the concept of an idea that would do harm, I was very hesitant for a long time to even share that idea in the abstract sense. Not even any particular idea, but just the meta idea of that there are harmful ideas."
    },
    {
      "end_time": 3045.247,
      "index": 116,
      "start_time": 3015.316,
      "text": " And I had to, you know, in this act of climbing the mountain of madness, I had to retreat because I felt like I was getting too far away from humanity, from myself, from my past. And when we get to sort of new layers of thought, they are lethal to our previous self, right? Your adulthood is lethal to your childlike self."
    },
    {
      "end_time": 3074.735,
      "index": 117,
      "start_time": 3046.698,
      "text": " Right. So why were you afraid of sharing even the notion that there are lethal ideas? Because somewhere in me that idea frightened me. And I was afraid of doing harm to other people. I worried that maybe I would encounter someone who didn't have the right constitution or wasn't in the right place and that even that idea wouldn't"
    },
    {
      "end_time": 3097.261,
      "index": 118,
      "start_time": 3075.196,
      "text": " Because the idea itself, the meta idea is enough of a seed to then either instantiate a lethal framework or to open your perception where you start to find them because I think they're everywhere. Something that we talked about over dinner with Jacob Barndes. So we just recorded an episode of Jacob Barndes link on screen obligatory remarks. I recommend you check it out."
    },
    {
      "end_time": 3124.582,
      "index": 119,
      "start_time": 3098.404,
      "text": " Is that I want to make sure that what I'm doing with this channel is good or it is not promulgating harm. And so it's extremely tricky because even this, it sounds like what you're saying is it's necessary for the creative endeavor to go out outside the norm and to allow yourself to indulge in some"
    },
    {
      "end_time": 3155.555,
      "index": 120,
      "start_time": 3125.776,
      "text": " Dosage of madness, but then at the same time there is such a thing as madness, right? And that hasn't been separated in this conversation. So I'm I would like you to make that distinction what comes to mind is the real scary idea is that Those that have gone mad are not wrong We have this tendency and I think it's an unfortunate framework that those that suffer from mental illness are broken somehow there's a chemical imbalance their brains wired wrong or they have"
    },
    {
      "end_time": 3180.435,
      "index": 121,
      "start_time": 3155.845,
      "text": " dramatic experience that sort of messed up their software. But maybe that's not the case at all. Maybe they're sort of astronauts that have been to the moon and the rest of us have just haven't been there. And like you were saying before with your with your guests, you have to assume that they have this valid experience. And if we imagine that we are software, then then all experiences are real, right? Because they are"
    },
    {
      "end_time": 3207.824,
      "index": 122,
      "start_time": 3180.947,
      "text": " They're just virtual machines anyway. So all of our thoughts are made out of just patterns. And so if someone has that experience, it's genuine. It's not a fallacy. It's what they experience. And by them instantiating it, it's real. And so this idea that the mentally ill aren't broken but are just at the edge of evolution or the edge of these unthinkable thoughts is, I think,"
    },
    {
      "end_time": 3239.616,
      "index": 123,
      "start_time": 3210.776,
      "text": " It's something that stars me. There could also be misattributions. So for instance, they go to the moon and then they come back and say, I was on a balloon made of cheese. And so we then say there are no balloons made of cheese. However, if they had said and correctly correctly identified, there's a rock that orbits the earth and we never noticed it. You understand what orbits are and so on. Then we would be like, oh, that's interesting. Can we also can we go look for that now and we find it now in this example is quite"
    },
    {
      "end_time": 3264.104,
      "index": 124,
      "start_time": 3240.657,
      "text": " Yeah, I think that's probably the situation that we're in and that when these people go to these places and they come back and they try to use ordinary word vectors, they try to decompose it and serialize it and they model in their mind how your mind is going to respond to their experience and they describe it, it doesn't match."
    },
    {
      "end_time": 3293.643,
      "index": 125,
      "start_time": 3264.77,
      "text": " And then we hear words like balloon and cheese where in their mind they're thinking about something much richer, but that's sort of the token that they were able to get out. And then we say, oh, no, that person is nutty because they're talking about balloons of cheese and that's not how we think about it. But if we were to go back millennia and describe the moon in modern terms to the wisest people we could find, they would say that we're nutty."
    },
    {
      "end_time": 3323.422,
      "index": 126,
      "start_time": 3294.684,
      "text": " They would say, no, no, no, this is the goddess and this is Luna and this is how it works and this is what it means and so on. And so when we describe it as a collection of rocks in an orbit in a gravitational well, that would sound like balloons and cheese to them. So part of me is maybe you felt the same. One of the reasons why I didn't talk about my experience and I, but I do more and more, but I still rarely do relatively compared to how often I have these podcasts or talk to people is that"
    },
    {
      "end_time": 3354.053,
      "index": 127,
      "start_time": 3325.145,
      "text": " I'm ashamed of it. And I also think that it's, I thought that it was much more rare than, than it was. And as I speak to people, there's some professors, there's the prominent professor of math who I can talk to you about off air. His name is a household name to mathematicians who told me that I'm, I'm so glad you talked about this in the Carl Friston episode, because I was experiencing something like this myself. I think it's going to, we'll find out it's a much more universal phenomenon. And this"
    },
    {
      "end_time": 3379.531,
      "index": 128,
      "start_time": 3354.616,
      "text": " This immune system concept applies at the cultural level where we don't share those ideas for fear in some sense, because we know that they can be lethal to relationships, to our standing in society, to our financial well-being. That really exposing that raw self, even though the experiences are genuine,"
    },
    {
      "end_time": 3409.019,
      "index": 129,
      "start_time": 3379.77,
      "text": " and invalid, we don't share that. And I think that's kind of what we need to overcome as a culture. If we want to make it to this next era of evolution and humanity, we have to embrace that. We have to embrace that diversity of thought and people that previously were laughed off the stage. But that's precisely where the interesting ideas are going to come from, right? So, you know, we mentioned like with the moon, but if we take anything out of the modern world, whether it's"
    },
    {
      "end_time": 3437.858,
      "index": 130,
      "start_time": 3409.616,
      "text": " You know quantum physics or information theory or even just the idea of software and we go back like we said if you know 100 years 200 years 300 years these ideas were just they were Insane right the whole world is just this this utterly unimaginable creation compared to what we what we thought about before It reminds me of I've been traveling recently and seeing sort of modern cities up close and"
    },
    {
      "end_time": 3466.527,
      "index": 131,
      "start_time": 3439.531,
      "text": " And I think if you took someone from 200 years ago and you brought them into a modern city like Boston, I think they would think we're a million years into the future, not 200 years. And I think it would be overwhelming and unimaginable. They wouldn't really even be able to take it in, all the lights and little computers and all of the plethora of cars and all this kind of stuff that we have in the modern world. I think it would be overwhelming. And I think they would suspect that"
    },
    {
      "end_time": 3496.493,
      "index": 132,
      "start_time": 3467.09,
      "text": " That they had time traveled a million years and not 200. And so by the same token, if we think about where the human mental space is going to be in, say, 25 years, I think it's farther out than just the linear 25 years. The vistas, the Lovecraft vistas we talked about last time, they're going to get that are being opened up by these AI tools, are either going to drive us mad or they're going to open up a new renaissance."
    },
    {
      "end_time": 3522.21,
      "index": 133,
      "start_time": 3497.756,
      "text": " And we have to get going prior to going, the audience comprises a general audience, but a large audience of researchers. So now you're speaking to researchers and then also people who want to become, who want to go into the fields of physics, mathematics, philosophy, computer science. Yeah. Well, you've got such an amazing audience and community, and I'd really love to know what unthinkable thoughts they're thinking about."
    },
    {
      "end_time": 3546.101,
      "index": 134,
      "start_time": 3522.551,
      "text": " You know put down in the comments and and tell us the stories and the experiences that you're having in terms of being off the map and Where your GPS doesn't get any signal kind of thing and you and you find yourself either in ecstasy or despair or discovery and I'd I'd be curious to kind of mine the the beautiful minds that you have in your channel and what they think and"
    },
    {
      "end_time": 3574.804,
      "index": 135,
      "start_time": 3547.295,
      "text": " And I'll also put a link to help if they require help for the various countries. I'll look at the top 10 countries and just put whatever is the national hotline or what have you. Will, thank you so much. Thank you, sir. Always a pleasure. Looking forward to speaking with you again. New update. Started a sub stack. Writings on there are currently about language and ill-defined concepts as well as some other mathematical details."
    },
    {
      "end_time": 3601.237,
      "index": 136,
      "start_time": 3575.043,
      "text": " Several people ask me, hey Kurt, you've spoken to so many people in the fields of theoretical physics, philosophy, and consciousness. What are your thoughts? While I remain impartial in interviews, this substack is a way to peer into my present deliberations on these topics."
    },
    {
      "end_time": 3630.538,
      "index": 137,
      "start_time": 3602.637,
      "text": " Also, thank you to our partner, The Economist. Firstly, thank you for watching. Thank you for listening. If you haven't subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more people like yourself. Plus, it helps out Kurt directly, aka me. I also found out last year that external links count plenty toward the algorithm."
    },
    {
      "end_time": 3641.596,
      "index": 138,
      "start_time": 3630.538,
      "text": " which means that whenever you share on Twitter, say on Facebook, or even on Reddit, etc., it shows YouTube, hey, people are talking about this content outside of YouTube, which in turn"
    },
    {
      "end_time": 3669.821,
      "index": 139,
      "start_time": 3641.817,
      "text": " Thirdly, there's a remarkably active Discord and subreddit for Theories of Everything where people explicate Toes, they disagree respectfully about Theories, and build as a community our own Toe. Links to both are in the description. Fourthly, you should know this podcast is on iTunes, it's on Spotify, it's on all of the audio platforms. All you have to do is type in Theories of Everything and you'll find it. Personally, I gained from rewatching lectures and podcasts"
    },
    {
      "end_time": 3689.804,
      "index": 140,
      "start_time": 3669.821,
      "text": " I also read in the comments"
    },
    {
      "end_time": 3713.183,
      "index": 141,
      "start_time": 3689.804,
      "text": " And donating with whatever you like. There's also PayPal. There's also crypto. There's also just joining on YouTube. Again, keep in mind it's support from the sponsors and you that allow me to work on toe full time. You also get early access to ad free episodes, whether it's audio or video. It's audio in the case of Patreon video in the case of YouTube. For instance, this episode that you're listening to right now was released a few days earlier."
    },
    {
      "end_time": 3719.838,
      "index": 142,
      "start_time": 3713.387,
      "text": " Every dollar helps far more than you think. Either way, your viewership is generosity enough. Thank you so much."
    }
  ]
}
