Theories of Everything with Curt Jaimungal

Max Tegmark: Physics Absorbed Artificial Intelligence & (Maybe) Consciousness

September 3, 2025 · 1:49:53


Transcript

[0:00] The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's not just science: they analyze culture, finance, economics, business, and international affairs across every region.
[0:26] I'm particularly liking their new Insider feature, which was just launched this month. It gives me front-row access to The Economist's internal editorial debates, where senior editors argue through the news with world leaders and policy makers, plus twice-weekly long-format shows, basically an extremely high-quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines.
[0:53] When Michael Faraday first proposed the idea of the electromagnetic field...
[1:10] People were like, what are you talking about? You're saying there is some stuff that exists, but you can't see it, you can't touch it. That sounds like total non-scientific ghosts. Most of my science colleagues still feel that talking about consciousness as science is just bullshit. But what I've noticed is when I push them a little harder about why they think it's bullshit, they split into two camps that are in complete disagreement with each other. You can have intelligence without consciousness, and you can have consciousness without intelligence.
[1:39] Your brain is doing something remarkable right now. It's turning these words into meaning. However, you have no idea how. Professor Max Tegmark of MIT studies this puzzle. You recognize faces instantly, yet you can't explain the unconscious processes. You dream full of consciousness while you're outwardly not doing anything. Thus, there's intelligence without consciousness and consciousness without intelligence. In other words,
[2:08] They're different phenomena entirely. Tegmark proposes something radical. Consciousness is testable in a new extension to science where you become the judge of your own subjective experience. Physics absorbed electromagnetism and then atoms and then space-time, and now Tegmark says it's swallowing AI. In fact, I spoke to Nobel Prize winner Jeffrey Hinton about this specifically.
[2:33] Now to Max Tegmark, the same principle that explains why light bends in water may actually explain how thoughts emerge from neurons.
[2:41] I was honored to have been invited to the Augmentation Lab Summit, which was a weekend of events at MIT last week. This was hosted by MIT researcher Dunya Baradari. The summit featured talks on the future of biological and artificial intelligence, brain-computer interfaces, and included speakers such as Stephen Wolfram and Andrés Gómez-Emilsson. My conversations with them will be released on this channel in a couple weeks, so subscribe to get notified.
[3:08] Or you can check the Substack, curtjaimungal.org, as I release episodes early over there. A special thank you to our advertising sponsor, The Economist. Among weekly global affairs magazines, The Economist is praised for its non-partisan reporting and being fact-driven. This is something that's extremely important to me. It's something that I appreciate. I personally love their coverage of other topics that aren't just politics-based as well.
[3:35] For instance, The Economist has a new tab for artificial intelligence on their website, and they have a fantastic article on the recent DESI Dark Energy Survey. It surpasses, in my opinion, Scientific American's coverage. Something else I love, since I have ADHD, is that they allow you to listen to articles at 2x speed, and it's from an actual person, not a dubbed voice. The British accents are a bonus.
[3:59] So if you're passionate about expanding your knowledge and gaining a deeper understanding of the forces that shape our world, I highly recommend subscribing to The Economist. It's an investment into your intellectual growth, one that you won't regret. I don't regret it. As a listener of TOE, you get a special discount. Now you can enjoy The Economist and all it has to offer for less. Head over to their website, economist.com/toe, to get started.
[4:27] I believe that artificial intelligence has gone from being not physics to being physics, actually.
[4:57] You know, one of the best ways to insult physicists is to tell them that their work isn't physics. As if somehow there's a generally agreed-on boundary between what's physics and what's not, or between what's science and what's not. But I find the most obvious lesson we get, if we just look at the history of science, is that the boundary has evolved. Some things that used to be considered scientific by some, like astrology, have
[5:25] left; the boundary contracted. So that's not considered science now. And then a lot of other things that were put as being non-scientific are now considered obviously science. Like, I sometimes teach the electromagnetism course, and I remind my students that when Michael Faraday first proposed the idea of the electromagnetic field, people were like, what are you talking about? You're saying there is some stuff that
[5:54] exists, but you can't see it, you can't touch it. That sounds like ghosts, like total non-scientific bullshit, you know. And they really gave him a hard time for that. And the irony is, not only is that considered part of physics now, but you can see the electromagnetic field. It's in fact the only thing we can see, because light is an electromagnetic wave. And after that, things like black holes, things like atoms, which Max Planck famously said is not physics, you know,
[6:25] even talk about what our universe was doing 13.8 billion years ago, have become considered part of physics. And I think AI is now going the same way. I think that's part of the reason that Geoff Hinton got the Nobel Prize in Physics. Because what is physics? To me, you know, physics is all about looking at some complex, interesting system doing something and trying to figure out how it works.
[6:53] We started on things like the solar system and atoms. But if you look at an artificial neural network that can translate French into Japanese, that's pretty impressive too. And there's this whole field that has started blossoming now, which I've also had a lot of fun working in, called mechanistic interpretability, where you study an intelligent artificial system
[7:22] to try to ask these basic questions like: how does it work? Are there some equations that describe it? Are there some basic mechanisms? And so on. And in a way, I think of traditional physics, like astrophysics, for example, as just mechanistic interpretability applied to the universe. And Hopfield also got the Nobel Prize last year. He was the first person to show that, hey, you know, you can actually write down
[7:50] An energy landscape, so put potential energy on the vertical axis, how the potential energy depends on where you are, and think of each little valley as a memory. You might wonder, like, how the heck can I, like, store information in an egg carton, say, if it has 25 valleys in it? Well, very easy. You know, you can put the marble in one of them.
[8:19] That's log 25 bits right there. And how do you retrieve what the memory is? You can look, where is the marble? And Hopfield had this amazing physics insight. If you think of there as being any system whose potential energy function has many, many, many different minima that are pretty stable, you can just use it to store information. But he realized that that's different from
[8:47] from the way computer scientists used to store information. It used to be the whole von Neumann paradigm, you know: with a computer, you're like, tell me what's in this variable, tell me what number is sitting in this particular address. You go look here, right? That's how traditional computers store things. But if I say to you, you know, twinkle, twinkle... Uh-huh.
[9:11] Little Star. Yeah, that's a different kind of memory retrieval, right? I didn't tell you, hey, give me the information that's stored in those neurons over there. I gave you a partial piece of the stuff, and you filled it in. This is called associative memory. And this is also how Google will give you something: you can type something in that you don't quite remember, and it'll give you the right thing. And Hopfield showed, coming back to the egg carton, all right, that if you
[9:39] Suppose you don't remember something exactly. Say you want to memorize the digits of pi, and you have an energy function whose actual minimum is at exactly 3.14159, et cetera. Yeah. But you don't remember exactly what pi is; you said three-something. Yes. So you put a marble at three and you let it roll. As long as it's in the basin of attraction whose minimum is at pi, it's going to go there. So to me,
[10:09] this is an example of how something that felt like it had nothing to do with physics, like memory, can be beautifully understood with tools from physics: you have an energy landscape, you have different minima, you have dynamics, the Hopfield network. So I think, yeah, it's totally fair that Hinton and Hopfield got an award in physics. And it's because we're beginning to understand that we can expand, again, the domain of what is physics to include...
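Hopfield's energy-landscape picture can be sketched in a few lines. This is a toy single-pattern Hopfield network, purely illustrative: the stored pattern sits at a minimum of the energy function, and a corrupted cue that starts inside its basin of attraction "rolls downhill" to the exact memory, just like the marble rolling to pi.

```python
# Toy one-pattern Hopfield network (an illustrative sketch, not Hopfield's full model).
PATTERN = [1, 1, 1, -1, -1, -1, 1, -1]   # the stored memory: one valley in the landscape
N = len(PATTERN)

# Hebbian weights: W[i][j] = p_i * p_j, with no self-connections.
W = [[0 if i == j else PATTERN[i] * PATTERN[j] for j in range(N)] for i in range(N)]

def energy(state):
    """Hopfield energy E = -1/2 * sum_ij W_ij s_i s_j; memories sit at its minima."""
    return -0.5 * sum(W[i][j] * state[i] * state[j]
                      for i in range(N) for j in range(N))

def recall(cue, max_sweeps=10):
    """Let the 'marble' roll: update units toward lower energy until a fixed point."""
    state = list(cue)
    for _ in range(max_sweeps):
        new = [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1
               for i in range(N)]
        if new == state:
            break
        state = new
    return state

# A cue with two bits flipped: a partial, corrupted version of the memory.
cue = list(PATTERN)
cue[0] = -cue[0]
cue[4] = -cue[4]

restored = recall(cue)
print(restored == PATTERN)              # True: the cue was inside the basin of attraction
print(energy(restored) < energy(cue))   # True: retrieval rolled downhill in energy
```

With many stored patterns, the weights sum the outer products of all of them, and each pattern (up to capacity) gets its own valley, which is the insight the transcript describes.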
[10:39] What about consciousness?
[10:56] Consciousness seems to be at a similar stage, where many scientists, many physicists, tend to look skeptically at the way that consciousness is studied or talked about. Yeah. Well, firstly, what's the definition of consciousness? You all can't agree: there's phenomenal, there's access, et cetera. And then even there, what is it? And then the critique would be, well, you're asking me for a third-person definition of something that's a first-person phenomenon. Yeah. Okay, so how do you view consciousness in this? Yeah, I love these questions.
[11:25] I feel that consciousness is actually probably the final frontier, the final thing which is going to end up ultimately in the domain of physics; it's right now on the controversial borderline. So let's go back to Galileo, say, right? If he dropped a grape and a hazelnut, he could predict exactly when they were going to hit the ground and
[11:55] that how far they fall grows like a parabola as a function of time. But he had no clue why the grape was green and the hazelnut was brown. Then came Maxwell's equations, and we started to understand that light and colors are also physics, and we got the equations for it. And Galileo couldn't figure out, either, why the grape was soft and why the hazelnut was hard. Then we got quantum mechanics, and we realized that all these properties of stuff, you know,
[12:23] could be calculated from the Schrodinger equation and also brought into physics. And then intelligence seemed like such a holdout. We've already talked about now how if you start breaking it into components like memory, and we can talk more about computation and learning, how that can also very much be understood as a physical process. So what about consciousness? Yeah.
[12:52] I'd say most of my science colleagues still feel that talking about consciousness as science is just bullshit. But what I've noticed is, when I push them a little harder about why they think it's bullshit, they split into two camps that are in complete disagreement with each other. Half of them, roughly, will say, oh, consciousness is bullshit because it's just the same thing as intelligence. Okay. And the other half will say consciousness is bullshit because
[13:23] Obviously machines can't be conscious, which is obviously totally inconsistent with saying it's the same thing as intelligence. What's really powered the AI revolution in recent decades is just moving away from philosophical quibbles about what does intelligence really mean in a deep philosophical sense and instead making a list of tasks and saying, can my machine, can it do this task? Can it do this task?
[13:51] And that's quantitative: you can train your systems to get better at the tasks. And I think you'd have a very hard time if you went to the NeurIPS conference and argued that machines can never be intelligent, right? So if you then say intelligence is the same as consciousness, you're predicting that machines are conscious if they're smart. But we know
[14:19] that consciousness is not the same as intelligence, just by some very simple introspection. We can do it right now. So, for example, what does it say here? I guess we shouldn't do products. No, I don't mind. Let's do this one. What does it say? "Towards More Interpretable AI with Sparse Autoencoders," by Joshua Engels. Great. This is the PhD thesis of my student, Josh Engels, or master's thesis. So how did you do that computation? 30 years ago, if I gave you just
[14:49] a bunch
[15:19] No, same here. And for me, it's pretty obvious. It feels like there's some part of my information processing that is conscious, and it kind of got an email from the face-recognition module saying, you know, face recognition complete, the answer is so-and-so. So in other words, you do something when you recognize people that's quite intelligent, but not conscious. Right.
[15:47] And I would say actually a large fraction of what your brain does, you're just not conscious about; you find out about the results of it often after the fact. So you can have intelligence without consciousness. That's the first point I'm making. And second, you can have consciousness without intelligence, without accomplishing any tasks. Like, did you have any dreams last night?
[16:12] None that I remember. But have you ever had a dream? Yes. So there was consciousness there. But if someone was just looking at you lying there in the bed, you probably weren't accomplishing anything, right? So I think it's obvious that consciousness is not the same. You can have consciousness without intelligence and vice versa. So those who say that consciousness equals intelligence are being sloppy. Now, what is it then?
[16:44] My guess is that consciousness is a particular type of information processing, and that intelligence is also a particular type of information processing, but that there's a Venn diagram: there are some things that are intelligent and conscious, some that are intelligent but not conscious, and some that are conscious but not very intelligent. And so the question then becomes to try to understand: can we write down some equations or formulate some principles?
[17:13] So what kind of information processing is intelligent, and what kind of information processing is conscious? And my guess is that for something to be conscious, there are at least some necessary conditions that it probably has to have. There has to be information, a lot of information there, something to be the content of consciousness, right?
[17:45] There's an Italian scientist, Giulio Tononi, who has put a lot of creative thought into this and triggered enormous controversy also, who argues that the one necessary condition for consciousness is what he calls integration.
[18:10] basically that if it's going to subjectively feel like a unified consciousness, like, you know, like your consciousness, it cannot consist of two information processing systems that don't communicate with each other. Because if consciousness is the way information feels when it's being processed, right, then if this is the information that's conscious and it's just completely disconnected from this information, there's no way that this information can be part of what it's conscious of.
[18:37] Just a quick question: ultimately, what's the difference between information processing, computing, and communication? So communication, I would say, is just a very simple special case of information processing. You have some information here and you make a copy of it; the copy ends up over there. It's a volleyball you send over. Yeah, a volleyball you send over: copy this to that. Computation can be much more complex than that.
[19:07] And then the third word, after information processing and communication, was computation. And yeah, computation and information processing, I would say, are more or less the same thing. Then you can try to classify different kinds of information processing depending on how complex it is, and mathematicians have been doing an amazing job there, even though they still don't know whether P equals NP and so on. But just coming back to consciousness again,
[19:37] I think a mistake many people make when they think about their own consciousness (like, can you see the beautiful sunlight coming in here from the window, and some colors and so on?) is to have this model that somehow you're actually conscious of that stuff, that the content of your consciousness somehow is the outside world.
[20:03] I think that's clearly wrong because you can experience those things when your eyes are closed, when you're dreaming, right? So I think the conscious experience is intrinsic to the information processing itself. What you are actually conscious about when you look at me isn't me, it's the world model that you have and the model you have in your head right now of me. And you can be conscious of that whether you're awake or whether you're asleep. And then of course you're using your senses
[20:31] The information processing has to be such that there's no way of
[21:30] that it isn't actually just secretly two separate parts that don't communicate at all and cannot communicate with each other, because then they would basically be like two parallel universes that were just unaware of each other, and you wouldn't be able to have this feeling that it's all unified. I actually think that's a very reasonable criterion, and he has a particular formula he calls phi for measuring how integrated things are, and the things that have a high phi are more conscious.
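Tononi's actual phi is far more involved, but the flavor of an "integration" measure can be shown with a much simpler quantity substituted here purely for illustration: the mutual information between two halves of a system. Two disconnected subsystems share zero information, matching the "two parallel universes" intuition; a tightly coupled pair shares a full bit.

```python
# Toy "integration" measure (NOT Tononi's phi): mutual information in bits
# between two halves A and B of a system, from their joint distribution.
import math

def mutual_info(joint):
    """I(A;B) = sum p(a,b) * log2( p(a,b) / (p(a) p(b)) ) over the joint {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p   # marginal over A
        pb[b] = pb.get(b, 0) + p   # marginal over B
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Two independent coin flips: completely disconnected, zero integration.
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
# Two perfectly correlated bits: fully integrated, one shared bit.
correlated = {(0, 0): 0.5, (1, 1): 0.5}

print(mutual_info(independent))  # 0.0
print(mutual_info(correlated))   # 1.0
```

A disconnected system factorizes, so every term's log ratio is zero; any genuine coupling makes the measure positive. Phi refines this idea by minimizing over ways of partitioning the system.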
[22:00] I wasn't completely sure whether that was the only formula that had that property, so I wrote a paper once to classify all possible formulas that have that property. And it turned out there were fewer than 100 of them. So I think it's actually quite interesting to test if any of the other ones fit the experiments better than his. But just to close and finish up on why people say consciousness is bullshit: I think ultimately the main reason is either they feel
[22:29] it sounds too philosophical, or they say, oh, you can never test these consciousness theories, because how can you test if I'm conscious or not when all you can observe is my behavior, right? But there is a misunderstanding here. Actually, I'm much more optimistic. Can I tell you about an experiment I envisioned where you can test a consciousness theory? Of course. So suppose you have someone like Giulio Tononi, or anyone else, who has really stuck their
[22:59] neck out and written down a formula for what kind of information processing is conscious. And suppose we put you in one of our MEG machines here at MIT, or some future scanner that can read out a massive amount of your neural data in real time, and you connect that to a computer that uses that theory to make predictions about what you're conscious of.
[23:30] And then, now, it says, I predict that you're consciously aware of a water bottle. And you're like, yeah, that's true. Yes. And then it says, okay, I see information processing there about regulating your pulse, and I predict that you're consciously aware of your heartbeat. You're like, no, I'm not. You've now ruled out that theory, actually. Right? It made a prediction about your subjective experience.
[24:01] And you yourself can falsify that, right? So first of all, it is possible for you to rule out the theory to your satisfaction. That might not convince me because you told me that you weren't aware of your heartbeat. Maybe I think you're lying or whatever. But then you can go, okay, hey Max, why don't you try this experiment? And I put on my MEG helmet and I
[24:50] that this theory just again and again and again keeps predicting exactly what you're conscious of, and never anything that you're not conscious about. You would gradually start getting kind of impressed, I think. And if you moreover read about what goes into this theory, and you say, wow, this is a beautiful formula, and it kind of philosophically makes sense that these are the criteria that consciousness should have, and so on, you might be tempted now to try to extrapolate its validity and wonder if it works also on
[25:18] other biological animals, maybe even on computers, and so on. And, you know, this is not altogether different from how we've dealt with, for example, general relativity, right? So you might say it's bullshit to talk about what happens inside black holes, because you can't go there and check and then come back and tell your friends or publish your findings in Physical Review Letters, right?
[25:47] But what we're actually testing is not some philosophical ideas about black holes. We're testing a mathematical theory, general relativity. I have it there in a frame by my window, right? And so what's happened is, we tested it on the perihelion shift of Mercury, how it's not really going in an ellipse but the ellipse is precessing a bit. We tested it and it worked. We tested it on how gravity bends light.
[26:13] And then we extrapolated it to all sorts of stuff way beyond what Einstein had thought about, like what would happen when our universe was a billion times smaller in volume and what would happen when black holes get really close to each other and give off gravitational waves and it just passed all these tests also. So that gave us a lot of confidence in the theory and therefore also in the
[26:41] the predictions that we haven't been able to test yet, even the predictions that we can never test, like what happens inside black holes. So this is typical for science, really. If someone says, you know, I like Einstein, I like what his theory predicted for gravity in our solar system, but I'm going to opt out of the black hole prediction? You can't do that.
[27:08] It's not like, oh, I want coffee, but decaf, you know. If you're going to buy the theory, you need to buy all its predictions, not just the ones you like. And if you don't like the predictions, well, come up with an alternative to general relativity, write down the math, and then make sure that it correctly predicts all the things we can test. And good luck, because some of the smartest humans on the planet have spent a hundred years trying and failed, right? So if we have a theory of consciousness
[27:38] in the same vein, which correctly predicts the subjective experience of whoever puts on this device and tests its predictions for what they are conscious about, and it keeps working, then I think people will start taking pretty seriously also what it predicts about coma patients who seem to be unresponsive, whether they have locked-in syndrome or are in a coma, and even what it predicts about machine consciousness, whether machines are suffering or not. And people who don't like that will then
[28:08] be incentivized to work harder to come up with an alternative theory that at least predicts subjective experience. So, I'll get off my soapbox now, but this is why I strongly disagree with people who say that consciousness is all bullshit. I think they're really saying that because it's an excuse to be lazy and not work on it.
[28:29] Hi everyone, hope you're enjoying today's episode. If you're hungry for deeper dives into physics, AI, consciousness, philosophy, along with my personal reflections, you'll find it all on my sub stack. Subscribers get first access to new episodes, new posts as well, behind the scenes insights, and the chance to be a part of a thriving community of like-minded pilgrimers.
[28:49] By joining you'll directly be supporting my work and helping keep these conversations at the cutting edge. So click the link on screen here, hit subscribe and let's keep pushing the boundaries of knowledge together. Thank you and enjoy the show. Just so you know, if you're listening, it's C-U-R-T-J-A-I-M-U-N-G-A-L dot org.
[29:11] So in the experiment where you put some probes on your brain in order to discern which neurons are firing or what have you. So that would be a neural correlate. I'm sure you've already thought of this. Okay, so that you're correlating some neural pattern with
[29:26] the bottle, and you're saying, hey, okay, I think you're experiencing a bottle. But then technically, are we actually testing consciousness, or testing a further correlation? That is, when I ask you the question, are you experiencing a bottle, and we see this neural pattern, that pattern is just correlated with you saying yes. So it's still another correlation. Is that enough? Well, when the experiment is being run on you, you're not trying to convince me.
[29:54] It's just you talking to the computer. You are just doing experiments, basically, on the theory. There's no one else involved, no other human, and you're just trying to convince yourself. So you sit there and you have all sorts of thoughts. You might just decide to, you know, close your eyes and think about your favorite place in Toronto, to see if it can predict that you're conscious of that, right? And then you might also do something else, which you know you can do unconsciously, and see if you can trick it into predicting that
[30:23] You're conscious of that information that you know is being processed in your brain. Ultimately, you're just trying to convince yourself that the theory is making incorrect predictions. I guess what I'm asking is in this case, I can see being convinced that it can read my mind in the sense that it can roughly determine what I'm seeing, but I don't see how that would tell this other system that I'm conscious of that.
[30:45] In the same way, just because we can see what files are on a computer doesn't mean those files are conscious, or that when we do some cut and paste and we can see some process happen, that process is conscious. Well, you're not trying to convince the computer. The computer is coded up to just make the predictions from this putative theory of consciousness, this mathematical theory, right? And then your job is just to see whether those are the wrong equations or the right equations. And the way you ascertain that is to see whether it correctly or incorrectly predicts what you're actually subjectively aware of.
[31:15] We should be clear that we're defining consciousness here simply as subjective experience, right? Which is very different from talking about what information is in your brain. Like, you have all sorts of memories in your brain right now that you probably haven't thought about for months, and that's not your subjective experience right now. And even, again, when I open my eyes and I see a person, you know, there's a computation happening and
[31:44] to
[32:06] And suppose there's some small subset of this which is highlighted in yellow and you have to have a computer that can predict exactly what's highlighted in yellow. It's pretty impressive if it gets it right. And in the same way, if it can accurately predict exactly which
[32:21] Okay, so let me see if I understand this. So in the global workspace theory, you have like a small desktop and pages are being sent to the desktop, but only a small amount at any given time. I know that there's another metaphor of a spotlight, but whatever, let's just think of that. So this desktop is quite small relative to the globe.
[32:42] Yeah. Okay. Relative to the city, relative to the globe for sure. So our brain is akin to this globe because there's so many different connections. There's so many different words that there could possibly be. Yeah. If there's some theory that can say, Hey, this little thumb tack is what you're experiencing. And you're like, actually that is correct. Yeah. Okay. Exactly. So the global workspace theory, great stuff, you know, but it is not sufficiently predictive to do this experiment. It doesn't have a lot of equations in it. Mostly words, right?
[33:11] And so no one has actually done this experiment yet. I would love to do it, or for someone to do it, where you have a theory that's sufficiently physical and mathematical that it can actually stick its neck out and risk being proven false all the time. I guess what I was saying, just to wrap this up: yes, that is extremely impressive. I don't even know if that can technologically be done; maybe it can be approximately done. But regardless, we can for sure falsify theories,
[33:40] but it still wouldn't suggest to an outside observer that this little square patch here or whoever is experiencing the square patch is indeed experiencing the square patch. But you already know that you're experiencing the square patch. I know. Yes, that's the key thing. You know it. I don't know it. I don't know that you know, but you can convince yourself that this theory is false or that this theory is increasingly promising, right?
[34:10] That's the catch. And I just want to stress, you know, people sometimes say to me, oh, you can never prove for sure that something is conscious. We can never prove anything in physics. A little dark secret, but we can never prove anything. We can't prove that general relativity is correct; you know, probably it's wrong, probably it's just a really good approximation. All we ever do in physics is disprove theories. But if,
[34:41] as in the case of general relativity, some of the smartest people on Earth have spent over a century trying to disprove something and they still have failed, we start to take it pretty seriously and say, well, you know, it might be wrong, but we're going to take it pretty seriously as a really good approximation, at least, to what's actually going on. You know, that's how it works in physics, and that's the best we can ever get with consciousness too: something which is making strong predictions and which we've
[35:11] You said something interesting: you can tell, you can convince yourself, that this theory of consciousness is correct for you. This is super interesting, because earlier in the conversation we were talking about physics, what was considered physics and what is no longer considered physics. So what is this amorphous boundary? Maybe it's not amorphous, but it changes. Yeah, it absolutely changes. Do you think that's also the case for science?
[35:41] Do you think science, to incorporate a scientific view of consciousness, is going to have to change what it considers to be science? I'm a big fan of Karl Popper. I personally consider things scientific if we can falsify them. If no one can even think of a way in which we could, even conceptually, in the future, with arbitrary funding, you know,
[36:09] and technology, test it, I would say it's not science. I think Popper didn't say that if it can be falsified, then it's science; it's more that if it can't be falsified, it's not science. I'll agree with that too, for sure. But what I'm saying is that a theory of consciousness which is willing to actually make concrete predictions about what you personally subjectively experience cannot be dismissed like that, because you can falsify it: if it predicts just one thing that
[36:38] is wrong, right? Then you've falsified it. And I would encourage people to stop wasting time on philosophical excuses for being lazy, and to try to build these experiments. That's what I think we should do. And you know, we saw this happen with intelligence. People had so many quibbles: oh, I don't know how to define intelligence, and whatever. And in the meantime, you got a bunch of people who rolled up their sleeves and said: well, can you build a machine
[37:04] that beats the best human at chess? Can you build a machine that translates Chinese into French? Can you build a machine that figures out how to fold proteins? And amazingly, all of those things have now been done, right? And what that's effectively done is make people redefine intelligence as the ability to accomplish tasks, the ability to accomplish goals.
[37:31] That's what people in machine learning will say if you ask them what they mean by intelligence. And the ability to accomplish goals is different from having a subjective experience. The first I call intelligence, the second I call consciousness. And without getting too philosophical here: it's quite striking, throughout the history of physics, how often we have vastly delayed breakthroughs just because of some curmudgeons
[38:01] convincingly arguing that it's impossible to make something scientific. For example, extrasolar planets. People were so stuck on this idea that all other solar systems had to be like our solar system, with a star, some small rocky planets near it, and some gas giants farther out. So they were like: yeah, no point in even looking around other stars, because we can't see Earth-like planets. Eventually, some folks decided to just look anyway,
[38:31] with the Doppler method, to see if stars were moving in little circles because something was orbiting around them. And they found these hot Jupiters: gigantic, Jupiter-sized things orbiting closer to their star than Mercury is to our Sun. Wow. But they could have done that ten years earlier, you know, if they hadn't been intimidated by these curmudgeons who said: don't look. So what my attitude is, is:
[38:57] Don't listen to the curmudgeons. If you have an idea for an experiment you can build that's just going to cut into some new part of parameter space and experimentally test the kinds of questions that have never been asked, just do it. More than half the time when people have done that, there was a revolution. When Karl Jansky wanted to build the first x-ray telescope and look at x-rays from the sky, for example, people said: what a loser.
[39:27] There are no x-rays coming from the sky; do you think there are, like, dentists out there? I don't know. And then they found that there is a massive amount of x-rays, even coming from the sun, once people decided to look. Basically, whenever we've opened up another wavelength with telescopes, we've seen new phenomena we didn't even know existed. Or when Leeuwenhoek built the first microscope: do you think he expected to find these animals that were
[39:57] so tiny you couldn't see them with the naked eye? Of course not, right? But he basically went orders of magnitude in a new direction in experimental parameter space, and there was a whole new world there. So this is what I think we should do with consciousness. And with intelligence, this is exactly what has happened. If we segue a little bit into that topic, I think,
[40:28] I think there's too much pessimism in science. If you go back, I don't know, 30,000 years, you know, if you and I were living in a cave, sitting and having this conversation, we would probably have figured: well, look at those little white dots in the sky here. They're pretty nifty.
[40:54] We wouldn't have any Netflix to distract us, and we would know that some of our friends had come up with some cool myths for what these dots in the sky were: oh look, that one maybe looks like an archer, or whatever. But since you're a guy who likes to think hard, you'd probably have a little melancholy tinge that, you know, we're never really going to know what they are.
[41:18] You can't jump up and reach them; you can climb the highest tree and they're just as far away. And we're kind of stuck here on our planet, and maybe we'll starve to death, and 50,000 years from now, if there are still people, life for them is going to be more or less like it is for us. And boy, oh boy, would we have been too pessimistic, right? We hadn't realized that we were the masters of underestimation. We massively underestimated
[41:47] not only the size of what existed, that everything we knew of was just a small part of this giant spinning ball, Earth, which was in turn just a small part of the grander structure of the solar system, part of a galaxy, part of a galaxy cluster, part of a supercluster, part of a universe, maybe part of a hierarchy of parallel universes. But more importantly, we also underestimated the power of our own minds to figure stuff out.
[42:18] We didn't even have to fly to the stars to figure out what they were; we just had to let our minds fly. You know, Aristarchus of Samos, over 2,000 years ago, was looking at a lunar eclipse, and some of his friends were probably like: oh, the moon turned red, an omen from the gods, it probably means we're all going to die. And he's like: hmm, the moon is there.
[42:47] The sun just set over there. So this is obviously Earth's shadow being cast on the moon. And actually, the edge of Earth's shadow is not straight, it's curved. Wait, so we're living on a curved thing? Maybe we're living on a ball? Huh. And wait a minute: the curvature of Earth's shadow there clearly shows that Earth is much bigger than the moon. And he went and calculated how much bigger Earth is
[43:17] than the moon. And then he's like: okay, well, I know that
[43:54] I know Earth is about 40,000 kilometers around, because I read that Eratosthenes had figured that out. And I know I can cover the moon with my pinky; it's like half a degree in size. So I can figure out the actual physical size of the moon. It was ideas like this that started breaking this curse of overdone pessimism. We started to believe in ourselves a little bit more.
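The chain of reasoning sketched here can be checked in a few lines. The 3.7x Earth-to-Moon shadow ratio below is the modern value (Aristarchus's own estimate was rougher, about 3x); the 40,000 km circumference and the half-degree angular size are the figures from the conversation:

```python
import math

# Eratosthenes: Earth's circumference is about 40,000 km (figure from the transcript).
earth_circumference_km = 40_000
earth_diameter_km = earth_circumference_km / math.pi      # ~12,732 km

# Aristarchus, from the curvature of Earth's shadow during a lunar eclipse,
# inferred that Earth's diameter is a few times the Moon's.
# We use the modern ratio here (assumed value):
shadow_ratio = 3.7
moon_diameter_km = earth_diameter_km / shadow_ratio       # ~3,441 km

# The Moon spans about half a degree on the sky ("cover it with my pinky"),
# so by the small-angle approximation: distance = diameter / angle_in_radians.
angular_size_rad = math.radians(0.5)
moon_distance_km = moon_diameter_km / angular_size_rad    # ~394,000 km

print(round(moon_diameter_km), round(moon_distance_km))
```

Both results land within a few percent of the modern values (diameter about 3,474 km, mean distance about 384,000 km), which is the point of the anecdote: naked-eye observations plus geometry get you there.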
[44:24] And here we are now, with the internet, with artificial intelligence, with all these little things you can eat to prevent you from dying of pneumonia. My grandfather Sven, you know, died of a stupid kidney infection that could have been treated with penicillin. It's amazing how much excessive pessimism there's been. And I think we still have a lot of it, unfortunately.
[44:55] That's why I want to come back to this thing: there's no better way to fail at something than to convince yourself that it's impossible, you know. And look at AI. I would say that whenever, with science, we have
[45:18] started to understand how something in nature works that we previously thought of as sort of magic, like what causes the winds or the seasons, or what causes things to move, we have historically been able to transform that into technology that often did it better and could serve us. So we figured out how to build machines that were stronger than us and faster than us, and we got the Industrial Revolution.
[45:45] Now we're figuring out that thinking is also a physical process: information processing, computation. And Alan Turing was, of course, one of the real pioneers in this field.
[46:11] He clearly realized that the brain is a biological computer. He didn't know how the brain worked; we still don't, exactly. But it was very clear to him that we could probably build something that was much more intelligent, and maybe more conscious too, once we figured out more details. I would say that from the 50s, when the term AI was coined not far from here, at Dartmouth, the field has been chronically overhyped.
[46:40] Most progress has gone way slower than people predicted, even slower than McCarthy and Minsky predicted for that Dartmouth workshop, and so on. But then something changed about four years ago, and it went from being overhyped to being underhyped. I remember very vividly, six or seven years ago, most of my colleagues here at MIT, and most of my AI colleagues in general, were pretty convinced that we were decades away from passing the Turing test, decades away from building machines that could
[47:10] master language and knowledge at human level. And they were all wrong. They were way too pessimistic, because it has already happened. You can quibble about whether it happened with GPT-4, or exactly when, but it's pretty clear it's in the past now. So if people could be so wrong about that, maybe they were wrong about more. And sure enough, since then, AI has gone from
[47:39] being kind of high-school level, kind of college level, to being PhD level in many areas, then professor level, then even far beyond that in many areas, in just four short years. So prediction after prediction has been crushed, with things happening faster than expected. I think we have gone from the overhyped regime to the underhyped regime. And this is, of course, the reason why so many people now are talking about maybe we'll
[48:06] reach broadly human-level AI in two years, or five years, depending on which tech CEO you talk to, or which professor you talk to. But it's very hard now for me to find anyone serious who thinks we're 100 years away from it. And then, of course, you have to go back and reread your Turing. He said in 1951 that once we get machines
[48:35] that are vastly smarter than us in every way, that can basically perform better than us on all cognitive tasks, the default outcome is that they're going to take control, and from then on, Earth will be run by them, not by us, just like we took over from the other apes. And Irving J. Good pointed out in the 60s that
[48:58] that last sprint, from being roughly a little bit better than us to being way better than us, can go very fast, because as soon as we can replace the human AI researchers with machines that don't have to sleep or eat, can think 100 times faster, and can copy all their knowledge to the others, every doubling in quality from then on might not take months or years, like it does now on
[49:23] the sort of human R&D timescale. It might happen every day, or on a timescale of hours, and we would ultimately get a sigmoid: we'd shift away from the slow exponential progress that technology has had ever since the dawn of civilization, where you use today's technology to build tomorrow's technology, which is some percent better, to a new exponential that goes much faster, first because now
[49:53] humans are out of the loop and don't slow things down. And then eventually it plateaus into a sigmoid when it bumps up against the laws of physics. No matter how smart you are, you're probably not going to send information faster than light; general relativity and quantum mechanics put limits on things, and so on. But my colleague Seth Lloyd here at MIT has estimated that we're still about a million million million million million times away from the limits set by the laws of physics. So
[50:22] it can get pretty crazy pretty quickly. And it's also Alan Turing again; I keep discovering more stuff to do with him. Russell dug out this fun quote from him from 1951 that I wasn't aware of before, where he also talks about what happens when we reach this threshold. And he's like: well, don't worry about this loss-of-control thing now, because it's far away, but I'll give you a test so you know when to
[50:53] pay attention, you know, the canary in the coal mine: the Turing test, as we call it now. And we already talked about how that was just passed. And this reminds me so much of what happened around 1942, when Enrico Fermi built the first self-sustaining nuclear chain reaction under the football stadium in Chicago. That was like a Turing test for nuclear weapons. When
[51:22] the physicists found out about this, they totally freaked out. Not because the reactor was at all dangerous; it was pretty small, you know, and not any more dangerous than ChatGPT is today. But because they realized: oh, that was the canary in the coal mine, that was the last big milestone we had no idea how to meet, and the rest is just engineering.
[51:48] I feel pretty similarly about AI now. We obviously don't have AI that is better than us, or as good as us, at AI development. But it's mostly engineering from here on out, I think. We can talk more about the nerdy details of how it might happen; it's not going to be large language models simply scaled up, it's going to be other things. But like in 1942,
[52:20] I'm curious, actually: if you were there visiting Fermi, how many years would you have predicted it would take from then until the first nuclear explosion? How many years? Difficult to say; maybe a decade. So then: it happened in three. It could have been a decade; it probably got sped up a bit because of the geopolitical competition during World War II.
[52:44] Similarly, it's very hard to say now: is it going to be three years, is it going to be a decade? But there's no shortage of competition fueling it again, and as opposed to the nuclear situation, there's also a lot of money in it. So I think this is the most interesting time, and the most interesting fork in the road, in human history. And if Earth is the only place in our observable universe with telescopes,
[53:14] Here's a question I have when people talk about the AIs taking over. I wonder, so
[53:40] Which AIs? Is Claude considered a competitor to OpenAI in this AI space, from the AI's perspective? Does it look at other models as: you're an enemy, because I want to self-preserve? Does Claude look at other instances (you have your own Claude chats) as all competitors?
[54:00] Is every time it generates a new token a new identity? Does it look at what's going to come next, and at what came before, as: hey, I would like you to not exist anymore, because I want to exist? What is the continuing identity that would make us say that the AIs will take over? What is the AI there? Yeah, those are really great questions. I mean, the very short answer is that people generally don't know. I'll say a few things. First of all,
[54:27] we don't know whether Claude or GPT-5 or any of these other systems are having a subjective experience or not, whether they're conscious or not, because, as we talked about for a long time, we do not have a consensus theory of what kind of information processing has a subjective experience, of what consciousness is. But we don't necessarily need machines to be conscious for them to be a threat to us. If you're chased by a heat-seeking missile, you probably don't care whether it's
[54:57] conscious in some deep philosophical sense; you just care about what it's actually doing, what its goals are. So let's just switch to talking about the behavior of systems. In physics, we typically think about behavior as determined by the past, through causality, right? Why did this phone fall down? Because gravity pulled on it, because there's a planet, Earth, down here. When you look at what people do,
[55:28] we usually instead interpret and explain what they do in terms not of the past but of the future: some goal they're trying to accomplish. If you see someone scoring a beautiful goal in a soccer match, you could be like: yeah, it's because their foot struck the ball at this angle, and therefore action equals reaction, blah, blah, blah. But more likely you're like: they wanted to win. And when we build technology, we
[55:55] usually build it with a purpose in mind. So people build heat-seeking missiles to shoot down aircraft; they have a goal. We build mousetraps to kill mice. And we train our AI systems today, our language models, for example, to make money and accomplish certain things.
[56:15] But to actually answer your question, about whether this system would have a goal to collaborate with other systems, or destroy them, or see them as competitors, you actually have to ask what goals the system has. Is it meaningful to say that this AI system as a whole has a coherent goal? And that's very unclear, honestly. You could say, at a very trivial level, that
[56:45] ChatGPT has the goal to correctly predict the next token, or word, in a lot of text, because that's exactly how we trained it. In so-called pre-training, you know, you just let it read the whole internet and predict which words are going to come next; you let it look at pictures and predict what's in them, and so on. But clearly they're able to have much more sophisticated goals than that, because it just turns out that, in order to predict,
[57:15] like, if you're just trying to predict my next word, it helps if you make a more detailed model of me as a person, of what my actual goals are and what I'm trying to accomplish, right? So these AI systems have gotten very good at simulating people. Say: this sounds like a Republican, so if this Republican is writing about immigration, he's probably going to write this.
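The pre-training objective being described, predicting the next token from what came before, can be illustrated with a toy count-based bigram model. A real system uses a neural network trained on vast amounts of text; the tiny corpus below is made up purely for illustration:

```python
from collections import Counter, defaultdict

# Made-up toy corpus. Real pre-training reads "the whole internet";
# this only illustrates the objective itself: predict the next token.
corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .").split()

# "Training": count how often each token follows each token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the most likely next token under the bigram counts."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' (follows 'the' most often in this corpus)
print(predict_next("sat"))   # 'on'
```

Even this crude counter has to model something about the "speaker" of the corpus to score well, which is the point being made: better prediction pushes the model toward modeling the writer.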
[57:42] Or: based on what they wrote previously, they're probably a Democrat, so when they write about immigration, they're more likely to say these words. The Democrat is maybe more likely to use the words undocumented immigrant, whereas the Republican might be predicted to say illegal alien. So they're very good at predicting, at modeling people's goals. But does that mean they have the goals themselves? If you're a really good actor,
[58:10] you're very good at modeling people with all sorts of different goals. But does that mean you really have those goals? This is not a well-understood situation. And when companies spend a lot of money on what they call aligning an AI, which they bill as giving it good goals, what they are actually doing in practice is just affecting its behavior. They basically
[58:39] punish it when it says things that they don't want it to say, and encourage it otherwise. And that's just like training a serial killer to never say anything that reveals his murderous desires. So I'm curious: if you do that, and the serial killer stops ever dropping any hints about wanting to knock someone off, would you feel that you've actually changed this person's goals, so that he no longer wants to kill anyone?
[59:07] Well, the difference in this case would be that the AI's goals seem to be extremely tied to its matching of whatever fitness function you give it, whereas in the serial killer's case, their true goals are one thing and their verbiage is another. It seems like that's so in the LLMs' case. Yeah. When you train an LLM, and I'm talking about the pre-training now, where they read the whole internet, basically, you're not telling it to be kind or anything like that. You're
[59:37] really training it to have the goal of predicting. And then, in the so-called fine-tuning (reinforcement learning from human feedback is the nerd phrase for it), yes, there you look at different answers that it could give, and you say: I want this one, not that one. But you're again not explaining anything to it. You know, I have a two-year-old son, right? This guy. And my idea for how to make him a good person is to help him understand
[60:08] the value of kindness. My approach to parenting is not to be mean to him, without any explanation, if he ever kicks somebody. I want him rather to internalize the goal of being a kind person, and to value the well-being of others.
[60:29] And that's very different from how we do reinforcement learning with human feedback. Frankly, I would stick my neck out and say we have no clue what goals, if any, ChatGPT has. It acts as if it has goals, yeah. But if you kick your dog every time it tries to bite someone, it's also going to act like it doesn't want to bite people. Who knows? Like with the serial killer case,
[60:54] it's quite possible that it doesn't have any particular set of unified goals at all. So this is a very important thing to study and understand, because if we're ever going to end up living with machines that are way smarter than us, then our well-being depends on them actually having goals to treat us well, not just having
[61:21] said the right buzzwords before they got the power. We have both lived with entities that were smarter than us, our parents, when we were little, and it worked out fine, because they really had the goal of being nice to us, right? So we need some deeper, very fundamental understanding of the science of goals in AI systems. Right now,
[61:49] most people who say that they have aligned an AI's goals are just bullshitting, in my opinion. They haven't; they have aligned behavior, not goals. And I would encourage any physicists and mathematicians watching this, or thinking about getting into AI, to consider this, because one of the things that's great about physicists is that
[62:19] physicists like you have a much higher bar for what they mean by understanding something than engineers typically do. Engineers will be like: well, yeah, it works, let's ship it. Whereas as a physicist, you might be like: but why exactly does it work? Can I actually go a little deeper? Can I write down an effective field theory for how the training dynamics work? Can I model this somehow? Can I
[62:48] This is the kind of thing that Hopfield did with memory; this is the sort of thing that Geoffrey Hinton has done. And we need much more of this to have an actually satisfying theory of intelligence, of what it is, and of goals. If we actually had an AI system that really has goals, and some way for us to really know what they are, then we would be in a much better situation than we are today. We haven't solved these problems.
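As an aside on the mechanics: the "I want this one, not that one" fine-tuning step described earlier (RLHF) is commonly formalized as a pairwise preference loss on a learned reward model, a Bradley-Terry style objective. A minimal numeric sketch, with made-up reward scores:

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry / logistic loss: the probability that the 'chosen' answer
    wins is sigmoid(r_chosen - r_rejected); the loss is -log of that probability."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# Made-up reward-model scores for two candidate answers to the same prompt.
# If the model already ranks the human-preferred answer higher, the loss is
# small; if it ranks it lower, the loss is large, pushing the scores apart.
good_ordering = preference_loss(2.0, 0.5)   # chosen scored higher: small loss
bad_ordering  = preference_loss(0.5, 2.0)   # chosen scored lower: large loss
print(round(good_ordering, 3), round(bad_ordering, 3))
```

Consistent with the point being made here, this objective only rewards which outputs score well, that is, behavior; nothing in it inspects or installs an internal goal.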
[63:19] A great word was used, understand. That's something I want to talk about.
[63:45] What does it mean to understand? Before we get to that, I want to linger on your son for a moment. When you're training your son, you're a human, you're giving feedback, it's reinforcement.
[64:03] Why is that not RLHF for the child? And then you'd wonder: well, what is the pre-training stage? What if the pre-training stage was all of evolution, which gave rise to his nervous system by default? And now you're coming in with your RLHF and tuning not only his behavior but his goals simultaneously. So let's start with that second part. Yeah. So first of all,
[64:30] the way RLHF actually works now is that American companies will pay one or two dollars an hour to a bunch of people in Kenya and Nigeria to sit and watch the most awful graphic images and horrible things, and keep classifying them: is this okay or not okay, and so on. It's nothing like
[64:55] the way anyone watching this podcast treats their child, where they really try to help the child understand in a deep way. Second, the actual architecture of the transformers, and of the scaffolded systems being built around them right now, is very different from our limited understanding of how a child's brain works.
[65:25] So we certainly can't just declare victory and move on from this. Just as I said before that some people have used philosophical excuses to avoid working hard on the consciousness problem, I think some people have made philosophical excuses to avoid asking this very sensible question about goals. Before we talk about understanding, can I talk a little bit more about goals?
[65:53] If we talk about goal-oriented behavior first, there's less emotional baggage associated with it. Let's define goal-oriented behavior as behavior that's more easily explained by the future than by the past: more easily explained by the effects it is going to have than by what caused it.
[66:21] You could say the cause of it moving was because I
[66:51] moved another object, my hand, that bumped into it; action equals reaction, there was this impulse given to it, et cetera, et cetera. Or you could view it as goal-oriented behavior, thinking: well, Max wanted to illustrate a point, he wanted it to move, so he did something that made it move. And that feels like the more economical description in this case. And it's interesting that even in basic physics, we actually see this.
[67:21] The first thing I want to say is that there is no right and wrong description; both of those descriptions are correct. So look at the water in this bottle here again. If you put a straw into it, it's going to look bent, because light rays bend when they cross the surface into the water. You can give two different kinds of explanations for this. The causal explanation would be: well, the light ray came in, and there are some atoms
[67:48] in the water that interact with the electromagnetic field, and blah, blah, blah, and after a very complicated calculation you can work out the angle at which it bends. But you can give a different explanation, from Fermat's principle, and say that the light ray actually took the path that was going to get it there the fastest. If this were instead a beach, and this is the ocean, and you're working a summer job as a lifeguard, and you see a swimmer who's in trouble out here: how are you going to get to the swimmer?
[68:19] You're going to go along the path that gets you there the fastest too. So you'll run a longer distance through the air on the beach, and then swim a shorter distance through the water. Clearly that's goal-oriented behavior for us, right? But for the photon? Well, both descriptions are valid. It turns out in this case that it's actually simpler to calculate the right answer if you use Fermat's principle and look at it the goal-oriented way.
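The lifeguard and the photon can both be checked numerically: scan over water-entry points, pick the one that minimizes total travel time, and verify that the optimum satisfies Snell's law, sin(θ1)/v1 = sin(θ2)/v2. All distances and speeds below are made up:

```python
import math

# Lifeguard at (0, a) on the beach; swimmer at (d, -b) in the water.
a, b, d = 30.0, 10.0, 40.0   # meters (made-up geometry)
v1, v2 = 7.0, 1.5            # m/s: run fast on sand, swim slowly in water

def travel_time(x):
    """Total time if you enter the water at point (x, 0) on the waterline."""
    run  = math.hypot(x, a) / v1
    swim = math.hypot(d - x, b) / v2
    return run + swim

# Crude minimization: scan candidate entry points along the waterline.
best_x = min((i * d / 100000 for i in range(100001)), key=travel_time)

# Fermat's principle predicts Snell's law at the optimum.
sin1 = best_x / math.hypot(best_x, a)             # sine of angle in the fast medium
sin2 = (d - best_x) / math.hypot(d - best_x, b)   # sine of angle in the slow medium
print(sin1 / v1, sin2 / v2)   # the two ratios agree at the least-time path
```

The same least-time condition that tells the lifeguard to run long and swim short reproduces the refraction angle of the light ray, which is why the goal-oriented description is the simpler calculation here.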
[68:47] Then we see this in biology too. Jeremy England, who used to be a professor here, realized that in many cases non-equilibrium thermodynamics can also sometimes be understood more simply through goal-oriented behavior. Like, suppose I put a bunch of sugar on the floor and no life form ever enters the room, including Phyllis, who keeps this place nice and tidy.
[69:17] Then it's still going to be there in a year, right? But if there are some ants, the sugar is going to be gone pretty soon. And entropy will have increased faster that way, because the sugar was eaten and there was dissipation. And Jeremy England showed that there's actually a general principle in non-equilibrium thermodynamics where systems tend to adjust themselves to be able to dissipate faster.
[69:47] For example, there are some kinds of liquids where, if you shine light at one wavelength, the stuff in them will rearrange itself so that it can absorb that wavelength better and dissipate the heat faster. And you can even think of life itself a little bit like that. Life basically can't reduce entropy in the universe as a whole, right? It can't beat the second law of thermodynamics. But it has this trick where it can keep its own entropy low
[70:17] and do interesting things, retain its complexity, and reproduce, by increasing the entropy of its environment even faster. So if I understand: the increasing of the entropy in the environment is the side effect, but the goal is to lower your own entropy? So again, there are two ways of looking at it. You can look at it all as just a bunch of atoms bouncing around and causally explain everything. But a more economical way of thinking about it is that, yeah,
[70:47] life is doing the same thing as that liquid that rearranges itself to absorb sunlight: it's a process that increases the overall entropy production in the universe. It makes the universe messier faster, so that it can accomplish things itself. And since life can make copies of itself, of course, those systems
[71:11] There are two ways you can think of physics either as the past causing the future or as deliberate choices made now to cause a certain future.
[71:42] And gradually our universe has become more and more goal-oriented, as we started getting more and more sophisticated life forms, and now us. And we're already at a very interesting transition point, where the amount of atoms in technology that we built with goals in mind is becoming comparable to the biomass. And it might be, if we end up in some sort of
[72:13] AI-powered future, where life starts spreading into the cosmos near the speed of light, et cetera, that the vast majority of all atoms are going to be engaged in goal-oriented behavior, so that our universe becomes more and more goal-oriented. So I wanted to anchor this a little bit in physics again, since you love physics, and say that I think it's very interesting for physicists to think more about the physics of goal-oriented behavior.
[72:41] And when you look at an AI system, oftentimes what plays the role of a goal is actually just a loss function or a reward function. You have a lot of options, and there's some sort of optimization trying to make the loss as small as possible. And any time you have optimization, you'd say you have a goal.
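In that spirit, even a few lines of gradient descent form a goal-oriented system in the sense defined above: the trajectory is most economically explained by the loss it is driving down (the future), not by listing each arithmetic step (the past). A minimal sketch with an arbitrary, made-up loss:

```python
def loss(w):
    # Arbitrary made-up loss with its minimum at w = 3.
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the loss above.
    return 2.0 * (w - 3.0)

w = 0.0
for _ in range(100):
    w -= 0.1 * grad(w)   # step downhill; the "goal" is simply: make the loss small

print(w)   # converges toward 3.0, the loss minimum
```

Describing this as "it is trying to reach w = 3" is shorter and more predictive than reciting the hundred update steps, which is exactly the causal-versus-goal-oriented trade-off from the refraction example.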
[73:07] Yeah, but just as it's a very lame and banal goal for a light ray to refract a little bit in water to get there as fast as possible, it's a very sophisticated goal if someone tries to raise their daughter well, or to write a beautiful movie or symphony.
[73:28] There's a whole spectrum of goals. But yeah, building a system that's trying to optimize something, I would say, is absolutely building a goal-oriented system. Yeah. I was just going to ask: are they equivalent? I see that whenever you're optimizing something, you have to have a goal that you're optimizing towards. Sure. But is it the case that any time you have a goal, there's also optimization? So any time someone uses the word goal, you can think there's going to be optimization involved, and vice versa? That's a wonderful question. Actually, Richard Feynman famously asked a related question. He said that all the laws of physics he knew about
[73:57] can actually be derived from an optimization principle, except one, and he wondered about that one exception. So I think this is an interesting open question you just threw out there. I mean, look at you: I would suspect that your actions cannot really be accurately modeled by writing down a single goal that you're just trying to maximize. I don't think that's how human beings in general operate.
[74:25] What I think is actually happening with us and goals is a little different. Our genes, according to Darwin, exhibited goal-oriented behavior, right, even though they weren't conscious, obviously. What our genes cared about was just evolutionary fitness, you know, making a lot of successful copies of themselves. That's all they cared about. So then it turned out that they would reproduce better if they also
[75:23] the expected number of fertile offspring it would have. If a rabbit had to compute that for every decision, that rabbit would just die of starvation, and those genes would go out of the gene pool. Rabbits didn't have the cognitive capacity to anchor every decision they made in one single goal. It was computationally unfeasible to always be running the actual optimization that the genes cared about. So what happened instead,
[75:47] in rabbits and humans, and in what we in computer science call agents of bounded rationality, where we have limits to how much we can compute, was that we developed all these heuristic hacks. Like, well, if you feel hungry, eat. If you feel thirsty, drink. If something tastes sweet or savory, eat more of it. Fall in love, make babies.
[76:17] These are clearly proxies, ultimately, for what the genes cared about, you know, making copies of themselves, because you're not going to have a lot of babies if you die of starvation, right? But now that you have your great brain, what it is actually doing is making decisions based on all these heuristics, which themselves don't correspond to any unique goal anymore. Like any person watching this podcast who's ever used birth control
[76:46] would have so pissed off their genes, if the genes were conscious, because this is not at all what the genes wanted, right? The genes just gave them the incentive to make love, because that would make copies of the genes. The person who used birth control was well aware of what the genes wanted and was like, screw this, I don't want to have a baby at this point in my life. So there's been a rebellion in the goal-oriented behavior of people
[77:16] against the original goal that we were made with, which has been replaced by these heuristics that we have, our emotional drives and desires, hunger and thirst, et cetera, et cetera, that no longer optimize for anything specific. And they can sometimes work out pretty badly, like the obesity epidemic and so on. And I think the machines today, the smartest AI systems, are even more extreme like that than
[77:46] the humans, and I don't think they have any coherent goal at all. I think humans, especially those who like introspection and self-reflection, are much more prone and likely to have at least a somewhat consistent strategy for their life, or goals, than ChatGPT has, which is a completely random mishmash of all sorts of things.
[78:17] Understanding. Understanding, yes. That's a big one. I've been toying with writing a paper called artificial understanding for quite a long time, as opposed to artificial consciousness and artificial intelligence. And the reason I haven't written it is because it's a really tough question. I feel there is a way of defining understanding so that it's
[78:47] quite different from both consciousness and intelligence, although also a kind of information processing, or at least a kind of information representation. I thought you were going to relate it to goals, because, as I understand it, goals are related to intelligence. Sure. But then also, the understanding of someone else's goals seems to be related to intelligence. For instance, in chess, you're constantly trying to figure out the goals of the opponent. If I can figure out your goals prior to you figuring out mine,
[79:17] or ahead of yours, then I'm more intelligent than you. Now, you would think that the ability to reliably achieve your goals is what intelligence is. But it's not just that, because you can have an extremely simple goal that you always achieve, like the photon here, which is just following some principle. But we have goals that even the person on the beach, in the swimming example, will hypothetically sometimes fail at. And yet we're more intelligent than the photon.
[79:45] So, but we're able to model the photon's goal; the photon's not able to model our goal. So I thought you were going to say, well, that modeling is related to understanding. Yeah, that I agree with, for sure. Modeling is absolutely related to understanding. Goals I view as different. I personally think of intelligence as being rather independent of goals. So I would define intelligence as
[80:17] the ability to accomplish goals. You talked about chess, right? There are tournaments where computers play chess against computers to win. Have you ever played losing chess? It's a game where you're trying to force the other person to win. Now there are computer tournaments for that too. So you can actually give a computer a goal which is the exact opposite of a normal chess computer's, and it'll pursue it. And then you can say that the one that wins the losing-chess tournament is the most intelligent at that. So this right there shows that
[80:46] Being intelligent is not the same as having a particular goal. It's how good you are at accomplishing them, right? I think a lot of people also make the mistake of saying, oh, we shouldn't worry about what happens with powerful AI because it's going to be so smart, it'll be kind automatically to us. You know, if Hitler had been smarter, do you really think the world would have been better? I would guess that it would have been worse, in fact, if he had been smarter.
[81:15] and won World War II, and so on. So there's what Nick Bostrom calls the orthogonality thesis: that intelligence is just an ability to accomplish whatever goals you give yourself, or whatever goals you have. And I think understanding is a component of intelligence, which is very linked to modeling, as you said, having maybe
[81:43] You could argue that it even is the same thing: the ability to have a really good model of something, another person, as you said, or the universe, our universe, if you're a physicist, right? And I'm not going to give you some very glib definition of understanding or artificial understanding, because I view it as an open problem. But I can tell you one anecdote of something which felt like artificial understanding
[82:13] to me. So some of my students here at MIT and I have been very interested in this, and we've done a lot of work, including this thesis here, which randomly happens to be lying here, about how you take AI systems, do something smart, and figure out how they do what they do. So one particular task we trained an AI system to do was just to learn
[82:43] group operations abstractly. As a concrete example, suppose you have 59 numbers, zero through 58, and you're adding them modulo 59. So you say one plus two is three, but 57 plus three is 60. Well, that's bigger than 59, so you subtract off 59 and say it's one. Same principle as a clock.
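That clock-style arithmetic is one line of code; a quick sketch of the task the network was trained on, pairs in, answer out:

```python
# Addition modulo 59: the numbers 0..58 arranged like a clock with 59 positions.
MOD = 59

def add_mod(a, b):
    # Wrap around whenever the sum passes 58, just like clock hands.
    return (a + b) % MOD

print(add_mod(1, 2))    # 3
print(add_mod(57, 3))   # 60 wraps around, leaving 1
```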
[83:11] Same exactly as a clock. And I'm so glad you said clock, because that's your model in your brain of modular arithmetic: you think of all the numbers sitting in a circle, and after 10 and 11 comes 12, but then comes one. So what happened was, there are 59 times 59, so about 3,500 pairs of numbers, right? We trained the system on some fraction of those to see if it could learn to get the right answer. And the way the AI worked was this:
[83:41] It learned to embed and represent each number, which was given to it just as a symbol (it didn't know whether the symbol "five" had anything to do with the number five), as a point in a high-dimensional space. So we have these 59 points in a high-dimensional space, okay? And then we trained another neural network to look at these representations: you give it this point and this point, and it has to figure out, okay, what is this plus this, mod 59? And then something shocking happened.
[84:10] You train it, train it, it sucks, it sucks, and then it starts getting better on the training data. And then at a certain point, it suddenly also starts to get better on the test data. So it starts to be able to correctly answer questions for pairs of numbers it hasn't seen yet. So it somehow had a eureka moment where it understood something about the problem. It had some understanding.
[84:34] So I suggested to my students: why don't you look at what's happening to the geometry of all these points, the 59 points that are moving around in this high-dimensional space during training. I told them to just do a very simple thing, principal component analysis, where you try to see if they mostly lie in a plane, and then you can just plot the 59 points. And it was so cool what happened. You look at this, and you see 59 points looking very random. They're moving around.
[85:03] And then at exactly the point when the eureka moment happens, when the AI becomes able to answer questions it hasn't seen before, the points line up on a circle. A beautiful circle. Interesting, except not with 12 things, but with 59 things now, because that was the problem it had, right? So to me, this felt like the AI had reached understanding about what the problem was; it had come up with a model, or as we often call it, a representation.
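The geometric check described here can be sketched in a few lines. As a stand-in for the real learned weights, the "grokked" embeddings below are constructed by hand: a circle hidden in a high-dimensional space by a random rotation (that construction is my assumption for illustration). Principal component analysis then recovers the plane the circle lies in.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 59, 128   # 59 numbers embedded in a 128-dimensional space

# Stand-in for the "grokked" embeddings: number k at angle 2*pi*k/59,
# rotated into D dimensions by a random orthogonal matrix.
angles = 2 * np.pi * np.arange(N) / N
circle = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (59, 2)
Q, _ = np.linalg.qr(rng.normal(size=(D, D)))                  # random rotation
X = circle @ Q[:2, :]                                         # (59, D) points

# Principal component analysis via SVD of the centered cloud.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Essentially all the variance lives in the first two components:
# the 59 points really do lie on a circle in a plane.
print(explained[:3].round(3))
```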
[86:03] And this understanding now enabled it to see patterns in the problem so that it could generalize to all sorts of things that it hadn't even come across before. So I'm not able to give a beautiful, succinct, fully complete answer to your question on how to define artificial understanding. But I do feel that this is an example,
[86:34] a small example, of understanding. We've since seen many others. We wrote another paper where we found that when large language models do arithmetic, they represent the numbers on a helix, like a spiral shape. And I'm like, what is that? Well, the long direction of it can be thought of as representing the numbers in analog: you're farther this way if the number is bigger. But by having them wrap around on a helix like this,
[87:03] you can use the digits, base 10, to go around, and there were actually several helixes, a hundred-helix and a ten-helix. And so I suspect that one day people will come to realize that, more broadly, when machines understand stuff, and maybe when we understand things also, it has to do with coming up with the same patterns, and then with a clever way of representing the patterns, such that
[87:31] The representation itself goes a long way towards already giving you the answers you need. I'm a very visual thinker when I do physics or when I think in general. I never feel I can understand anything unless I have some geometric image in my mind.
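A toy version of that helix picture (my own illustration, not the construction from the paper): give each integer one "analog" coordinate for overall size, plus coordinates that wrap with period 10 and period 100, so numbers sharing a last digit line up on one winding.

```python
import numpy as np

def helix_embed(n):
    # Toy helix embedding of an integer (an illustrative assumption,
    # not the representation actually found inside a language model).
    return np.array([
        n / 100.0,                    # analog "long direction": bigger is farther
        np.cos(2 * np.pi * n / 10),   # wraps once per 10: encodes the last digit
        np.sin(2 * np.pi * n / 10),
        np.cos(2 * np.pi * n / 100),  # a second, slower winding with period 100
        np.sin(2 * np.pi * n / 100),
    ])

# 17 and 27 share a last digit, so they land at the same angle on the
# period-10 winding, while the analog coordinate still separates them.
a, b = helix_embed(17), helix_embed(27)
print(np.allclose(a[1:3], b[1:3]), a[0] < b[0])   # True True
```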
[87:51] Actually, Feynman talked about this. There's a story of him and a friend who could both count to 60 in their heads, something like that, precisely. And he's saying to his friend: I can't do it if you're waving your arms in front of me, distracting me like that, but I can still do this trick if I'm listening to music. And the friend's like: I can't do it if I'm listening to music, but you can wave your arms as much as you like. And Feynman realized
[88:15] he was seeing the numbers, one, two, three; his trick was to have a mental image. Yes. And the other person had a metronome going, but the goal, or the outcome, was the same; the way they came about it was different. There's actually something in philosophy called the rule-following paradox.
[88:35] You probably know this. There are two rule-following paradoxes: one is Kripke's, and one is the one I'm about to say. So how do you know, when you're teaching a child, that they've actually learned to follow the rules of arithmetic? You can test them: 50 plus 80, et cetera, 50 times 200, and they can get it correct every single time. They can even show you their reasoning.
[88:54] But then you don't know whether they actually fail at 6,000 times 51 and numbers above that. Interesting. You don't know if they did some special convoluted method to get there. Exactly. All you can do is say: you've worked it out in this case, in this case, in this case. That's actually where we have the advantage with computers, that we can inspect how they understand, or... In principle. But when you look under the hood of something like ChatGPT, all you see is billions and billions of numbers.
[89:19] And you oftentimes have no idea what all these matrix multiplications and things like that are doing; you have no idea really what it's doing. But mechanistic interpretability, of course, is exactly the quest to move beyond that and see how it actually works. And coming back to understanding and representations, there is this idea known as the platonic representation hypothesis: that if you have
[89:45] two different machines, or, I would generalize it to people also, who both reach a deep understanding of something, there's a chance that they've come up with a similar representation. In Feynman's case, there were two different ones, but there probably aren't many; at most, there's probably one or a few that are really good. And it seems like a hard case to make, but there's a lot of evidence coming out for it now. Actually, already many years ago, there was this team where they just took,
[90:16] you know, in ChatGPT and other AI systems, all the words and word parts, which they call tokens, get represented as points in a high-dimensional space. And this team just took one model that had been trained only on English-language text and another trained only on Italian text. And they just looked at these two point clouds and found that there was a way they could actually rotate one to match up with the other as well as possible. And it gave them a somewhat decent
[90:46] English-Italian dictionary. So they have the same representation. And there are a lot of recent papers, quite recent ones even, showing that, yeah, it seems like the representations that one large language model, like ChatGPT, for example, has are in many ways similar to the representations that other ones have. We did a paper, my grad student Dawan Beck
[91:13] and I, where we looked at family trees. So we took the Kennedy family tree, royalty family trees, et cetera. And we trained an AI to correctly predict who is the son of whom, who is the uncle of whom, whether so-and-so is a sister of someone; we asked all these questions. And we also incentivized the large language model to learn it
[91:43] in as simple a way as possible, by limiting the resources it had. And then when we looked inside, we discovered something amazing. We discovered that, first of all, a whole bunch of independently trained systems had learned the same representation. So you can actually take the representation of one and literally just rotate it around, stretch it a little bit, put it into the other, and it would work there. And then when we looked at what it was: they were trees.
[92:11] We never told it anything about family trees, but it would draw, like, here is this king so-and-so, and here are the sons, and this and this. And then it could use that to know that if someone is farther down, they're a descendant, et cetera, et cetera. So that's yet another example, I think, in support of this platonic representation hypothesis, the idea that understanding often has something to do with capturing patterns in some shared representation.
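The "rotate one point cloud onto the other" step in these alignment results is, in its simplest form, an orthogonal Procrustes problem, solvable in closed form with an SVD. A self-contained sketch with synthetic clouds (the real experiments used learned embeddings; this fake data merely stands in for them):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "embedding clouds" sharing one representation:
# Y is X under an unknown orthogonal transformation.
X = rng.normal(size=(500, 64))                    # e.g. one model's embeddings
R_true, _ = np.linalg.qr(rng.normal(size=(64, 64)))
Y = X @ R_true                                    # e.g. the other model's embeddings

# Orthogonal Procrustes: the orthogonal W minimizing ||X W - Y||
# is U @ Vt, where U, S, Vt is the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# The recovered rotation maps one cloud onto the other.
print(np.allclose(X @ W, Y))   # True
```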
[92:41] So I wanted to end on the advice that you received from your parents, which was about, don't concern yourself too much with what other people think, something akin to that. It was worded differently, but.
[92:58] I also wanted to talk about what are the misconceptions of your work that other colleagues even have that you have to constantly dispel. And another topic I wanted to talk about was the mathematical universe. Oh, the easy stuff. So there are three, but we don't have time for all three. If you could think of a way to tie them all together, then feel free like a gymnast or a juggler. But otherwise then I would like to end on the advice from your parents. Okay.
[93:26] Well, the whole reason I spent so many years thinking about whether we are all part of a mathematical structure, and whether the universe actually is mathematical rather than just described by mathematics, is of course because I listened to my parents, because I got so much shit for that. And I just felt, I think I'm going to do this anyway, because to me, it makes logical sense. I'm going to put the ideas out there. And then, in terms of misconceptions about me, I think
[93:57] one misconception is that somehow I don't believe that being falsifiable is important for science. As I talked about earlier, I'm totally on board with this. And I actually argue that if you have a predictive theory about anything, gravity, consciousness, et cetera, that means that you can falsify it.
[94:28] So that's one. And another one, probably the one I get most now because I've stuck my neck out a bit about AI, about the idea that the brain is a biological computer and that we're likely to be able to build machines we could totally lose control over, is that some people like to call me a doomer, which is of course just
[94:50] something they say when they've run out of arguments, like calling someone a heretic or whatever. And so what I would like to correct about that is that I actually feel quite optimistic. I'm not a pessimistic person. I think there's way too much pessimism floating around about humanity's potential. One is, people say, oh, we can never
[95:19] make any more progress on consciousness. We totally can, if we stop telling ourselves that it's impossible and actually work hard. Some people say, oh, you know, we can never figure out more about the nature of time and so on unless we can detect gravitons or whatever. We totally can. There's so much progress we can make if we're willing to work hard. And in particular, I think the most pernicious kind of
[95:49] pessimism we suffer from now is this meme that it's inevitable that we're going to build superintelligence and become irrelevant. It is absolutely not inevitable. But you know, if you tell yourself that something is inevitable, it becomes a self-fulfilling prophecy, right? This is
[96:15] like convincing a country that's just been invaded that it's inevitable they're going to lose the war if they fight. It's the oldest psyop in town, right? So of course, if there's someone who has a company and they want to build stuff and they don't want any laws that make them accountable, they have an incentive to tell everybody: oh, it's inevitable that this is going to get built, so don't fight it, you know. Oh, it's inevitable that
[96:46] humanity is going to lose control over the planet, so just don't fight it, and hey, buy my new product. It's absolutely not inevitable. People say it's inevitable, for example, because they say people will always build technology that can give you money and power. That's just factually incorrect. You know, you're a really smart guy. If I could clone you and start selling a million copies of you on the black market, I could make a ton of money.
[97:17] We decided not to
[97:45] Gotten a lot of military power with bioweapons, you know. Then Professor Matthew Meselson at Harvard said to Richard Nixon: we don't want there to be a weapon of mass destruction that's so cheap that all our adversaries can afford it. And Nixon was like, huh, that makes sense, actually. And then Nixon used that argument on Brezhnev, and it worked. And we got a bioweapons ban. And now people associate biology mostly with
[98:14] we have
[98:45] than we thought. I mentioned that if we were living in a cave 30,000 years ago, we might've made the same mistake and thought we were doomed to always be at risk of getting eaten by tigers and starving to death. That was too pessimistic. We had the power, through our thought, to develop a wonderful society and technology where we could flourish. And it's exactly the same way now. We have enormous power.
[99:12] The way most people actually want to make money on AI is not some kind of sand god that we don't know how to control; it's tools, AI tools. People want to cure cancer. People want to make their business more efficient. Some people want to make their armies stronger, and so on. You can do all of those things with tool AI that we can control. And this is something we work on in my group, actually. And that's what people really want. And there's a lot of people who do not want superintelligence.
[99:40] Most Americans, in polls, think that's just a terrible idea, Republicans and Democrats alike. There was an open letter by evangelicals in the US to Donald Trump saying, you know, we want AI tools; we don't want some sort of uncontrollable superintelligence. Pope Leo
[100:09] has recently said that he wants AI to be a tool, not some kind of master. You have people from Bernie Sanders to Marjorie Taylor Greene who have come out on Twitter saying we don't want Skynet; we don't want to just make humans economically obsolete. So this is not inevitable at all, I think. And if we can just remember that we have so much agency in what we do and in what kind of future we're going to build, if we can be optimistic and just
[100:40] The audience member now is listening. They're a researcher, they're
[100:58] They're a young researcher, they're an old researcher. They have something they would like to achieve that's extremely unlikely, that they're criticized by their colleagues even for proposing. And it's nothing nefarious; it's something they find interesting and maybe beneficial to humanity, whatever. What is your advice? Two pieces of advice. First of all, I'd say half of all the greatest breakthroughs in science were actually
[101:26] trash-talked at the time. So just because someone says that your idea is stupid doesn't mean it is stupid. You should be willing to abandon your own ideas if you can see the flaw, and you should listen to constructive criticism against them. But if you feel you really understand the logic of your ideas better than anyone else, and they make sense to you, then
[101:53] keep pushing them forward. The second piece of advice I have is, you might worry then, like I did when I was in grad school, that if I only worked on stuff that my colleagues thought was bullshit, like thinking about the many-worlds interpretation of quantum mechanics or multiverses, then my next job was going to be at McDonald's. Then my advice is: just hedge your bets. Spend enough time
[102:22] working on things that get appreciated by your peers now, so that you can pay your bills and your career continues ahead. But carve out a significant chunk of your time to do what you're really passionate about in parallel. If people don't get it, well, don't tell them about it at the time.
[103:18] That way, you're doing science for the only good reason which is that you're passionate about it. And it's a fair deal to society to then do a little bit of chores for society to pay your bills also. That's a great way of viewing it. And it's been quite shocking for me to see actually how many of the things that I got most criticized for
[103:38] or was most afraid of talking openly about when I was a grad student, even papers that I didn't show my advisor until after he signed my PhD thesis, have later actually been picked up. And I actually feel that the things that have been most impactful were generally in that category. You know, you're never going to be the first to do something important if you're just following everybody else.
[104:06] Max, thank you. Thank you. Hi there, Curt here. If you'd like more content from Theories of Everything and the very best listening experience, then be sure to check out my Substack at curtjaimungal.org. Some of the top perks are that every week you get brand new episodes ahead of time.
[104:30] You also get bonus written content exclusively for our members. That's C-U-R-T-J-A-I-M-U-N-G-A-L dot org. You can also just search my name and the word sub stack on Google. Since I started that sub stack, it somehow already became number two in the science category. Now, sub stack for those who are unfamiliar is like a newsletter, one that's beautifully formatted. There's zero spam.
[104:58] This is the best place to follow the content of this channel that isn't anywhere else. It's not on YouTube. It's not on Patreon. It's exclusive to the Substack. It's free. There are ways for you to support me on Substack if you want, and you'll get special bonuses if you do. Several people ask me like, hey, Kurt, you've spoken to so many people in the field of theoretical physics, of philosophy, of consciousness. What are your thoughts, man?
[105:27] While I remain impartial in interviews, this Substack is a way to peer into my present deliberations on these topics. And it's the perfect way to support me directly: curtjaimungal.org, or search Curt Jaimungal Substack on Google. Oh, and I've received several messages, emails, and comments from professors and researchers saying that they recommend Theories of Everything to their students. That's fantastic.
[105:57] If you're a professor or a lecturer, or what have you, and there's a particular standout episode that students can benefit from, or your friends, please do share. And of course, a huge thank you to our advertising sponsor, The Economist. Visit economist.com slash toe to get a massive discount on their annual subscription. I subscribe to The Economist, and I think you'll love it as well.
[106:22] Toe is actually the only podcast that they currently partner with, so it's a huge honor for me. And for you, you're getting an exclusive discount. That's economist.com slash toe. And finally, you should know this podcast is on iTunes, it's on Spotify, it's on all the audio platforms. All you have to do is type in Theories of Everything and you'll find it.
[106:47] I know my last name is complicated, so maybe you don't want to type in Jaimungal, but you can type in Theories of Everything and you'll find it. Personally, I gain from rewatching lectures and podcasts. I've also read in the comments that Toe listeners gain from replaying. So how about instead you relisten on one of those platforms: iTunes, Spotify, Google Podcasts, whatever podcast catcher you use. I'm there with you. Thank you for listening.
View Full JSON Data (Word-Level Timestamps)
{
  "source": "transcribe.metaboat.io",
  "workspace_id": "AXs1igz",
  "job_seq": 856,
  "audio_duration_seconds": 6432.38,
  "completed_at": "2025-11-30T20:36:31Z",
  "segments": [
    {
      "end_time": 26.203,
      "index": 0,
      "start_time": 0.009,
      "text": " The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region."
    },
    {
      "end_time": 53.234,
      "index": 1,
      "start_time": 26.203,
      "text": " I'm particularly liking their new insider feature was just launched this month it gives you gives me a front row access to the economist internal editorial debates where senior editors argue through the news with world leaders and policy makers and twice weekly long format shows basically an extremely high quality podcast whether it's scientific innovation or shifting global politics the economist provides comprehensive coverage beyond headlines."
    },
    {
      "end_time": 68.148,
      "index": 2,
      "start_time": 53.558,
      "text": " When Michael Faraday first proposed the"
    },
    {
      "end_time": 97.261,
      "index": 3,
      "start_time": 70.333,
      "text": " People were like, what are you talking about? You're saying there is some stuff that exists, but you can't see it, you can't touch it. That sounds like total non-scientific ghosts. Most of my science colleagues still feel that talking about consciousness as science is just bullshit. But what I've noticed is when I push them a little harder about why they think it's bullshit, they split into two camps that are in complete disagreement with each other. You can have intelligence without consciousness, and you can have consciousness without intelligence."
    },
    {
      "end_time": 128.183,
      "index": 4,
      "start_time": 99.07,
      "text": " Your brain is doing something remarkable right now. It's turning these words into meaning. However, you have no idea how. Professor Max Tegmark of MIT studies this puzzle. You recognize faces instantly, yet you can't explain the unconscious processes. You dream full of consciousness while you're outwardly not doing anything. Thus, there's intelligence without consciousness and consciousness without intelligence. In other words,"
    },
    {
      "end_time": 153.319,
      "index": 5,
      "start_time": 128.507,
      "text": " They're different phenomena entirely. Tegmark proposes something radical. Consciousness is testable in a new extension to science where you become the judge of your own subjective experience. Physics absorbed electromagnetism and then atoms and then space-time, and now Tegmark says it's swallowing AI. In fact, I spoke to Nobel Prize winner Jeffrey Hinton about this specifically."
    },
    {
      "end_time": 161.561,
      "index": 6,
      "start_time": 153.695,
      "text": " Now to Max Tegmark, the same principle that explains why light bends in water may actually explain how thoughts emerge from neurons."
    },
    {
      "end_time": 188.422,
      "index": 7,
      "start_time": 161.954,
      "text": " I was honored to have been invited to the Augmentation Lab Summit, which was a weekend of events at MIT last week. This was hosted by MIT researcher Dunya Baradari. The summit featured talks on the future of biological and artificial intelligence, brain-computer interfaces, and included speakers such as Stephen Wolfram and Andreas Gomez-Emelson. My conversations with them will be released on this channel in a couple weeks, so subscribe to get notified."
    },
    {
      "end_time": 215.009,
      "index": 8,
      "start_time": 188.422,
      "text": " Or you can check the Substack, curtjaimungal.org, as I release episodes early over there. A special thank you to our advertising sponsor, The Economist. Among weekly global affairs magazines, The Economist is praised for its non-partisan, fact-driven reporting. This is something that's extremely important to me. It's something that I appreciate. I personally love their coverage of other topics that aren't just politics-based as well."
    },
    {
      "end_time": 239.189,
      "index": 9,
      "start_time": 215.009,
      "text": " For instance, The Economist has a new tab for artificial intelligence on their website, and they have a fantastic article on the recent DESI Dark Energy Survey. It surpasses, in my opinion, Scientific American's coverage. Something else I love, since I have ADHD, is that they allow you to listen to articles at 2x speed, and it's from an actual person, not a dubbed voice. The British accents are a bonus."
    },
    {
      "end_time": 267.449,
      "index": 10,
      "start_time": 239.48,
      "text": " So if you're passionate about expanding your knowledge and gaining a deeper understanding of the forces that shape our world, I highly recommend subscribing to The Economist. It's an investment in your intellectual growth, one that you won't regret. I don't regret it. As a listener of TOE, you get a special discount. Now you can enjoy The Economist and all it has to offer for less. Head over to their website, economist.com slash toe, to get started."
    },
    {
      "end_time": 296.254,
      "index": 11,
      "start_time": 267.449,
      "text": " I believe that artificial intelligence has gone from being not physics to being physics, actually."
    },
    {
      "end_time": 324.343,
      "index": 12,
      "start_time": 297.329,
      "text": " You know, one of the best ways to insult physicists is to tell them that their work isn't physics. As if somehow there's a generally agreed-on boundary between what's physics and what's not, or between what's science and what's not. But I find the most obvious lesson we get if we just look at the history of science is that the boundary has evolved. Some things that used to be considered scientific by some, like astrology, have"
    },
    {
      "end_time": 354.411,
      "index": 13,
      "start_time": 325.299,
      "text": " left; the boundary has contracted, so that's not considered science now. And then a lot of other things that were dismissed as non-scientific are now considered obviously science. For example, I sometimes teach the electromagnetism course, and I remind my students that when Michael Faraday first proposed the idea of the electromagnetic field, people were like, what are you talking about? You're saying there is some stuff that"
    },
    {
      "end_time": 384.224,
      "index": 14,
      "start_time": 354.923,
      "text": " exists, but you can't see it. You can't touch it. That sounds like ghosts, like total non-scientific bullshit, you know, and they really gave him a hard time for that. And the irony is that not only is that considered part of physics now, but you can see the electromagnetic field. It's in fact the only thing we can see, because light is an electromagnetic wave. And after that, things like black holes, things like atoms, which Max Planck famously said were not physics, you know,"
    },
    {
      "end_time": 412.449,
      "index": 15,
      "start_time": 385.043,
      "text": " and even talking about what our universe was doing 13.8 billion years ago, have become considered part of physics. And I think AI is now going the same way. I think that's part of the reason that Geoff Hinton got the Nobel Prize in physics. Because what is physics? To me, you know, physics is all about looking at some complex, interesting system doing something and trying to figure out how it works."
    },
    {
      "end_time": 441.8,
      "index": 16,
      "start_time": 413.046,
      "text": " We started on things like the solar system and atoms. But if you look at an artificial neural network that can translate French into Japanese, well, that's pretty impressive too. And there's this whole field that's started blossoming now, which I also had a lot of fun working in, called mechanistic interpretability, where you study an intelligent artificial system"
    },
    {
      "end_time": 469.889,
      "index": 17,
      "start_time": 442.381,
      "text": " to try to ask these basic questions like: how does it work? Are there some equations that describe it? Are there some basic mechanisms? And so on. And in a way, I think of traditional physics, astrophysics for example, as just mechanistic interpretability applied to the universe. And Hopfield also got the Nobel Prize last year. He was the first person to show that, hey, you know, you can actually write down"
    },
    {
      "end_time": 498.387,
      "index": 18,
      "start_time": 470.657,
      "text": " an energy landscape, so put potential energy on the vertical axis, showing how the potential energy depends on where you are, and think of each little valley as a memory. You might wonder, like, how the heck can I store information in an egg carton, say, if it has 25 valleys in it? Well, very easy: you can put a marble in one of them."
    },
    {
      "end_time": 527.142,
      "index": 19,
      "start_time": 499.104,
      "text": " That's log 25 bits right there. And how do you retrieve what the memory is? You look: where is the marble? And Hopfield had this amazing physics insight: any system whose potential energy function has many, many different minima that are pretty stable can be used to store information. But he realized that that's different"
    },
    {
      "end_time": 550.606,
      "index": 20,
      "start_time": 527.773,
      "text": " from the way computer scientists used to store information. It used to be the whole von Neumann paradigm, you know: with a computer, you're like, tell me what's in this variable, tell me what number is sitting in this particular address, you go look here, right? That's how traditional computers store things. But if I say to you, you know, twinkle, twinkle... Uh-huh."
    },
    {
      "end_time": 579.258,
      "index": 21,
      "start_time": 551.015,
      "text": " Little Star. Yeah, that's a different kind of memory retrieval, right? I didn't tell you, hey, give me the information that's stored in those neurons over there. I gave you a partial piece of the content and you filled it in. This is called associative memory. And this is also how Google works: you can type in something that you don't quite remember and it'll give you the right thing. And Hopfield showed, coming back to the egg carton, all right, that if you"
    },
    {
      "end_time": 608.763,
      "index": 22,
      "start_time": 579.906,
      "text": " If you don't remember something exactly: suppose you want to memorize the digits of pi, and you have an energy function whose actual minimum is at exactly 3.14159, et cetera. Yeah. But you don't remember exactly what pi is; you just said three-something. Yes. So you put a marble at three and you let it roll. As long as it's in the basin of attraction whose minimum is at pi, it's going to go there. So to me,"
    },
    {
      "end_time": 638.507,
      "index": 23,
      "start_time": 609.275,
      "text": " this is an example of how something that felt like it had nothing to do with physics, like memory, can be beautifully understood with tools from physics: you have an energy landscape, you have different minima, you have dynamics, the Hopfield network. So I think, yeah, it's totally fair that Hinton and Hopfield got an award in physics. And it's because we're beginning to understand that we can expand again the domain of what is physics to include"
    },
    {
      "end_time": 655.862,
      "index": 24,
      "start_time": 639.206,
      "text": " What about consciousness?"
    },
    {
      "end_time": 685.179,
      "index": 25,
      "start_time": 656.323,
      "text": " Consciousness seems to be at a similar stage, where many scientists, or many physicists, tend to be dismissive of the way that consciousness is studied or talked about. Yeah. Well, firstly, what's the definition of consciousness? You all can't agree: there's phenomenal consciousness, there's access consciousness, etc. And even then, what is it? And then the critique would be, well, you're asking me for a third-person definition of something that's a first-person phenomenon. Yeah. Okay, so how do you view consciousness in this? Yeah, I love these questions."
    },
    {
      "end_time": 715.23,
      "index": 26,
      "start_time": 685.811,
      "text": " I feel that consciousness is actually probably the final frontier, the final thing which is going to end up ultimately in the domain of physics; right now it's on the controversial borderline. So let's go back to Galileo, say, right? If he dropped a grape and a hazelnut, he could predict exactly when they were going to hit the ground and"
    },
    {
      "end_time": 742.961,
      "index": 27,
      "start_time": 715.981,
      "text": " that how far they fall grows like a parabola as a function of time. But he had no clue why the grape was green and the hazelnut was brown. Then came Maxwell's equations, and we started to understand that light and colors are also physics, and we got the equations for them. Galileo also couldn't figure out why the grape was soft and why the hazelnut was hard. Then we got quantum mechanics, and we realized that all these properties of stuff, you know,"
    },
    {
      "end_time": 771.647,
      "index": 28,
      "start_time": 743.524,
      "text": " could be calculated from the Schrödinger equation and also brought into physics. And then intelligence seemed like such a holdout. We've already talked about how, if you start breaking it into components like memory (and we can talk more about computation and learning), that can also very much be understood as a physical process. So what about consciousness? Yeah."
    },
    {
      "end_time": 801.988,
      "index": 29,
      "start_time": 772.858,
      "text": " I'd say most of my science colleagues still feel that talking about consciousness as science is just bullshit. But what I've noticed is when I push them a little harder about why they think it's bullshit, they split into two camps that are in complete disagreement with each other. Half of them, roughly, will say, oh, consciousness is bullshit because it's just the same thing as intelligence. Okay. And the other half will say consciousness is bullshit because"
    },
    {
      "end_time": 831.203,
      "index": 30,
      "start_time": 803.046,
      "text": " Obviously machines can't be conscious, which is obviously totally inconsistent with saying it's the same thing as intelligence. What's really powered the AI revolution in recent decades is just moving away from philosophical quibbles about what does intelligence really mean in a deep philosophical sense and instead making a list of tasks and saying, can my machine, can it do this task? Can it do this task?"
    },
    {
      "end_time": 856.903,
      "index": 31,
      "start_time": 831.766,
      "text": " And that's quantitative: you can train your systems to get better at the tasks. And I think you'd have a very hard time if you went to the NeurIPS conference and argued that machines can never be intelligent, right? So if you then say intelligence is the same as consciousness, you're predicting that machines are conscious if they're smart. But we know"
    },
    {
      "end_time": 888.063,
      "index": 32,
      "start_time": 859.036,
      "text": " that consciousness is not the same as intelligence, just by some very simple introspection. We can do it right now. So for example, what does it say here? I guess we shouldn't do products. No, I don't mind. Let's do this one. What does it say? Towards More Interpretable AI with Sparse Autoencoders, by Joshua Engels. Great. This is the PhD thesis, or master's thesis, of my student Josh Engels. So how did you do that computation? 30 years ago, if I gave you just"
    },
    {
      "end_time": 918.848,
      "index": 33,
      "start_time": 889.343,
      "text": " a bunch"
    },
    {
      "end_time": 947.142,
      "index": 34,
      "start_time": 919.548,
      "text": " No, same here. And for me, it's pretty obvious. It feels like my consciousness, some part of my information processing that is conscious, kind of got an email from the face recognition module saying, you know, face recognition complete, the answer is so-and-so. So in other words, you do something when you recognize people that's quite intelligent, but not conscious. Right."
    },
    {
      "end_time": 971.698,
      "index": 35,
      "start_time": 947.602,
      "text": " And I would say actually a large fraction of what your brain does, you're just not conscious of; you find out about the results of it often after the fact. So you can have intelligence without consciousness. That's the first point I'm making. And second, you can have consciousness without intelligence, without accomplishing any tasks. Like, did you have any dreams last night?"
    },
    {
      "end_time": 1002.312,
      "index": 36,
      "start_time": 972.534,
      "text": " None that I remember. But have you ever had a dream? Yes. So there was consciousness there. But if someone was just looking at you lying there in the bed, you probably weren't accomplishing anything, right? So I think it's obvious that consciousness is not the same as intelligence. You can have consciousness without intelligence and vice versa. So those who say that consciousness equals intelligence are being sloppy. Now, what is it then?"
    },
    {
      "end_time": 1032.483,
      "index": 37,
      "start_time": 1004.684,
      "text": " My guess is that consciousness is a particular type of information processing, and that intelligence is also a particular type of information processing, but that there's a Venn diagram like this: some things are intelligent and conscious, some are intelligent but not conscious, and some are conscious but not very intelligent. And so the question then becomes to try to understand: can we write down some equations or formulate some principles?"
    },
    {
      "end_time": 1062.91,
      "index": 38,
      "start_time": 1033.131,
      "text": " So what kind of information processing is intelligent, and what kind of information processing is conscious? And I think my guess is that for something to be conscious, there are at least some necessary conditions that it probably has to satisfy. There has to be information, a lot of information there, something to be the content of consciousness, right?"
    },
    {
      "end_time": 1088.114,
      "index": 39,
      "start_time": 1065.606,
      "text": " There's an Italian scientist, Giulio Tononi, who has put a lot of creative thought into this and triggered enormous controversy also, who argues that the one necessary condition for consciousness is what he calls integration."
    },
    {
      "end_time": 1117.227,
      "index": 40,
      "start_time": 1090.913,
      "text": " basically that if it's going to subjectively feel like a unified consciousness, like, you know, like your consciousness, it cannot consist of two information processing systems that don't communicate with each other. Because if consciousness is the way information feels when it's being processed, right, then if this is the information that's conscious and it's just completely disconnected from this information, there's no way that this information can be part of what it's conscious of."
    },
    {
      "end_time": 1145.486,
      "index": 41,
      "start_time": 1117.637,
      "text": " Just a quick question: ultimately, what's the difference between information processing, computing, and communication? So communication, I would say, is just a very simple special case of information processing. You have some information here and you make a copy of it, and the copy ends up over there. It's a volleyball you send over. Yeah, a volleyball you send over, copying this to that. Computation can be much more complex than that."
    },
    {
      "end_time": 1177.466,
      "index": 42,
      "start_time": 1147.995,
      "text": " And then the third word, alongside information processing and communication, was computation. And yeah, computation and information processing, I would say, are more or less the same thing. Then you can try to classify different kinds of information processing depending on how complex they are, and mathematicians have been doing an amazing job there, even though they still don't know whether P equals NP and so on. But just coming back to consciousness again,"
    },
    {
      "end_time": 1201.903,
      "index": 43,
      "start_time": 1177.807,
      "text": " I think a mistake many people make when they think about their own consciousness (like, can you see the beautiful sunlight coming in here from the window, and some colors and so on?) is to have this model that somehow you're actually conscious of that stuff out there, that the content of your consciousness somehow is the outside world."
    },
    {
      "end_time": 1230.538,
      "index": 44,
      "start_time": 1203.951,
      "text": " I think that's clearly wrong, because you can experience those things when your eyes are closed, when you're dreaming, right? So I think the conscious experience is intrinsic to the information processing itself. What you are actually conscious of when you look at me isn't me; it's the world model that you have in your head right now of me. And you can be conscious of that whether you're awake or whether you're asleep. And then of course you're using your senses"
    },
    {
      "end_time": 1255.23,
      "index": 45,
      "start_time": 1231.425,
      "text": " The information processing has to be such that there's no way of"
    },
    {
      "end_time": 1319.224,
      "index": 47,
      "start_time": 1290.06,
      "text": " that it isn't actually just secretly two separate parts that don't communicate at all and cannot communicate with each other, because then they would basically be like two parallel universes that were just unaware of each other, and you wouldn't be able to have this feeling that it's all unified. I actually think that's a very reasonable criterion, and he has a particular formula he calls phi for measuring how integrated things are, and things that have a high phi are more conscious."
    },
    {
      "end_time": 1349.019,
      "index": 48,
      "start_time": 1320.06,
      "text": " I wasn't completely sure whether that was the only formula that had that property, so I wrote a paper once to classify all possible formulas that have that property, and it turned out there were fewer than 100 of them. So I think it's actually quite interesting to test whether any of the other ones fit the experiments better than his. But just to close and finish up on why people say consciousness is bullshit: I think ultimately the main reason is either they feel"
    },
    {
      "end_time": 1378.865,
      "index": 49,
      "start_time": 1349.531,
      "text": " it sounds too philosophical, or they say, oh, you can never test consciousness theories, because how can you test if I'm conscious or not when all you can observe is my behavior, right? But here there is a misunderstanding. Actually, I'm much more optimistic. Can I tell you about an experiment I envisioned where you can test a consciousness theory? No, of course. So suppose you have someone like Giulio Tononi, or anyone else who has really stuck their"
    },
    {
      "end_time": 1406.698,
      "index": 50,
      "start_time": 1379.155,
      "text": " neck out and written down a formula for what kind of information processing is conscious. And suppose we put you in one of our MEG machines here at MIT, or some future scanner that can read out a massive amount of your neural data in real time. And you connect that to a computer that uses that theory to make predictions about what you're conscious of."
    },
    {
      "end_time": 1440.111,
      "index": 51,
      "start_time": 1410.196,
      "text": " And then, now it says: I predict that you're consciously aware of a water bottle. And you're like, yeah, that's true. Yes. And then it says: okay, now I see information processing there about regulating your pulse, and I predict that you're consciously aware of your heartbeat. And you're like, no, I'm not. You've now ruled out that theory, actually, right? It made a prediction about your subjective experience."
    },
    {
      "end_time": 1467.534,
      "index": 52,
      "start_time": 1441.305,
      "text": " And you yourself can falsify that, right? So first of all, it is possible for you to rule out the theory to your satisfaction. That might not convince me, because you told me that you weren't aware of your heartbeat; maybe I think you're lying or whatever. But then you can go, okay, hey Max, why don't you try this experiment? And I put on my MEG helmet and I"
    },
    {
      "end_time": 1518.012,
      "index": 54,
      "start_time": 1490.247,
      "text": " If this theory just, again and again and again, keeps predicting exactly what you're conscious of and never anything that you're not conscious of, you would gradually start getting kind of impressed, I think. And if you moreover read about what goes into this theory and you say, wow, this is a beautiful formula, and it kind of philosophically makes sense that these are the criteria that consciousness should have, and so on, you might be tempted to try to extrapolate its validity and wonder if it works also on"
    },
    {
      "end_time": 1546.698,
      "index": 55,
      "start_time": 1518.643,
      "text": " other biological animals, maybe even on computers and so on. And, you know, this is not altogether different from how we've dealt with, for example, general relativity, right? So you might say it's bullshit to talk about what happens inside black holes, because you can't go there and check and then come back and tell your friends or publish your findings in Physical Review Letters, right?"
    },
    {
      "end_time": 1573.097,
      "index": 56,
      "start_time": 1547.346,
      "text": " But what we're actually testing is not some philosophical ideas about black holes. We're testing a mathematical theory, general relativity. I have it there in a frame by my window, right? And so what's happened is we tested it on the perihelion shift of Mercury, how it's not really going in an ellipse, but the ellipse is precessing a bit. We tested it and it worked. We tested it on how gravity bends light."
    },
    {
      "end_time": 1600.725,
      "index": 57,
      "start_time": 1573.609,
      "text": " And then we extrapolated it to all sorts of stuff way beyond what Einstein had thought about, like what would happen when our universe was a billion times smaller in volume and what would happen when black holes get really close to each other and give off gravitational waves and it just passed all these tests also. So that gave us a lot of confidence in the theory and therefore also in the"
    },
    {
      "end_time": 1628.131,
      "index": 58,
      "start_time": 1601.288,
      "text": " predictions that we haven't been able to test yet, even the predictions that we can never test, like what happens inside black holes. So this is typical for science, really. If someone says, you know, I like Einstein, I like what he predicted for gravity in our solar system, but I'm going to opt out of the black hole prediction, you can't do that."
    },
    {
      "end_time": 1657.79,
      "index": 59,
      "start_time": 1628.78,
      "text": " It's not like, oh, I want coffee, but decaf. You know, if you're going to buy the theory, you need to buy all its predictions, not just the ones you like. And if you don't like the predictions, well, come up with an alternative to general relativity, write down the math, and then make sure that it correctly predicts all the things we can test. And good luck, because some of the smartest humans on the planet have spent a hundred years trying and failed, right? So if we have a theory of consciousness,"
    },
    {
      "end_time": 1688.097,
      "index": 60,
      "start_time": 1658.268,
      "text": " in the same vein, which correctly predicts the subjective experience of whoever puts on this device and tests its predictions for what they are conscious of, and it keeps working, I think people will start taking pretty seriously also what it predicts about coma patients who seem to be unresponsive, whether they have locked-in syndrome or are in a coma, and even what it predicts about machine consciousness, whether machines are suffering or not. And people who don't like that will then"
    },
    {
      "end_time": 1708.507,
      "index": 61,
      "start_time": 1688.558,
      "text": " be incentivized to work harder to come up with an alternative theory that at least predicts subjective experience. So that was my... I'll get off my soapbox now, but this is why I strongly disagree with people who say that consciousness is all bullshit. I think they're actually more saying that because it's an excuse to be lazy and not work on it."
    },
    {
      "end_time": 1729.974,
      "index": 62,
      "start_time": 1709.087,
      "text": " Hi everyone, hope you're enjoying today's episode. If you're hungry for deeper dives into physics, AI, consciousness, philosophy, along with my personal reflections, you'll find it all on my sub stack. Subscribers get first access to new episodes, new posts as well, behind the scenes insights, and the chance to be a part of a thriving community of like-minded pilgrimers."
    },
    {
      "end_time": 1750.333,
      "index": 63,
      "start_time": 1729.974,
      "text": " By joining you'll directly be supporting my work and helping keep these conversations at the cutting edge. So click the link on screen here, hit subscribe and let's keep pushing the boundaries of knowledge together. Thank you and enjoy the show. Just so you know, if you're listening, it's C-U-R-T-J-A-I-M-U-N-G-A-L dot org."
    },
    {
      "end_time": 1765.401,
      "index": 64,
      "start_time": 1751.493,
      "text": " So in the experiment where you put some probes on your brain in order to discern which neurons are firing, or what have you, that would be a neural correlate. I'm sure you've already thought of this. Okay, so you're correlating some neural pattern with"
    },
    {
      "end_time": 1793.183,
      "index": 65,
      "start_time": 1766.186,
      "text": " the bottle, and you're saying, hey, okay, I think you're experiencing a bottle. But then technically, are we actually testing consciousness, or just testing a further correlation? It tends to be that when I ask you the question, are you experiencing a bottle, and we see this neural pattern, that that's correlated with you saying yes. So it's still another correlation. Is that enough? Well, but when the experiment is being run on you, you're not trying to convince me."
    },
    {
      "end_time": 1822.739,
      "index": 66,
      "start_time": 1794.002,
      "text": " It's just you talking to the computer. You are just doing experiments, basically, on the theory. There's no one else involved, no other human, and you're just trying to convince yourself. So you sit there and you have all sorts of thoughts. You might just decide to, you know, close your eyes and think about your favorite place in Toronto to see if it can predict that you're conscious of that, right? And then you might also do something else, which you know you can do unconsciously, and see if you can trick it into predicting that"
    },
    {
      "end_time": 1845.333,
      "index": 67,
      "start_time": 1823.677,
      "text": " you're conscious of that information that you know is being processed in your brain. Ultimately, you're just trying to convince yourself of whether the theory is making correct or incorrect predictions. I guess what I'm asking is, in this case, I can see being convinced that it can read my mind, in the sense that it can roughly determine what I'm seeing, but I don't see how that would tell this other system that I'm conscious of that."
    },
    {
      "end_time": 1874.872,
      "index": 68,
      "start_time": 1845.333,
      "text": " In the same way, just because we can see what files are on a computer doesn't mean that those files are conscious, or that when we do some cut and paste, we can see some process happen. Well, you're not trying to convince the computer. The computer is coded up to just make the predictions from this putative theory of consciousness, this mathematical theory, right? And then your job is just to see: are those the wrong equations or the right equations? And the way you ascertain that is to see whether it correctly or incorrectly predicts what you're actually subjectively aware of."
    },
    {
      "end_time": 1903.848,
      "index": 69,
      "start_time": 1875.845,
      "text": " We should be clear that we're defining consciousness here simply as subjective experience, right? Which is very different from talking about what information is in your brain. Like, you have all sorts of memories in your brain right now that you probably haven't thought about for months, and that's not your subjective experience right now. And even, again, when I open my eyes and I see a person, you know, there's a computation happening and"
    },
    {
      "end_time": 1926.203,
      "index": 70,
      "start_time": 1904.599,
      "text": " to"
    },
    {
      "end_time": 1940.265,
      "index": 71,
      "start_time": 1926.664,
      "text": " And suppose there's some small subset of this which is highlighted in yellow, and you have a computer that can predict exactly what's highlighted in yellow. It's pretty impressive if it gets it right. And in the same way, if it can accurately predict exactly which"
    },
    {
      "end_time": 1961.715,
      "index": 72,
      "start_time": 1941.254,
      "text": " Okay, so let me see if I understand this. So in the global workspace theory, you have like a small desktop and pages are being sent to the desktop, but only a small amount at any given time. I know that there's another metaphor of a spotlight, but whatever, let's just think of that. So this desktop is quite small relative to the globe."
    },
    {
      "end_time": 1990.845,
      "index": 73,
      "start_time": 1962.056,
      "text": " Yeah. Okay. Relative to the city, relative to the globe for sure. So our brain is akin to this globe because there's so many different connections. There's so many different words that there could possibly be. Yeah. If there's some theory that can say, Hey, this little thumb tack is what you're experiencing. And you're like, actually that is correct. Yeah. Okay. Exactly. So the global workspace theory, great stuff, you know, but it is not sufficiently predictive to do this experiment. It doesn't have a lot of equations in it. Mostly words, right?"
    },
    {
      "end_time": 2020.742,
      "index": 74,
      "start_time": 1991.544,
      "text": " And so we don't have no one has actually done this experiment yet. I would love to do it for someone to do it where you have a theory that's sufficiently physical mathematical that it can actually stick its neck out and risk being proven false all the time. I guess what I was saying, just to wrap this up. Yes, that is extremely impressive. I don't even know if that can technologically be done. Hope maybe it can be approximately done. But regardless, we can for sure falsify theories."
    },
    {
      "end_time": 2050.555,
      "index": 75,
      "start_time": 2020.998,
      "text": " but it still wouldn't suggest to an outside observer that this little square patch here or whoever is experiencing the square patch is indeed experiencing the square patch. But you already know that you're experiencing the square patch. I know. Yes, that's the key thing. You know it. I don't know it. I don't know that you know, but you can convince yourself that this theory is false or that this theory is increasingly promising, right?"
    },
    {
      "end_time": 2080.196,
      "index": 76,
      "start_time": 2050.708,
      "text": " that's the catch and I just want to stress you know people sometimes say to me that oh you can never prove for sure that something is conscious we can never prove anything with physics a little dark secret but we can never prove anything we can't prove that general relativity is correct you know probably it's wrong probably it's just a really good approximation all we ever do in physics is we disprove theories but if"
    },
    {
      "end_time": 2110.742,
      "index": 77,
      "start_time": 2081.203,
      "text": " As in the case of general relativity, some of the smartest people on earth have spent over a century trying to disprove something and they still have failed. We start to take it pretty seriously and start to say, well, you know, might be wrong, but we're going to take it pretty seriously as a really good approximation, at least for what's actually going on. You know, that's how it works in physics and that's the best we can ever get with consciousness. Also something which people have, which is making strong predictions and which we've"
    },
    {
      "end_time": 2141.084,
      "index": 78,
      "start_time": 2111.186,
      "text": " You said something interesting. Look, we can tell or you can tell you, you can tell this theory of consciousness is correct for you or you can convince yourself. This is super interesting because earlier in the conversation, we're talking about physics, what was considered physics and what is no longer considered physics. So what is this amorphous boundary? Maybe it's not amorphous, but it changes. Yeah, that's absolutely changes. Do you think that's also the case for science?"
    },
    {
      "end_time": 2168.08,
      "index": 79,
      "start_time": 2141.442,
      "text": " Do you think science to incorporate a scientific view of consciousness is going to have to change what it considers to be science? I think I'm a big fan of Karl Popper. I think I personally consider things scientific if we can falsify them. If there's at least if there's no if no one can even think of a way in which we could even conceptually in the future with arbitrary funding, you know,"
    },
    {
      "end_time": 2197.585,
      "index": 80,
      "start_time": 2169.394,
      "text": " and technology tested, I would say it's not science. I think Popper didn't say if it can be falsified, then it's science. It's more that if it can't be falsified, it's not science. I'll agree with that. I'll agree with that also for sure. But what I'm saying is consciousness is can cannot a theory of consciousness is willing to actually make concrete predictions about what you personally subject to the experience cannot be dismissed like that because you can falsify it if it predicts just one thing that"
    },
    {
      "end_time": 2224.497,
      "index": 81,
      "start_time": 2198.046,
      "text": " That's wrong, right? Then you falsify it. And I would encourage people to stop wasting time on philosophical excuses for being lazy and try to build these experiments. That's why I think we should. And you know, we saw this happen with intelligence. People had so many quibbles about, I don't know, I have to define intelligence and whatever. And in the meantime, you got a bunch of people who started rolling up their sleeves and saying, well, can you build a machine"
    },
    {
      "end_time": 2250.828,
      "index": 82,
      "start_time": 2224.821,
      "text": " that beats the best human in chess. Can you build a machine that translates Chinese into French? Can you build a machine that figures out how to fold proteins, you know? And amazingly, all of those things have now been done, right? And what that's effectively done is just made people redefine intelligence as the ability to accomplish tasks, ability to accomplish goals."
    },
    {
      "end_time": 2280.742,
      "index": 83,
      "start_time": 2251.988,
      "text": " That's what people in machine learning will say if you ask them, what do they mean by intelligence? And the ability to accomplish goals is different from having a subjective experience. The first I call intelligence, the second I call consciousness. And it's just getting a little philosophical here. I mean, it's quite striking throughout the history of physics, how oftentimes we vastly delayed physics breakthroughs just by some curmogens"
    },
    {
      "end_time": 2311.118,
      "index": 84,
      "start_time": 2281.766,
      "text": " convincingly arguing that it's impossible to make this scientific. For example, extra solar planets. People were so stuck with this idea that all the other solar systems had to be like our solar system with a star and then some small rocky planets near it and some gas giants farther out. So they're like, yeah, no point in even looking around other stars because we can't see Earth-like planets. Eventually, some folks decided just look anyway."
    },
    {
      "end_time": 2335.316,
      "index": 85,
      "start_time": 2311.459,
      "text": " With a Doppler method to see if stars were going in little circles because something was orbiting around and they found these hot Jupiters like the gigantic Jupiter sized thing going closer to the star than Mercury is going to our Sun Wow, but they could have done that ten years earlier, you know If they hadn't been indeterminated by these convergence who said don't look so what my attitude is is"
    },
    {
      "end_time": 2366.715,
      "index": 86,
      "start_time": 2337.671,
      "text": " Don't listen to the curmudgeons. If you have an idea for an experiment you can build, that's just going to cut into some new part of parameter space and experimentally test the kind of questions that have never been asked. Just do it. More than half the time when people have done that, there was a revolution. When Carl Jansky wanted to build the first x-ray telescope and look at x-rays from the sky, for example, people said, what a loser."
    },
    {
      "end_time": 2396.476,
      "index": 87,
      "start_time": 2367.483,
      "text": " There are no x-rays coming from the sky. Do you think there's like dentists out there? I don't know. And then you found that there is a massive amount of x-rays even coming from the sun, you know, or people decided to look at them. Basically, whenever we open up another wavelength with telescopes, we've seen new phenomena we didn't even know existed. Or when Leuvenhoek built the first microscope, do you think he expected to find these animals that were"
    },
    {
      "end_time": 2424.974,
      "index": 88,
      "start_time": 2397.193,
      "text": " It's so tiny you couldn't see them with the naked eye. Of course not, right? But he basically went orders of magnitude in a new direction in an experimental parameter space. And there was a whole new world there, right? So this is what I think we should do with consciousness. And with intelligence, this is exactly what has happened. If we segue a little bit into that topic, I think,"
    },
    {
      "end_time": 2453.899,
      "index": 89,
      "start_time": 2428.712,
      "text": " I think there's too much pessimism in science. If you should go back, I don't know, 30,000 years ago, you know, if you and I were living in a cave, sitting and having this conversation, you know, we would probably have figured, well, you know, look at those little white dots in the sky here. They're pretty nifty."
    },
    {
      "end_time": 2477.517,
      "index": 90,
      "start_time": 2454.684,
      "text": " We wouldn't have any Netflix to distract us with and we would know that some of our friends had come up with some cool myths for what these these dots in the sky were and oh look that one maybe looks like an archer or whatever but yeah since you're a guy who likes to think hard you'd probably have a little bit of a melancholy tinge that you know we're never really gonna know what they are"
    },
    {
      "end_time": 2505.93,
      "index": 91,
      "start_time": 2478.319,
      "text": " You can't jump up and reach them, you can climb the highest tree and they're just as far away. And we're kind of stuck here on our planet, you know, and maybe we'll starve to death, you know, and 50,000 years from now, if there's still people, you know, the life for them is going to be more or less like it is for ours. And boy, oh boy, would we have been too pessimistic, right? We hadn't realized we had totally that we were the masters of underestimation, like we, we massively underestimated"
    },
    {
      "end_time": 2535.964,
      "index": 92,
      "start_time": 2507.312,
      "text": " not only the size of what existed, that everything we knew of was just a small part of this giant spinning ball, Earth, which was in turn just a small part of the grander structure of the solar system, part of a galaxy, part of a galaxy cluster, part of a super cluster, part of a universe, maybe part of a hierarchy or parallel universes. But more importantly, we also underestimated the power of our own minds to figure stuff out."
    },
    {
      "end_time": 2567.039,
      "index": 93,
      "start_time": 2538.166,
      "text": " We didn't even have to fly to the stars to figure out what they were. We just really kind of had to let our minds fly. And, you know, Aristarchus of Samos over 2000 years ago was looking at a lunar eclipse and some of his friends were probably like, oh, this moon turned red. It probably means we're all going to die or the gods, an omen from the gods. And he's like, hmm, the moon is there."
    },
    {
      "end_time": 2596.8,
      "index": 94,
      "start_time": 2567.654,
      "text": " Sun just set over there. So this is obviously Earth's shadow being cast on the moon. And actually the edge of the shadow of Earth is not straight, it's curved. Wait, so we're living on a curved thing? We're maybe living on a ball? Huh. And wait a minute, you know, the curvature of Earth's shadow there is clearly showing that Earth is much bigger than the moon is. And he went down and calculated how much bigger Earth is."
    },
    {
      "end_time": 2600.555,
      "index": 95,
      "start_time": 2597.329,
      "text": " the moon and then he's like okay well i know that um"
    },
    {
      "end_time": 2663.814,
      "index": 97,
      "start_time": 2634.65,
      "text": " I know Earth is about 40,000 kilometers because I read that Eratosthenes had figured that out. And I know the moon, I can cover it with my pinkies. It's like half a degree in size. I can figure out what the actual physical size is of the moon. It was ideas like this that started breaking this curse of overdone pessimism. We started to believe in ourselves a little bit more."
    },
    {
      "end_time": 2694.514,
      "index": 98,
      "start_time": 2664.77,
      "text": " And, um, and here we are now with the internet, with artificial intelligence, with all these little things you can eat to prevent you from dying of pneumonia. My grandfather, Sven, you know, died of a stupid kidney infection could have been treated with, with penicillin. It's amazing, you know, how, how much excessive pessimism there's been. And I think we still have a lot of it, unfortunately."
    },
    {
      "end_time": 2717.193,
      "index": 99,
      "start_time": 2695.333,
      "text": " That's why you want to come back to this thing that if someone has, yeah, there's no better way to fail at something than convince yourself that it's impossible, you know, and look at AI. I would say whenever with science we have,"
    },
    {
      "end_time": 2744.701,
      "index": 100,
      "start_time": 2718.626,
      "text": " started to understand how something in nature works that we previously thought of as sort of magic, like what causes the winds or the seasons, etc. What causes things to move? We were able to historically transform that into technology that often did this better and could serve us more. So we figured out how we could build machines that were stronger than us and faster than us. We got the Industrial Revolution. We"
    },
    {
      "end_time": 2764.053,
      "index": 101,
      "start_time": 2745.52,
      "text": " Now we're figuring out that thinking is also a physical process, information processing, computation and Alan Turing was of course one of the real pioneers in this field and he"
    },
    {
      "end_time": 2799.582,
      "index": 102,
      "start_time": 2771.271,
      "text": " He clearly realized that the brain is a biological computer. He didn't know how the brain worked. We still don't, exactly. But it was very clear to him that we could probably build something that was much more intelligent and maybe more conscious, too, once we figured out more details. I would say from the 50s when the term AI was coined, not far from here, Dartmouth, the field has been chronically overhyped."
    },
    {
      "end_time": 2829.377,
      "index": 103,
      "start_time": 2800.708,
      "text": " Most progress has gone way slower than people predicted, even than McCarthy and Minsky predicted for that Dartmouth workshop and so on. But then something changed about four years ago and it went from being overhyped to being underhyped because I remember very vividly like seven years ago, six years ago, most of my colleagues here at MIT and most of my AI colleagues in general were pretty convinced that we were decades away from passing the Turing test, decades away from building machines that could"
    },
    {
      "end_time": 2858.456,
      "index": 104,
      "start_time": 2830.64,
      "text": " Master language and knowledge at human level. And they were all wrong. They were way too pessimistic because it already happened. You can quibble about whether it happened with chat GPT-4 or when it was exactly, but it's pretty clear it's in the past now. So if people could be so wrong about that, maybe they were wrong about more. And sure enough, since then, AI has gone from"
    },
    {
      "end_time": 2885.981,
      "index": 105,
      "start_time": 2859.309,
      "text": " being kind of high school level, kind of college level to many areas being PhD level, the professor level to even far beyond that than in many areas in just four short years. So prediction after prediction has been crushed now where things have happened faster. So I think we have gone from the overhyped regime to the underhyped regime. And this is, of course, the reason why so many people now are talking about maybe we'll"
    },
    {
      "end_time": 2914.019,
      "index": 106,
      "start_time": 2886.476,
      "text": " reach broadly human level and cop two years or five years, depending on which tech CEO you talk to or which professor you talk to. But it's very hard now for me to find anyone serious who thinks we're 100 years away from it. And then, of course, you have to think about go back and reread your Turing. So he said in 1951 that once we get machines"
    },
    {
      "end_time": 2937.517,
      "index": 107,
      "start_time": 2915.384,
      "text": " They're vastly smarter than us in every way. They can basically perform better than us on all cognitive tasks. The default outcome is that they're going to take control, and from there on, Earth will be run by them, not by us, just like we took over from other apes. And Irving J. Good pointed out in the 60s that"
    },
    {
      "end_time": 2963.439,
      "index": 108,
      "start_time": 2938.66,
      "text": " That last sprint from being roughly a little bit better than us to being way better than us can go very fast because as soon as we can replace the human AI researchers by machines who don't have to sleep and eat and can think 100 times faster and can copy all their knowledge to the others, every doubling in quality from then on might not take months or years like it is now."
    },
    {
      "end_time": 2992.534,
      "index": 109,
      "start_time": 2963.797,
      "text": " the sort of human R&D time scale. It might happen every day or on the time scale hours or something and we would get the sigmoid ultimately where we shift away from the sort of slow exponential progress that technology has had ever since the dawn of civilization where you use today's technology to build tomorrow's technology which is so many percent better and see to an exponential which goes much faster first because it's now"
    },
    {
      "end_time": 3022.176,
      "index": 110,
      "start_time": 2993.37,
      "text": " Humans are out of the loop. Don't slow things down. And then eventually it plateaus into a sigmoid when it bumps up against the laws of physics. No matter how smart you are, you're probably not going to send information faster than light and general relativity and quantum mechanics put limits and so on. But my colleague Seth Lloyd here at MIT has estimated that we're still about the million million million million million times away from the limits from the laws of physics. So"
    },
    {
      "end_time": 3052.346,
      "index": 111,
      "start_time": 3022.773,
      "text": " can get pretty crazy pretty quickly. And it's also Alan, I keep discovering more stuff to do with Russell dug out this fun quote from him in 1951 that I wasn't aware of before, where he also talks about, you know, how what happens when we reach this threshold and he he's like, well, don't worry about this control loss thing now. And if it's because far away, but I'll give you a test so you know what the"
    },
    {
      "end_time": 3081.937,
      "index": 112,
      "start_time": 3053.012,
      "text": " pay attention you know the canary in the coal mine the Turing test as we call it now and then we already talked about how that was just passed and this reminds me so much of what happened around 1942 when Enrico Fermi built the first self-sustaining nuclear chain reaction under the football stadium in Chicago that was like a Turing test for nuclear weapons when"
    },
    {
      "end_time": 3106.408,
      "index": 113,
      "start_time": 3082.722,
      "text": " The physicists who found out about this, then they totally freaked out. Not because the reactor was at all dangerous. It was pretty small, you know, and it wasn't any more dangerous than CHATCPT is, but today, but, but because they realized, oh, that was the canary in the coal mine, you know, that was the last big milestone. We had no idea how to meet and the rest is just engineering."
    },
    {
      "end_time": 3138.933,
      "index": 114,
      "start_time": 3108.968,
      "text": " I feel pretty similarly about AI now. I think that we obviously don't have AI that are better than us or as good as us at AI development. But it's mostly engineering, I think, from here on out. We can talk more about the nerdy details of how it might happen. It's not going to be large language models scaled. It's going to be other things. But like in 1942,"
    },
    {
      "end_time": 3163.456,
      "index": 115,
      "start_time": 3140.794,
      "text": " I'm curious, actually, if you were there visiting Fermi, how many years would you predict it would have taken from then until the first nuclear explosion happened? How many years? Difficult to say, maybe a decade. So then it happened in three, could have been a decade, probably got sped up a bit because of the geopolitical competition that was happening during World War II."
    },
    {
      "end_time": 3193.882,
      "index": 116,
      "start_time": 3164.633,
      "text": " Similarly, it's very hard to say now, is it going to be three years? It's going to be a decade, but there's no shortage of competition fueling it again. And as opposed to the nuclear situation, there's also a lot of money in it. So it's, I think this is the most interesting time and interesting fork in the road in human history. And if Earth is the only place in our observable universe with telescopes,"
    },
    {
      "end_time": 3219.394,
      "index": 117,
      "start_time": 3194.718,
      "text": " Here's a question I have when people talk about the AIs taking over. I wonder, so"
    },
    {
      "end_time": 3239.77,
      "index": 118,
      "start_time": 3220.725,
      "text": " Which AIs? So is Claude considered a competitor to OpenAI in this AI space from the AI's perspective? Does it look at other models as you're an enemy because I want to self-preserve? Does Claude look at other instances so you have your own Claude chats? Are they all competitors?"
    },
    {
      "end_time": 3265.299,
      "index": 119,
      "start_time": 3240.179,
      "text": " is every time it generates a new token, is that a new identity? So it looks at what's going to come next and before as, hey, I would like you to not exist anymore because I want to exist. Like what is the continuing identity that would make us say that the AIs will take over? Like what is the AI there? Yeah, those are really great questions. I mean, the very short answer is that people generally don't know. I'll say a few things. First of all,"
    },
    {
      "end_time": 3296.937,
      "index": 120,
      "start_time": 3267.295,
      "text": " We don't know whether Claude or GPT-5 or any of these other systems are having a subjective experience or not, whether they're conscious or not. Because as we talked about for a long time, we do not have a consensus theory of what kind of information processing has a subjective experience, what consciousness is. We don't need necessarily for machines to be conscious for them to be a threat to us. If you're chased by a heat-taking missile, you probably don't care whether it's"
    },
    {
      "end_time": 3327.295,
      "index": 121,
      "start_time": 3297.346,
      "text": " Conscious in some deep philosophical sense, you just care about what it's actually doing, what its goals are. And so let's shift, let's just switch to talking about just behavior of systems. You know, in physics, we typically think about the behavior as determined by the past through causality, right? Why did this phone fall down? Because gravity pulled on it, because there's an earth planet down here. When you look at what people do,"
    },
    {
      "end_time": 3355.179,
      "index": 122,
      "start_time": 3328.097,
      "text": " We usually instead interpret, explain why they do in terms of not the past, but the future that some goal they're trying to accomplish. If you see someone scoring a beautiful goal in a soccer match, you could be like, yeah, it's because their foot struck the ball in this angle and therefore action equals reaction, blah, blah, blah. But more likely you're like, they wanted to win. And we are, when we build technology, we,"
    },
    {
      "end_time": 3374.206,
      "index": 123,
      "start_time": 3355.469,
      "text": " Usually build it with a purpose in it. So people build heat seeking missiles to shoot down aircraft. They have a goal. We build mousetraps to kill mice. And we train our AI systems today, our language models, for example, to make money and accomplish certain things."
    },
    {
      "end_time": 3404.514,
      "index": 124,
      "start_time": 3375.572,
      "text": " But to actually answer your question about what this system, if we would have a goal to collaborate with other systems or destroy them or see them as competitors, you actually have to ask what does the system actually have? Is it meaningful to say that this AI system as a whole has a coherent goal? And that's very unclear, honestly. You could say at a very trivial level that"
    },
    {
      "end_time": 3434.838,
      "index": 125,
      "start_time": 3405.913,
      "text": " ChatGPT has the goal to correctly predict the next token or word in a lot of text, because that's exactly that's how we trained it. So called pre training, you know, you just let it read all the internet and look and predict which words are going to come next. You let it look at pictures and predict what's what's more in them and so on. But clearly, they're able to have much more sophisticated goals than that. Because it just turns out that in order to predict"
    },
    {
      "end_time": 3461.937,
      "index": 126,
      "start_time": 3435.026,
      "text": " Like if you're just trying to predict my next word, it helps if you make a more detailed model about me as a person and what my actual goals are and what I'm trying to accomplish, right? So these AI systems have gotten very good at simulating people. Say, this sounds like a Republican. And so if this Republican is writing about immigration, he's probably going to write this."
    },
    {
      "end_time": 3489.735,
      "index": 127,
      "start_time": 3462.278,
      "text": " Based on what they wrote previously, they're probably a Democrat. So when they write about immigration, they're more likely to say these words. The Democrat is more likely to maybe use the word undocumented immigrant, whereas the Republican might predict they're going to say illegal, alien. So they're very good at predicting, modeling people's goals. But does that mean they have the goals themselves? If you're a really good actor,"
    },
    {
      "end_time": 3519.224,
      "index": 128,
      "start_time": 3490.52,
      "text": " You're very good at modeling people with all sorts of different goals. But does that mean you have the goals really? This is not a well understood situation. And when companies spend a lot of money on what they call aligning an AI, which they bill as giving it good goals, what they are actually in practice doing is just affecting its behavior. So they basically"
    },
    {
      "end_time": 3547.346,
      "index": 129,
      "start_time": 3519.48,
      "text": " Punish it when it says things that they don't want it to say and encourage it. And that's just like if you train a serial killer, you know, to not say anything that reveals his murderous desires. So I'm curious, if you do that, and then the serial killer stops ever dropping any hints about wanting to knock someone off, you know, would you feel that you've actually changed this person's goals to not want to kill anyone?"
    },
    {
      "end_time": 3577.363,
      "index": 130,
      "start_time": 3547.654,
      "text": " Well, the difference in this case would be that the AI's goals seem to be extremely tied to its matching of whatever fitness function you give it. Whereas in the serial killer case, their true goals are something else and their verbiage is something else. Yeah. It seems like in the LLM, in LLM's cases. Yeah. When you train in LLM, I'm talking about the pre-training now where they read the whole internet, basically. You're not telling it to be kind or anything like that."
    },
    {
      "end_time": 3606.903,
      "index": 131,
      "start_time": 3577.995,
      "text": " really training it to be have the goal of predicting. And then in the so-called fine tuning reinforcement learning for human feedback is the nerd phrase for it. Yes, there you look at different answers that it could give and you say, I want this one, not that one. But you're again not explaining to it. You know, I like I have a two and a half year old, I have a two year old son, right? This guy. And you know, my idea for how to make him a good person is to help him understand"
    },
    {
      "end_time": 3627.892,
      "index": 132,
      "start_time": 3608.336,
      "text": " The value of kindness, my approach to parenting is not to be mean to him if he ever kicks somebody without any explanation. I want him rather to internalize the goal of being a kind person and that he should value the well-being of others."
    },
    {
      "end_time": 3653.08,
      "index": 133,
      "start_time": 3629.172,
      "text": " and that's very different from from how we do reinforcement learning with human feedback and it's frankly not at all we I would stick my neck out and say we have no clue really what if any goals chat GPT has it acts as if it has goals yeah you know but if you kick your dog every time it tries to bite someone it's gonna also act like it doesn't want to bite people but like who knows with a serial killer chase"
    },
    {
      "end_time": 3681.015,
      "index": 134,
      "start_time": 3654.872,
      "text": " It's quite possible that that doesn't have any particular set of unified goals at all. So this is a very important thing to study and understand because if we're ever going to end up living with machines that are way smarter than us, right, then our well-being depends on them having actual goals now to be the treat as well, not just having"
    },
    {
      "end_time": 3708.66,
      "index": 135,
      "start_time": 3681.698,
      "text": " said the right buzzwords before they got the power. So we both have lived with entities that were smaller than us, our parents, when we were little, and it worked out fine because they really had goals to be nice to us, right? So we need some deeper, very fundamental understanding of the science of goals in AI systems. Right now,"
    },
    {
      "end_time": 3738.609,
      "index": 136,
      "start_time": 3709.343,
      "text": " Most people who say that they've aligned goals to AIs are just bullshitting in my opinion. They haven't. They've aligned behavior on goals. And I think once I would encourage any physics physicists and mathematicians watching this or thinking about getting into AI to think. I would encourage them to think to consider this because physicists have one of the things that's great about physicists is"
    },
    {
      "end_time": 3768.319,
      "index": 137,
      "start_time": 3739.104,
      "text": " Physicists like you have a much higher bar on what they mean by understanding something than engineers typically do. Engineers will be like, well, yeah, it works. Let's ship it. Whereas as a physicist, you might be like, but why exactly does it work? And can I actually go a little deeper? Is there some, can I write down an effective field theory for how the training dynamics works? Can I model this somehow? Can I,"
    },
    {
      "end_time": 3798.37,
      "index": 138,
      "start_time": 3768.985,
      "text": " This is what the kind of thing that Hopfield did with memory. This is the sort of thing that Jeff Hinton has done. And we need much more of this to have an actual satisfying theory theory of intelligence, what it is, and of goals. If we actually have a system, an AI system that actually has goals, and there's some way for us to actually really know what they are, then we would be in a much better situation than we are today. We haven't solved the problems."
    },
    {
      "end_time": 3824.872,
      "index": 139,
      "start_time": 3799.616,
      "text": " A great word was used, understand. That's something I want to talk about."
    },
    {
      "end_time": 3843.439,
      "index": 140,
      "start_time": 3825.572,
      "text": " What does it mean to understand? Before we get to that, I want to linger on your grandson for a moment. When you're training your son, why is that not you're a human, you're giving feedback, it's reinforcement,"
    },
    {
      "end_time": 3868.729,
      "index": 141,
      "start_time": 3843.592,
      "text": " Why is that not RLHF for the child? And then, well, you'd wonder, well, what is the pre-training stage? What if the pre-training stage was all of evolution, which would have just given rise to his nervous system by default. And now you're coming in with your RLHF and tuning not only his behavior, but his goals simultaneously. So let's start with that second part. Yeah. So first of all,"
    },
    {
      "end_time": 3894.855,
      "index": 142,
      "start_time": 3870.162,
      "text": " The way RLHF actually works now is that American companies will pay one or two dollars an hour to a bunch of people in Kenya and Nigeria to sit and watch the most awful graphical images and horrible things and and then they keep clicking on which of the different what you keep classifying them and is this something that should be is okay or not okay and so on. It's nothing"
    },
    {
      "end_time": 3924.582,
      "index": 143,
      "start_time": 3895.299,
      "text": " the way anyone watching this podcast treats their child, where they really try to help the child understand in a deep way. Second, the actual architecture of the transformers and more scaffolding systems being built right now are very different from our limited understanding of how a child's brain works."
    },
    {
      "end_time": 3953.541,
      "index": 144,
      "start_time": 3925.009,
      "text": " We're certainly not. We can't just declare victory and move on from this. Just like I said before, the people I think have used some philosophical excuses to avoid working hard on the consciousness problem. I think some people have made the philosophical excuses to avoid just asking this very sensible question of goals. Before we talk about understanding, can I talk a little bit more about goals?"
    },
    {
      "end_time": 3979.872,
      "index": 145,
      "start_time": 3953.916,
      "text": " If you talk about goal-oriented behavior first, there's less emotional baggage associated with that. Let's define goal-oriented behavior as behavior that's more easily explained by the future than by the past, more easily explained by the effects it is going to have than by what caused it."
    },
    {
      "end_time": 4009.599,
      "index": 146,
      "start_time": 3981.203,
      "text": " Extra value meals are back. That means 10 tender juicy McNuggets and medium fries and a drink are just $8. Only at McDonald's. For limited time only. Prices and participation may vary. Prices may be higher in Hawaii, Alaska and California and for delivery. You could say the cause of it moving was because I"
    },
    {
      "end_time": 4040.572,
      "index": 147,
      "start_time": 4011.237,
      "text": " Another object, my hand bumped into it, action equals reaction. Now there was this impulse given to it, et cetera, et cetera. Or you could say that, but the goal oriented view, you could view it as goal oriented behavior thinking, well, Max wanted to illustrate a point. He wanted it to move. So he did something that made it move. Right. And that feels like the more economic description in this case. And it's interesting, even in basic physics, we actually see stuff which can sometimes be more so."
    },
    {
      "end_time": 4067.858,
      "index": 148,
      "start_time": 4041.391,
      "text": " First thing I want to say is there is no right and wrong description. Both of those descriptions are correct, right? So look at the water in this bottle here again. If you put a straw into it, you know, it's going to look bent because light rays bend when they cross the surface into the water. You can give two different kinds of explanations for this. The causal explanation will be like, well, the light ray came there. There were, but now there are some atoms."
    },
    {
      "end_time": 4098.643,
      "index": 149,
      "start_time": 4068.66,
      "text": " in the water and the interactive electromagnetic field and blah, blah, blah, blah. And after a very complicated calculation, you can calculate the angle that goes that way. But you can give a different explanation from us principle and say that actually the light ray took the path that was going to get it there the fastest. If this were instead a beach and this is an ocean and you're working a summer job as a lifeguard and you want to risk and you see a swimmer who's in trouble here, how are you going to go to the swimmer?"
    },
    {
      "end_time": 4127.244,
      "index": 150,
      "start_time": 4099.36,
      "text": " You're going to again go in the path that gets you there the fastest. So you'll run longer distance through the air on the beach and then a shorter distance through the water, you know, then clearly that's goal oriented behavior, right? For us, for the photon though? Well, both descriptions are valid. It turns out in this case that it's actually simpler to calculate the right answer if you do from as principle and look at the goal oriented way. Uh, and now,"
    },
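The lifeguard analogy above can be made concrete with a minimal Python sketch (the positions and speeds are made-up numbers, not from the conversation): minimizing total travel time over the choice of water-entry point reproduces the bent, Snell's-law-style path rather than the straight line.

```python
# Lifeguard on the sand at (0, 3); swimmer in the water at (10, -5).
# The shoreline is the x-axis: running (fast) above it, swimming (slow) below.
# Goal-oriented description: pick the entry point x that minimizes total time.

import math

V_RUN, V_SWIM = 7.0, 2.0          # made-up speeds: running is much faster
LIFEGUARD = (0.0, 3.0)            # start point on the sand
SWIMMER = (10.0, -5.0)            # target point in the water

def travel_time(x):
    """Total time if the lifeguard enters the water at shoreline point (x, 0)."""
    run = math.hypot(x - LIFEGUARD[0], LIFEGUARD[1]) / V_RUN
    swim = math.hypot(SWIMMER[0] - x, SWIMMER[1]) / V_SWIM
    return run + swim

# Brute-force minimization over candidate entry points along the shoreline.
xs = [i / 1000 for i in range(0, 10001)]   # 0.000 .. 10.000
best_x = min(xs, key=travel_time)

print(best_x, travel_time(best_x))
```

The straight line from lifeguard to swimmer crosses the shoreline at x = 3.75, but the time-optimal entry point comes out much closer to the swimmer, because it pays to run the extra distance on the sand rather than swim it.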
    {
      "end_time": 4157.159,
      "index": 151,
      "start_time": 4127.978,
      "text": " We then we see in biology. So Jeremy England, who used to be a professor here, realized that in many cases, non-equilibrium thermodynamics can also be understood sometimes more simply through gold-granted behavior. Like if you, if you know, suppose I put a bunch of sugar on the floor and no life form ever enters the room, come back in the end, including the, the Phyllis who keeps this nice and tidy here."
    },
    {
      "end_time": 4187.056,
      "index": 152,
      "start_time": 4157.79,
      "text": " Then it's still going to be there in a year, right? But if there are some ants, sugar is going to be gone pretty soon. And entropy will have increased faster that way because the sugar was eaten and there was dissipation. And Jeremy England showed actually that there's a general principle in non-equilibrium thermodynamics where systems tend to adjust themselves to always be able to dissipate faster."
    },
    {
      "end_time": 4216.971,
      "index": 153,
      "start_time": 4187.534,
      "text": " To be able to, if you have a thing, if you have some, there's some, some kinds of liquids where you can put some stuff where if you shine light at one wavelength, it will rearrange itself so that it can absorb that wavelength better, dissipate the heat faster. And you can even think of life itself a little bit of that. Life basically can't reduce entropy, right? In the universe as a whole, it can't beat the second law of thermodynamics, but it has this trick where it can keep its own entropy low."
    },
    {
      "end_time": 4246.869,
      "index": 154,
      "start_time": 4217.705,
      "text": " and do interesting things, retain its complexity and reproduce by increasing the entropy of its environment even faster. And so if I understand the increasing of the entropy in the environment is the side effect, but the goal is to lower your own entropy. So again, you can have, there are two ways of looking at it. You can look at it all as just a bunch of atoms bouncing around and causally explain everything. But a more economic way of thinking about it is that, yeah,"
    },
    {
      "end_time": 4270.435,
      "index": 155,
      "start_time": 4247.278,
      "text": " Life is doing the same thing that that liquid that rearranges itself to absorb sunlight. It's a process that just increases the overall entropy production in the universe. It makes the universe messier faster so that it can accomplish things itself. And since life can make copies of itself, of course, those systems"
    },
    {
      "end_time": 4301.084,
      "index": 156,
      "start_time": 4271.237,
      "text": " There are two ways you can think of physics either as the past causing the future or as deliberate choices made now to cause a certain future."
    },
    {
      "end_time": 4330.128,
      "index": 157,
      "start_time": 4302.039,
      "text": " And gradually our universe has become more and more goal oriented, as we started getting more and more sophisticated life forms, now us. And we already at the point, a very interesting transition point now where the amount of atoms that are in technology that we built with goals in mind is becoming comparable to the biomass already. And it might be if we end up in some sort of"
    },
    {
      "end_time": 4360.623,
      "index": 158,
      "start_time": 4333.541,
      "text": " AI powered future where life starts spreading into the cosmos near the speed of light, et cetera, that the vast majority of all the atoms are going to be engaged in goal oriented behavior so that our universe is becoming more and more goal oriented. So I wanted to just anchor it a little bit in physics again, since you love physics, right? And say that I think it's very interesting for physicists to think more about the physics of goal oriented behavior."
    },
    {
      "end_time": 4385.384,
      "index": 159,
      "start_time": 4361.749,
      "text": " And when you look at an AI system, oftentimes what plays the role of a goal is actually just a loss function or a reward function. You know, some, you have a lot of options and there's some sort of optimization trying to make the loss as small as possible. And anytime you have optimization, you'd say you have a goal."
    },
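As a toy illustration of the loss-function-as-goal point, here is a minimal sketch (arbitrary numbers, not any real AI training setup): each gradient step is causally just an update rule, but the trajectory is most economically described by the goal it is heading for, the minimum of the loss.

```python
# Toy "goal as loss function": gradient descent on L(w) = (w - 3)^2.
# Causal description: each step subtracts lr * gradient.
# Goal-oriented description: w is heading for the minimum at w = 3.

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0          # arbitrary starting point
lr = 0.1         # learning rate
for _ in range(200):
    w -= lr * grad(w)

print(w, loss(w))   # w ends up very close to 3, the minimizer
```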
    {
      "end_time": 4406.118,
      "index": 160,
      "start_time": 4387.722,
      "text": " Yeah, but just like it's a very lame and banal goal for a lightweight refract a little bit in water to get there as fast as possible. And that's a very sophisticated goal if someone tries to raise their daughter well or write a beautiful movie or symphony. It's, it's"
    },
    {
      "end_time": 4436.92,
      "index": 161,
      "start_time": 4408.336,
      "text": " There's a whole spectrum of goals, but yeah, optimize, building a system that's trying to optimize something, I would say absolutely is a goal-oriented system. Yeah. I was just going to inquire about, are they equivalent? So I see that whenever you're optimizing something, you have to have a goal that you're optimizing towards. Sure. But is it the case that anytime you have a goal, there's also optimization? So anytime someone uses the word goal, you can think there's going to be optimization involved and vice versa. That's a wonderful question. Actually, Richard Feynman famously asked us a question. He said that all laws of physics he knows about,"
    },
    {
      "end_time": 4464.565,
      "index": 162,
      "start_time": 4437.278,
      "text": " can actually be derived from an optimization principle except one. Anyone wondered if there was one? So I think this is an interesting open question. You just threw it out there. I mean, look, look at you. I would suspect that you cannot, your actions cannot be really accurately modeled by writing down a single goal that you're just trying to maximize. I don't think that's how human beings in general operate."
    },
    {
      "end_time": 4491.152,
      "index": 163,
      "start_time": 4465.265,
      "text": " What I think actually is happening with us and goals is a little different. No, our genes, according to Darwin, the goal oriented behavior they exhibited, right, even though they weren't conscious, obviously, our genes, what was just evolutionary fitness, you know, make a lot of successful copies of themselves. That's all they all they cared about. So then it turned out that they would reproduce better if they also"
    },
    {
      "end_time": 4523.234,
      "index": 164,
      "start_time": 4493.695,
      "text": " the the"
    },
    {
      "end_time": 4547.073,
      "index": 165,
      "start_time": 4523.609,
      "text": " How many times the expected number of fertile offspring I have, that rabbit would just die of starvation and those genes would go out of the gene pool. They didn't have the cognitive capacity to always anchor every decision it made in one single goal. It was computationally unfeasible to always be running this actual optimization that the genes cared about. So what happened instead,"
    },
    {
      "end_time": 4575.794,
      "index": 166,
      "start_time": 4547.756,
      "text": " In rabbits and humans and what we in computer science call agents of bounded rationality, where we have limits to how much we can compute, was we developed all these heuristic hacks. Like, well, if you feel hungry, eat. If you feel thirsty, drink. If there's something that tastes sweet or savory, eat more of it. Fall in love, make babies."
    },
    {
      "end_time": 4605.333,
      "index": 167,
      "start_time": 4577.995,
      "text": " These are clearly proxies ultimately for what the genes cared about, you know, making copies of themselves because you're not going to have a lot of babies if you died of starvation, right? But, but, but now that you have your great brain, what it is actually doing is, is, is guy making these decisions based on all these heuristics that themselves don't lead to correspond to any unique goal anymore. Like any person watching this podcast, right? Who's ever used birth control."
    },
    {
      "end_time": 4635.691,
      "index": 168,
      "start_time": 4606.169,
      "text": " would have so pissed off their genes if the genes were conscious, because this is not at all what the genes wanted, right? The genes just gave them the incentive to make love because the genes that would make copies of the genes. The person who used birth control was well aware of what the genes wanted and was like, screw this. I don't want to have a baby at this point in my life. So there's been a rebellion in the goal oriented behavior of people."
    },
    {
      "end_time": 4666.408,
      "index": 169,
      "start_time": 4636.51,
      "text": " against the original goal that we were made with and replaced by these heuristics that we have, our emotional drives and desires and hunger and thirst and et cetera, et cetera, that are not any more optimizing for anything specific. And they can sometimes go work out pretty badly, like the obesity epidemic and so on. And I think the machines today, the smartest AI systems are even more extreme like that."
    },
    {
      "end_time": 4695.52,
      "index": 170,
      "start_time": 4666.63,
      "text": " the humans and I don't think they have anything. I think they're much more, I think humans still tend to end up, especially the more for those who are, who like introspection and self-reflection are much more prone and likely to have some, at least somewhat consistent strategy for their life or goals than, than Chachi PT has, which is a completely random mishmash of all sorts of things."
    },
    {
      "end_time": 4726.493,
      "index": 171,
      "start_time": 4697.278,
      "text": " Understanding. Understanding, yes. That's a big one. I've been toying with writing a paper called artificial understanding for quite a long time, as opposed to artificial consciousness and artificial intelligence. And the reason I haven't written it is because it's a really tough question. I feel there is a way of defining understanding so that it's"
    },
    {
      "end_time": 4756.783,
      "index": 172,
      "start_time": 4727.312,
      "text": " Quite different from both consciousness and intelligence, although also a kind of information processing, or at least kind of information representation. I thought you're going to relate it to goals because if I understand that goals are related to intelligence, sure. But then also the understanding of someone else's goals seems to be related to intelligence. So for instance, in chess, you're constantly trying to figure out the goals of the opponent. If I can figure out your goals prior to you figuring out mine,"
    },
    {
      "end_time": 4785.145,
      "index": 173,
      "start_time": 4757.005,
      "text": " or ahead of yours, then I'm more intelligent than you. Now, you would think that the ability to reliably achieve your goals is what is intelligence. But it's not just that, because you can have an extremely simple goal that you always achieve, like the photon here. It's just following some principle. But we have goals. Even the person on the beach with the swimming, hypothetically, even that will fail at. But we're more intelligent than the photon."
    },
    {
      "end_time": 4813.831,
      "index": 174,
      "start_time": 4785.606,
      "text": " So, but we're able to model the photons goal, the photons not able to model our goal. So I thought you were going to say, well, that modeling is related to understanding. Yeah, that I agree with for sure. Modeling is absolutely related to understanding. Goals I view as different. I personally think of intelligence as being rather independent of goals. So I would define intelligence as"
    },
    {
      "end_time": 4845.486,
      "index": 175,
      "start_time": 4817.073,
      "text": " ability to accomplish goals. You know, you talked about chess, right? There are tournaments where computers play chess against computers to win. There's, have you ever played losing chess? It's a game where you're trying to force the other person to win the game. No computer tournaments for that too. So you can actually give a computer the goal, which is the exact opposite of a normal chess computer and it'll happen. And then you can say that the, the one, the one, the losing chess tournament is the most intelligent again. So this right there shows that,"
    },
    {
      "end_time": 4874.326,
      "index": 176,
      "start_time": 4846.34,
      "text": " Being intelligent is not the same as having a particular goal. It's how good you are at accomplishing them, right? I think a lot of people also make the mistake of saying, oh, we shouldn't worry about what happens with powerful AI because it's going to be so smart, it'll be kind automatically to us. You know, if Hitler had been smarter, do you really think the world would have been better? I would guess that it would have been worse, in fact, if he had been smarter."
    },
    {
      "end_time": 4903.336,
      "index": 177,
      "start_time": 4875.299,
      "text": " and one world war two and and so on. So there's Nick Bostrom causes the orthogonality thesis that intelligence is just an ability to accomplish whatever goals you give yourself or whatever goals you have. And I think understanding is a component of intelligence, which is very linked to modeling, as you said, having maybe"
    },
    {
      "end_time": 4932.927,
      "index": 178,
      "start_time": 4903.677,
      "text": " You could argue that it even is the same, the ability to have a really good model of something, another person, as you said, or the universe, our universe, if you're a physicist, right? And I'm not going to give you some very glib definition of what of understanding or artificial understanding. Because I view it as an open problem, but I can tell you one anecdote of something which felt like artificial understanding."
    },
    {
      "end_time": 4962.261,
      "index": 179,
      "start_time": 4933.609,
      "text": " to me. So me and some of my students here at MIT, we were very interested in and so we've done a lot of work, like including this this thesis here for the randomly happens to be lying here, you know, is about how you take AI systems and you do something smart and you figure out how they do it. So one particular task we trained an AI system to do is just to implement to learn"
    },
    {
      "end_time": 4991.186,
      "index": 180,
      "start_time": 4963.285,
      "text": " group operations abstractly. So the concrete example, suppose you want to, you have 59 and the numbers zero through 58. Okay. And you're adding them modulo 59. So you say like one plus two is three, but 57 plus three is 60. Well, that's bigger than 59. So you subtract off 59, you say it's one. Same principles as clock."
    },
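The clock arithmetic being described is just addition modulo 59; a minimal sketch:

```python
# Addition modulo 59: the numbers 0..58 sit on a circle, like hours on a clock.
P = 59

def add_mod(a, b, p=P):
    """Add two numbers and wrap around past p, clock-style."""
    return (a + b) % p

print(add_mod(1, 2))    # 3
print(add_mod(57, 3))   # 57 + 3 = 60 wraps around to 1
print(P * P)            # 3481 possible input pairs (the "about 3,600")
```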
    {
      "end_time": 5019.957,
      "index": 181,
      "start_time": 4991.288,
      "text": " Same exactly as a clock. And I'm so glad you said clock, because that's your model in your brain about modular arithmetic. You think of all the numbers sitting in a circle, it goes after 10 and 11 comes 12, but then comes one. So what happened was we, there are 59 times 59, so about 3,600 pairs of numbers, right? We trained on the system on some fraction of those, see if we could learn to get the right answer. And the way the AI worked was it"
    },
    {
      "end_time": 5050.043,
      "index": 182,
      "start_time": 5021.049,
      "text": " It learned to embed and represent each number, which was given to it just as a symbol. It didn't know five, whether it had anything to do with the number five, as a point in a high dimensional space. So we have these 59 points in a high dimensional space, okay? And then we trained another neural network to look at these representations. So you give it this point and this point, and it has to figure out, okay, what is this plus this mod 59? And then something shocking happened."
    },
    {
      "end_time": 5073.933,
      "index": 183,
      "start_time": 5050.35,
      "text": " You train it, train it, it sucks, it sucks, and then it starts getting better on the training data. And then at a certain point, it suddenly also starts to get better on the test data. So it starts to be able to correctly answer questions for pairs of numbers it hasn't seen yet. So it somehow had a eureka moment where it understood something about the problem. It had some understanding."
    },
    {
      "end_time": 5103.063,
      "index": 184,
      "start_time": 5074.633,
      "text": " So I suggested to my students, why don't you look at what's happening to the geometry of all these points that are moving around, the 59 points that are moving around in this high dimensional space during the training. I told them to just do a very simple thing, principle component analysis, where you try to see if they mostly lie in a plane and then you can just plot the 59 points. And it was so cool what happened. You look at this, you see 59 points that's looking very random. They're moving around."
    },
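A sketch of the kind of check described here, with synthetic data standing in for real trained embeddings: 59 points secretly lying on a circle inside a high-dimensional space, and PCA (via SVD) recovering the plane of the circle. The dimension and random seed are arbitrary choices for illustration.

```python
import numpy as np

# 59 "embeddings" that secretly lie on a circle inside a 128-dim space.
P, D = 59, 128
rng = np.random.default_rng(0)
angles = 2 * np.pi * np.arange(P) / P
circle_2d = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (59, 2)

# Rotate the plane of the circle into random directions of the big space.
basis, _ = np.linalg.qr(rng.normal(size=(D, 2)))                 # orthonormal (128, 2)
embeddings = circle_2d @ basis.T                                  # (59, 128)

# PCA via SVD of the centered data; plot the top-2 projection to "look".
X = embeddings - embeddings.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
projected = X @ Vt[:2].T                                          # (59, 2)

explained = (S[:2] ** 2).sum() / (S ** 2).sum()   # variance in the top plane
radii = np.linalg.norm(projected, axis=1)          # distance from the center

print(explained)                  # ~1.0: two components capture everything
print(radii.min(), radii.max())   # all points at the same radius: a circle
```

With real mid-training embeddings the top-2 explained variance would be low and the projection blob-like; the "eureka" is the moment this diagnostic snaps to a clean circle.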
    {
      "end_time": 5133.712,
      "index": 185,
      "start_time": 5103.78,
      "text": " And then at exactly the point when the Eureka moment happens, when AI becomes able to answer questions that hasn't seen before, the points line up on a circle, a beautiful circle, interesting, except not with 12 things, but with 59 things now, because that was the problem that had right. So to me, this felt like the AI had reached understanding about what the problem was, it has come up with a model, or as we often call it, a representation"
    },
    {
      "end_time": 5193.234,
      "index": 187,
      "start_time": 5163.916,
      "text": " Sign up for your $1 per month trial at Shopify.com slash special offer. And this understanding now enabled it to see patterns in the problem so that it could generalize to all sorts of things that it hadn't even come across before. So I'm not able to give a beautiful, succinct, fully complete answer to your question on how to define artificial understanding. But I do feel that this is an example."
    },
    {
      "end_time": 5223.319,
      "index": 188,
      "start_time": 5194.804,
      "text": " A small example of understanding. We've since then seen many others. We wrote another paper where we found that when large language models do arithmetic, they represent the numbers on a helix, like a spiral shape. And I'm like, what is that? Well, the long direction of it can be thought of like representing the numbers in analog, like you're farther this way if the number is bigger. But by having them wrap around on a helix like this,"
    },
    {
      "end_time": 5248.968,
      "index": 189,
      "start_time": 5223.763,
      "text": " You can use the digits if it's 10 to go around and there were actually several helixes as a hundred helix and the 10 helix and so I suspect that one day people will come to realize that More broadly when when machines understand stuff, maybe when we understand things also it has to do with Coming up with the same patterns and then coming up with a clever way of representing the patterns such that"
    },
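A toy version of the helix idea (my own illustrative construction, not the actual representation fitted in the paper): one coordinate carries the analog magnitude of the number, while two coordinates wrap around with period 10, so numbers with the same last digit line up at the same angle.

```python
import math

# Toy helix representation of integers: the third coordinate grows with the
# number (analog magnitude); the first two wrap with period 10 (last digit).
def helix(n, period=10, pitch=0.1):
    theta = 2 * math.pi * n / period
    return (math.cos(theta), math.sin(theta), pitch * n)

# Numbers sharing a last digit sit at the same angle around the helix...
a, b = helix(7), helix(17)
assert math.isclose(a[0], b[0]) and math.isclose(a[1], b[1])

# ...while the axis coordinate still orders them by size.
assert helix(17)[2] > helix(7)[2]
print(a, b)
```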
    {
      "end_time": 5270.316,
      "index": 190,
      "start_time": 5251.323,
      "text": " The representation itself goes a long way towards already giving you the answers you need. I'm a very visual thinker when I do physics or when I think in general. I never feel I can understand anything unless I have some geometric image in my mind."
    },
    {
      "end_time": 5295.981,
      "index": 191,
      "start_time": 5271.391,
      "text": " Actually, Feynman talked about this. Feynman said that there's the story of him and a friend who can both count to 60, something like this, precisely. And then he's saying to his friend, I can't do it if you're waving your arms in front of me or distracting me like that. But I can if I'm listening to music, I can still do this trick. And he's like, I can't do it if I'm listening to music, but you can wave your arms as much as you like. And Feynman realized"
    },
    {
      "end_time": 5314.65,
      "index": 192,
      "start_time": 5295.981,
      "text": " He was seeing the numbers one two three his trick was to have a mental image yes and then the other person was having a metronome but the goal or the outcome was the same but the way that they came about it was different there's actually something in philosophy called the rule following paradox."
    },
    {
      "end_time": 5334.002,
      "index": 193,
      "start_time": 5315.043,
      "text": " You probably know this. There are two rule following paradoxes. One is Kripke and one is the one that I'm about to say. So how do you know when you're teaching a child that they've actually followed the rules of arithmetic? So you can test them 50 plus 80, etc. 50 times 200 and they can get it correct every single time. They can even show you their reasoning."
    },
    {
      "end_time": 5359.36,
      "index": 194,
      "start_time": 5334.002,
      "text": " But then you don't know if that actually fails at 6,000 times 51 and the numbers above that. Interesting. You don't know if they did some special convoluted method to get there. Exactly. All you can do is say you've worked it out in this case, in this case, in this case. That's actually, we have the advantage with computers that we can inspect how they understand or... In principle. But when you look under the hood of something like Chess GPT, all you see is billions and billions of numbers."
    },
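The point can be sketched with a hypothetical Kripke-style 'quus' function: it agrees with ordinary addition on every small test case and then silently deviates past a cutoff, so no finite set of exam questions distinguishes the two rules. The cutoff value here just echoes the 6,000 times 51 example from the conversation.

```python
# Kripke-style "quus": indistinguishable from addition on small inputs.
THRESHOLD = 6000 * 51   # past here, a deviant rule kicks in (made-up cutoff)

def plus(a, b):
    return a + b

def quus(a, b):
    # Agrees with plus whenever the result stays under the threshold...
    if a + b < THRESHOLD:
        return a + b
    # ...then silently follows a different rule.
    return 5

# Every exam question with small numbers passes:
exam = [(50, 80), (50, 200), (123, 456)]
assert all(plus(a, b) == quus(a, b) for a, b in exam)

# But the two rules were never the same:
print(plus(6000, 51 * 6000), quus(6000, 51 * 6000))
```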
    {
      "end_time": 5383.558,
      "index": 195,
      "start_time": 5359.94,
      "text": " And you oftentimes have no idea what all these matrix multiplications and things like this, you have no idea really what it's doing. But mechanistic interpretability, of course, is exactly the quest to move beyond that and see how does it actually work. And coming back to understanding and representations, there is this idea known as the platonic representation hypothesis, that if you have"
    },
    {
      "end_time": 5414.428,
      "index": 196,
      "start_time": 5385.265,
      "text": " two different machines, or I would generalize it to people also who both reach a deep understanding of something, there's a chance that they've come up with a similar representation. In Feynman's case, there were two different ones, but there probably aren't, there probably, at most, there's probably a few ones, or one or a few that are really good. And it seems like a hard case to make. But there's a lot of evidence coming out for it now. Actually, you can already many years ago, there was this team where they just took"
    },
    {
      "end_time": 5446.305,
      "index": 197,
      "start_time": 5416.92,
      "text": " You know, in chat GPT and other AI systems, all the words and word parts they call tokens get represented as points in a high dimensional space. And so this team, they just took something which had been trained only on English books and another one English language stuff and another one trained only on Italian stuff. And they just looked at these two point clouds and found that there was a way they could actually rotate them to match up as well as possible. And it gave them a somewhat decent"
    },
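The rotate-the-point-clouds step is essentially the orthogonal Procrustes problem. A numpy sketch with synthetic clouds (a real cross-lingual alignment also has to figure out which point corresponds to which; this toy version assumes the correspondence is known):

```python
import numpy as np

# Orthogonal Procrustes: find the rotation R minimizing ||X @ R - Y||.
# Y is secretly a rotated copy of X, standing in for "the same concepts
# embedded by two different models".
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))                  # one model's embedding cloud

# Hidden ground-truth rotation producing the other model's cloud.
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal matrix
Y = X @ Q

# Classic SVD solution: R = U @ Vt where X.T @ Y = U @ diag(S) @ Vt.
U, _, Vt = np.linalg.svd(X.T @ Y)
R = U @ Vt

error = np.abs(X @ R - Y).max()
print(error)   # essentially zero: the clouds match after rotation
```

With real embeddings the residual is never zero, but nearest neighbors after the best rotation are what yielded the rough bilingual dictionary.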
    {
      "end_time": 5473.592,
      "index": 198,
      "start_time": 5446.63,
      "text": " English Italian dictionary. So they have the same representation. And there's a lot of recent papers, quite recent ones even, that are showing that, yeah, it seems like the representations of one that large language model like Chachi PT, for example, is in many ways similar to the representations that other ones have. We did a paper, my student, my grad student Dawan Beck,"
    },
    {
      "end_time": 5503.336,
      "index": 199,
      "start_time": 5473.882,
      "text": " And I, where we looked at family trees. So we took the Kennedy family tree, which are royalty family trees, et cetera. And we, we, this trained AI to correctly predict like who is the son of whom, who is the uncle of whom is so and so a sister of whom we just asked all these, we asked all these questions. And we also incentivize the large language model to learn it."
    },
    {
      "end_time": 5531.237,
      "index": 200,
      "start_time": 5503.746,
      "text": " with as little in as, in as simple a way as possible by not giving it the arbitrary, by limiting the resources it had. And then when we looked inside, we discovered something amazing. We discovered that first of all, a whole bunch of independent systems had learned the same representation. So you can actually take the representation of one and, and literally just like rotate it around and stretch it a little bit and put it into the other and it would work there. And then when we looked at what it was, they were trees."
    },
    {
      "end_time": 5561.34,
      "index": 201,
      "start_time": 5531.596,
      "text": " We never told it anything about family trees, but it would draw like, here is this king so and so, and then here are the sons and this and this. And then it could use that to know that, well, if someone is farther down, they're a descendant, et cetera, et cetera, et cetera. So that's yet another example, I think, in support of this platonic representation hypothesis, the idea that understanding often has something to do with capturing patterns in some"
    },
    {
      "end_time": 5577.91,
      "index": 202,
      "start_time": 5561.886,
      "text": " So I wanted to end on the advice that you received from your parents, which was about, don't concern yourself too much what other people think, something akin to that. It was differently worded, but."
    },
    {
      "end_time": 5603.746,
      "index": 203,
      "start_time": 5578.933,
      "text": " I also wanted to talk about what are the misconceptions of your work that other colleagues even have that you have to constantly dispel. And another topic I wanted to talk about was the mathematical universe. Oh, the easy stuff. So there are three, but we don't have time for all three. If you could think of a way to tie them all together, then feel free like a gymnast or a juggler. But otherwise then I would like to end on the advice from your parents. Okay."
    },
    {
      "end_time": 5636.544,
      "index": 204,
      "start_time": 5606.869,
      "text": " Well, the whole reason I spent so many years thinking about whether we are all part of a mathematical structure and whether a universe actually is mathematical rather than just described by it is, of course, because I listened to my parents because I got so much shit for that. And I just felt, I think I'm going to do this anyway, because to me, it makes logical sense. I'm going to put the ideas out there. And then in terms of misconceptions about me, I think"
    },
    {
      "end_time": 5667.722,
      "index": 205,
      "start_time": 5637.892,
      "text": " One misconception I think is that somehow I don't believe that being falsifiable is important for science. I usually talked about earlier, I'm totally on board with this. And I actually argue that if you have a predictive theory about anything, gravity, consciousness, etc. That means that you can falsify it."
    },
    {
      "end_time": 5689.974,
      "index": 206,
      "start_time": 5668.541,
      "text": " So that's one. And another one, probably the one I get most now because I've stuck my neck out a bit about AI, and the idea that the brain is actually a biological computer, and that we're likely to be able to build machines that we could totally lose control over, is that some people like to call me a doomer, which is of course just"
    },
    {
      "end_time": 5716.954,
      "index": 207,
      "start_time": 5690.401,
      "text": " Something they say when they when they ran out of arguments is like if you call someone a heretic or whatever. And so I think what I would like to correct about that is I feel actually quite optimistic. I'm not a pessimistic person. I think that there's way too much pessimism floating around about humanity's potential. One is people, oh, we can never figure out"
    },
    {
      "end_time": 5748.592,
      "index": 208,
      "start_time": 5719.411,
      "text": " make any more progress on consciousness. We totally can. If we stop telling us that it's impossible and actually work hard. Some people say, oh, you know, we can never figure out more about the nature of time and so on unless we can, unless we can detect gravitons or whatever, we totally can. There's so much progress that we can make if we're willing to work hard. And in particular, I think the most pernicious kind of"
    },
    {
      "end_time": 5774.684,
      "index": 209,
      "start_time": 5749.258,
      "text": " Pessimism we suffer from now is this meme of that it's inevitable that we are going to build super intelligence and become irrelevant. It is absolutely not inevitable. But you know, if you tell yourself that something is inevitable, it's a self fulfilling prophecy, right? This is"
    },
    {
      "end_time": 5804.241,
      "index": 210,
      "start_time": 5775.64,
      "text": " Convincing a country that's just been invaded, that it's inevitable that they're going to lose the war if they fight. It's the old, it's SIOP game in town, right? So of course, if there's someone who has a company and they want to build stuff and they don't want you to have any laws that make them accountable, they have an incentive to tell everybody, oh, it's inevitable that this is going to get built. So don't fight it, you know. Oh, it's inevitable that, um,"
    },
    {
      "end_time": 5836.118,
      "index": 211,
      "start_time": 5806.408,
      "text": " That humanity is going to lose control over the planet. So just don't fight it and hey, buy my new product. It's absolutely not inevitable. You know, you could have had people, people say it's inevitable, for example, because they say people will always build technology that can give you money and power. That just factually incorrect. You know, you're a really smart guy. If I could do cloning of you and like start selling a million copies of you on the black market, I can make a ton of money."
    },
    {
      "end_time": 5864.548,
      "index": 212,
      "start_time": 5837.142,
      "text": " We decided not to"
    },
    {
      "end_time": 5894.343,
      "index": 213,
      "start_time": 5865.094,
      "text": " Gotten a lot of military power with bioweapons, you know. Then Professor Matthew Messelson at Harvard said to Richard Nixon, you know, we don't want there to be a weapon of mass destruction that's so cheap that all our adversaries can afford it. And Nixon was like, huh, that makes sense, actually. And then Nixon used that argument on Brezhnev and it worked. And we got a bioweapons ban. And now people associate biology mostly with"
    },
    {
      "end_time": 5924.889,
      "index": 214,
      "start_time": 5894.957,
      "text": " we have"
    },
    {
      "end_time": 5952.329,
      "index": 215,
      "start_time": 5925.367,
      "text": " than we thought. I mentioned that if we were living in a cave 30,000 years ago, we might've made the same mistake and thought we were doomed to just always be at risk of getting eaten by tigers and starving to death. That was too pessimistic. We had the power to, through our thought, develop a wonderful society and technology where we could flourish. And it's exactly the same way now. We have an enormous power."
    },
    {
      "end_time": 5979.821,
      "index": 216,
      "start_time": 5952.978,
      "text": " What most people actually want to make money on AI is not some kind of sand god that we don't know how to control. It's tools, AI tools. People want to cure cancer. People want to make their business more efficient. Some people want to make their armies stronger and so on. You can do all of those things with tool AI that we can control. And this is something we work on in my group, actually. And that's what people really want. And there's a lot of people who do not want"
    },
    {
      "end_time": 6009.053,
      "index": 217,
      "start_time": 5980.555,
      "text": " Most Americans and Poles think that's just a terrible idea. Republicans and Democrats. There was an open letter by evangelicals in the US to Donald Trump saying, you know, we want AI tools. We don't want some sort of uncontrollable super intelligence. Pope Leo"
    },
    {
      "end_time": 6039.582,
      "index": 218,
      "start_time": 6009.77,
      "text": " has recently said that he wants AI to be a tool, not some kind of master. You have people from Bernie Sanders to Marjorie Taylor Greene have come out on Twitter, you know, saying we don't want Skynet. We don't want to just make humans economically obsolete. So this is not inevitable at all, I think. And if we can just remember, we have so much, so much agency in what we do, what kind of future we're going to build, if we can be optimistic and just"
    },
    {
      "end_time": 6057.927,
      "index": 219,
      "start_time": 6040.06,
      "text": " The audience member now is listening. They're a researcher, they're"
    },
    {
      "end_time": 6085.401,
      "index": 220,
      "start_time": 6058.49,
      "text": " They're a young researcher, or an old researcher. They have something they would like to achieve that's extremely unlikely, that their colleagues criticize them for even proposing. And it's nothing nefarious. It's something they find interesting and maybe beneficial to humanity, whatever. What is your advice? Two pieces of advice. First of all, I would argue half of all the greatest breakthroughs in science were actually"
    },
    {
      "end_time": 6112.858,
      "index": 221,
      "start_time": 6086.937,
      "text": " Trash Talks at the time. So just because someone says that your idea is stupid doesn't mean it is stupid. A lot of people's ideas, you should be willing to abandon your own ideas if you can see the flaw and you should listen to a structured criticism against it. But if you feel you really understand the logic of your ideas better than anyone else and it makes sense to you, then"
    },
    {
      "end_time": 6141.271,
      "index": 222,
      "start_time": 6113.217,
      "text": " Keep pushing it forward. The second piece of advice I have is you might worry then, like I did when I was in grad school, that if I only worked on stuff that my colleagues thought was bullshit, like thinking about the many worlds interpretation of quantum mechanics or multiverses, then my next job was going to be in McDonald's. Then my advice is just hedge your bets. Spend enough time"
    },
    {
      "end_time": 6164.172,
      "index": 223,
      "start_time": 6142.722,
      "text": " Working on things that gets appreciated by your peers now so that you can pay your bills so that your career continues ahead. But carve out a significant chunk of your time to do what you're really passionate about in parallel. If people don't get it, well, don't tell them about it at the time."
    },
    {
      "end_time": 6217.671,
      "index": 225,
      "start_time": 6198.592,
      "text": " That way, you're doing science for the only good reason which is that you're passionate about it. And it's a fair deal to society to then do a little bit of chores for society to pay your bills also. That's a great way of viewing it. And it's been quite shocking for me to see actually how many of the things that I got most criticized for"
    },
    {
      "end_time": 6243.882,
      "index": 226,
      "start_time": 6218.08,
      "text": " or was most afraid of talking openly about when I was a grad student, even papers that I didn't show my advisor until after he signed my PhD thesis and stuff, or have later actually come pretty picked up. And I actually feel that the things that I feel have been most impactful were generally in that category. You know, you're never going to be the first to do something important if you're just following everybody else."
    },
    {
      "end_time": 6270.316,
      "index": 227,
      "start_time": 6246.135,
      "text": " Max, thank you. Thank you. Hi there, Kurt here. If you'd like more content from Theories of Everything and the very best listening experience, then be sure to check out my sub stack at kurtjymungle.org. Some of the top perks are that every week you get brand new episodes ahead of time"
    },
    {
      "end_time": 6298.609,
      "index": 228,
      "start_time": 6270.589,
      "text": " You also get bonus written content exclusively for our members. That's C-U-R-T-J-A-I-M-U-N-G-A-L dot org. You can also just search my name and the word sub stack on Google. Since I started that sub stack, it somehow already became number two in the science category. Now, sub stack for those who are unfamiliar is like a newsletter, one that's beautifully formatted. There's zero spam."
    },
    {
      "end_time": 6326.817,
      "index": 229,
      "start_time": 6298.848,
      "text": " This is the best place to follow the content of this channel that isn't anywhere else. It's not on YouTube. It's not on Patreon. It's exclusive to the Substack. It's free. There are ways for you to support me on Substack if you want, and you'll get special bonuses if you do. Several people ask me like, hey, Kurt, you've spoken to so many people in the field of theoretical physics, of philosophy, of consciousness. What are your thoughts, man?"
    },
    {
      "end_time": 6356.954,
      "index": 230,
      "start_time": 6327.125,
      "text": " While I remain impartial in interviews, this substack is a way to peer into my present deliberations on these topics. And it's the perfect way to support me directly. KurtJaymungle.org or search KurtJaymungle substack on Google. Oh, and I've received several messages, emails and comments from professors and researchers saying that they recommend theories of everything to their students. That's fantastic."
    },
    {
      "end_time": 6382.449,
      "index": 231,
      "start_time": 6357.415,
      "text": " If you're a professor or a lecturer, or what have you, and there's a particular standout episode that students can benefit from, or your friends, please do share. And of course, a huge thank you to our advertising sponsor, The Economist. Visit economist.com slash toe to get a massive discount on their annual subscription. I subscribe to The Economist and you'll love it as well."
    },
    {
      "end_time": 6406.852,
      "index": 232,
      "start_time": 6382.91,
      "text": " Tou is actually the only podcast that they currently partner with. So it's a huge honor for me. And for you, you're getting an exclusive discount. That's economist.com slash toe. And finally, you should know this podcast is on iTunes. It's on Spotify. It's on all the audio platforms. All you have to do is type in theories of everything and you'll find it."
    },
    {
      "end_time": 6432.381,
      "index": 233,
      "start_time": 6407.09,
      "text": " I know my last name is complicated, so maybe you don't want to type in Jymungle, but you can type in theories of everything and you'll find it. Personally, I gain from rewatching lectures and podcasts. I also read in the comment that toe listeners also gain from replaying. So how about instead you relisten on one of those platforms like iTunes, Spotify, Google podcasts, whatever podcast catcher you use. I'm there with you. Thank you for listening."
    }
  ]
}
