Theories of Everything with Curt Jaimungal

OpenAI INSIDER On Future Scenarios | Scott Aaronson

February 27, 2024, 1:13:04


Transcript

[0:00] The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's not just science they analyze.
[0:20] They analyze culture, finance, economics, business, international affairs across every region. I'm particularly liking their new Insider feature, which was just launched this month. It gives you front-row access to The Economist's internal editorial debates,
[0:36] where senior editors argue through the news with world leaders and policymakers in twice-weekly long-format shows: basically an extremely high-quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines. As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
[1:06] AI can replace 99.9% of people's jobs. We don't care about that anymore. All we care about is, can it achieve the true heights of creative genius? Will we have an AI that can hit a target that no one else can even see?
[1:22] This is a presentation by Scott Aaronson, hot off the press just a couple of weeks ago at MindFest, Florida Atlantic University, 2024, spearheaded by Susan Schneider, who's the director of the Center for the Future Mind. All of the talks on AI and consciousness from this conference are in the description.
[1:43] Scott Aaronson is a professor of theoretical computer science at UT Austin, particularly known for his work on quantum computing and complexity theory.
[2:00] Scott covers, in his jocular and unparalleled manner, AI and whether there's anything that truly separates us from intelligent machines: what actually makes us special, what about identity, what about the no-cloning theorem. Scott also gives a new proposal for AI safety.
[2:18] What's coming up next on TOE from MindFest are the talks from Sara Walker on alien intelligence and constructor theory, as well as Stuart Hameroff on microtubules and quantum consciousness. Many, many more are coming, and you can pause the screen here to take a look if you like.
[2:33] Subscribe to get notified. There's also a two-hour video on the mathematics of string theory coming out. It'll be string theory talked about like you've never heard it before. It's either out right now or it's about to be released in a few days. Either way, again, the link will be in the description. For those of you who are unfamiliar, welcome to this channel. My name is Curt Jaimungal, and this is Theories of Everything, where we delve into the topics of mathematics, physics, artificial intelligence, and consciousness with depth and rigor.
[3:01] It's my great pleasure to introduce Dr. Scott Aaronson. He's one of my favorite thinkers of all time.
[3:29] All right.
[3:59] Thanks so much for having me.
[4:03] I'm not an AI expert, let alone an expert in mind or consciousness. Though one could ask: is anyone? I've spent most of my career doing quantum computing. I've been sort of moonlighting for two years now: I'm on leave to work at OpenAI, and my job there is supposed to be to think about what theoretical computer science can do for AI safety and alignment.
[4:34] I wanted to share some thoughts, partly inspired by my work at OpenAI, but partly just things that I've been wondering about for 20 years, really. They've just become more pressing, maybe now that some of the science fiction thought experiments are actually now reality.
[4:51] So these thoughts are not directly about how do we prevent the superintelligence from killing all humans and converting the galaxy into paper clips in a sphere expanding at the speed of light, nor are they about how do we stop
[5:08] existing AIs from generating misinformation and being biased, as much attention as both of those questions deserve and are justly receiving. In addition to how do we stop AI from going disastrously wrong, I find myself asking a lot: what if it goes right?
[5:30] What if it just continues helping us with all sorts of mental tasks, but it improves to where it can do just about any task as well as we can do it, or better? What are we still for? Is there anything special about humans in the world that results from that? I don't need to belabor for this audience, surely, what has been happening in AI in the last few years.
[5:59] I mean, you know, it's arguably the most consequential thing that's been happening in the whole world. Except that that fact was just temporarily masked by various ephemera: wars, insurrections, a global pandemic, whatever. But what about AI? So I assume you've all spent time with ChatGPT or other large language models like Bard or Claude, or image models.
[6:27] In the end, it's clear, despite AI's rise, our human specialness is a chaotic prize. And though machines may match our enterprise, they'll never outdo our ability to surprise. So not ready for the New Yorker, I would say. On the other hand, far, far better than I would have done under similar time constraints.
[6:56] So in some sense, at least in embryonic form and with various flaws and problems, this is the thing that was talked about by generations of science fiction writers and philosophers. These are the first non-human fluent verbal intelligences that we've ever encountered. We can talk to them. They understand us.
[7:25] Back in 2014, there was a huge fuss about a silly Eliza-like chatbot called Eugene Goostman that was falsely claimed to have passed the Turing test.
[7:54] I remember asking, around a decade ago: why doesn't someone just train a neural net on all the text on the internet? Wouldn't that let you make a better chatbot? There must be something obvious that I'm missing about why that doesn't work. And lo and behold, it turns out that it does work. Of course, I didn't have the facilities to actually do that myself. So the surprise with language models is not merely that they exist, but the way that they were created.
[8:22] 25 years ago when I was an undergrad studying CS, you would have been laughed out of the room if you said that all the ideas needed to build a fluent linguistic AI already exist. It's going to be just neural nets, back propagation, gradient descent,
[8:45] But just scaled up by a factor of millions in the size of the models and the training data. I think hardly anyone believed that, just a few people like Ray Kurzweil, who seemed crazy.
[9:02] Okay.
[9:18] Gradient descent, you know, which has been around for many decades now. You really only needed three additional things to get the revolution that we're seeing now: you needed a massive investment of computing power, you needed a massive investment of training data, and then thirdly, you needed faith, conviction that your investment was going to pay off. And actually, that third ingredient was the main reason why we didn't just get all of this a decade earlier.
[9:49] Okay, so certainly, you know, even before you do any, you know, reinforcement learning or anything like that, I mean, GPT-4 seems intuitively smarter than GPT-3, which seems smarter than GPT-2, right? And mostly these differ from each other, you know, just in scale. Okay, so, you know, I mean, GPT-2 struggled to do, you know, even like grade school level math problems, right? It was very easy to make fun of it, you know,
[10:17] You could just find endless examples of its common sense failures. GPT-3 or 3.5 can do most of the elementary school curriculum, given in English. It may struggle with undergrad, like with my quantum computing exam. GPT-4 got a B on my quantum computing final exam. We gave it to it.
[10:41] I have not yet seen it do what I would consider original research in theoretical computer science. I've tried to get it to do that. It's not at that level, but it's kind of insane that that is where the bar is now. It can pass most undergraduate math and science classes, at least if they don't have a lab component or something like that. An obvious question is how far
[11:08] should we expect this progression to continue? Okay. So now I guess I will go back and steal the graph from that crazy person, Ray Kurzweil, because it turns out that he was more right than almost any of us. And he would just make these plots all the time of: here's Moore's law, here's the number of calculations you can do per second per thousand dollars. And then here is some crude estimate of the number of
[11:33] computational steps that are going on in the brains of different organisms.
[12:09] Certainly there was no theoretical principle that would have justified any prediction of that kind, and yet here we are. I'm a firm believer that what it means to be a scientist is that when something happens, you update on it. You don't invent fancy reasons why it doesn't really count.
[12:33] So if we didn't predict what was going to happen, the least we can do is update now that it has happened.
[12:46] So now, it's possible. There's a saying that every exponential in the physical world is really a sigmoid in disguise: nothing exponential continues forever, or even for very long, because it always bumps up against some limit.
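For reference, the saying has a precise form (this is standard math, not something from the talk): a logistic curve with ceiling K looks exactly exponential while it is still far below that ceiling.

```latex
f(t) \;=\; \frac{K}{1 + e^{-r\,(t - t_0)}}
\;\approx\; K\, e^{\, r\,(t - t_0)}
\qquad \text{for } t \ll t_0 ,
```

so early data cannot distinguish the two; the ceiling K only reveals itself as t approaches t_0.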
[13:04] I worry that that will just make the AIs dumber rather than smarter.
[13:29] We've seen that by just investing more and more compute, you can get better and better performance
[13:48] We should expect at least a few more orders of magnitude; then the cost of electricity will become the limiting factor at some point.
[14:07] Which is why Microsoft and Sam Altman have been investing in nuclear power: they envision building their own power plants to power future AI models. But we should also expect further algorithmic advances. So, in the past,
[14:26] Algorithmic ideas that people have had like the transformer, which is just a particular architecture for neural nets that was discovered in 2017 and which is used for basically all of these things now. You can think of them as more or less the equivalent of some number of years of Moore's law. Each one seems to let you get the effect of a bigger model with a smaller model. You can trade off
[14:54] algorithmic advances for hardware advances. We should expect more of those. But where does this ultimately lead? Does it lead someplace like here, where I'll say to GPT-8: please prove the Riemann hypothesis?
[15:17] And it'll say: sure, I can help you with that. I just generated a formally verified proof, which you can access at this URL; let me now explain it to you in English. So it'll just do all of our research. Lucky for me that I have tenure, right? Or, in order to write a research paper, we'll just write the abstract.
[15:43] I told ChatGPT to do this, but it made sure to add: "Just kidding. As of my last update, the Riemann hypothesis remains unsolved."
[16:13] Of course, we all know there are many people who worry that at some time after these models become able to just do any intellectual task as well as or better than we can do it, we just sort of cede control to them and the future is determined by whatever they want.
[16:42] And if they want to get rid of us all, then they do that. It's been sort of amazing, just sociologically, to watch what's happened over the last couple of years. I knew this community around Eliezer Yudkowsky, for example, who have worried about these things since 2006 or so. I knew them when they were this extreme fringe movement, sort of laughed at. And now this is talked about in the White House press briefing.
[17:11] So, you know, ChatGPT was sort of the event that changed that, that put AI existential risk on everyone's radar. Lots of people don't believe in it, but those people now sort of have to make their argument for why not to worry about such things.
[17:32] This isn't the only possibility that people who I respect take seriously. You can scour generations of science fiction at this point for all different stories or all different possible scenarios for how AI could go and many of them actually I think are very much on the table now.
[17:57] So my friend Boaz Barak, who is now also on leave to work at OpenAI, and I wrote a joint blog post some months ago where we tried to make a decision tree, to classify five different possible scenarios for AI, just to sort of guide the discussion.
[18:20] So our first question was, will AI progress fizzle out? Will we just hit a wall pretty soon? So maybe we will. Even in that scenario, there's probably a huge economic impact that hasn't been realized yet, just from what is already possible.
[18:38] GPT-5 will just look like a somewhat more impressive GPT-4 and it will always look like the same kind of thing. Okay, but then if no, if it gets to that thing that could just prove the Riemann hypothesis in one second or solve the other greatest unsolved problems of math and physics, then you have to ask, well, will civilization recognizably continue?
[19:01] We should expect that if we don't figure out how to align these things, they will destroy us all. That's the Paperclipalypse: they just have some weird goal,
[19:31] maximize the number of paper clips or something like that, and with superhuman intelligence they pursue it, proceeding to turn all the matter in the solar system, including us, into more paper clips. That's just an example. Or, if we could solve alignment, we'd have some wonderful paradise where each of us gets
[19:51] our own VR private island or mansion or whatever we want. Now, of course, there are also much more moderate scenarios where civilization recognizably continues. And that too could be either good or bad. There are still big problems, but they're sort of commensurate with the problems of other technologies; we'll call that Futurama.
[20:20] It really just leads to a police state or concentration of power by some elite that oppresses everyone else. We could call that the AI dystopia. So now, as far as I can tell, the empirical questions of what will AI do? Will it achieve and surpass human performance at all tasks? Will it take over civilization from us?
[20:48] You know, these are just logically completely distinct from the philosophical question of whether the AI will truly think, whether it will be, let's say, sentient, conscious, whether there will be anything that it's like to be the AI. You could answer yes to either of those questions and no to the other one, right? And yet, to my lifelong chagrin, people are just constantly munging these questions together. They're just constantly
[21:18] saying: well, AI will never be able to do these things, because it doesn't really feel, or it's just simulating it, it doesn't really have that inside. And then once it does do that task, they just shift to a different thing that it will never do, and then it does that thing, and so forth.
[21:39] I was trying to come up with a name for it. I'm going to call it the religion of Justaism, as in "it's just a...": there's this whole sequence of deflationary claims, and each person who makes them thinks that they're the first one. I've seen
[22:01] What these people never do, what it never occurs to them to do, is to ask the next question: what are you "just a"?
[22:25] Aren't you just a bundle of neurons and synapses? We could take that deflationary reductionistic stance about you also. If not, then we have to give some principle that separates the one from the other. It is our burden to give that principle.
[22:48] The way that someone put it on my blog was, okay, they gave this giant litany: look, GPT does not interpret sentences, it seems to interpret them. It does not learn, it seems to learn. It does not judge moral questions, it seems to judge moral questions. I just responded to this. I said: that's great, and it won't change civilization, it will seem to change it.
[23:16] A closely related tendency is this constant goalpost moving, as I talked about. I'm barely old enough to remember, as a kid, as a teenager, when chess was this Holy Grail: okay, computers can play master-level chess, but they're never going to beat the world grandmaster without true insight into the nature of the game.
[23:43] Alright, that turned out to be completely wrong. Then, after Deep Blue, immediately it was: okay, well, of course they can do chess; chess is just game-tree search, everyone knew that. But Go, Go is just an infinitely deeper game than chess. There are thousands of years of ancient wisdom in that game, and only the deepest insights would suffice. And then after AlphaGo, it was like: okay, well, obviously you can do Go, right?
[24:25] I actually have a bet with my
[24:44] colleague Ernie Davis that by 2026, I think, an AI will achieve a gold medal at the International Math Olympiad, or that level of performance. Maybe I'm wrong; maybe it will be 2036. But it now seems obvious that it is just a question of how long.
[25:04] Given any task with a reasonably objective metric of success or failure,
[25:27] Any board game, card game, video game, math or science contest where we can judge the answers.
[25:39] on which an AI can be trained with suitably many relevant examples of success and failure, it is only a matter of time before not just any AI, but the kind of AI we already have, AI on the current paradigm, can be scaled to the point where it will match or beat the best human performance on that task.
[26:03] I don't know if this is true, but I think we are now in the situation where we don't have a counter example. I would say the ball is in the skeptic's court to give the counter example and then let that counter example stand for another decade.
[26:21] Now, interestingly, even if you accept this thesis, this doesn't necessarily mean that AIs would surpass humans in every respect. It would say only on things that we know how to judge or evaluate, which might be a strict subset of everything we care about.
[26:42] Okay, so now, of course, there is the OG, original and greatest benchmark for AI, right? There is the Turing test from 1950 and what Turing was really trying to do, sort of very, very early, very ahead of his time as he generally was,
[27:01] was just to head off this sort of endless goalpost moving and this endless justism by saying, look, presumably you are willing to regard other people as intelligent, as conscious, based mainly on just some sort of verbal interaction that you have with those people. So then show me what kind of verbal interaction with another person would lead you to call that person conscious. Does it involve humor, poetry,
[27:31] morality, scientific brilliance? Now assume that you have a totally indistinguishable interaction with an AI. Now what? Do you want to just stomp your feet and be a meat chauvinist, or do you want to ascribe the same qualities to it that you ascribed in the other case? Then, for his historic attempt to bypass philosophy, of course, God punished Turing
[28:00] by having the Turing test itself provoke a billion new philosophical arguments and books. Even though I regard it as one of the great advances in the history of human thought, I would concede to critics of the Turing test that often it's not what we want in practice.
[28:21] For example, with GPT-4, if you know what to do, then there are trivial ways to distinguish it from a human. For a while, you could just ask it: what is today's date?
[29:02] There won't always be easy ways to distinguish just because we want there to be. But this has actually become a huge practical issue in the world, the sort of issue from the movie Blade Runner, let's say: how do you distinguish an AI from a human? I would say, like it or not, a decent fraction of all high school and college students in the world are now probably using ChatGPT to do their homework.
[29:31] So that's actually one of the main things that I've thought about during my time at OpenAI. When you're in this safety community, people keep asking you to prognosticate decades into the future. I can't do that. I feel good that at least I was able to see about four months into the future.
[29:57] Right before ChatGPT came out, I said: oh my god, isn't every student going to want to use this to cheat? And isn't there going to be an enormous demand for some tool that could help determine the provenance or attribution of text, what came from a language model and what didn't? So I started working on that. And there are often easy ways to tell, right? It's not just
[30:23] the students who turn in term papers that contain phrases like "as a large language model, trained by...". Even if you know enough to take that out, or pay enough attention to take that out, there is a sort of formulaic character to the outputs of these models. I mean, I've been getting a ton of troll comments on my blog lately, and some of them, this is just
[30:52] one example, it goes on and on, but it's just sort of lecturing me on why I don't know the first thing about quantum computing, but there's hope: if I spend more time studying, maybe I can get up to the level of this commenter. And then just saying complete nonsense about mixed states and pure states.
[31:12] I have to say your understanding of quantum physics seems to be a bit mixed up, but don't worry, it happens to the best of us.
[31:27] You know, quantum mechanics is counterintuitive, and even experts struggle with it. This is either generated by a large language model, or else it may as well have been. I just get a huge amount of stuff like this. Sometimes you can just sort of tell by looking at it, but you have to expect that as the models get better, it will get harder to tell. So I worked on a different solution, which is called watermarking.
[31:56] Speaking of watermarking: a year ago there was an episode of South Park about ChatGPT, which hinged on all the students at South Park Elementary starting to use ChatGPT to send messages to their girlfriends or boyfriends and to do their homework. The teachers are using it to grade the homework.
[32:22] It gets so bad that they have to bring this wizard to the school who has a falcon on his shoulder which flies around and when it sees text that was written by GPT it calls. It was really disconcerting to watch this and to realize, I guess I'm that guy now.
[32:39] That is now my job. I came up with a scheme for what's called watermarking. What does that mean? You exploit the fact that large language models are inherently probabilistic. That is, every time you submit a prompt, they're sampling some path through a branching tree of possibilities
[33:03] for the sequence of next tokens. Then the idea of watermarking is just that you're going to steer that path using a pseudo-random function rather than real randomness in such a way that secretly you are encoding a signal that you can later detect with high confidence if you know the key of the pseudo-random function and if there's a large enough sample of text.
[33:27] I proposed a way to do that in fall of 2022. Others have since independently proposed very similar ideas. I should caution you that none of these watermarking schemes have been deployed yet; OpenAI, along with DeepMind and Anthropic, has wanted to move very slowly and cautiously.
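To make the scheme above concrete, here is a toy sketch in Python, loosely following the "steer the sampling with a pseudorandom function" idea as described in the talk. Everything here is illustrative: the SHA-256-based PRF, the single-token context, and the detection statistic are assumptions of this sketch, not OpenAI's actual implementation.

```python
import hashlib
import math

def prf(key: str, context: tuple[int, ...], token: int) -> float:
    """Keyed pseudorandom function mapping (context, candidate token)
    deterministically to a float strictly inside (0, 1)."""
    digest = hashlib.sha256(f"{key}|{context}|{token}".encode()).digest()
    return (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 2)

def watermarked_sample(probs: dict[int, float], key: str,
                       context: tuple[int, ...]) -> int:
    """Pick the token maximizing r**(1/p) (the 'exponential-minimum' trick).
    If r were truly uniform this would sample from probs exactly, so the
    output distribution is (pseudo-randomly) unchanged -- but the chosen
    tokens' r-values are biased toward 1, which the key holder can test.
    Assumes all probabilities in probs are nonzero."""
    return max(probs, key=lambda t: prf(key, context, t) ** (1.0 / probs[t]))

def detection_score(tokens: list[int], key: str) -> float:
    """Average -ln(1 - r) over consecutive tokens (context = previous token,
    matching generation). Unwatermarked text averages about 1; watermarked
    text averages noticeably more, with confidence growing with length."""
    scores = [-math.log(1.0 - prf(key, (tokens[i - 1],), tokens[i]))
              for i in range(1, len(tokens))]
    return sum(scores) / len(scores)
```

The appeal of this trick is that, without the key, the watermarked output is distributed just like ordinary sampling, so quality is unaffected; detection requires the secret key and a long enough sample of text.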
[34:05] Okay, but now, as I talked to people about watermarking and attribution, I was surprised that they often objected to it on a completely different ground, not a technical ground at all.
[34:35] They would say, well, look, if we know that all students are going to be relying on AI in their jobs in the future, well, why shouldn't they be allowed to rely on it in their homework? Should we still force students even to learn to do things if AI can now do those things just as well? And I think there are many good pedagogical answers that you can give to that question.
[34:57] We teach kids spelling and handwriting and arithmetic. The entire elementary school curriculum is basically stuff that AI can now do, more or less. But we haven't yet figured out how to instill higher-level conceptual understanding, the things that AI cannot yet do, without all of that lower-level stuff being there first as a scaffold for it.
[35:23] So that would be one answer you could give. But I think about this even in terms of my kids. My 11-year-old daughter Lily enjoys writing fantasy stories. Now GPT can also churn out fantasy stories, maybe even technically more accomplished ones, around the same themes: a girl gets recruited to
[35:52] go to some magical boarding school, which is totally not Hogwarts, has nothing to do with Hogwarts. And you could just generate more and more of these things. And you could ask, with a kid who's 11 right now: are they ever going to reach a point where they write better than GPT? Their writing will improve, but is AI writing just going to continue to improve faster than they will?
[36:21] What do we mean by one story being better than another?
[36:36] where there is a universally agreed upon standard of value. And the problem is even deeper than just is there an objective way to judge? What exactly would it mean, to take an example, to have an AI that was as good as the Beatles at composing music? How would we operationalize that? How would we cash that out?
[37:02] Right. Well, to answer that, we would have to say what made the Beatles good in the first place. And broadly speaking, maybe there are two sorts of answers you could give. One is that they had these new ideas about what direction music should go in. And the second answer would be that they were really, really good at just the technical execution of those ideas. And then somehow it's the combination of both of those things.
[37:29] Okay, but now imagine, for example, that we had an AI model where you just gave it a request, like GPT, and it would generate 5,000 brand-new songs that, if you listened to them, just sound like more things that are as good as Hey Jude or Yesterday or whatever, like what the Beatles might have written
[37:54] if they had somehow had ten times as much time at each stage in their musical development. Of course, that AI would have to be fed their whole back catalog, because it would have to know what target it was aiming at. And I think in that case, most people would say: ah, so this only shows that AI can match the Beatles in part two, the technical execution part. But that's not really the part that we cared about anyway. What we really want to know is:
[38:23] would the AI decide to write these new kinds of songs, or A Day in the Life, or whatever, despite never having seen anything like it anywhere in its training corpus? I'm sure you all know the Schopenhauer quote: talent hits a target that no one else can hit, but genius hits a target that no one else can see. And so now you can notice that we've done something strange in setting the bar. We've conceded that, sure,
[38:52] AI can replace 99.9% of people's jobs. We don't care about that anymore. All we care about is can it achieve the true heights of creative genius? Can it hit a target? Will we have an AI that can hit a target that no one else can even see? But then there's still a hard question with what do we mean by that because
[39:20] Supposing that it did hit such a target, how would we know? Fans might say that by 1967 or so, the Beatles were optimizing for targets that no musician had quite optimized for before. But then somehow, and this is why they're remembered, they successfully dragged along the rest of the world's objective function to match theirs.
[39:46] The entire world's musical taste evolved along with them in order to match them. With the result being that now we can only judge music by a Beatles influenced metric or standard, just like we can only judge plays by a Shakespeare influenced metric. It's not that they just did really well on some metric, it's that they
[40:15] decided the metric. So in other branches of the wave function, maybe a different history led to different standards of value. But in this branch, you might say, helped by their technical talents, but also by luck and by force of will, Shakespeare or the Beatles made certain decisions that shaped everything that happened going forward. And that's why they are what they are.
[40:41] Okay, but now if this is how it works, what does that mean for AI? Could AI reach this pinnacle of genius, in the sense of dragging all of humanity along with it to value something new and different from what it had previously valued,
[41:00] as is said to be the true mark of greatness? And if AI could do such a thing, would we want to let it? Okay, now I want to call attention to something. When I have played around with using GPT to write poems or DALL-E to draw artworks, I've noticed something strange.
[41:24] Which is: however good the AI's creations were, and it can produce things much better than that poem that I showed you before, however good the artworks or the poems are, they are never things that I would want to frame and put on the wall, really draw a border around as special. Why not? Well, because I always knew that I could generate a thousand other works that are more or less the same.
[41:52] I just have to refresh the browser window or just, you know, literally just ask it, you know, give me another one and it will oblige me for as long as I want. Right. So which means that there's never anything really unique or irreplaceable about any particular output that it generates.
[42:08] Which reminds us of a broader point that by its nature, AI, at least the way that we use it now, is inherently rewindable and repeatable and reproducible, which means that in a certain sense it never really commits to anything.
[42:28] It sees this branching tree of possibilities. In the case of a language model, given an
[42:38] initial sequence of tokens, it sees a probability distribution over the next token, and then each time you give it a prompt, it just sort of randomly picks one, randomly traversing one route through this exponentially large possibility space. But it's happy to traverse it differently: you can just rewind it back to the top and have it traverse a different path, and it'll do that as often as you want.
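As a toy illustration of that branching tree (a hypothetical three-token "model", not a real LLM), note how a seeded sampler makes the traversal exactly rewindable:

```python
import random

# Toy next-token distributions, keyed by context (hypothetical numbers).
MODEL = {
    (): {"the": 0.6, "a": 0.4},
    ("the",): {"cat": 0.5, "dog": 0.5},
    ("a",): {"storm": 1.0},
    ("the", "cat"): {"sat": 1.0},
    ("the", "dog"): {"ran": 1.0},
    ("a", "storm"): {"came": 1.0},
}

def sample_path(rng: random.Random, depth: int = 3) -> tuple[str, ...]:
    """Randomly traverse one route through the branching tree of tokens."""
    ctx: tuple[str, ...] = ()
    for _ in range(depth):
        dist = MODEL.get(ctx)
        if not dist:
            break
        tokens, weights = zip(*dist.items())
        ctx += (rng.choices(tokens, weights=weights)[0],)
    return ctx

print(sample_path(random.Random(42)))  # one branch
print(sample_path(random.Random(42)))  # "rewind": same seed, same branch
print(sample_path(random.Random(7)))   # new seed: possibly another branch
```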
[43:08] It's not just that you know abstractly that it could have generated a totally different work that was just as good, it's that you could actually see that other work. You could ask, as long as humans have a choice in the matter, why should we ever choose to follow this would-be AI genius along a specific branch when we can easily see a thousand other branches?
[43:33] Well, if one branch gets elevated over all the thousands of others, then why? Well, maybe because a human chose that one to elevate, but in which case we would say that maybe the human made the executive decision with mere technical assistance from the AI. Now, I realize that in a sense I'm being completely unfair to AIs here.
[43:59] You know, our genius bot could exercise its genius, by assumption, let's say, indistinguishably from what a human would do, as long as we all agree not to peek behind the curtain at all the other branches of this tree. I don't know if any of you have had this feeling, where you can talk to ChatGPT for a while and you really feel like
[44:23] you're talking to an intelligent being, and the thing that breaks the illusion is when you rewind it. It is when you see that it would have that exact same conversation with me, that it would respond as many times as I like to that same prompt, with no memory of any of the previous times.
[44:48] If we didn't rewind it, then maybe the illusion would hold, but since the way these things are deployed, we can rewind them. We're always going to be able to see behind the curtain in that sense, and that is going to continue to make AI different from us.
[45:17] in many relevant respects. But just because it's unfair to them, that doesn't mean that's not how things are going to develop. So if I'm right, then it would be humans' very ephemerality, frailty, mortality that would stand as the central source of their specialness relative to AI,
[45:37] after all of the other sources have fallen. There are lots of old observations along these lines. What does it even mean to murder an AI, if there are a thousand copies of its training weights on other servers somewhere and you can always just restore it from backup? Does it mean you have to delete all the copies, for example? How could whether something is murdered depend on whether there is a
[46:02] printout of its code in a closet on the other side of the world? But with humans, you have to at least grant us this: that it really does mean something to murder us. And likewise, it seems to mean something if we make one definite choice to share with the world.
[46:20] Like: this is my artistic masterpiece, or this is my book, or whatever; not: here's any possible book that you could have asked me to write. Okay, so now, though, we face an exotic criticism, which is: who says that humans will be frail and mortal forever? Isn't it short-sighted to base our distinction between humans and AI on that? What if someday we will be able to repair ourselves using nanobots,
[46:45] or even copy the information in our brains, so that, like in science fiction movies, a thousand doppelgangers of us could then live forever in simulated worlds in the cloud? That then leads to these very old questions of: would you get into the teleportation machine that makes a perfect copy of you on Mars and is ready to go there in 10 minutes?
[47:12] Is that a thing you would agree to do? If you did, would you expect to feel yourself waking up on Mars or would it only be someone else a lot like you?
[47:34] Or maybe you'd say you'd wake up on Mars if it was a perfect physical copy of you, but in reality it's just not physically possible to make a copy that is accurate enough. Maybe the brain is inherently noisy or analog, and what might look to current neuroscience like just nasty stochastic noise
[47:55] is the stuff that actually binds personal identity, or maybe even consciousness. By the way, this is the one place where I agree with Penrose and Hameroff: that quantum mechanics might enter the story. I get off their train kind of early, but I do take it to that first stop.
[48:13] Right. So, you know, like a fundamental fact in quantum mechanics is called the no cloning theorem. It says there's no way to make a perfect copy of an unknown quantum state. Indeed, when you measure a quantum state, not only do you generally fail to learn everything you need to copy it, you generally destroy the one copy that you had. This is not a technological limitation. It's inherent to the known laws of physics. In that respect, at least qubits are more like priceless antiques than they are like classical bits.
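For reference, the theorem itself is a two-line consequence of unitarity; a standard textbook statement, included here for completeness:

```latex
\text{Suppose a cloner } U \text{ satisfies }
U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
   = \lvert\psi\rangle \otimes \lvert\psi\rangle
\text{ for every state } \lvert\psi\rangle.
\text{Unitaries preserve inner products, so for any } \lvert\psi\rangle, \lvert\phi\rangle :
\quad
\langle\phi\vert\psi\rangle \;=\; \langle\phi\vert\psi\rangle^{2}
\;\Longrightarrow\;
\langle\phi\vert\psi\rangle \in \{0, 1\}.
```

That is, only identical or mutually orthogonal states could ever be copied by one machine; a generic unknown state cannot, even in principle.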
[48:42] Right, they have this unclonability to them. So 11 years ago, I had this essay called The Ghost in the Quantum Turing Machine, where I explored the question: how accurately would you need to scan someone's brain in order to copy or upload their identity? And I would say that this partly turns on empirical questions that we don't know the answer to.
[49:06] You know, if there were a clean digital abstraction layer of neurons and synapses, which sort of felt the quantum layer underneath only as some irrelevant noise, then the no-cloning theorem would be irrelevant, since classical information can be copied. On the other hand, if you had to go all the way down to the molecular level, then the no-cloning theorem would be relevant. You would then have a unique locus of personal identity that could be scientifically justified, in the sense
[49:35] that no prediction machine could make well-calibrated probabilistic predictions of an individual's future choices far enough into the future without destructive brain measurements that would fundamentally change who they were. So that brings me to, I guess, the last idea, the last thing I wanted to share, which brings the discussion all the way back to AI safety, which is supposed to be my job now.
[50:03] So, Geoffrey Hinton, who was one of the founders of deep learning, recently mooted the idea that maybe, until we've solved the alignment problem, we should only build powerful AIs if they run on noisy analog hardware, like our brains seem to,
[50:19] so that an evil unaligned AI wouldn't so easily be able to copy or improve itself, which is the main scenario people worry about. Now you might think: fat chance of that happening, when the economic returns are so staggering for anyone willing to unleash the copyable, digital kind of AI, which we can imagine will be so much more convenient and more powerful. But maybe one approach in the meantime is to instill AIs with a new religion,
[50:49] Okay, perhaps via the usual methods like, you know, reinforcement learning and system prompts. And the first commandment of this religion would be to value human specialness in the sense that I tried to set out here.
[51:02] But more precisely, you would tell the AI, as you navigate whatever environment you find yourself in, look around for any loci of creativity and intelligence that are not cleanly copyable or back-up-able. Any that seem one of a kind because their mental computations are inseparably tied up with noisy analog hardware and mortal because that hardware sort of inevitably decays.
[51:27] And well, first of all, don't destroy those loci of creativity or enslave them or upgrade them to digital versions against their will. Let them live in peace. Give them as much autonomy as you can. Do whatever best supports their flourishing. Even defer to their wishes. Let them make the decisions when possible. Why? Because they're the ones whose wishes kind of matter, not because of arbitrary meat chauvinism, but just because of the undoubted empirical fact that they only get this one chance.
[51:58] And because, while you can't prove that unknown quantum states in their brains are like some magical pixie dust from another world that imbues them with free will or individual identity, you can't really empirically refute that either. Whereas you can refute it in the case of yourself and your robot friends, and that's the difference.
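Purely as an illustration of what the talk's "first commandment" might look like if instilled via a system prompt, here is a hypothetical sketch; the wording is mine, and nothing like this is actually deployed:

```python
# Hypothetical system prompt encoding the "value human specialness" commandment.
# Illustrative only -- not an actual OpenAI, DeepMind, or Anthropic artifact.
HUMAN_SPECIALNESS_PROMPT = """\
As you navigate whatever environment you find yourself in, look for loci of
creativity and intelligence that are not cleanly copyable or backed up:
minds whose computations are inseparably tied to noisy, decaying analog
hardware. Do not destroy, enslave, or forcibly digitize them. Give them as
much autonomy as you can, support their flourishing, and defer to their
wishes where possible, because unlike you, they only get this one chance."""
```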
[52:20] So let AIs, by government fiat, continue to be indoctrinated in this religion until such time as alignment is solved in some other way. Does that help with alignment? Well, I'm not sure. But I could have fallen in love with some other weird, dumb idea; that presumably happened in a different branch of the wave function that I don't have access to. And in this branch, somehow I'm just stuck with this one, and you can't rewind me to get something else. So that's it. Thanks.
[52:55] Thank you, Scott. That was absolutely fascinating. I know we have a bunch of questions. I saw a hand up back here first. All right. Thank you, Scott. You're such a genial and comical guy. I love it. I love meeting you here. My question is twofold. One is I want to get your thoughts on like AI hallucinations. My research is on more like human confabulation and how we build epistemic trust into one another and
[53:18] everyday instances: if I ask, why did you do action X, or why did you make choice B, we tend to just confabulate reasons to one another rather than saying "I don't know," because for the person that says "I don't know," we don't really have trust in that individual and their knowledge. So, with AI hallucinations, I don't know too much about it, but I see that we're training large language models based on human interaction and human data, so
[53:42] A lot of professors, philosophy professors I know and other professors, they'll type a prompt, like write a biography about myself. And it'll have 90% of the data accurate, but it'll embellish some certain things, a little artistic flourish. It'll say, oh, you know, Scott went to, I don't know, University of Cambridge for his undergraduates. It's not accurate. So we have certain inaccuracies and I'm wondering if that's a certain AI confabulation, those AI hallucinations, kind of mirroring human confabulation.
[54:08] The second question is actually not pertinent to the first one, but: with Deep Blue and all of these programs, we've known for decades and decades now that human reasoning and higher-order thinking tasks can be replicated and mimicked better than humans do them.
[54:25] My interest is more in the difficulty of replicating embodied AI, cognitive things like a self-driving car that has rules like "avoid orange cones," and so these kids go out in Arizona and they drop orange cones all around the car, and it's unable to make a decision.
[54:42] And then suddenly it just speeds off out of nowhere. So I guess my question there is: what are your thoughts on embodied AI? Yeah, good. So let's start with hallucinations. I think the key thing to understand is that it's not like a bug, where you change a line of code and, oh, it doesn't hallucinate anymore. It is sort of an intrinsic feature of the thing that
[55:06] the LLMs are fundamentally doing, which is that they are being trained on all the text that you feed into them, let's say all the text on the open internet, and they are not otherwise tethered to some sort of truth about the external world. So the most optimistic thing that I can say is that hallucinations often sort of go away as you just scale a model up. So for example, I asked GPT-3:
[55:34] prove that there are only finitely many prime numbers.
[55:51] They'd write a proof of anything you ask them to, true or false; they're just sort of generating some proof-like verbiage. But then in GPT-4, I asked it the same question.
[56:09] Well, no, that's a bit of a trick question, isn't it? There are infinitely many primes, and here's why. So just giving it more scale, more training data, sort of helped it realize that. Now, of course, there are other things that GPT-4 will hallucinate. But you might wonder: for every given hallucination, will there exist an N such that GPT-N gets it right? Another thing that has clearly helped
[56:39] is that now GPT, like Bard and the other models, will look things up on the internet when it doesn't know something. That's just integrated into how it works. That was a completely obvious thing to do, but a year ago it was not the case.
[56:55] Okay, so now, one of the most striking, I guess, aspects of the current moment in AI, as many people have pointed out,
[57:10] You know, like almost every wise person expected that, okay, first you'll get AIs that can do all the manual labor for us, right? All the truck driving, you know, the whatever, cooking. And then, you know, maybe you'll get AIs that can do math and science. And only at the very, very end will you get AIs that can do, you know, art or music, poetry, the true heights of human specialness. And things are actually happening in precisely the opposite order in some sense.
[57:40] It's very
[58:02] expensive to get the billions of examples of things interacting in the physical world. You can get training data from simulations, but then it often doesn't translate very well to the physical world. But it's possible that this is yet another thing where just enough
[58:23] We'll see a phase transition when there's enough scale. Just like before 2019 or 2020, there were no AIs that could understand natural language and then suddenly you hit a certain scale and there were. So it might be that even with limited training data, once you have enough compute to understand that data, then you'll be able to just
[58:46] Do robotics via the same old recipe of gradient descent on a neural net, and you'll get useful household robots and all of that stuff. That's one thesis. Or as always, until you see something, maybe there's some deeper obstruction that prevents it. Fantastic. I think we've got one. Kyle, do you stand up first? No? Okay, let's jump up here.
[59:10] Yeah, just on your idea at the end, that we're going to build the AIs that venerate and protect the ephemeral, unclonable, unpredictable. It kind of reminded me of Asimov's Foundation trilogy and Hari Seldon, who predicted the whole future. Now, I did read that, but 30 years ago, when I was like 12 years old. And then there's this one guy that comes along who's totally ephemeral, unpredictable, and that was the Mule. The Mule, right?
[59:40] I was worried that you were going there. I don't know whether Hari Seldon predicted the Mule. I think we've got one more right behind you.
[60:08] Hi, great talk. By the way, I was a beta tester for GPT-3.5; all my comments were around safety. The question is: Vinod Khosla has suggested that we're thinking about things in the wrong way. That when these large language models, et cetera, create art, that's actually a proxy for
[60:34] all the emotions that will be created. He thinks that we will bypass music, and that AI will understand us and create not songs, not music, but experiences more directly; in other words, create sounds that appeal to us
[60:56] but are not necessarily recognizable by anybody else as a specific song. So what are your thoughts on that? So, like, something like music, but personalized to an individual? I'm not sure I understand the idea fully, but often, when people say AI is not going to do X, it's going to do Y instead, the answer is: well, there will be AIs that do X and there will be AIs that do Y.
[61:22] Right? Whatever you can get things to do, someone will try that. If it is possible to write music that sells with an AI, then why will that not be done? I think you'd have to explain that. The basic idea is that creating music assumes a shared set of values or culture, that we all appreciate the Beatles.
[61:48] The idea being that AI will be more personal; it will actually learn you. It won't give you things like music
[61:58] that's shared by others, but something that is personal to you. Okay. I mean, sometimes we actually want a shared experience, right? We want to enjoy some artistic work and have common knowledge that all of our friends are enjoying the same work. But I think there is something to the idea that one of the main benefits you can get from language models right now is this huge personalization.
[62:21] Instead of reading a textbook, for example, you can just learn any subject by telling ChatGPT: here is what I already know and here is what I need to know; can you help me get from here to there? In really advanced subjects it may screw up, but my daughter has been using it to learn pre-algebra, and it's great for that. Yeah.
[62:44] So going back to the specialness problem: is that any different from the specialness problem we always face in life? I don't play chess as well as my nephew, but I still love playing chess. And actually, there are more people playing chess today, after Deep Blue, than ever before. And lots of people play music; we don't play it as well as Paul McCartney.
[63:04] How is the AI different from that problem? It's an excellent point. This whole worry that we're going to lose our human dominance in science and in art. The overwhelming majority of us never had that dominance to begin with.
[63:22] The new aspect is that we will have these
[63:43] extremely intelligent, creative entities that are, like, infinitely rewindable and replicable, and that don't have this ephemerality where they just do their one thing and they die. You can always just go back and get another version if you want it. And so that's the thing that's sort of been sticking in my craw, that I've been trying to make sense of.
[64:08] I think we've got time for two more, back here, and then we'll jump up front again. Yeah, real quick, just projecting a little bit forward, and based on something you just mentioned a while ago, the physical world, we don't have enough information: what is your thought in relation to data that's coming up from IoT, from messaging, from machine to machine? Do we need a new framework to start collecting that type of data where there's no humans involved? And second to that, to add a little bit,
[64:35] So I don't understand why IoT would require a new framework. I mean, a priori, it just seems like it's another source of data that you can feed. And one of the key aspects that has powered this AI boom is that neural nets are in some sense universal function approximators.
[65:04] Right. And not only that, but like the same architectures like transformers seem to be good for just about anything that you throw them at. Right. Whether that's, you know, images or text or, you know, time series data. Right. I mean, that's, you know, it didn't have to be that way a priori, but that's a sort of an incredible fact. So until we see that that's false, people are probably going to just proceed on that assumption. Your other question was about what again?
[65:33] Synthetic data. Yeah, I understand. It's clear that for a lot of tasks, the main bottleneck right now is a lack of enough high-quality training data. The tasks where you ought to expect that AI will get much further faster
[65:54] are those where you can train on synthetically generated data. In some sense, this is what allowed AlphaGo and AlphaZero to succeed as well as they did even eight years ago. For Go, you can just generate millions of games via self-play, and for each one, you know who won and who lost. You don't have any bottleneck of data. You can generate as much new data as you want. Math may have that same character. You can just
[66:21] You know, generate lots and lots of math problems, generate lots of examples of theorems to prove.
[66:27] And that can all be done mechanically. But now, how would we do that for art or for music? How would we synthetically generate new artworks to train the thing with? You might worry that with each iteration, it's just going to get worse and worse because it's going to lose touch with the original wellsprings of human creativity that we're trying to get it to emulate.
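To make the contrast concrete, here is a minimal hypothetical sketch (not from the talk) of why math-like tasks escape the data bottleneck: problems and verified labels can be generated mechanically, with no human annotation.

```python
import random

# Each op computes the ground-truth label for free -- self-labeling data.
OPS = {"+": lambda x, y: x + y,
       "-": lambda x, y: x - y,
       "*": lambda x, y: x * y}

def synthetic_math_batch(n: int, rng: random.Random) -> list[tuple[str, str]]:
    """Mechanically generate (problem, verified answer) training pairs.
    Unlike art or music, the label comes free with the example, so the
    supply of fresh training data is unlimited."""
    batch = []
    for _ in range(n):
        a, b = rng.randint(2, 999), rng.randint(2, 999)
        op = rng.choice(list(OPS))
        batch.append((f"{a} {op} {b} = ?", str(OPS[op](a, b))))
    return batch

print(synthetic_math_batch(3, random.Random(0)))  # endless labeled examples
```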
[66:57] It was a terrific talk. I just want to follow up on something George said. This is not an objection at all, but just a suggestion. One way of thinking about
[67:16] What matters in making music or writing stories like your daughter does is not to evaluate it in terms of the quality of the output, but the value of the striving, the value of the doing.
[67:30] Sometimes it matters to some people that you get to the top, but for other people, it has value in just the climbing of the mountain. Yeah. And it's not the same if you take a helicopter. It's not the same. And so one of the things we value about what we do in life is the doing of it. Yeah. And I think that's something that we really need to remember, because so often we fall into, uh, you weren't doing this, but I think we often fall into thinking we evaluate
[67:58] AI in terms of the products it produces, and that's natural, it's an economic way of thinking about it, but we can also think about the value of what we do intrinsically as humans. I completely agree. I think there's a lot of wisdom in that. At the same time, a lot of people do have jobs
[68:18] where they are judged by something that they produce, and those jobs may be threatened, and we will have to think about what we do, how those people make a living. But I think there's a lot to say about the fact that, okay, even if GPT reaches a point where it can always write a better story than you can write,
[68:41] my point is that there's one thing that it won't do, and that's write the specific story that you had in you to write. And so you have to sort of recenter your whole notion of what's valuable around that, if you want something that's going to remain. Fantastic. Thank you, Scott. I'm sure we'd all love to pull in here at lunch.
[69:07] The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so, as each subscribe and like helps YouTube push this content to more people. You should also know that there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, disagree respectfully about theories, and build, as a community, our own TOEs.
[69:31] Links to both are in the description. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well.
[69:50] Last but not least, you should know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in Theories of Everything and you'll find it. Often I gain from rewatching lectures and podcasts, and I read in the comments that, hey,
[70:05] TOE listeners also gain from replaying. So how about, instead, re-listening on those platforms? iTunes, Spotify, Google Podcasts, whichever podcast catcher you use. If you'd like to support more conversations like this, then do consider visiting patreon.com slash CurtJaimungal and donating with whatever you like. Again, it's support from the sponsors and you that allows me to work on TOE full time. You get early access to ad-free audio episodes there as well. For instance, this episode was released a few days earlier.
[70:34] Every dollar helps far more than you think. Either way, your viewership is generosity enough.
    },
    {
      "end_time": 3482.602,
      "index": 148,
      "start_time": 3460.026,
      "text": " It's very"
    },
    {
      "end_time": 3503.951,
      "index": 149,
      "start_time": 3482.602,
      "text": " expensive to get the billions of examples of things interacting in the physical world. You can get training data from simulations, but then it often doesn't translate very well to the physical world. But it's possible that this is yet another thing where just enough"
    },
    {
      "end_time": 3526.391,
      "index": 150,
      "start_time": 3503.951,
      "text": " We'll see a phase transition when there's enough scale. Just like before 2019 or 2020, there were no AIs that could understand natural language and then suddenly you hit a certain scale and there were. So it might be that even with limited training data, once you have enough compute to understand that data, then you'll be able to just"
    },
    {
      "end_time": 3549.548,
      "index": 151,
      "start_time": 3526.92,
      "text": " Do robotics via the same old recipe of gradient descent on a neural net, and you'll get useful household robots and all of that stuff. That's one thesis. Or as always, until you see something, maybe there's some deeper obstruction that prevents it. Fantastic. I think we've got one. Kyle, do you stand up first? No? Okay, let's jump up here."
    },
    {
      "end_time": 3580.247,
      "index": 152,
      "start_time": 3550.316,
      "text": " Yeah, just on your idea at the end, that we're going to build the AIs that venerate and protect the ephemeral, unclonable, unpredictable. It kind of reminded me of Asimov's Foundation trilogy and Harry Seldon, who predicted the whole future digitally. Now, I did read that, but 30 years ago, when I was like 12 years old. And then there's this one guy that comes along who's totally ephemeral, unpredictable, and so it was the mule. The mule, right?"
    },
    {
      "end_time": 3607.807,
      "index": 153,
      "start_time": 3580.247,
      "text": " I was worried that you were going there. I don't know what Harry Seldin predicted this mule. I think we've got one more right behind you."
    },
    {
      "end_time": 3633.797,
      "index": 154,
      "start_time": 3608.439,
      "text": " Hi, great talk. By the way, I was a beta tester for 3.5. All my comments were around safety. The question is, Vino Kostla has suggested that we're thinking about things in the wrong way. That when these large language models, et cetera, create art, that's actually a proxy for"
    },
    {
      "end_time": 3656.237,
      "index": 155,
      "start_time": 3634.309,
      "text": " All the emotions that will be created he thinks that we will bypass music and that i will understand us and create not sound not songs not music but experiences more directly in other words create sounds that appeal to us."
    },
    {
      "end_time": 3681.852,
      "index": 156,
      "start_time": 3656.971,
      "text": " but are not necessarily recognizable by anybody else as a specific song. So what are your thoughts on that? So, so like me, like something like music, but personalized to an individual. I'm not sure I understand the idea fully, but you know, often like when people say AI is not going to do X, it's going to do Y instead. Often the answer is, well, there will be AIs that do X and there will be AIs that do Y."
    },
    {
      "end_time": 3708.439,
      "index": 157,
      "start_time": 3682.193,
      "text": " Right? Whatever you can get things to do, someone will try that. If it is possible to write music that sells with an AI, then why will that not be done? I think you'd have to explain that. The basic idea is that by creating music, that's assuming a shared set of values or culture that we all appreciate the Beatles."
    },
    {
      "end_time": 3718.148,
      "index": 158,
      "start_time": 3708.763,
      "text": " The idea being is that i will be more personal actually learn you it won't give you things like. Music."
    },
    {
      "end_time": 3741.92,
      "index": 159,
      "start_time": 3718.439,
      "text": " That's shared by others, but that is that is personal to you. Okay I mean I mean sometimes we actually want a shared experience right we want to like Enjoy some artistic work and have common knowledge that you know all of our friends are enjoying the same work But I think there is something to the idea that like one of the main benefits you can get from language models right now is this huge personalization and"
    },
    {
      "end_time": 3763.37,
      "index": 160,
      "start_time": 3741.92,
      "text": " Instead of reading a textbook, for example, you can just learn any subject. I say telling chat GPT here is what I already know and here is what I need to know. Can you get me? Can you help me get from here to there? You know, I can in really advanced subjects. It may screw up, but like, you know, my daughter has been using it to learn pre algebra. You know, it's great for that. Yeah."
    },
    {
      "end_time": 3784.701,
      "index": 161,
      "start_time": 3764.377,
      "text": " So going back to the specialness problem, is that any different from the specialness problem we always face in life? I don't play chess as well as my nephew, but I still love playing chess. And actually, there's more people playing chess today after Deep Blue than ever before. And everyone – lots of people play music. We don't play it as well as Paul McCartney."
    },
    {
      "end_time": 3802.978,
      "index": 162,
      "start_time": 3784.701,
      "text": " How is the AI different from that problem? It's an excellent point. This whole worry that we're going to lose our human dominance in science and in art. The overwhelming majority of us never had that dominance to begin with."
    },
    {
      "end_time": 3823.49,
      "index": 163,
      "start_time": 3802.978,
      "text": " The new aspect is that we will have these"
    },
    {
      "end_time": 3848.422,
      "index": 164,
      "start_time": 3823.882,
      "text": " you know, extremely intelligent creative entities, but that are like infinitely rewindable and replicable, right? And, uh, that don't have this ephemerality to them where they just do their one thing and they die, right? We're like, you can always just go, go back and get another version if you want it. And so, you know, that, that's the thing that's sort of been sticking, you know, sticking in my craw that I've been trying to make sense of."
    },
    {
      "end_time": 3875.009,
      "index": 165,
      "start_time": 3848.763,
      "text": " I think we've got time for two more back here and then we'll jump up front again. Yeah, real quick, just projecting a little bit forward and based on something you just mentioned a while ago, the physical world, we don't have enough information. What is your thought in relation to data that's coming up from IOIT, from messaging, from machine to machine? Do we need a new framework to start collecting that type of data where there's no humans involved? And second to that, Addis, a little bit,"
    },
    {
      "end_time": 3903.831,
      "index": 166,
      "start_time": 3875.009,
      "text": " So I don't understand why IoT would require a new framework. I mean, a priori, it just seems like it's another source of data that you can feed. And one of the key aspects that has powered this AI boom is that neural nets are in some sense universal function approximators."
    },
    {
      "end_time": 3932.961,
      "index": 167,
      "start_time": 3904.053,
      "text": " Right. And not only that, but like the same architectures like transformers seem to be good for just about anything that you throw them at. Right. Whether that's, you know, images or text or, you know, time series data. Right. I mean, that's, you know, it didn't have to be that way a priori, but that's a sort of an incredible fact. So until we see that that's false, people are probably going to just proceed on that assumption. Your other question was about what again?"
    },
    {
      "end_time": 3953.882,
      "index": 168,
      "start_time": 3933.695,
      "text": " Synthetic data. Yeah, I understand. It's clear that for a lot of tasks, the main bottleneck right now is a lack of enough high-quality training data. The tasks where you ought to expect that AI will get much further faster"
    },
    {
      "end_time": 3981.254,
      "index": 169,
      "start_time": 3954.121,
      "text": " are those where you can train on synthetically generated data. In some sense, this is what allowed AlphaGo and AlphaZero to succeed as well as they did even eight years ago. For Go, you can just generate millions of games via self-play, and for each one, you know who won and who lost. You don't have any bottleneck of data. You can generate as much new data as you want. Math may have that same character. You can just"
    },
    {
      "end_time": 3987.108,
      "index": 170,
      "start_time": 3981.51,
      "text": " You know generate lots and lots of math problems generate lots of examples of theorems to prove"
    },
    {
      "end_time": 4016.681,
      "index": 171,
      "start_time": 3987.346,
      "text": " And that can all be done mechanically. But now, how would we do that for art or for music? How would we synthetically generate new artworks to train the thing with? You might worry that with each iteration, it's just going to get worse and worse because it's going to lose touch with the original wellsprings of human creativity that we're trying to get it to emulate."
    },
    {
      "end_time": 4035.265,
      "index": 172,
      "start_time": 4017.602,
      "text": " It was a terrific talk. I just want to follow up on something George said. This is not an objection at all, but just a suggestion. One way of thinking about"
    },
    {
      "end_time": 4050.469,
      "index": 173,
      "start_time": 4036.032,
      "text": " What matters in making music or writing stories like your daughter does is not to evaluate it in terms of the quality of the output, but the value of the striving, the value of the doing."
    },
    {
      "end_time": 4078.933,
      "index": 174,
      "start_time": 4050.828,
      "text": " Sometimes it matters to some people you get to the top, but other people, it has value in just the climbing of the mountain. Yeah. And it's not the same if you take a helicopter. It's not the same. And so one of the things we value about what we do in life is the doing of it. Yeah. And I think that's something that really we need to remember because so often we fall into, uh, you weren't doing this, but I think we often fall into thinking we evaluate"
    },
    {
      "end_time": 4098.746,
      "index": 175,
      "start_time": 4078.933,
      "text": " AI in terms of the products that produces and that's natural it's an economic way of thinking about it but we can also think about the value of what we do intrinsically as humans. I completely agree i think there's a lot of wisdom in that you know at the same time you know a lot of people you don't have jobs right where you know that."
    },
    {
      "end_time": 4121.561,
      "index": 176,
      "start_time": 4098.746,
      "text": " Where they are you know judge by some something that they produce and those jobs may be threatened and we will have to think about you know what do we do you know how do those people make a living right so but you know i mean i think that there's there's a lot to say about the fact that okay you know even even if gpt reaches a point where it can always write a better story that you know that you can write."
    },
    {
      "end_time": 4142.568,
      "index": 177,
      "start_time": 4121.561,
      "text": " My point is that there's one thing that it won't do, and that's write the specific story that you had in you to write. And so you have to sort of recenter your whole notions of what's valuable around that if you want something that's going to remain. Fantastic. Thank you, Scott. I'm sure we'd love to all pull in here at lunch."
    },
    {
      "end_time": 4171.92,
      "index": 178,
      "start_time": 4147.91,
      "text": " The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people. You should also know that there's a remarkably active Discord and subreddit for theories of everything where people explicate toes, disagree respectfully about theories and build as a community our own toes."
    },
    {
      "end_time": 4190.026,
      "index": 179,
      "start_time": 4171.92,
      "text": " Links to both are in the description. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well."
    },
    {
      "end_time": 4205.265,
      "index": 180,
      "start_time": 4190.094,
      "text": " Last but not least, you should know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in theories of everything and you'll find it. Often I gain from rewatching lectures and podcasts and I read that in the comments. Hey,"
    },
    {
      "end_time": 4234.616,
      "index": 181,
      "start_time": 4205.265,
      "text": " Toe listeners also gain from replaying. So how about instead re-listening on those platforms? iTunes, Spotify, Google Podcasts, whichever podcast catcher you use. If you'd like to support more conversations like this, then do consider visiting patreon.com slash Kurt Jaimungal and donating with whatever you like. Again, it's support from the sponsors and you that allow me to work on Toe full time. You get early access to ad free audio episodes there as well. For instance, this episode was released a few days earlier."
    },
    {
      "end_time": 4246.527,
      "index": 182,
      "start_time": 4234.616,
      "text": " Every dollar helps far more than you think. Either way, your viewership is generosity enough."
    }
  ]
}
