Theories of Everything with Curt Jaimungal

Gregory Chaitin: Complexity, Metabiology, Gödel, Cold Fusion, and What is Randomness?

August 28, 2023, 3:18:32


Transcript

[0:00] The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science they analyze.
[0:20] Culture, they analyze finance, economics, business, international affairs across every region. I'm particularly liking their new insider feature. It was just launched this month. It gives you, it gives me, a front row access to The Economist's internal editorial debates.
[0:36] Where senior editors argue through the news with world leaders and policy makers in twice weekly long format shows. Basically an extremely high quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines. As a toe listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
[1:06] There are two things that are absolutely true. Grandma loves you, and she would never say no to McDonald's. So treat yourself to a Grandma McFlurry with your order today. It's what Grandma would want. At participating McDonald's for a limited time. I think that biology is open-ended and endlessly creative. When you create a new level of reality, what goes on at the bottom level may not be visible at the top level, and you can start with many bottom levels and get to the same top level.
[1:36] Gregory Chaitin is a towering figure in the field of mathematical logic and complexity theory. Chaitin left formal education during high school, beginning his work in mathematical theory as a teenager. His contributions to algorithmic information theory include the development of Chaitin's incompleteness theorem, which builds on Gödel's incompleteness theorem. However, it uses fewer assumptions.
[1:58] Gödel's theorem requires the strength of arithmetic to prove that an infinite amount of mathematical facts can't be deduced from a finite set of axioms, or technically a recursively axiomatizable set, whereas Chaitin's approach reaches a similar conclusion though it relies on fewer assumptions and thus in some ways can be seen as more powerful. Chaitin is also famous for his constant, called Chaitin's constant. There's a visual math episode on Chaitin's constant and I'll link that in the description as well.
[2:25] And if you'd like a definition, it's written on screen. Again, links to everything will be in the description. It's defined to be the halting probability represented by the symbol Omega, capital Omega. So how can this be understood? It's defined as the probability that a randomly selected program will stop running or halt. Chaitin's career positions include being a researcher at IBM Watson, a professor at the Federal University of Rio de Janeiro, and a member of the Institute for Advanced Studies.
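For the transcript, the on-screen definition is presumably the standard one (this formula is supplied here for reference, not read out in the episode). With U a prefix-free universal computer, p ranging over programs, and |p| the length of p in bits:

$$\Omega \;=\; \sum_{p\,:\,U(p)\ \text{halts}} 2^{-|p|}$$

Each halting program p contributes 2^{-|p|}, which is exactly the probability of producing p by tossing a fair coin for each of its bits.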
[2:51] This episode delves into Chaitin's exploration of metabiology as well, which is a study intersecting biology and complexity theory. Physicists might discuss complexity theory in terms of chaos and randomness, but Chaitin's approach examines how randomness contributes to biological evolution using mathematical models and algorithmic mutations. You can think of metabiology as an abstracted theory of everything for biology.
[3:15] We also explore Chaitin's views on cold fusion developments and his critique of the sociology of science, or the culture of science, particularly with regard to its resistance to paradigm shifts. My name is Curt, and on this podcast we explore theories of everything, primarily from a physics perspective, that's my background, math and physics,
[3:34] but also delving into consciousness as well as artificial intelligence. If you like the subjects that are in this podcast, you should watch the David Wolpert podcast, which is also on algorithmic information theory. Enjoy this podcast with the inimitable Gregory Chaitin.
[3:49] Keep in mind, Gregory Chaitin did tell me that at times his children will be in the background and they may make noise, but luckily it doesn't impair the audio at all. It's a fantastic podcast. It's one of my favorite podcasts. I hope that you enjoy it just as much as I did. Professor, welcome to the theories of everything podcast. I'm super excited to have you on. I know we've spoken for I think a couple of months now, maybe even one year ago over email. So welcome. Pleasure to be here with you.
[4:19] Welcome. What is it that you've been working on? Yeah, well, my wife and I are about to go to Morocco. We're going to be in residence for one academic year at a new Institute for Advanced Study connected with a university, a new university called UM6P. And my project for there is to write a book. And I'm not sure what the title is.
[4:45] But the idea is to look at episodes of mathematics. Maybe a tentative title is Mathematics Evolving, but I'm not covering the evolution of mathematics, because that's an enormously broad subject. I'm trying to cover episodes in mathematics that have a big philosophical impact. For example, Cantor's theory of infinite sets, Gödel's incompleteness theorem, Turing's halting problem, the halting probability. And I also want to have a chapter on: is it possible to mathematize biology?
[5:15] Right. And also a chapter on why mathematics, you know, what makes you suspect that mathematics is relevant to the world. And here's my funny answer. The usual answer is, for example, astronomy. The Sumerians were very good at predicting lunar eclipses, for example. But I think in our everyday life, the best answer is a crystal. I collect
[5:44] Is Chaitin's incompleteness theorem going to make an appearance in it?
[6:13] Yeah, well, maybe not the one you're thinking of. I want the halting probability to have a chapter. You mean the theorem that you can't prove, you can't establish, upper bounds, lower bounds on the complexity of an individual object? That was the one that, was it David Wolpert, talked to you about? Oh, you watched that. That's great.
[6:39] Yeah, I saw that. Well, you made a clip of that part, right? Right, and then also Neil deGrasse Tyson. Yeah, I saw that he wasn't very receptive to discussing philosophy of mathematics. Well, he's an astronomer, right, or an astrophysicist. Correct. It's natural. Yeah. It's natural? Have you encountered people saying that you're too philosophical before? Well,
[7:06] I think the math community is not particularly philosophical. They want to solve important problems, develop their theories. I think Gödel's incompleteness theorem is largely forgotten. It was very shocking at the time. I, as a young student, read essays by Hermann Weyl and von Neumann. They were deeply, profoundly shocked by Gödel's incompleteness theorem. You know, people are good at suppressing unpleasant
[7:33] facts, or like thinking about death, so mathematicians don't think about Gödel's incompleteness theorem, and it doesn't seem that it has (I want to discuss that in my book maybe), it doesn't seem that it has a big impact on the work that most mathematicians are interested in. That's a controversial question, to what extent.
[7:59] I devoted my life to thinking about incompleteness and trying to find new reasons for incompleteness and strengthen incompleteness results. So I bet that maybe this was important. Maybe I bet on the wrong horse, but certainly I had a lot of fun. Can you explain Chaitin's incompleteness theorem, the one about the lower bounds, as well as what Chaitin's constant is, and if there's a relationship between the two?
[8:27] Well, there is, all of the proofs are related. I think the easiest incompleteness theorem to understand has to do with defining algorithmic randomness or algorithmic incompressibility of a finite string of bits. That's a string of bits that can't be produced from any computer program substantially smaller than it is in bits, right? So that's an algorithmically irreducible, finite string of bits.
[8:57] And when you develop the theory of program-size complexity and randomness of finite bit strings, it turns out that most bit strings have very close to the maximum possible complexity. Most of them are irreducible. Most of them are random in this sense. And then if you ask, well, what if I want to prove that a specific bit string, you know, you can just toss a coin, you get a bit string which with very high probability
[9:26] doesn't have any program substantially smaller than it is to calculate it. And you can even quantify that. You know, as you ask for the program to get smaller and smaller bit by bit, the number of bit strings that can be compressed by that many bits goes down exponentially. So it's heavily bunched at the maximum possible complexity. Most
[9:54] finite strings of bits, the n-bit strings, require programs very close to n bits. And as you ask for programs that are k bits smaller than the size of the string, it's roughly 2 to the minus k of all the possible n-bit strings that can be compressed by k bits. So as you make k bigger, the number of strings that can be compressed that much goes down very rapidly.
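To pin down the estimate being described (a standard counting bound, supplied for the transcript rather than quoted from the audio): there are fewer than 2^{n-k} programs shorter than n-k bits, and each computes at most one string, so if H(s) denotes the size in bits of the smallest program for s,

$$\frac{\#\{\, s \in \{0,1\}^n : H(s) < n-k \,\}}{2^n} \;<\; \frac{2^{n-k}}{2^n} \;=\; 2^{-k}.$$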
[10:23] So if you just toss a coin, say, a thousand times, you know that it's irreducible, and you can quantify exactly, you know, the margin of how close the complexity has to be to the maximum possible, right? The probabilities are easy to estimate. So what happens if you want to prove, you want to have a specific example, you want to prove that a specific bit string
[10:50] is algorithmically irreducible or close to it. It's an approximate notion. There's not a sharp cutoff. And so it's a little messy mathematically to talk about this. So the answer is you can't. The answer is if you have a formal axiomatic theory for mathematics that is in a certain sense n bits of algorithmic information,
[11:15] You're not going to be able to prove that a bit string that is more than n bits long is algorithmically irreducible, even though the probability that it is, is enormously high. So this is something that has an enormously high probability, and you can estimate, you can give good bounds on the probabilities, but it's something that is unprovable.
[11:39] So the number of bits of algorithmic information in a formal axiomatic theory, Hilbert thought there was a theory of everything for math, right? So this would have had a certain number of bits of algorithmic information. I can define that more precisely if you want. And it wouldn't be very large because the thought was that pure mathematics starts from a small group of axioms.
[12:06] If you're doing Peano arithmetic, it's a small group of axioms with symbolic logic. If you're using Zermelo-Frenkel set theory as your basis for mathematics, that would be another candidate for a theory of everything. It would be a little more complicated, but it's not very large because people don't believe in very complicated axioms. The basic principles have to be simple to be self-evident and convincing.
[12:33] Hilbert had hoped that there would be a theory of everything for all of pure math and that everyone could agree on it and this would give us absolute certainty because there would be a precise criterion for mathematical truth because if you have a formal axiomatic theory that contains all of math that everyone agrees on, if you have what you think is a proof formulated within this system in symbolic logic, there's an algorithm to check if the proof is correct or not, if it obeys the rules. So that becomes objective.
[13:02] And you can even imagine a program that runs through all possible, the tree of all possible proofs, producing all possible theorems. And that's Emil Post's way of looking at a formal axiomatic theory. As far as he's concerned, and as far as I'm concerned, it's just an algorithm for generating all the theorems you can prove, right? And it's an algorithm that goes on forever. And the precise definition of the algorithmic information content of a formal axiomatic theory
[13:31] is the size in bits of the program that runs through all possible proofs, checking which ones are correct, the tree of all possible deductions from the axioms, using the symbolic logic you're using, producing all the truths, all the provable theorems. You see, it's a calculation that never ends, and the program wouldn't be terribly long. So the moment you have a bit string that's generated at random by independent tosses of a fair coin,
[13:59] If it's substantially larger in bits than the algorithmic formulation, of the kind I just told you, of the formal axiomatic theory you're using to try to prove that this bit string is algorithmically irreducible, you can prove it's impossible. It's just impossible to prove. It's a very paradoxical kind of argument: with n bits of axioms you can't prove that anything
[14:25] needs substantially more than n bits to be calculated. You can't give individual examples. And you can prove that it's impossible? You can prove that the probability of it goes to zero? No, no, it's not the probability goes to zero. The probability is zero. You can't prove, you can't prove with a formal axiomatic theory that is n bits in the sense that I said that the algorithm that runs through all possible proofs, finding all the theorems,
[14:54] is an n-bit algorithm, then you can't give an individual example of any finite object that needs more than that number of bits to be calculated. So this is the theorem that David Wolpert mentioned. So these bounds are actually applied to all possible formal axiomatic theories for which there's an algorithm to run through all possible proofs and get all the theorems.
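As a concrete illustration of Post's picture of a theory as an algorithm that enumerates theorems, here is a minimal Python sketch. It uses Hofstadter's toy MIU string-rewriting system purely as a stand-in for a real proof system; the rules, function names, and the cutoff in the demo are illustrative choices, not anything from the conversation.

```python
from collections import deque

def miu_rules(s):
    """Yield every string derivable from s by one application of the
    toy MIU rewriting rules (standing in for real rules of inference)."""
    if s.endswith("I"):
        yield s + "U"                      # rule 1: xI  -> xIU
    if s.startswith("M"):
        yield s + s[1:]                    # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            yield s[:i] + s[i + 2:]        # rule 4: UU  -> (deleted)

def enumerate_theorems(axioms, rules):
    """Post's picture: a 'theory' is just this loop, going breadth-first
    through the tree of all possible derivations and emitting each
    theorem once.  In principle it runs forever; the caller decides
    when to stop looking."""
    seen = set(axioms)
    queue = deque(axioms)
    while queue:
        theorem = queue.popleft()
        yield theorem
        for consequence in rules(theorem):
            if consequence not in seen:
                seen.add(consequence)
                queue.append(consequence)

if __name__ == "__main__":
    gen = enumerate_theorems(["MI"], miu_rules)
    for _ in range(10):                    # print the first ten "theorems"
        print(next(gen))
```

The size in bits of a program like this, written for a real proof checker, is what Chaitin means by the algorithmic information content of the theory.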
[15:23] Gödel's original proof applies only to Peano arithmetic, you see. But already starting with Turing's paper of 1936, there's a more general approach that then Emil Post captured and worked on, and I'm following that approach. So these are pretty general incompleteness results.
[15:45] So in other words, if you want an individual, even though most bit strings are algorithmically irreducible, if the string is longer than the number of bits that it takes to formulate your axiomatic theory, you're never going to be able to prove that an individual string that has more bits than the number of bits in your theory is algorithmically irreducible. So your theorem doesn't rely on Peano arithmetic?
[16:14] No, it applies to any formal axiomatic theory that is algorithmic, for which there's an algorithm to check if a proof is correct or not, that it's a mechanical procedure to check if a proof is correct or not, which implies that there's a mechanical procedure to generate all the theorems, all the consequences of the axioms. It's very slow. This algorithm would be very slow because you're running through the tree of all possible deductions from the axioms, but it would include every
[16:43] result that can be proven within your formal axiomatic theory. And Hilbert thought there would be one for all of math that mathematicians could agree on, and that would be a very concrete proof that mathematics provides absolute certainty, that it's black or white, which is what most mathematicians assumed until Gödel. So this is a very general incompleteness result. Now, there are three sorts of formulations of this that are roughly equivalent.
[17:12] One is that if you toss a coin, an independent toss of a fair coin, a number of times that's substantially larger than the bits of axioms of your formal axiomatic theory, even though it's extremely likely that this will give you an algorithmically irreducible unstructured random string, you won't be able to prove it from those axioms because there's more information in the random string than there is in the axioms you're trying to use to prove that it's random.
[17:42] Now another version of the result is: you can't prove lower bounds. With an n-bit formal axiomatic theory, you can't exhibit any individual example, any particular finite object, which provably requires, to be calculated, a program that is substantially larger than the number of bits
[18:10] of axioms in your formal axiomatic theory. Now, I'm saying substantially, which is a little messy detail, but in the mathematics there's a sharper formulation. I like to talk about elegant programs. You're fixing the programming language. An elegant program is one that has the property that no smaller program in the same language produces the same output. That's the most concise program in that particular language.
[18:42] And here you get a very sharp result. If you want to prove that a program is elegant that has more than n bits, you need a formal axiomatic theory that has n bits. Otherwise, you can't prove it. You can't exhibit any object that provably requires more than n bits if you have an n-bit theory. There's a proof for this. I don't know if I should get into the proof.
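For reference, the sharp statement being alluded to is roughly the following (a paraphrase in standard notation, not a quotation; c is a constant depending on the programming language and on how the theory is encoded). If T is a formal axiomatic theory whose theorem-enumerating program is n bits long, then

$$T \vdash \text{``}p\ \text{is elegant''} \;\Longrightarrow\; |p| \;\le\; n + c,$$

so T can certify elegance only for programs not much longer than its own description.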
[19:10] No, don't worry about the proof. It's going to be difficult to convey with words. Okay. The proof is very simple, but I would need a blackboard probably to explain it better. This is a very simple result compared to Gödel's incompleteness theorem. The proof is really very, very simple. So these are three different versions of essentially the same idea, formulated in slightly different ways. The proof uses something called the Berry paradox.
[19:40] You see, because if I could prove that a specific object requires very big programs and I'm proving this from a smaller number of bits, I can run through all the proofs in your formal axiomatic theory until I find the first object that provably requires a lot more bits than in your theory. And then what's happened is I've calculated that object, which supposedly needs a very large program, but I've calculated from a much smaller number of bits, which is the number of bits in your theory.
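Here is that argument as a minimal sketch in code. The `H(...) > k` theorem format and the stub list of theorems are invented purely to make the sketch runnable; a real theorem enumerator, like the one sketched earlier, would be plugged in instead.

```python
import re

def berry_search(theorems, bound):
    """The 'paradoxical' program in Chaitin's argument: scan the theorems
    of a theory and return the first object it proves has complexity
    greater than `bound`.  If such a theorem existed, this short program
    (plus the theory) would itself have computed that object from far
    fewer bits than the theorem claims it needs -- a contradiction."""
    pattern = re.compile(r"H\((?P<obj>[01]+)\) > (?P<k>\d+)")  # toy theorem format
    for statement in theorems:
        match = pattern.fullmatch(statement)
        if match and int(match.group("k")) > bound:
            return match.group("obj")      # the object, computed "too cheaply"
    return None

# Stub stand-in for a real theorem enumerator, only to make the sketch run.
fake_theorems = ["0 = 0", "H(1011) > 2", "H(110010111011) > 1000"]
print(berry_search(fake_theorems, bound=100))  # prints 110010111011
```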
[20:11] You see, the proof is very, very simple. Now, as Grothendieck said, when you really understand something, the proof should be very short and very simple. So this, I think, is a different kind of incompleteness result than Gödel's. It's rather different in character. Now, starting from this,
[20:36] There is something called the halting probability. The omega number, it's a real number. It's a probability. It's between zero and one. You know, zero would be no computer program halts. You generate computer programs at random, and one would be every computer program halts. And of course, the answer is somewhere in the middle. And if you write this number in binary, it is algorithmically irreducible. All the initial segments have very high complexity.
[21:05] It looks totally unstructured. It's incompressible. It's a number defined in pure mathematics that really cannot be distinguished from independent tosses of a fair coin. So in the world of pure mathematics, where truth is assumed to be black or white,
[21:22] Hear that sound? That's the sweet sound of success with Shopify. Shopify is the all-encompassing commerce platform that's with you from the first flicker of an idea to the moment you realize you're running a global enterprise. Whether it's handcrafted jewelry or high-tech gadgets, Shopify supports you at every point of sale, both online and in person. They streamline the process with the Internet's best converting checkout, making it 36% more effective than other leading platforms.
[21:50] There's also something called Shopify Magic, your AI-powered assistant that's like an all-star team member working tirelessly behind the scenes. What I find fascinating about Shopify is how it scales with your ambition. No matter how big you want to grow, Shopify gives you everything you need to take control and take your business to the next level. Join the ranks of businesses in 175 countries that have made Shopify the backbone.
[22:16] of their commerce. Shopify, by the way, powers 10% of all e-commerce in the United States, including huge names like Allbirds, Rothy's, and Brooklyn. If you ever need help, their award-winning support is like having a mentor that's just a click away. Now, are you ready to start your own success story? Sign up for a $1 per month trial period at Shopify.com
[22:40] Go to Shopify.com slash theories now to grow your business no matter what stage you're in Shopify.com slash theories.
[22:53] Razor blades are like diving boards. The longer the board, the more the wobble, the more the wobble, the more nicks, cuts, scrapes. A bad shave isn't a blade problem, it's an extension problem. Henson is a family-owned aerospace parts manufacturer that's made parts for the International Space Station and the Mars Rover.
[23:11] Now they're bringing that precision engineering to your shaving experience. By using aerospace-grade CNC machines, Henson makes razors that extend less than the thickness of a human hair. The razor also has built-in channels that evacuates hair and cream, which make clogging virtually impossible. Henson Shaving wants to produce the best razors, not the best razor business, so that means no plastics, no subscriptions, no proprietary blades, and no planned obsolescence.
[23:40] It's also extremely affordable. The Henson razor works with the standard dual edge blades that give you that old school shave with the benefits of this new school tech. It's time to say no to subscriptions and yes to a razor that'll last you a lifetime. Visit hensonshaving.com slash everything.
[23:56] If you use that code, you'll get two years worth of blades for free. Just make sure to add them to the cart. Plus 100 free blades when you head to H E N S O N S H A V I N G dot com slash everything and use the code everything. You have something that is a very good simulation of randomness of independent tosses of a fair coin in pure mathematics. So this is a sort of a worst case.
[24:26] You see, the original idea of Hilbert was that he wanted a theory of everything for all of math that all mathematicians could agree on, because there were paradoxes and controversies about how to do math, and Hilbert proposed that formalism was a good way to clarify this.
[24:46] So his idea was that all of mathematical truth, all of the infinite, if there were a theory of everything for all of math, all of mathematical truth, this infinite amount of mathematical results in the platonic world of pure mathematics, could be compressed into a finite set of axioms and rules of inference. Right? Now the halting probability omega is the opposite extreme. It's this infinite amount of information. It's a real number with an infinite number of bits,
[25:16] which is incompressible. It looks totally unstructured. If you want to determine the first n bits of the numerical value of the halting probability, you need an n-bit theory. If you want a computer program that will output the first n bits of the halting probability, you need a computer program that is n bits long. And similarly, if you want to prove what the bits are, you would need n bits of axioms and rules of inference to be able to determine n bits of the halting probability.
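In standard notation, the property just described (with H the program-size complexity and c a constant depending on the choice of universal machine):

$$H(\Omega_1 \Omega_2 \cdots \Omega_n) \;\ge\; n - c \quad\text{for all } n,$$

that is, the first n bits of Ω cannot be generated, or proved, from substantially fewer than n bits.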
[25:43] So the halting probability is a sort of worst case. It's a case where mathematical truth is completely irreducible. So it's the opposite extreme from Hilbert's idea of a theory of everything, which would be a finite amount of information: a theory of everything would be a finite set of axioms and rules of inference that would give you all the infinite world of platonic ideas, all the provable results. And here is a case of an example
[26:14] which is the exact opposite, where you have an infinite amount of mathematical information that can't be compressed at all. You see? So this is sort of, as a physicist friend of mine told me, Karl Svozil, this is sort of a nightmare for the rational mind, because it's the opposite extreme from what Hilbert thought was going to be the case in pure math. It's completely irreducible, incompressible mathematical information.
[26:39] You see, because this halting probability can be defined more precisely, and then it's a specific real number, and its value is determined by its definition when you go through all the work. Have you heard of finitism? Finitism is the idea that the world of mathematics, you shouldn't allow infinite things in the world of mathematics, right? Yeah, well, I've heard of it.
[27:08] It's certainly a defensible point of view, but it destroys a lot of beautiful mathematics. Mathematics is, I don't know how to put it, it's sort of a fake world, maybe. Part of the problem with finitism is where do you draw the line? How big is too big? It's sort of where you put the wall. But you see, for example, there's a very simple proof that there are infinitely many prime numbers that goes back to
[27:38] Ancient Greece. And there you have infinity already, you know, zero, one, two, three, four, five. So mathematically, I agree that in our daily lives, we don't see infinity anywhere, right? Or maybe if you're religious, you think that God is infinite, but we're certainly not infinite, we human beings, but you can prove things about infinity in pure mathematics. That's part of the
[28:03] The beauty of it, that this is this platonic world of mathematical ideas, even the simplest math, which is just 0, 1, 2, 3, 4, 5 addition and multiplication, you know, when you talk about numbers that are composite or prime, you have some very simple, beautiful proofs. The standard example that G. H. Hardy gives in his lovely little book called A Mathematician's Apology. When I was a student, it seemed lovely to me. I was reading it not that long after Hardy had written it. He wrote it
[28:33] during the Second World War to cheer himself up. He was a pacifist. Well, you didn't have to be a pacifist to be very sad about what was going on in Europe during the Second World War. So he wrote this little book called The Mathematician's Apology. He's apologizing in the end a little aggressively for the fact that he likes pure mathematics, mathematics that has no applications. He's thinking of applications to war and destruction.
[28:59] He says one field of math which will never have its hands stained with blood is number theory, or arithmetic, where you talk about prime numbers, for example, and prove there are infinitely many. Historically, that hasn't quite been the case, because number theory is used in some schemes for cryptography, prime numbers and factorization. Probably Hardy would be unhappy about those practical applications of
[29:28] I'm sorry, that was the first World War or the second? This was the second World War. Yeah, because the Germans were already working on the Enigma machine at the time. Yeah, but nobody knew about it. Hardy was doing pure mathematics at Cambridge. He was, of course, very sad because his students mostly went to war, right?
[29:49] And many of them didn't come back from the war. I'm not sure if this is when he had Ramanujan there. Ramanujan was a foreign national, so he wouldn't have been called into the armed forces. And maybe some of the students at Cambridge would have been upset that he wasn't fighting for England. But at that time, India was part of the Commonwealth. So I'm not sure how this all plays out. Anyway,
[30:17] How is a theory of everything in physics different than a theory of everything in math? Well, a theory of everything in math, we know from Gödel in 1931, is impossible. Now in physics, I think that a lot of physicists still hope to have a theory of everything for the physical world. But if the physical world doesn't contain infinity,
[30:42] Then you don't get incompleteness results, for example. So Gödel's incompleteness theorem, as far as I can see, does not translate directly into saying there can't be a theory of everything for physics. For example, the halting problem, Turing's halting problem, which I love because my own work comes from Turing's work more than from Gödel's, is not realistic from the point of view of physics because you're talking about unbounded or infinite time of computation. You're asking if an algorithm will halt
[31:12] in a finite amount of time, but it can be an absolutely monstrous amount of time. So this is, I think, unrealistic from the point of view of physics. So these mathematical results, in my opinion, say that there is no theory of everything for pure math; I don't think they translate into saying there's no theory of everything for theoretical physics, unless you maintain that the world of mathematics is a subset of the
[31:43] physical world; there are some people who think that. Now, on the other hand, how could that argument be made? I could see the argument that physics is a subset of math. How would the other direction go? Well, the other direction would say, this is Pythagoreanism: God is a pure mathematician, and the ontological basis for the universe is pure mathematics. God is a pure mathematician and the world was created
[32:12] out of pure mathematics. See, there's a problem. If you're looking for the fundamental ontology of the universe, you know, it's good to pick something solid, not try to build the universe out of marshmallow, for example. So one thing that's pretty solid is the world of pure mathematics. If you're sticking, say, to the natural numbers, the whole numbers, addition, multiplication, that's a pretty sharp, crystal clear foundation, perhaps.
[32:41] So that's one possible point of view. I personally find a little more attractive a philosophical thesis that says that maybe God is a computer programmer, not a pure mathematician. So instead of saying all is number, God is a mathematician, I would say all is algorithm, God is a computer programmer. So that's a neo-Pythagorean ontology, but you know, it's really up to physicists to look and see how the physical world
[33:10] is built out of what? Sorry to interrupt. Would God still be a programmer if God had every possible program? Because then it's like a programmer selects from the set of all programs. But in Wolfram's view, there's something like the set, the Ruliad, or the space of all software, the space of all programs. Yeah. And he said, I don't understand the Ruliad very well, in spite of the fact that Stephen is a good friend of mine. He's a very
[33:39] deep thinker and a very sharp guy. I'm sure he knows what he's talking about. But if I understand him properly, he's saying there's no need to know the laws of physics for this particular universe, because all possible laws of physics are taking place at the same time. They're all entangled, and the observer selects. The observer has an illusion that some particular laws are the ones that apply. But I think his thesis is that the Ruliad is what really exists, and then
[34:08] You have all possible laws at the same time. This is an amazing new viewpoint that Stephen is working on with collaborators. But the traditional notion would be that God picks either a set of partial differential equations to create the world or he picks a particular software implementation that would generate the world. You know, Leibniz has a remark
[34:35] As God cogitates and calculates, so the world is made. This is not a brand new point of view. I guess you could claim it goes back to Pythagoras, at least, if not earlier. When Pythagoras said that the world is built out of mathematics, we didn't have modern physics, but there was already a lot of astronomical information.
[35:01] But the usual evidence for the idea that Pythagoras claims that the world is built out of mathematics, I believe, were musical instruments, for example, or the motions of the planets. I personally prefer crystals as the most obvious example of mathematical substructure of the world. And I've asked people, you know, is there a mention of crystals anywhere in the text that come to us from classical Greece, from ancient Greece?
[35:29] And I never got a clear answer. I think you can put it all on a DVD. But as far as I know, nobody made this argument. I imagine they had mineral crystals in Greece, or in what is now Turkey, but Magna Graecia was part of... Well, I believe that the Platonic solids that were studied historically, the one that was studied the least, or studied the latest, is related to the crystal that's the most rare. Oh, so the Greeks...
[35:59] connected the Platonic solids with mineral crystals. I don't know if that is the case, but we can use this historical fact to infer that... Okay, that's possible. Well, that would be good, but you would have to ask Stephen or one of his collaborators your question again, because... Right, right, right. I think his stuff is fascinating, but I have to confess I don't
[36:20] Fully understand it. Yeah. So there's a difference between a theory of everything and a delimiting theory of everything. And so a theory of everything is actually easy because you can say anything goes and there you are your theory of everything. Anything that can happen is going to happen, but it's difficult to have a delimiting theory of everything where you say, well, can we distinguish between what is and what isn't?
[36:43] Yeah. Well, if you believe in some kind of multiverse, then the laws of this particular universe are just our address, our postal address, in the multiverse. Oh, that's a great way of putting it. It begins to sound a little trivial, right? Unimportant. Yeah. Well, I personally feel more comfortable with a more traditional point of view, according to which the universe is built out of laws of physics, one particular set of laws of physics,
[37:13] but maybe it is plastic, and maybe it is anything goes; that would solve the problem of finding the laws of this universe. Well, it changes it. If I understand Stephen's notion of the observer, the observer somehow selects, depending on what the observer can observe, and that makes it look like a particular set of laws are operational, whereas he believes, I think, that the Ruliad
[37:41] is the fundamental ontology, which includes all possible time evolutions, all possible formal axiomatic theories. Also, by the way, Stephen has recently published a book on the physicalization of metamathematics, because in his Ruliad there isn't that much of a difference between an algorithmic world and a formal axiomatic theory. His approach is sufficiently general. It all sort of looks similar.
[38:11] You know, I told you this computer version of a formal axiomatic theory, as an algorithm that goes through the tree of all possible proofs, generating all the theorems. And this kind of thing is also included in the Ruliad, if I understand the Ruliad properly. So this is an exciting development. I've known Stephen for many years, and the fact that he's come up with digital models that he feels are
[38:38] of the physical universe is wonderful news, and he has a lot of young collaborators who are working with him on this. We'll have to see how it develops, but I don't understand it very well. Have you heard of David Wolpert's no free lunch theorems? And if so, do they have any relationship to yours? I've tried to take a look at that and I really haven't been able to get the point. I'm sure it's a good piece of work, but somehow it doesn't resonate with my own thinking.
[39:09] Yeah, I don't know for what reason, so I can't really think of anything to say on that topic. Wolpert is a deep thinker and he seems to be one of the few people who appreciates some of the work I've done on incompleteness, which is unusual. Most people don't like it, especially not logicians. Yes. OK, so the idea, in other words, is, as I said, Grothendieck somewhere says that if you really understand something,
[39:40] the proofs should all be short and sort of obvious in retrospect, when you really understand the subject. And he doesn't like proofs that require cleverness, you know, what is it called in French? Anyway, so there was a famous result he got that he didn't bother to publish because he thought the proof wasn't sufficiently natural. It was too long and it required some cleverness, instead of his usual
[40:08] So I would sort of agree, and I think the work I've done on incompleteness sort of hits you in the face. Because originally Gödel's work looked very complicated. It was sort of like general relativity; you could say that there were 10 people on planet Earth who understood it, right? It's a very technical, very brilliant piece of work. But following Turing and then Post, and introducing the idea of the size of programs, bits of information,
[40:40] I think incompleteness hits you in the face. So I believe this to be the right formulation for thinking about incompleteness. Now the logic community will disagree. They find the idea of randomness abhorrent, for example. I mean, this is a logician's worst nightmare come true. They believe that everything that's true is true for a reason, as Leibniz would say.
[41:07] I sympathize with their point of view, I understand. I work with concepts that really come out of theoretical physics. People never enjoy when a field gets invaded by alien concepts that they don't really feel comfortable with. So, during the course of my life, I had many invitations to physics meetings with very good physicists, including, for example, the Solvay physics conferences.
[41:33] in Belgium that originally included Einstein and Madame Curie and Poincaré. I was invited to two of them, which was a real treat. And you could see photographs of the older meetings with this very distinguished group of physicists that created the modern physics. Yeah. So physicists feel much more comfortable with my work because I'm taking the idea of randomness
[42:00] which is a fundamental idea in statistical physics. You know, now all physics is statistical physics. Yeah, this is what I was talking to Neil deGrasse Tyson about, that there may be a connection because you're dealing with randomness and randomness is inherent to quantum mechanics. At least we think so. Yeah, I think that even Stephen Hawking made a remark like that in The New Scientist once. It looks appealing. I don't know how to make a direct connection
[42:30] The ideas certainly seem to resonate with each other, right? There's an empathy with them, but I don't see how to take my work and make it into quantum mechanics. But I agree that what is emerging is a point of view. Well, I would say not just randomness. You see, I'm working on algorithmic information, which is a classical notion. A normal Turing machine is a classical device.
[42:57] And physicists are now more interested in quantum computers, right, and qubits, quantum information. And there's a serious attempt on the part of many physicists to build the world out of quantum information, for example, to get space-time out of quantum information. And those are two different notions of information. And then if you look at molecular biology, Sydney Brenner says somewhere, the way you get molecular biology is you say,
[43:26] I don't care about metabolism. I don't care where the cell gets its energy from. All I care about is information. This was the revolution, the change of viewpoint, the paradigm shift that gets to molecular biology. The energy will take care of itself. What's important is to look at how information is represented in the cell, how it acts, how the information goes around. This is also information. We have three different fields where
[43:56] some notion of information, but also there's computer technology, right, which clearly is software. So I see this notion of information as a fundamental new paradigm that goes across fields, that goes across fields in different versions in biology, molecular biology, in quantum mechanics now, which is built out of qubits. It's a whole different way of looking at quantum mechanics. When I was a student, it was the Schrodinger equation. You know, that's how you started
[44:26] Of course, in quantum mechanics, now it seems to me some people probably start with qubits, right? That's what everybody's interested in quantum computers. And of course, in algorithmic information theory, which is my hobby horse, you're using a notion of information to try to clarify or understand better incompleteness from 1931 and Gödel's work. And then of course, there's software, which is computer technologies built on
[46:40] Or is bits of information, then it wouldn't have a physical representation. But normally you'll say you need a physical representation for information in a computer or anywhere else, and then information isn't fundamental. Well, I don't know about that. It's true that a computer has to have a power supply, right?
[47:08] But to understand what a computer does, you really have to look at a different level. You have to look at the software. And the whole point of computers is to hide as much as possible the physical implementation, which changes every time there's a new generation of technology. But what doesn't change or changes much more slowly is the software environment. So it's really like a new emerging. I think it's a new emerging concept, even though some people will say, oh, there's no such thing as information.
[47:38] In the way that you're speaking about information, are you referring to it in a platonic sense or something different than that?
[47:58] No, I think I'm referring to it in a platonic sense. So when you were referring to Pythagoras' view of mathematics earlier, does that stand in contrast to Platonism or is it a subset of Platonism or it evolved into Platonism? Maybe it evolved into Platonism. I don't know exactly what Pythagoras said, but we attribute to Pythagoras the notion that God is a mathematician, that the world is built out of pure mathematics.
[48:28] But that's the ontological basis. This was something that people would say, I don't know, maybe 1900, 1910, 1920, 1930. Maybe nobody talks in terms of God anymore. I grew up reading all this stuff, you know, in the 1950s and early 60s, and you could still find essays which talked in these terms. Einstein is always talking about God, but Einstein's notion of God, as he says,
[48:55] is not a personal God who worries about individuals but is Spinoza's God that is the laws of the universe sort of. So this is a more abstract view of God. So if originally the Pythagorean point of view would be the world is built out of number, out of pure mathematics, or perhaps it's built out of partial differential equations, those happen more often in current
[49:25] There's a fascinating medallion that Leibniz designed that I think was never struck, actually.
[49:55] until Stephen Wolfram had it made and gave it to me as a 60th birthday present. There's a medallion that Leibniz wanted. His patron was a succession of dukes. Was it Brunswick? It was one of the German duchies right before the unification of Germany. Okay, Leibniz was interested in everything and he was exchanging mail with a Jesuit who was in Beijing.
[50:25] trying to see if they could convert the Chinese to Catholicism, or at least trying to see connections between Chinese philosophy and Western philosophy. The Jesuits, you know, are the intellectuals of the Catholic Church. They're all brilliant. There were many Jesuit astronomers. And so this correspondent, a Jesuit priest in Beijing, told Leibniz about the I Ching,
[50:55] which is used for divination. What is it? Is it five yes-or-no alternatives that you use? You toss a coin five times or something like that, and then you can make a divination. This is called the I Ching, I believe. And Leibniz immediately realized that you could represent numbers just in base two, in binary notation, which apparently some other people had also figured out. But Leibniz went a little further
[51:25] And he sort of, if I understand his medallion, he's sort of saying that you could create the whole world out of zero and one. And in our digital world, you are sort of creating the whole world out of zero and one. You know, you have video, you have audio, you have software, everything is zeros and ones underneath. And Leibniz goes a little further. And in the medallion, it says in Latin something like the one has created everything out of nothing.
[51:54] And it's a pun. The one is God and everything out of nothing is zero. So from one and zero, you create the world. So he has examples of binary arithmetic addition and multiplication. And there are also images of the sun, the moon, and maybe the Godhead with light streaming from it. So it does seem that this is the idea that perhaps the universe is actually just built out of zeros and ones. Now, this is a tremendous anticipation of our digital technology.
[52:24] You could say that instead of calling zero and one bits, you should call them Leibnizes, right? This is a very forceful statement of digital philosophy or, what is it called in France? Digital physics? No, you say numérique. I don't know. Well, anyway, so this is from the late 1600s. Leibniz had built a calculating machine and
[52:53] He had realized that it might be simpler if you wanted to do a hardware implementation of arithmetic using zeros and ones, binary, instead of using decimal, which of course we all know is what won, right? Modern computers internally are using bistable systems and they're using binary notation. So this seems to be the earliest statement I can think of that maybe the ontological basis of everything is
[53:23] While we're being metaphorical, have you heard the case made that not only is mathematics somehow ontological, but that beauty is somehow more ontological so that the more beautiful the math is, the more true it is?
[53:51] Because you can have inelegant math, like you mentioned with Grothendieck saying that a proof that's substantial isn't as beautiful as one that's compressed. Well, that's short. It has to be sort of obvious. Once you understand something properly, it should be almost obvious once you put it in the right context. You shouldn't need any tricks in the proof. There's a saying that your definitions should be such that your proofs are short. But anyway, about this beauty in mathematics, it's something I never thought about.
[54:21] Yeah, that's definitely true. Well, beauty is a beautiful concept, right? It certainly motivated me to study mathematics as a young student. But it's a very slippery notion, you know, if you apply it, for example, to sexual beauty, for example, beautiful females, or whatever it is, you find beautiful or beautiful mountains or beautiful landscapes.
[54:48] I don't know, beauty seems to be involved with biology, and it depends on the culture and it depends on the century you're in and what part of the world you've grown up in. I think the cultural context is important. I don't know, with mathematics maybe we pure mathematicians fool ourselves and think that we're somehow dealing more directly with beauty in its primordial form. But if you look at the history of mathematics, the kind of mathematics people
[55:19] do changes over the centuries. What Euler regarded as a proof would not be regarded as a proof nowadays, for example. I personally find Euler's papers very beautiful because he gives his whole train of thought. I struggled through one in Latin once, but he also has papers in French, which I can sort of read, mathematical French. And it's very beautiful. When you're reading Euler's papers, you think you could have done it.
[55:48] But only Euler could have done it. He presents the whole train of thought and it sort of drops into his hands very often. So I don't know. I think this notion of beauty is important. It certainly motivates. It's very important for motivation. Even if it's the case that what we think of as beautiful is predicated on our biology, you make the case. Or on fashion.
[56:11] Or on fashion. Well, you also make the case that the biology may be predicated on math, with metabiology, or on computer science, with algorithmic information. Can we talk about metabiology? Okay. Well, sure. Happy to talk about it. This is more tentative work that I haven't fully worked out, but some young researchers have picked up the torch. Maybe I should mention their names; otherwise maybe I'll forget to mention them. Hector Zenil is
[56:41] one person who's continuing to work in this area or related areas, and also Felipe Abrahão, who got a doctorate with me. He's my only doctoral student, in fact, in Brazil. Okay, well, the question that always bothered me philosophically is that, you know, the world of physics seems to be deeply mathematical, right? The Schrodinger equation, Einstein's field equations,
[57:11] It's deeply mathematical, mostly partial differential equations nowadays, I guess. Anyway, but biology is not that way, right? Biology is a million pound marshmallow. And obviously we're very interested in biology. We're biological organisms, right? And we're very interested in understanding biology better, maybe so we can cure diseases or prevent diseases, which would be even better. There is this thing that always bothered me.
[57:38] The pure math doesn't seem to enlighten us about biology as much as it enlightens us about the physical world. And there are some mathematical theories in biology, of course. There's what used to be called, what is it called? Biophysics? Game theory? No, yes, but evolution seems to be the most important idea in biology. The most central idea is Darwin's theory of evolution, right?
[58:05] The question is, how do you explain the diversity, well, the creation of life is important, but also how do you explain the diversity of life forms across this planet? And the explanation for that is supposed to be Darwin's theory of evolution or the modern versions of it, right, which combine Mendelian genetics and something called, I think it's called population genetics.
[58:31] The modern synthesis is based on three legs, right? One is Darwin's original theory, which people rejected initially. But then you combined it with Mendelian genetics, and then you combine it with some nice, elegant mathematics called population genetics, looking at how gene frequencies change in a population in response to selective pressures.
[58:57] This was sort of the three legs of what I think was called the modern synthesis. Now, modern synthesis, I don't know how modern it was. Maybe it was the 1930s. But this was the rebirth of Darwin's theory, which people thought was sort of ridiculous at first. They didn't like the idea that everything comes from randomness. You know, it had been God that had created organisms.
[59:25] And then all of a sudden God is replaced by randomness. And I think the initial reaction was largely negative. I don't know. I used to like reading plays by George Bernard Shaw, and he would comment about the reception of it, because he lived through Darwin's theory and the initial reactions to it. And he has a play called Back to Methuselah.
[59:50] Which is almost unstageable because it has five acts; it's like science fiction occurring over thousands of years, and it's talking about the evolution of the human race, going from the Garden of Eden to human beings that are godlike. And I don't know if this would be five hours or more to perform. It's been done very rarely. But I think it's based on his reading of Darwin's theory of evolution, right?
[60:17] which he talks about in an enormous prologue. His plays were often accompanied by prologues or epilogues that were as long as the play, which were very didactic and where Shaw was expressing his opinion on everything. Well, anyway, I'm sorry, I'm wandering around too much. So anyway, in my opinion, you know, there be, yeah, I'm sorry about this. There are physicists who try to understand life on the basis of energy and things like that, concepts that come from physics.
[60:47] And I think that just doesn't work. I think you need to look at a higher level of abstraction. And I think the first paper that I find on this topic of the mathematics of biology is a piece of work by von Neumann, which is not his self-reproducing computer in a cellular automata world, that's very well known; it's an earlier paper from, I don't remember, the 40s I think, a talk that he wrote up as a paper, and
[61:16] it very courageously says that there are natural automata and artificial automata. Artificial automata are computers, which were just being invented, and natural automata are biological organisms. It seems to me that what this paper is saying is that the fundamental concept of biology is software, and it's also the fundamental concept of computer technology. This idea of software is what explains the success of computers as a technology,
[61:46] But it's also what explains the plasticity of the biosphere, because it's also based on software. So the idea, my interpretation of von Neumann's paper from the 40s, is that nature discovered software before we did. This was before Watson and Crick, by the way, a little bit before, and it inspired
[62:11] Sydney Brenner. A lot of the people who created molecular biology, which says that information is the most important thing to understand in biology, were physicists who were inspired by a little book by Schrodinger called What is Life. Now, Sydney Brenner, whom I had the privilege of chatting with on a few occasions, shared an office with Crick at Cambridge. He's one of the creators of molecular biology, and he was inspired by von Neumann's paper. He was in South Africa, and his friend Seymour Papert
[62:40] was interested in computers, and Papert told Brenner, who was a chemist, about this paper of von Neumann's, and as a result of that Brenner decided to go to Cambridge eventually, and he ended up sharing an office with Crick. And Brenner is the gentleman who said, you know, to hell with metabolism, to hell with energy, we're going to create a new field by concentrating on information in the cell and in biology.
[63:07] You have to forget about it. So this is the fundamental idea, I think. And it's an idea that doesn't exist in physics, the idea of software. This is a new concept. And what I've tried to do in metabiology is go one step further. Okay. Now, then von Neumann comes up with this cellular automata model, a cellular automata world, and he constructs an organism that can replicate itself, right? But it can't move, by the way; in this world, you can't
[63:37] translate yourself. The only way to move is to make a copy of yourself and then make another copy. Yeah. So, now, when he was doing this, I keep saying, he was working for the U.S. Atomic Energy Commission. He had been involved in Los Alamos. Everybody was horrified by the atom bomb. There's a whole movie about this now, Oppenheimer, right? And so von Neumann was a member of the Atomic Energy Commission
[64:06] and was involved in decisions, I think, of horrible things like targeting, I don't know. So to keep himself sane, he did a big crossword puzzle. He's very bright, and he outlined this cellular automata world and a self-reproducing automaton. But I'm going to criticize one of my heroes. This was just a crossword puzzle he was doing, because the real work he was doing at that time was not mathematically significant and not up to
[64:33] von Neumann as a thinker. But it's the wrong level to think about biology mathematically. Because if you have to create a whole world that functions, and you have an organism that is worked out in all detail in this cellular automata world, that's too big a job. I mean, it may be fun to do a simulation, you know, of evolution in some toy world on a computer. But if you want to prove a theorem, that's too low a level. I'm interested in proving theorems.
[65:07] That's too low-level a formulation of biology, and you're never going to be able to come up with a whole physical world in which you can have an organism that you can prove is going to evolve. That's too big a problem, and it's like creating the whole world.
[65:32] So you have to think about this problem at the right level to be able to prove any theorems. So the way I thought might be a model, a simplified model of biology that would be mathematically amenable goes like this. If you say that DNA is the software of life and that the basic idea, mathematical idea of biology is software, as von Neumann says, I believe, and I agree with that,
[66:00] You know, it's a popular thing to say DNA is the software of life, right? Venter says that, for example, in his book. Okay. So if you take that idea seriously, it's going to be very tough to have a mathematical theory of evolution where you're dealing with DNA. DNA is a very complicated language; you know, to run a DNA program, you need a cell. You can maybe have systems biology, right, doing good simulations, but you're not going to be able to prove any theorems because the system is too complicated and messy. So,
[66:29] I said, instead of looking at natural software, which is DNA, and trying to look at how it evolves and see if you can prove theorems, I said, let's look at artificial software. Von Neumann talks about natural automata and artificial automata. Let's look at artificial software. Artificial software being any kind of software? It would be a computer program. And let's make random mutations in the computer program and see if we can
[66:56] prove theorems about the way it evolves. So that's the idea of metabiology. It's one step removed from normal biology, in an attempt to get something simple enough that you can actually prove theorems. So instead of subjecting natural software, which is DNA, to random mutations and trying to prove that evolution will occur under certain circumstances, my suggestion was to look at something one step removed, which is to take artificial software, which is
[67:25] some computer programming language, subject it to random mutations, put in selection somehow, and see if you can prove theorems about evolution taking place. And this is tractable mathematically. I have a sketch of a very simplified version of this where I can actually prove some theorems. So this is my suggestion for how to do a mathematical biology that gets at, maybe, the fundamental question.
[67:55] The ideal would be to prove that evolution can explain the diversity of the biosphere we see here. That would be wonderful, but I don't think it can be done. I don't think it's possible. I don't think mathematics can deal with a messy, complicated million pound marshmallow like that. But if you reformulate the problem, one step removed from biology and you're making changes, random changes in some computer programming language that you pick,
[68:23] And then you subject it somehow to selective pressure in a toy model where I don't have a whole physical universe, I don't have a cellular automata world. You know, I'm formulating the problem more abstractly. Then I can actually prove theorems. You get a little theory that I call metabiology. And I published a little book about this called Proving Darwin, which is not really what the book does. The subtitle is Making Biology Mathematical. Well, it's an attempt. It's an embryonic theory
[68:52] that I hope points in that direction. It's not an academic book. It's actually a maybe overly mathematically sophisticated popular science book, but the title had to be catchy in the hope of maybe selling copies. If it had been published as an academic book, it would have had a more tentative and more modest title.
[69:16] So metabiology is going along the lines of thinking, okay, it's messy to think about embodiment, let's uncouple that. It's messy to think about metabolism. It's messy to think about the environment. So there's this continual uncoupling in order to work with what's most general. However, evolution requires a fitness function. Yet survival of the fittest implies you're fit for something or fit relative to something. Right. Yeah. And metabiology has a fitness function.
[69:43] Well, first of all, another piece of simplification I do is that I don't deal with populations. I only have one organism at a time, and you subject it to a modification, and if it gets fitter, it replaces the original organism. It's simple. There's no population. There's just one organism. So it ends up being a random walk in software space. Well, it's a hill-climbing random walk in software space, because you have this one organism that's being mutated.
[70:08] And if the mutation made it fitter, then that becomes the organism. So that's a step in software space. So this is sort of a random walk in software space, which is a nice new phrase. I don't think that anybody ever thought of it. Okay. So software space, the space of all software, the space of all possible programs? Of all possible organisms, because the organisms are just software. Oh, they're only DNA in essence. There are no bodies, there are no populations. I'm trying to make this simple enough that I can actually
[70:38] prove theorems. And there is an environment, sort of. This model actually needs an oracle for the halting problem. This is the environment. This is where new information comes from. That corresponds to the environment, actually, in this toy model, and you can prove some theorems about the rate of evolution depending on how you evolve. So I look at three regimes in this toy world. Well, one of them is
[71:06] What is creationism called? Intelligent design. Intelligent design is if you always pick the best mutation, God knows how, that will have the organism improving fitness as quickly as possible. And that's not the real world, obviously. But that evolves very rapidly. So that shows you the most rapid evolution that's possible in this model. Then the worst case, the slowest way of evolving is if you have no memory and you're just
[71:36] You basically have to search through the entire space of all possible organisms. Well, in one case, you get n bits of information in time n, that's intelligent design, because each mutation is just the one you need to get one more bit of information from the environment. Now, if you're doing a random walk, you're doing exhaustive search in the space of all possible organisms,
[72:01] It's going to take you a time that grows exponentially to get n bits of information into your organism. It's going to be two to the n, on the order of two to the n. So in one case, it goes up linearly to get n bits of information into your organism. That's if somehow you picked out the best possible mutation. But if you pick mutations at random and you don't remember the previous organism, you just pick another organism. So that's exhaustive search, and that's the worst case, and it'll take you
[72:28] sort of the order of two to the n steps to get n bits of information in your organism. Now, how about real Darwinian evolution, where you're making a random change in an organism and then you stay with the new organism, then you make a random change in that. So that has memory. And how does that compare with the best case and the worst case? It gives you an idea. And the answer, surprisingly enough, is that to get n bits of information from an oracle for the halting problem,
[72:58] It takes time that grows as n squared. N to the third? Yeah, nearly n squared. Yeah, nearly n squared. Yeah, good guess. I think it's n to the 2 plus epsilon, actually. I think n to the third works, but actually it can be pretty close to n squared, which is pretty fast. It's not as fast as, yeah, now this is not a practical model, because I am using an oracle for the halting problem. This is where the new information comes from. This is the environment.
[73:28] Can you briefly explain what an oracle is? Oh, an oracle. This is a lesser-known masterpiece paper by Turing. Everyone knows the 1936-37 paper, right? Where the halting problem comes from, about computable numbers and the fact that most numbers are uncomputable. But there's a 1938 paper, which I don't find very interesting except for the fact that he introduces the idea of an oracle in this paper. So what an oracle is,
[73:57] is the idea of taking the halting problem. There's no algorithm to solve the halting problem, right? So what happens if you take a normal computer and you give it an oracle for the halting problem? That is to say, the computer has this other unit, this magical unit on the side. Whenever you want to, you can send it a program and ask it, does it halt or not? And the oracle will say yes or no. This is unphysical. This is unrealistic because this violates the halting problem.
[74:28] Yes, this is unrealistic physically. This is a mathematical fantasy of computers that are more powerful than ordinary computers. Is this called hypercomputation or is this unrelated? This may be. This may be what it is. And sorry to interrupt, but in this analogy with the intelligent design, is the God, quote unquote, being analogized to an oracle? No, no.
[74:54] Intelligent design? There is no intelligent design in my model. What it is, is what happens if, you see, normally the mutations are picked at random, right? That's the idea. But what happens if, instead of being at random, every time you pick a mutation to subject your software organism to, you pick the best mutation? In order to do that you need an oracle though, no? Yeah, you couldn't do that. It's not algorithmic.
[75:22] This is the fastest possible evolution in this model. That's a non-algorithmic step also in this model. In order to prove theorems here, I have a formulation which could not be simulated on a computer because it requires non-computable steps and it also requires an oracle for the halting problem.
[75:48] which is the environment you have in this model. The organisms are getting information from the environment, which is an oracle for the halting problem. So anyway, the idea of this hypercomputer, I guess maybe that's what it's called, is a normal Turing machine to which you add something that can't exist in this physical world, which would be a unit on the side
[76:10] which will solve the halting problem for you. If you give it a program, it'll say this program halts, or this program never halts. So this is a notion of computation which is more powerful than normal computation. Now, guess what? You can iterate this, because this hypercomputer, this computer with an oracle for the halting problem, also has a halting problem, which it cannot solve. This is the problem of whether an algorithm which can use an oracle for the halting problem eventually stops or not. And even with an oracle for the halting problem,
[76:40] you can't answer this. So then you would need an oracle for the halting problem of computers with an oracle for the halting problem. So you get like this tower of Cantor's infinities. Yes. And this is actually called the Turing degrees, sort of. Another name for it is the arithmetical hierarchy. It's a hierarchy of more and more powerful computers, and only the ones at the bottom
[77:05] on the zero level are sort of realizable physically, although even the normal Turing machine is a bit of a mathematical fantasy, because it has no limitation on the time or the amount of storage; as long as it's finite, it will go on, presumably. So they're already impractical. As for the Turing machine, the halting problem is an impractical question, because if you ask if a program halts in a specific amount of time, you just run it for that amount of time and see. It's only when you ask if it halts
[77:34] given unbounded time that you get an undecidable question. And are your organisms in your metabiology models, do they have access to unbounded computation? Yeah, that's a good question. Well, they have access to an oracle for the halting problem. In the form of the environment? Yeah, that sort of corresponds to the environment. I never said what the fitness is. It's just the number produced. I'm looking at programs that calculate a number, a whole number,
[78:03] and the bigger the number, the fitter the organism. It's a very simple measure of fitness. Sorry, can you repeat that? The organisms want to calculate a very big number. Okay. The bigger the number they calculate, the fitter they are. They're finite calculations. It's like they want to solve a math problem. Yeah. The math problem is to name a very big number by calculating it.
[78:29] So that's called the busy beaver problem, which actually is equivalent to the halting problem. So it's a very simple-minded measure of fitness. And that's why it's a hill-climbing random walk in software space, because you only take a random step if it increases the fitness, if the resulting program calculates a bigger number than your current organism. So that's a very simple model. Now there's some problems with this model. It's a
[78:57] Platonic model, because some mutations may not give you a program at all. They may never halt. Another question is, what do you consider to be a mutation? That was something that I worked on in this theory. You know, the normal notion of mutations in molecular biology is sort of point mutations, right? There's an indel, that's inserting another base or deleting a base. They're very localized changes
[79:27] in a DNA strand. And the problem with this is, I couldn't really make a mathematical theory where I do the same kind of mutations on my software organisms; it was horrendously ugly. The proofs were terrible. As you said, in pure math, you change the definition if the proofs are ugly or don't go through at all, right? So it turned out that I got a very nice theory by allowing a very powerful notion of mutation, which is to allow algorithmic mutations.
[79:56] So an algorithmic mutation is a global mutation that takes as input the software organism that is your current organism. Remember, I don't have a population; I have one organism at a time. And it produces a new software organism. Now, there you already have a problem, because if you pick an algorithm at random, it may be that it never outputs a new software organism. So I have that problem, which I have to define away in this model.
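To make the hill-climbing random walk concrete, here is a minimal sketch in Python. It is not Chaitin's formal model: the organisms here are just arithmetic-expression strings rather than arbitrary programs, the mutations are simple character edits rather than the algorithmic mutations just described, and because every expression terminates, the oracle for the halting problem has no counterpart. All names and parameters are illustrative.

```python
import random

# A toy hill-climbing random walk in "software space".
# Organism = a small arithmetic expression; fitness = the number it names.
# One organism at a time; a mutant replaces it only if it names a bigger number.

DIGITS = "123456789"
OPS = "+*"

def run(organism):
    """Evaluate the organism; return the whole number it names,
    or None if the mutation produced an invalid program."""
    try:
        value = eval(organism, {"__builtins__": {}}, {})
    except Exception:
        return None
    return value if isinstance(value, int) else None

def mutate(organism):
    """A crude point mutation: change, insert, or delete one character.
    (Chaitin's model instead applies a randomly chosen *algorithm* to the organism.)"""
    chars = list(organism)
    kind = random.choice(["change", "insert", "delete"])
    pos = random.randrange(len(chars))
    symbol = random.choice(DIGITS + OPS)
    if kind == "change":
        chars[pos] = symbol
    elif kind == "insert":
        chars.insert(pos, symbol)
    elif len(chars) > 1:
        del chars[pos]
    return "".join(chars)

def evolve(steps=20000, seed=0):
    random.seed(seed)
    organism, fitness = "1", 1
    for _ in range(steps):
        mutant = mutate(organism)
        mutant_fitness = run(mutant)
        # Selection: keep the mutant only if it is strictly fitter.
        if mutant_fitness is not None and mutant_fitness > fitness:
            organism, fitness = mutant, mutant_fitness
    return organism, fitness

if __name__ == "__main__":
    best, fit = evolve()
    print("fittest organism:", best, "which names the number", fit)
```

The structure is the point: a single organism, a random change, and replacement only on strictly increasing fitness. In the real model, discarding mutations that never produce a new organism is exactly where the halting-problem oracle, and hence the environment's information, comes in.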
[80:27] Randomness coming in here in the form of the algorithm that mutates the organism. And then this organism thereby inherits some randomness because then you have to run that software, which is the organism. Well, how does it learn? That's the question. Where does it get more complexity from?
[80:58] Well, the way it works is it's a little funny. See, I did this more than 10 years ago. I'm trying to remember.
[81:25] There's the question of where is the information coming in that makes the organisms more sophisticated? Well, actually, it turns out that one place they can come from is you have to eliminate mutations that don't give you a new organism. You see, if you pick an algorithm at random and you give it as input your current organism, it may be that it runs forever and never gives you a new organism. So those guys have to be eliminated.
[81:53] And that's pretty powerful, actually. That kind of thing amounts to having access to an oracle for the halting problem. So this is how the organisms get new information from their environment. Is it because you have to say, well, let's select from the set of all programs that halt? Yeah, because if you're dealing with a global algorithmic mutation rather than a localized mutation like in real biology, it may be
[82:24] that you've picked at random an algorithm, and you give it as input your current organism, and you start it running, and it goes on forever and never gives you a new organism. So you've got to, by definition, say that you're not going to allow this. So this actually amounts to having an oracle for the halting problem: you're eliminating mutations that never give you a new organism,
[82:49] and using that, the organisms, the software organisms, can get more and more information. So they get n bits of information about the halting problem in time roughly n squared to n cubed. Whereas if you pick the best possible mutation, you could get one bit each time you do a mutation. If you pick the best possible mutation to try, you could get one bit of information from the oracle every time you try a mutation.
[83:18] But that would require the intelligence of picking the right mutation. Mutations are picked at random, right? So that's not as good as intelligent design, shall we say. There's no intelligent design in this system. You're just trying to look at the fastest they could evolve and the slowest they could evolve and get an idea for what happens when you have Darwinian style evolution, how it compares.
[83:46] And the answer is it's surprisingly fast. I was surprised, maybe even a little disappointed. Why? Why disappointed? You wanted more work? No, if the model collapses, becomes trivial, if you can evolve with Darwinian evolution essentially as fast as you do by picking the best mutation, this would mean the model is very unrealistic biologically. I would have been unhappy.
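As a rough summary of the three regimes just described (the bounds are stated only loosely in the conversation; in Chaitin's written account the fitness target is the busy beaver value BB(n), roughly the largest whole number a program of at most n bits can name, and reaching it amounts to acquiring about n bits of halting-problem information):

```latex
\begin{align*}
\text{Intelligent design (always try the best mutation):} &\quad \text{time} \sim n \\
\text{Exhaustive search (random guessing, no memory):}    &\quad \text{time} \sim 2^{n} \\
\text{Cumulative (Darwinian) evolution:}                  &\quad \text{time between roughly } n^{2} \text{ and } n^{3}
\end{align*}
```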
[84:15] Well, just like you mentioned that there's the modern synthesis, there's also something that's more modern than that called extended evolutionary synthesis, which incorporates niche construction and epigenetics and stress induced mutations. And I think multi-level selection is the rebranding of group selection, but it incorporates lots of stuff. Like, look, the world is way more complex than traditional evolutionary theory would predict. No, I agree.
[84:45] And I'm sure there's going to be an extended version of that. So I don't see it as being a problem that your theory gets to something near optimal. Okay. Okay. Well, that's kind of you. Yeah. Well, the question is how much of this you can make into a little mathematical theory where you can prove theorems. You know, once you make evolution more realistic,
[85:09] If I had a body with a metabolism as well as the DNA of my little organisms, I would be in trouble. If I had a population, instead of just one organism, I would be in trouble, although I think actors in you. Why is that the case? Well, because the mathematics becomes a mess and I don't know how to prove any theorems. It may be my fault. You have to keep it simple enough.
[85:34] Sorry, professor. I have a question. You can say Greg. Sorry, Greg. I have a question. In physics, there's something called effective theories, which says that it could be that the underlying
[86:03] Now, okay, so there's turbulence, which says that you need to know the low level details in order to predict something at the higher level. But then there are renormalizable theories where it turns out that the low level details don't matter much. So could it be the case that there's something like a renormalization of metabiology or some effective field theory where even though it's much more complicated at the bottom, like it's much more messy for the mathematicians to work on, there's something that's renormalizable that becomes tractable for the mathematician.
[86:32] Yeah, that's a very good analogy. That's equivalent to Sydney Brenner saying, I don't care about metabolism or energy. You know, the cell will somehow manage that. I just want to know about information. And that sort of gets you molecular biology. That's a similar idea. But what goes on underneath may really not be important to understand the system, the same way that, to understand what a computer is doing,
[87:02] The engineering, the physical implementation of the Boolean algebra, the circuits, doesn't show through in the software. The software level wants to hide all of that, which is sort of similar to renormalization and the idea that what goes on at the bottom level may not be visible at the top level and you can start with many bottom levels and get to the same top level.
[87:29] right? I don't remember what's that called, but you were referring to that, right? Effective field theory. Yeah, so I feel very sympathetic to that idea. I don't know what the right name for all of this is in this different context. It's when you create a new level of reality, what goes on at lower levels may be unimportant at that level. You have to look at the concepts that are
[87:53] One nice thing about it is the normal version of Darwinian evolution says you want to be well adjusted
[88:21] In the 1950s, in American schools, kids had to be well adjusted, right? The same with the normal view of biological organisms. And once they get well adjusted, they stop evolving unless the environment changes due to some cataclysm like volcanoes or a meteorite hitting the earth. So I think that's a bad metaphor. I prefer, in my little model, you have open-ended evolution.
[88:50] The system can continue to be a better and better mathematician. So my wife has a little philosophy paper talking about the human self-image depending on which model of biology you pick. And the notion that what you want to be is well adjusted, you know, that you adapt to your environment and then you stop, evolution stagnates, is a picture that she finds, I also find, less attractive than the notion
[89:19] that evolution is open-ended and creative. And is this related to Gödel's incompleteness? Yeah, I see it as being related, but that's because I'm sort of using the mathematics having to do with Gödel's incompleteness theorem and Turing's work on the halting problem. I'm trying to make it into a sort of toy version of biology. You see, the same way that no formal axiomatic theory is a theory of everything, so you can always have better and better theories.
[89:48] And I sort of take that and I sort of convert it into a statement in biology, which says the organism can keep evolving forever, you know, that the system is open-ended. So at the level that I'm looking at biology, the two problems are very similar. Actually, it's more similar to Turing's halting problem, but it's a similar idea to incompleteness, right?
[90:16] So I think that's a more attractive version. Actually, the version that Dawkins popularized of evolution is the selfish gene. And so selfish genes are, I think it's a very ugly idea. You know, sexuality is very unselfish. When you have a child, you throw away half of your genome.
[90:42] the husband and the wife, or whatever the couple is. Why do you see it as throwing away half of your genome rather than actually preserving half of your genome, because it's going to go on? OK, OK, but you're not selfish. It's not like a gene that says I want to take over, I want to survive, you see. So my view of biology is that biology is creative. I don't think it's selfish, and I don't think you want to be well adjusted to the environment.
[91:10] I think that biology is open-ended and endlessly creative. That's because I like creativity, so this is my personal bias, so I come up with a little biological model that reflects my love of creativity. Now if you talk about, I'm not sure about what I'm about to say, but I've heard that the view of biology in Japan, for example, is completely different.
[91:34] In the United States, everybody believes that everything is selfish and organisms are competing, which is one possible point of view. In Japan, everybody thinks that everybody's cooperating because if you look at the human body, it has billions of cells, which originally were independent organisms, but now they're cooperating to create a higher level organism. So if you are in a society which believes in cooperation and social cooperation rather than
[92:04] in trying to get rid of each other and being the success, then you make a theoretical biology in line with your view. So my little toy model of biology stresses creativity, because incompleteness really is creativity. I view Gödel's incompleteness theorem not as a limitation of mathematics
[92:30] and Turing's work not as a limitation. It was taken pessimistically, but I view it as saying that mathematics is creative and will always be creative. You see, a formal axiomatic theory for all of math would give you absolute certainty, but it would also be a cemetery, you see. Right. So you change the viewpoint and you say, Gödel incompleteness is really good news. It's not bad news. It means our children and our grandchildren will always have new things to discover, new things to do,
[92:59] that it's not a closed system, it's an open system. And that connects pure mathematics, in my opinion, to biology at some very fundamental level, maybe, but not to real everyday working molecular biology, working with DNA. This is an attempt to sort of try to find the basic math, extract the basic mathematical ideas from biology.
[93:24] So another way to say what I'm trying to do is, Von Neumann had this brilliant insight, I think, in that paper where he says that the key idea of computer technology and of biosphere is software, which gives you the plasticity of the biosphere and the successfulness. So I'm just trying to go one step further and see if I can get to a little theory about doing random mutations on software. It's just an attempt to
[93:52] carry the ball mathematically one step, one little step further. Now, I agree that metabiology doesn't seem to have much connection with real biology, but actually my wife suggested that it might, and work by Hector Zenil and his collaborators has furthered this suspicion. You see, the question is, I'm not using point mutations. I'm using these global algorithmic mutations, which seem very powerful and very unrealistic.
[94:22] So my wife said, oh, well, one should go back to real biology and see if there's anything like that in the biosphere. You know, this may be a hint from pure math about what's going on. There is a piece of work by Hector Zenil and a few collaborators whose names escape me, that I think was published in the Proceedings of the Royal Society or something, where they're doing simulations of evolution roughly along the lines that I'm talking about. They're much more practical.
[94:53] They're not molecular biologists by any means. They are using ideas related to algorithmic information theory and Turing's work and all of that, but they are looking at algorithmic mutations rather than point mutations in their model and they're doing simulations. I don't know if they were with populations. Maybe they were not individual. I don't remember. Anyway, it's very good work and the result they found was that algorithmic
[95:22] What I was referring to before about extended evolutionary synthesis, it goes by the name EES, it was because
[95:39] I forget, I think it's Stephen Jay Gould and then this guy named Pigliucci. I'll find the person's name and put it on screen. It's not punctuated equilibrium, right? It's much more than that. Well, epigenetics, but epigenetics says that the information is not all in the gene, right? If I'm not mistaken. Sorry, it was developed because the indels and the single nucleotide polymorphisms were insufficient to explain the observed amount of complexity with traditional evolutionary thinking.
[96:09] There's nothing that's like mystical or fanciful about this. It's like niche construction, like I mentioned, and so known mechanism, multi-level selection. There is room for these global changes. Yeah, that would be interesting to see if the stuff I'm suggesting looks analogous in some ways to this work. But my model is very simple. There's no epigenetics in my model. Everything is in the DNA, you know?
[96:34] But the truth is we're both, as you said, the fundamental motivation for this new version of Darwinism and for my own work is a suspicion that point mutations aren't enough, right? So in that sense we're analogous. We were both worried about that problem. It doesn't seem to explain how fast things evolve, right?
[97:00] And it doesn't seem to explain the major transitions in evolution, like, for example, from single cell to multicellular or the Cambrian explosion, for example, is very astonishing, maybe from the point of view of conventional Darwinism. I'm not sure if that's a reasonable statement or not.
[97:19] Yeah. In this comparison that you have with the open system of biology. So to contrast that, generally we think of a fitness function and you just get to the minimum. Then once you get to the minimum, you stay there unless the fitness function changes. So is that what you were saying? No, well, my fitness function doesn't have a minimum. Organisms get fitter and fitter and fitter without limit. The fitness is a number and the bigger the number, the fitter the organism.
[97:48] This is how I avoid, what do you call it, this trap of the well-adjusted child in 1950s schools, American schools, or the selfish gene. The way I avoid it is the fitness function, you don't minimize it, you maximize it, and it can be as big as you want. This is how you get open-endedness out of this little model of biological evolution, you see.
[98:15] To make a parallel with physics again: in physics, when you have an open system, it means you're open relative to something else. You're a subsystem of something else. So what is this something else in this case? And what is the border in between that allows an exchange? Well, the environment. The organisms are getting more information from the environment in this little model. And what corresponds to the environment is an oracle for the halting problem,
[98:44] which is an infinite amount of information. So does this require a philosophical commitment to Platonism? Well, this model is non-computable, so it's not even an algorithmic model. It involves software, but there are steps in it that require an oracle for different things. So in that sense, it's a platonic model. You see,
[99:15] The problem is I wanted to get the simplest. Okay. What is life? Okay. Now, one definition. I was reading books by John Maynard Smith, whom I had the pleasure of meeting, by the way, north of the Arctic Circle, in Abisko, Sweden. He's a fun guy. He was already retired and we would go off and drink beer. There wasn't much to do north of the Arctic Circle. So we would go off and drink beer and talk.
[99:42] Anyway, Maynard Smith has two books written with a Hungarian chemist called Eörs Szathmáry. And I think both books have a chapter or a section at the beginning where they ask, what is life? And their definition is, well, you might think that a flame is alive, right? Because it reproduces itself, because more things catch on fire. But they say a flame doesn't evolve. It doesn't incorporate more information.
[100:12] So it would be a system that by evolution gets more complicated and evolves. So I took that as my definition of life in my little toy model. In other words, it may look like a vicious circle because my definition of life in this little model is a system that evolves by Darwinian evolution. So that sounds like you're supposing that Darwinian evolution works to start with.
[100:40] But it's not really a vicious circle because you want to find the simplest Pythagorean or Platonic life form that provably evolves by natural selection. So that would be, it's not the individual organism that's alive in this point of view. It's a system that evolves by natural selection that's alive. And you want to find the simplest system you can find in the world of pure mathematics that you can prove evolves.
[101:10] That's fine. When you were talking about that there's always something new to discover in this book.
[101:40] Is it a guarantee that what's new to discover is of an interesting class? And the reason I say that is there's, for instance, a video game called No Man's Sky. And what it is is it's a procedurally generated world. And so there's an infinite amount for all intents and purposes, an infinite amount of planets. But once you've visited 10, you've pretty much visited them all. So yes, there is something new to do, but it's not relevant. It's not alluring.
[102:06] Future Kurt here editing himself in for clarity. I was being facetious. No Man's Sky is a wonderful game. I've never seen a developer turnaround of a game's quality like I have with Hello Games. It's unprecedented. Thank you to the development team. Sean Murray, if you're watching, I'd love to have you on the podcast. Great work to all of you. Back to the show. Well, that's a very good question. For example, Gödel's incompleteness theorem, you can argue that the theorem
[102:38] is not of any interest to anybody. So, for example, people can say, I don't care about proving that finite bit strings are algorithmically irreducible or have high program size complexity. This is not something a normal mathematician wants to do in their everyday work on the problems that interest them. And you can also say regarding the halting probability or the omega number, it may be true that the bits are unknowable,
[103:07] in sort of the worst possible way: they're algorithmically irreducible. They show that the Platonic world of pure mathematics has infinite information content or infinite complexity. But what do I care about the bits of the halting probability? You know, it doesn't come up in, I don't know, algebraic geometry, for example, or the work that real mathematicians are passionate about and working on all the time.
[103:35] You can argue that it's open-ended, but not in any interesting way. And let me give another example of this. I said Hilbert's program was destroyed by Gödel in 1931. But for all practical purposes, there are now formal systems where you can formalize real mathematics, the kind of real mathematics that real mathematicians do.
[104:00] There are some very good mathematicians who were doing very complicated proofs. And they wanted some assurance that the proofs were correct. And they took these proofs and formalized them and put them into a system which could check the proofs. So for all practical purposes, there are now theories of everything for all the mathematics that people are doing now.
[104:28] And this is a tremendous engineering achievement. This is very good work building on software techniques and logic and everything like that. And it's true that these systems are incomplete according to Gödel's incompleteness theorem, but it doesn't matter because people can do all the mathematics that real mathematicians want to do now. So these are very powerful systems for formalizing proofs.
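As a toy illustration of that workflow (not anything from the episode), here is the flavour of a machine-checked proof in Lean 4: a trivial statement broken into steps small enough for the checker to verify one by one, using only core lemmas. The theorem and its name are made up for the example.

```lean
-- A trivial fact about natural numbers, proved in small machine-checkable steps.
-- Only core Lean 4 lemmas (Nat.add_assoc, Nat.add_comm) are used.
theorem swap_sum (a b c : Nat) : a + b + c = c + b + a :=
  calc a + b + c = a + (b + c) := Nat.add_assoc a b c
    _ = (b + c) + a := Nat.add_comm a (b + c)
    _ = (c + b) + a := by rw [Nat.add_comm b c]  -- one small rewrite step
    _ = c + b + a := rfl                         -- same term, just without the parentheses
```

Real formalizations, like the ones mentioned next, work the same way at vastly larger scale: the human supplies the steps, and the checker refuses to move on until each one is verified.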
[104:54] People have done things like, I think, checking the proof of the four color theorem, if I'm not mistaken, and other kinds of results by formalizing them in these systems. And some of the mathematicians who've done this said they got a deeper insight into the problem by formalizing it. So instead of complaining about the work of inserting a proof into this special language and breaking it into pieces that the proof checker
[105:24] that comes with all this software, could check were correct. You have to break the proof down into pieces small enough that the software can say, okay, I got that, that's correct, let's go on to the next step. But instead of complaining about this, note that these systems are getting smarter and smarter, so you can leave bigger and bigger holes in the proof, take bigger steps. Yeah.
[105:48] And for all practical purposes, Hilbert was right. There are now very good formalizations of all of working mathematics at this time. So that's in contrast, that's an interesting philosophical tension with Gödel's incompleteness theorem, which says that you can't do a theory of everything for all of math, but you might be able to do a theory of everything for all the math that in practice mathematicians are doing now. And that, in fact, has happened for all practical purposes, right? F, A, P, P.
[106:18] So from a philosophical point of view, that's interesting tension, I think. And it's sort of like programming a program, because you've got to put your proof into a logical language that is the system you're working with. And you have to break the proof into steps that are small enough that the software can check that they're correct, a step in the proof. This is actually a practical tool now. So in a sense,
[106:46] Hilbert's program has succeeded, has been carried out, that part of it at least. There were a lot of historical aspects to the original proposal of Hilbert, which are of less interest nowadays, I think. But the basic idea is what I've been trying to explain. So it may be that the system is open-ended, but not in a way that people care about that much.
[107:13] On the other hand, real mathematics is progressing. It's progressing very well. The century of Gödel's incompleteness theorem, the 1900s, was a century of tremendous progress in mathematics in contrast to the negative incompleteness result. So that's the question of whether the results that you can't get are interesting results or whether there are artificial things that you constructed to show incompleteness.
[107:42] And that's a very good question. That's a very good question. And I don't have a good answer. That would be a topic. If I could live another lifetime and continue to want to do mathematics, that would be a topic. Would another way of saying that be that, look, what's remarkable isn't incompleteness, but rather the completeness that we have. So despite the incompleteness and given that there's an infinite amount of statements that we can't decide whether they're true or untrue.
[108:10] Why is it that the overlap between what we're interested in and then those statements is remarkably low? Why is it that there's so much progress like in every single domain of math? It's a good question, but it's also a fact that, for example, if you try to do research on an area which is too hard, you never publish papers, you don't get a doctorate and you never become a professor. So the map of current mathematics is a map of what our current methods are able to achieve.
[108:40] So that's why it looks like everything pretty much is achievable. If that's the case, then we're in a bit of a thorny position philosophically, because it means that math isn't something that's so dispassionately abstract, but rather it's influenced by the real world, influenced by empiricism. Well, it's also influenced by fashion. If you look at popular mathematical
[109:11] That changes as a function of time. And I think if we met Martians, their mathematics might be different, concerned with different things than ours. For example, the ancient Greeks preferred geometry as the basis for mathematics. And now I think arithmetic is considered to be more fundamental. That was the arithmetization of analysis, which was a topic of the 1800s.
[109:36] Now, I think you've asked a good question and I don't have an answer, but I think it would be interesting to take a look at Stephen Wolfram's recent work, because he has a book, as I said, on the physicalization of metamathematics. All formal axiomatic theories are part of the ruliad, as well as all possible algorithmic models of physics, worlds that can be algorithmic. And, in fact, his approach is more empirical.
[110:05] I try to prove theorems, and Stephen has this wonderful tool that he's created, right? Mathematica, or the Wolfram Language or whatever, tremendous software. And he does things like looking at all the theorems in Euclidean geometry and all the connections between them. He does graphs, he does statistics. So he's taken a real look; he's looked at different axiomatizations, for example, for Boolean algebra, Boolean logic,
[110:36] and found, surprisingly, I think, one axiom that gives you everything, that nobody had ever seen. So he's trying to get statistics of how all possible formal axiomatic theories work. And he's interested in the question of why a theorem is interesting. He's
[110:56] So how do you tell the difference between that and a theorem that is deeply meaningful that will interest you and mathematicians? And Stephen has some good ideas about that. If you look at the graph of all possible connections between theorems where you're proving one theorem from another and how many steps it takes to go,
[111:23] I think one of the things he says is that a theorem that is non-trivial, that's interesting, is sort of in a key spot, connecting many things. Edward Witten said something similar. So Edward Witten of physics was asked, what makes a theorem beautiful? And he was put on the spot. He said he can't articulate it. It's like asking, why is the song beautiful? But he said, if he had to say something right here after thinking about it only for 10 seconds,
[111:52] It would be that you get more out of it than you put in, which sounds like something to do with complexity or compression. Yeah. Well, Stephen actually looks at these graphs of relationships between mathematical statements in a theory. He looks at statistical aspects of the graphs in different theories. And I know he has discussed this question of what is the difference between a mathematical assertion in a theory that is interesting and the ones that aren't.
[112:21] So I recommend taking a look at this whole new immense body of work. Stephen is publishing, you know, a 400-page book once a year. Yeah, I don't know. He's in a tremendously fertile period. I'm happy to see this. For many years, he was involved with building this software stack, right? Which is the Wolfram Language, Wolfram Alpha, which is a remarkable achievement. It's a non-human AI, but it is an AI in a sense, right?
[112:50] But it's not a human kind of AI. But then recently he's gotten back to doing basic research and he's produced a remarkable series of books. So I think maybe you'll find him discussing
[113:07] Yeah, I've spoken to him a couple of times, off air and on air as well, and I'll likely be speaking to him in a few months. So I'll talk to him about that. By which time he may have a whole new theory to talk about. He's gone through a tremendously productive period. It's wonderful. It's wonderful to see. It's my understanding that what happens at Wolfram Research, the Wolfram Physics Project, stays in the Wolfram Physics Project. And what I mean by that is that
[113:32] There's not much academic talk about what Stephen produces. This is my perception from the outside and also from speaking to a few people. And I'm not sure if that's true. And if it's true, I'm not sure why. If it's because, well, Wolfram is operating from a space outside academia. Yeah, well, I think he has some younger collaborators who need to worry perhaps about publishing for their future. I met one in Morocco.
[114:05] His name is Hatem. It's true that Stephen is operating outside the normal academic environment. I think part of it is that he can't be bothered. If they had to publish refereed papers in traditional journals, it would slow them down tremendously. And since he relies on getting a feel for the data by looking at graphs of different mathematical theories and connections. Well, in a normal journal published on paper, this is the past, of course, you can't fit
[114:35] All of that, right? So Stephen's way of working is not like my way of working. I like to prove theorems and I like to keep it simple. Stephen attacks the full complexity of an issue, looks at many cases, draws graphs, draws pictures that give you an intuitive feel for what's happening in each case. And that way you're building empirical data and you're building intuition of how these systems function. That's the way an empirical scientist functions.
[115:02] Although the world he's looking at is a computational world rather than the physical world. So that's his modus operandi. And there's also the problem that he's moving so fast that who would referee his papers? I think I need to change the earbuds. Sure. We'll take a small break. I don't know if this is correct, but it's my understanding that Kurt Gödel didn't see his incompleteness theorem as a hindrance, but rather he saw it as indicating that
[115:31] mathematicians don't proceed logically or rationally when they're coming up with their proofs or theorems. They have some connection to the platonic world that that manifests through intuition or creative sparks. Is that correct? I think so. What happened with Gödel is that some of his papers were even typeset and he proofread them and he went through different versions and
[115:56] finally decided not to release them for publication. He does have places where if you look through the collected works which I've done, I don't remember which part was published and which part were versions of papers that were never published but were really in pretty final form and sometimes he has several versions that all look very good. He does state that he believes that his theorem is not an obstacle to the progress of mathematics because as you said
[116:26] He's saying something like a mathematician closes his eyes and thinks in the dark and somehow manages to perceive the platonic world of ideas. He thinks we're not a formal axiomatic theory. You know, the human mind is not a formal axiomatic theory. And he believes that through mathematical intuition, in some sense, mathematicians connect with this world of ideas. I believe that's his philosophical position. And maybe the reason
[116:56] He didn't release some of these papers for publication is because I view Gödel as a philosopher who would only publish papers which are not controversial, because he would prove philosophical results with mathematical arguments so no one could argue with him. He has a very small output for a mathematician, because he doesn't do papers
[117:22] on topics that are not deep topics philosophically. So that means he has very little output for a pure mathematician with his brains, you know, a very smart, broad mathematician. He has a wonderful paper on rotating universes in general relativity, for example, and the arrow of time, that he wrote for his friend Einstein. So I think he was reluctant
[117:48] to publish papers where he couldn't actually give a mathematical proof for assertions he made. And perhaps for that reason, these remarks you made about the fact that he didn't, yeah, I think, I don't remember all of this, but I think you can make the case, maybe Rebecca Goldstein makes this case in her book, that Gödel was deeply misunderstood and that he viewed the point of his theorem as liberating, as different from what people
[118:18] thought it meant. I've forgotten some of this, I'm sorry to say, or I could give you a more coherent answer, but I would agree with what you said. Yeah, so in other words, if Gödel's incompleteness theorem wasn't there, wasn't true, then math is this fixed tree. So what Gödel did, rather than putting shackles on, was remove the shackles from math.
[118:43] Right, but that also pulls the rug out from mathematical certainty. It's not completely black or white, because if you don't have one fixed theory of everything for pure math, you can start arguing about which axioms to accept. Math could split up into different communities that go off in different directions with axioms that contradict each other, like non-Euclidean geometries, for example.
[119:04] That actually happened. There was intuitionistic mathematics. There was a mathematician whose name escapes me in the Netherlands. Brouwer? Brouwer, maybe. And Hilbert and Brouwer really hated each other. Hilbert kicked Einstein out of the Board of Editors of one of his magazines because he wanted to get Brouwer out of the Board of Editors and he couldn't think of how to do that except by eliminating the Board of Editors altogether.
[119:34] So this was a serious controversy, and Hilbert's proposal was intended, in fact, to refute Brouwer's position. So in intuitionist logic, is there no such thing as Gödel incompleteness then? Because you have to construct every statement, and the Gödel statements are unconstructible? That's a very good question.
[119:57] I'm not an expert on Gödel incompleteness, because I've gone off in a different direction, right? More in Turing's direction. But I can tell you the historical motivation for Hilbert's program. It's not just coming up with a theory of everything for all of pure mathematics that Gödel refutes.
[120:20] The theories of everything that Hilbert wanted to study and for people to agree on and take as the foundations of mathematics were theories that allowed non-constructive proofs, which Brouwer did not allow. So the effort, the attempt was to show that these non-constructive proofs wouldn't lead to a contradiction using only constructive reasoning outside the system. This was the
[120:49] original purpose for the metamathematics, to which Brouwer replied, just because a policeman doesn't stop one, it doesn't mean that somebody didn't commit a crime. So this was part of the motivation for Hilbert's proposal of formalism that people forget about nowadays. In retrospect, I don't see this as the central issue. For me, the central issue is, is there incompleteness or not? But that was only part of the
[121:17] set of issues that historically were in play at the time. The real history is always more complicated, right? I'm giving what I see as the central issue because it led to my work. And you asked a good question. I know that the meta-mathematical arguments, which were supposed to convince mathematicians that the non-constructive formal axiomatic theory that Hilbert would have come up with, hopefully, but didn't,
[121:46] was okay in spite of allowing non-constructive arguments. In metamathematics, you weren't supposed to use non-constructive arguments. So that way Brouwer would have, they were also informal, they were done outside the system in words. This was to convince Brouwer that it was okay to use non-constructive arguments because it wouldn't lead you to a contradiction. This was a fight between the two of them. Now in practice, constructive mathematics is the computer. You know, now
[122:17] Because of computers, a lot of mathematics is computational and very constructive. But you pay a price. For example, sometimes it's much easier to prove that, say, a partial differential equation has a solution than to give you an efficient way of calculating the solution. So that's part of the reason that mathematicians still like to use non-constructive arguments. And there are places where the arguments have to be non-constructive.
[122:45] My own work, since I'm dealing with things which are un-computable, this stuff, I guess, Brouwer would have said doesn't exist, you know, that the halting probability doesn't exist. It's not a computable real number. But that's the whole point. What I'm trying to do is find things that are non-constructive and un-computable, but are just across the border from what you can calculate and that clearly has meaning.
[123:15] because it has algorithmic meaning, computational meaning. So I try to find the boundary that's as close as possible, so you just get into trouble by going across the boundary. So that's the halting probability. It almost looks computable. You can calculate the limit from below. In one of my books, I have a Lisp program that calculates the halting probability in the limit from below, in the limit of infinite time, but it converges non-computably slowly. You see, so the halting probability is almost a computable real number.
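To pin down what calculating Omega in the limit from below means, here is the usual way to write it (a sketch of the standard definitions, with U an assumed prefix-free universal machine and |p| the length of program p in bits, not a formula quoted from the episode):

```latex
\Omega = \sum_{p \text{ halts on } U} 2^{-|p|},
\qquad
\Omega_t = \sum_{\substack{|p| \le t,\; p \text{ halts on } U \\ \text{within } t \text{ steps}}} 2^{-|p|}
```

The approximations Omega_t increase monotonically toward Omega, but no computable function of t tells you how close Omega_t is to Omega, which is the missing regulator of convergence mentioned next.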
[123:44] There's no regulator of convergence. So it's just across the border from what you can calculate in theory and what you can't. So it's an attempt to find the boundary, you know. Now, Cantor has much more abstract things, his very large infinite sets, you know, you have amazingly. I see, you see them as far across the mountains. Yeah. They're almost like theology. Yeah.
[124:13] Not that there aren't wonderful mathematicians doing it. A friend of mine, Robert Solovay, was one of them. There's a, is he an Israeli mathematician? I can't recall his name, who's just incredibly powerful. It's beautiful work. There's a small community of really good mathematicians working on abstract set theory. They've decided to add a new axiom called projective determinacy.
[124:38] I think this is great stuff, but I regard it a little bit as mathematical theology. You know, it's far off in Never Neverland. It's great work. I have nothing against it, but I'm working at a much lower level, very near to what's computable, you see, because I'm using the notion. So instead of having the hierarchy of bigger and bigger infinities and amazingly big infinities that you have in abstract set theory,
[125:02] I was talking to you about the hierarchy of a Turing machine, its oracle, then a Turing machine with an oracle for the halting problem and its oracle and you get like that more and more powerful Turing machines of which only the ones at the bottom level sort of correspond to what we can calculate in this world. And the other ones are increasingly mathematical fantasies as you go higher up. So all this is in it. But in a way, the world of pure mathematics is beautiful and simple because it's unreal.
[125:31] Von Neumann has a remark that pure mathematics is easy, real life is hard. And I think that's actually a deep remark. It's not just a cocktail party joke. And part of the reason that pure mathematics is beautiful is because it is an ideal world. It's maybe in some ways a fantasy world. So it has beautiful, simple properties. In the real world, that's tougher.
[126:00] Now, it's possible that the world of pure mathematics is really real in the most real possible sense, if you state that maybe that's the fundamental ontology, the substrate out of which the world is built, which is a possible position. I don't know out of what the world is built, but physicists use a lot of mathematics, right? But the mathematics changes as a function of time.
[126:25] Presumably, if we had a deep understanding of physics, we would know what the world is made out of. One possibility would be information. I don't know. At this point in time, that's like pre-Socratic philosophy, where by pure thought, you say maybe the world is fire, maybe the world is all unity, maybe it's not, you know, all these different attitudes. Everything has changed or everything is... Before we move to cold fusion, what is it that drives you to operate on the perimeter?
[126:55] between computability and uncomputability, or contradictory statements, paradoxes, and incompleteness. What is it that makes you sit around your house, go for walks, and then start thinking about these by nature? Do you have a predilection for it? That's a very good question. Well, I can't give you a full answer, but I'll tell you that as a child, I'm self-taught. I don't have a university degree. I just have two honorary doctorates, right?
[127:26] 20 books that I published. Yeah. So I'm an amateur mathematician. I earned a living as a computer programmer at IBM, working on hardware and software design. So as a child, I used to read a lot on my own on many, many topics. And at first I was very interested in astronomy. I built a telescope; I ground a mirror for a reflecting telescope in the basement of what was then the Hayden Planetarium.
[127:54] Another topic I was interested in was physics. And in physics, you know, the two most mysterious topics: there was relativity theory, well, general relativity, curved space-time, gravity, and there was quantum mechanics. So I read a lot about this as a child, as a teenager. And somehow I was interested in the most fundamental and mysterious things. I don't know why; it's my personality. And then I heard about Gödel's incompleteness theorem.
[128:23] I was 11 years old. It was Gödel's Proof, that little book published in 1958 by Nagel and Newman. I got to take it out of the adult section of the public library in New York, even though I was only 11 at the time. And it fascinated me because it looked to me like a topic in pure math that was as deep as general relativity or quantum mechanics. It was as revolutionary as general relativity or quantum mechanics was in physics.
[128:52] I sort of became obsessed with trying to understand Gödel's incompleteness theorem, but also I was interested in computers. Computers were just beginning. There was a book called Giant Brains or Giant Electronic Brains. I found one of the first books that showed examples of computer programs, and I was computer programming in high school, which was unusual then.
[129:21] on mainframes. Now probably everybody in high school programs, right? But at the time I did it, it was unusual and I got to use mainframes. I was lucky at Columbia University when I was in high school. So the computer also fascinated me. Artificial intelligence I've never worked on and has had an interesting history that I've been following from outside. I've talked with Marvin Minsky about artificial intelligence and
[129:51] Like everybody else, I'm amazed to see what neural nets have now accomplished. I used to talk with John McCarthy and Marvin Minsky about artificial intelligence, and they thought it would be based on logic, right? On formal logic, on imitating the kind of reasoning that mathematicians do. And of course, Minsky and Papert wrote a book which was bad for the funding for neural network research.
[130:17] And amazingly enough, the engineering approach, well, that doesn't surprise me, I've seen similar things at IBM, but a brute force engineering approach triumphed. I saw that in translation, language translation from one language to another. There were approaches which tried to be based on deep knowledge of syntax and semantics, and there were approaches which were more brute force engineering, like taking government documents
[130:50] So it's been amazing to see all this happen and I'm sorry I don't remember why I'm talking about this. I lost the thread.
[131:16] About what inspires you in that direction? Oh, what inspired me? Well, I found the computer fascinating and I didn't like Gödel's original proof. I mean, I could follow it step by step, but it seemed to me, you know, too tricky, as Grothendieck would say it. It was too much a trick. It was too clever by half, as the English would say. I thought
something as important as incompleteness needed to have a deeper proof. You need a deeper understanding. And as Grothendieck would say, once you see the right context to think about a problem, it looks trivial, right? The proof is very short because you discover the natural context for thinking about it, which I think I've done. People can argue that it's the wrong context, but it's certainly one.
[132:10] Now whether incompleteness is fundamental or not is a question, because for all practical purposes it may not be. So this question of creativity in mathematics I think is a deep question. You asked it. So I guess in a previous age maybe I would have tried to be a magician or a Kabbalist or something. You know, this business of searching for deep understanding, I sort of naturally gravitate to
The more philosophical, deeper questions. So that's my personality, I guess. I was like that already as a child and as a teenager. I mean, I came up with the idea of looking at program size at the age of 15, you know, and my first paper on it was published in the Journal of the ACM when I was still a teenager, I think I was 19. It had been submitted before. At that time, the Journal of the ACM was the only
[133:10] only journal on theoretical computer science. Well, I can say a little more. You see, in the fifties, when I grew up in the United States, there were a lot of refugees from Hitler in the U.S.: Hermann Weyl, John von Neumann, Kurt Gödel, and I would read all their essays, all their books. And these gentlemen came from a European background where philosophy and physics were viewed as all part of the same thing, right?
[133:39] Classical music also; I know nothing about classical music. You have a philosophical attitude, it's very clear. That's not exactly favored in the current environment. Shut up and calculate, which I think is a bad prescription for fundamental advances. You said that shut up and calculate is suicidal.
Yeah, well, you know, there's normal science and there are paradigm shifts, as Kuhn said. I'm not interested in normal science. That's the kind of thing that can be done in an industrial lab, for example, for new technology. You feel like it's incremental and can be cranked out. Yeah, exactly. They shut up and calculate because you basically have a theory and you just need to work out a case, especially one that is useful for practical applications. But like that, you're never going to do revolutionary work.
[134:26] And by the way, universities are also conservative places. The current environment is tough. Sydney Brenner, a remarkable guy, worked with Crick, one of the creators of molecular biology, a Nobel Prize winner. He has a lot of friends who won Nobel Prizes in, I guess it's medicine, if you're doing molecular biology. And he said none of them could have done the work that got them the Nobel Prize in the current environment, with funding and
[134:55] research grants and referee reports and all of that. You have to remember the Cavendish had blanket funding for the whole lab. And the head of the Cavendish was Bragg; I think it was the son of the Bragg who did X-ray diffraction, diffraction of some kind of waves by crystals to expose atomic structure. Maybe it was invented by the father.
[135:25] Anyway, he was the head of that lab at Cambridge, the Cavendish, where Watson and Crick did their work. And the funding was for the whole lab for several years. People didn't have to waste their time promising what they would do in advance, submitting research grants and everything. So I think the current bureaucracy is really dangerous. You know, I'm sort of disappointed that there hasn't been,
in my view, really fundamental advances in physics for a century. The last previous advances, it seems to me, were general relativity and quantum mechanics. Those were really big ruptures in our understanding of the physical world. So is it that we know everything? I don't think so. But I think with the current environment, the sociology of science makes it difficult for... Can I talk a little bit about stuff that I think is revolutionary? Please.
[136:23] This kind of stuff attracts me. There's a crazy guy called Randall Mills, who has a little company near Princeton, New Jersey. It's called something like, I don't know, Black Light or Brilliant Light. Anyway, he studied his... Yeah, Brilliant Light Power. Exactly. You know a lot about this. OK, so he was doing sort of practical work in devices, applications in medicine, you know,
[136:52] Bright guy, farm boy, brilliant, brilliant. And then he has a car accident that almost kills him. And he decides he's not going to waste his time on practical applications. He's going to go after fundamental things. So he goes to MIT and studies physics. He goes to MIT and one of his professors was called, I think, Hermann Haus. And Hermann Haus has a theory for how free electron lasers work; they work much better than expected,
[137:19] and he found novel solutions of Maxwell's equations. I don't have time to talk about this anyway. I don't understand it that well. Okay, so Randall Mills is a student there and he says, oh, I like this theory that was very good for looking at how free electron lasers do so well. Let me apply it to the hydrogen atom.
[137:43] Remember, quantum mechanics is an attempt to explain the fact that the hydrogen atom is stable, right, because the electron should radiate and should spiral into the proton and goodbye hydrogen atom. So you invent quantum mechanics. Well, Randall Mills finds solutions of Maxwell's equation where the electron, instead of sort of being in orbit, a particle in orbit around the proton, is more like a sheet or like a sphere. And anyway, I don't understand it in detail, but it doesn't radiate.
[138:12] These are solutions of Maxwell's equations where you don't have the energy radiated away. This came from his professor at MIT, Hermann Haus. So then he says, ah, well, and maybe he goes a little too far, he says, maybe quantum mechanics wasn't necessary, because you can get a stable hydrogen atom just using classical physics. Well, that of course no physicist would agree with, right? Because quantum mechanics seems to be very successful.
[138:41] And I have to admit I'm skeptical, although I'm interested in this crazy stuff. But the most interesting thing is, he says, according to his theory, there should be forms of the hydrogen atom that are below the normal ground state. That is to say, with the electron closer to the proton than is allowed in normal quantum physics. And he says, this is the dark matter. The dark matter is the most common substance in the universe, which is hydrogen.
[139:11] It's just a more stable form of hydrogen that doesn't radiate. That's why it's dark. And he actually has, I'm not a physicist, he has 23 different kinds of proof of the existence of this stuff. He calls them hydrinos. They're hydrogen atoms below the normal ground state. He thinks he can put it in a bottle. And coming from all this crazy theory, he has a device
[139:40] which creates hydrinos, takes hydrogen and drops them into this lower state. It involves, I don't know, plasma physics or something and it generates a lot of light, very powerful light. That's why it's called Brilliant Light Power, this company. So starting with this crazy theory, which you know any sane physicist would be very skeptical about, he comes up with a lot of experimental evidence of the existence of these hydrinos, he calls them, which it would be
[140:07] I asked physicists to take a good look at it. I was curious to see the reaction, and nobody did. In Mexico I gave a talk on this. Obviously people were very skeptical, but I said, there's all this experimental work that's been published. It's been refereed. Please take a look at it. I mean, I even understood one of the results. I did some chromatography experiments, paper chromatography, liquid chromatography, when I was a child, following articles in Scientific American's Amateur Scientist column.
[140:35] The hydrinos migrate more quickly. They migrate faster than hydrogen. So they look like they're smaller, right? So that was one experiment where I understand a little bit of the idea. So anyway, he has this practical device which seems to generate a lot of very powerful light, and then with photovoltaics he gets electricity out of it. So I think this is remarkable.
[141:03] And I think that the idea that maybe the dark matter should be a more stable form of hydrogen than exists according to normal physics would explain why there's so much dark matter. It's just hydrogen and hydrogen is the most common element. And the fact that he has an actual device and investors have been funding this work, I think is just amazing. Now there's a problem with sociology of science. This is the whole field of cold fusion. I think now
[141:33] A better name for it is lattice enabled nuclear reactions or low energy nuclear reactions. There are a number of little companies doing this kind of stuff now and I've seen efforts to change the attitude, the public attitude toward this. I've been seeing news releases in Europe that are trying to redeem cold fusion and say it's a viable field. A lot needs to be done, but it's alive and it's
[142:00] It's for real. Anyway, the one group that I've been following is in Japan. Clean Planet is the name of their company. And why is this being done in Japan with government funding and university participation? Whereas in the States for many years, this was considered to be trash science. Well, one of the reasons is the Japanese have no petroleum and they had a disaster with that nuclear reactor, right? The tsunami. And they have very good scientists and very good technologists.
[142:31] So there was a physicist there, I think his name was Arata, a very good physicist. They were skeptical. The government didn't start funding this and the universities didn't get involved. The Japanese MIT, Tohoku University, didn't get involved until this skepticism was shown to be a mistake. This physicist called Arata did a very clean experiment,
[142:57] and instead of the Fleischmann and Pons electrochemistry, you see, what happens is you take a metal lattice; palladium is a sponge for hydrogen, right? And you can do it now with nickel. The original experiment used palladium and deuterium, but now people can do this with nickel and ordinary hydrogen. And what happens is, if somehow you populate the metal lattice with every interstice with a hydrogen atom in it, then funny things start happening.
[143:25] Because it looks like the metal lattice shields the normal effects; normally you would need an awful lot of energy to get two hydrogen atoms to combine, to overcome the electromagnetic repulsion. But there seems to be some kind of a shielding effect when you have hydrogen in a metal lattice under certain conditions that are complicated, which was why the original experiments couldn't be repeated. So anyway, the version by Arata
is in a solid lattice, not electrochemical, and I think it involves nano layers of nickel and copper, and hydrogen is somehow pushed through the system, I don't know how, but it was a very clean experiment. And you know, the problem with cold fusion is that the evidence that something unusual was happening was excess heat, anomalous heat, more heat than
should have taken place. But heat is a messy thing to measure, right? Calorimetry is really primitive Stone Age stuff. But if you get helium out, if you put hydrogen in and get helium out, then it's clear that some nuclear reaction is happening of some new sort, right? And Arata got helium out of his system. So at that point, the whole Japanese community, the government, the universities, everybody
started saying, oh, this is for real, and we are a country without petroleum, and our nuclear reactor was a disaster, we need a new source of energy desperately. And they had very good scientists and very good technology people; Japan is a powerhouse, right? So the government started funding research in this area, and there's a little department in Tohoku University, which is the MIT of Japan (Tokyo University is, shall we say, the Harvard of Japan), working on this. There's
[145:25] Clean Planet, a small company and these people already have a prototype commercial device that generates heat using this and they've made deals with Mitsubishi for international distribution. Mitsubishi has global distribution and they've also made a deal with the largest manufacturer of boilers for industrial purposes in Japan. I don't remember the name.
[145:54] They have a website which is supposed to be giving live data from a reactor at one of their research labs, but so far there's nothing there. It says, coming live soon. I would love to see this. This is a pretty serious effort. Now, nobody talks about it outside of Japan because this was supposed to be junk science. You know, if I was with physicists, I would mention this work to
[146:21] physicists, and that would be the end of the conversation. They would say I was crazy and didn't want to talk to me anymore. Right. So I lost an appointment to a physics department, or a joint appointment with a physics department, because I mentioned it to somebody. So anyway, this is a problem of the sociology of science. Anyway, it turns out that Pons and Fleischmann, well, Fleischmann was one of the top electrochemists in the world at the time. He had been the head of the Electrochemical Society.
[146:49] Their interpretation of the experiment had flaws, but the experiment worked. It was very hard to repeat. It wasn't clear why sometimes it worked and sometimes it didn't. And 30 years of work by a community of eccentrics, who've been basically doing this on their own, without funding, in secret, have sort of clarified the conditions under which these things happen. And actually, Pons and Fleischmann weren't the first people. There were experiments in the 20s that
[147:35] Never published.
[148:04] important physicist. So there have been a number of people who've seen things like this in the 20s. So anyway, the Japanese effort seems to me, as far as I know, the effort that's most advanced, but there are a number of groups, and Google has done some research in this area. And there is a website in Europe that has information about energy and clean energy and things like that. I don't know who funds it, but lately they've been running a whole series of articles about
[148:31] low energy nuclear reactions, or lattice-enabled nuclear reactions, which I think is a better term. Talking about its history, the fact that there is something there, but work needs to be done. There is no good theoretical understanding of why it works, but there seems to be pretty clear empirical evidence. And this is too promising to drop without further investigation, because it may be a clean source of energy, of tremendous energy. And amazingly enough, these nuclear reactions,
[149:01] whatever the mechanism is, and it isn't completely known, don't produce radiation, and they don't produce radioactive products, and they certainly don't produce carbon dioxide because it's not combustion. So this would be a green source of energy. And if you're doing it with nickel... The original experiments were palladium, which is expensive, and deuterium, which is heavy hydrogen, right? No good for a practical technology.
[149:29] But now it's being done with nickel, with nickel and hydrogen. Nickel is a very common element. So what's the caveat here? Like, what's the but? Yeah. So this is an example of the sociology of science. There's tremendous funding for tokamaks, right? And some of the people who supposedly tried to duplicate the Pons and Fleischmann experiment, to kill it as quickly as possible, were associated with this.
[149:56] And once you have thousands of physicists working on fusion reactors and tremendous funding in many nations, and plus you have oil companies, it's tough. But it looks to me like it was a tremendous mistake not to continue working on this area. But it's an example of sociology of science. As Aristotle said, humans are political animals. There were two Nobel Prize physicists who right away got interested in this phenomenon because, of course,
A good physicist looks for a place where the theories don't work, some new phenomenon. One of them was, I think, Schwinger, Julian Schwinger, and he immediately wrote a paper on the topic and it was discarded by the journal he sent it to without even sending it out for refereeing. It was a journal of the American Physical Society and he resigned. He was a fellow of the American Physical Society, a Nobel Prize winner. He resigned saying this will be the death of science, you know, if stuff is dismissed. I think this was
Could this have been Julian Schwinger? Quantum electrodynamics. It was a Nobel Prize winner; I'm not sure it was him. So, but you know, humans are... Hugh Price also talks about this. Hugh Price, the philosopher Hugh Price, he calls it the reputation trap, which is essentially the opposite of prestige. That is, a professor goes into what brings them prestige, and they do so unconsciously most of the time, and they follow along.
Graduate students are told, look, you don't want to work on the hardest problems or the most interesting problems; you want to work on the tractable problems, the solvable ones, so that you can have a history of publishing, so that you can get a postdoc, then get a professorship, in which case you also want to have a history of publishing in order to be eligible for... This is crap, though. This will ensure that no really deep revolutionary work gets done. I had a conversation with, who is this wonderful physicist at Stanford?
[151:53] We were drinking beer in Arizona once and reminiscing. It turns out we're both from New York City. We went to public schools in New York City, high schools in New York City. So we were chatting and he said when he was a kid, he and his fellow physics graduate students, all were revolutionaries. They were subversive.
[152:19] They didn't want to listen to their professors. They were going to do something new and revolutionary. And now he says the students come up to him and they ask him for a problem to work on. They don't have their own problem to work on. They ask him to tell them a good problem and then they work on it and never change. They have to keep grinding out papers on this topic. And he said he was very disappointed by this. But the current
system, the bureaucracy, the whole thing makes this happen. I think that's the reason there hasn't been that much fundamental work in physics since the creation of quantum mechanics, work at the level of the creation of quantum mechanics, which was a tremendous rupture, a tremendous revolution. I think it doesn't mean that the physical universe doesn't have wonderful surprises for us. I think a lot of it reflects big science and the sociology of science. Interesting, big science. It's the first time I've heard that.
[153:17] Yeah, well, you have to remember, it was used in the 50s. You have to remember that studying nuclear physics in the 30s was like studying, writing poetry in ancient Greek. It had no practical applications. Very few people were interested in this topic. But after you build an atom bomb and nuclear reactors, this is big science. And von Neumann, who knew the history of the church, of the Catholic Church, he had a photographic memory.
He said the transition which was happening in science was similar to a transition that happened in the history of the Catholic Church. I would have to find that quote. So anyway, this has been very bad for scientific creativity, and Hugh Price gives practical reasons here: even though this is low probability, it could save the planet and the human race, so there should be some money invested. And they've been investing billions in tokamaks, right? For 30 years.
[154:13] They could invest a few hundred million a year in Cold Fusion, but it didn't happen that way. But the tide is turning, I think. I think the tide is turning. You're aware of this stuff, which is great. And I don't know if you're aware of... Maybe you didn't hear about this Japanese effort. No. You heard about Brilliant Light Power. Yeah. So the word is getting out. And I think...
Maybe Google is funding this. I've seen a number of press releases in Europe that don't have an authorship under them, which are trying to rehabilitate cold fusion and say it's a legitimate field and there are startups. And it's true, we don't understand the mechanisms yet, but it's promising and it should be worked on. I've seen a whole series of press releases, one of which had Google as the authorship, but the others didn't indicate who was writing the article.
[155:07] My impression, and this comes from my background in math and physics in university, so just in the math and physics domain, for why there isn't much of a paradigm shift in physics in the past 50 years or so, is because, well firstly we're talking about high energy physics, not just physics in general, like condensed matter physics has so much that's new, but it's not fundamentally paradigm shifting like quantum mechanics was.
[155:31] One of the reasons is that you have to become so well versed in physics in order to make a change. Like you have to understand the landscape in order to propose something new. And doing so takes up so much time that by the time you're done studying it, you become ingrained and you can't think outside of it. And you're also encouraged not to while you're learning. Yeah, I agree. I mean, that's why I'm a college dropout.
That's why I have this podcast. So we're similar. Yeah, to get a large overview of a lot of different fields and to explore them in depth. That is, with rigor, hopefully, but in order to make connections between the fields. Absolutely. Universities are basically conservative institutions, and the enormous bureaucracies that administer scientific research now, you know, guarantee that no really
[156:20] Fundamentally novel stuff can get funded or it can get published. The referees will never accept it. So this is not the way to do good science. But I think the politicians don't want revolutionary science because it scares them. They prefer to have control over the scientific community. They don't want another development as scary as the atom bomb. That may be a reason for controlling everything, but it's very bad for science to advance.
[156:49] You know, I think that Randall Mills is clearly very brilliant. I think he's going too far. He thinks quantum mechanics was not necessary and he can do everything with his approach. Well, I can't follow his approach in detail, but I'm a little skeptical. Maybe if there's very clear experimental evidence as there seems to be that hydrinos exist, then clearly he's onto something. But maybe
[157:17] His ideas can be combined with quantum mechanics into a richer theory. He's lucky that he came up with a practical device because he never would have gotten funding to do all the research he's been doing. The fact that he starts out presenting his stuff by saying that he's
decided that quantum mechanics is a mistake and he's found another path is a little suicidal, right? But it makes it fascinating from a scientific point of view. You know, he's gotten spectra out that he says are the spectrum of the solar corona. There's not only the question of the dark matter, there's the question of why is the solar corona so hot? Where does all that energy come from? And he says it's from hydrogen dropping into hydrinos, which is the dark matter.
And you know, there's so much dark matter. The most common thing in the universe is hydrogen, right? So if it's all hydrogen, it makes sense. So I think that Randall Mills deserves a chance. Yeah, people should really listen to him. I find it absolutely fascinating. Now the stuff on low energy nuclear reactions, the lattice-enabled nuclear reactions, I think is better.
[158:36] It's not as fundamentally new science because condensed matter is a very complicated environment. As somebody on the Physics Nobel Prize committee who looked at some of the work in this area said, there's a lot of phenomenological stuff in condensed matter physics and there is room for this new phenomenon to take place maybe.
[159:02] So he took it seriously. He died. He was an elder statesman, which is why he could make a statement like this without destroying his career. He already had had a wonderful career. The fact that in a nickel lattice, if you populate almost every interstice with a hydrogen atom, somehow some of the hydrogens can turn into helium. This would not seem to be fundamentally new science.
[159:31] the way Randall Mills' hydrinos are. Now, technologically, it could be just wonderful. And that's what the Clean Planet Group is betting on with government funding and university support and also now industrial partnerships in Japan. But I have to say that Randall Mills fascinates me more because that would be very interesting. And he starts with a piece of work done by his professor
[160:04] The Japanese group seems to be interesting engineering, whereas what Randall Mills is talking about or Randy Mills is interesting physics for myself. I agree. I have the same point of view. Now there's another crazy guy. I don't know whether to take him seriously in this field or not. He's called Andrea Rossi. He has something called the energy catalyzer. I think one can be justifiably skeptical about his work.
Because Randall Mills has an organization that people have invested, I don't know, $140 million in. It's a serious organization. And the Japanese effort is involved with universities, industry, the government. That's a serious thing. But this guy, Andrea Rossi, seems to keep all the cards close to his chest. He's doing it all on his own. It's a little crazy. But he claims, well, he had more conventional low-energy or lattice-enabled nuclear reactions, much closer to Pons and Fleischmann, already 10 years ago.
But now he claims he's getting energy out of the vacuum, you know, out of quantum fluctuations in the vacuum. He has a new device. I don't know whether one should take him seriously or not, because he doesn't really publish. He doesn't reveal very much information. One demo that he did was just wrong. He had a light bulb, supposedly. Well, anyway, he claims now he's going to have an electric car where he's going to be
charging up the battery with his device, which supposedly produces electricity. He gets electricity directly, not heat or light. He's going to keep the battery of the car charged while it's going round and round the racetrack. He claims he's going to do a demo in October, but you know, he usually doesn't want to reveal enough information for people to see whether they should take him seriously or not. The poor guy has been working on this for a long time. He's not so young.
There's certainly justification for a lot of skepticism. There is a paper he published, I don't know, on arXiv or someplace like that, it's not in a refereed journal, explaining how he thinks he can extract energy from the vacuum. I don't know enough physics to judge if this paper is nonsense or makes any sense. But this just indicates that there's all this fascinating stuff going on out there, outside the system.
[162:28] Because the system can't do crazy stuff. And if you don't allow some stuff that is clearly crazy, in the future, one of these crazy people like Randall Mills, Andrea Rossi, well, the Japanese effort is more solid engineering. But if you don't allow stuff that is going to be wrong, you won't find new stuff that is right. You know, you have to allow, as one friend of mine put it at IBM, if a lot of your research projects
don't fail, you're not being aggressive enough, you're not doing real research. But now you can't fail. You have to announce in advance what you're going to do to get the research grant, right? So I think we have a serious problem in the sociology of science, and I agree with Hugh Price. You know, the dark matter was clearly giving us a picture that we're missing something big, because this is supposed to be most of the matter in the universe and we don't understand what it is.
[163:26] So clearly something big is missing in our picture of the physical universe, right? And the only proposal I've heard, well, I'm not a physicist, right, or an astrophysicist, but the stuff by Randy Mills, I find fascinating. I'm not in a position to judge his reworking of all of physics using classical physics. He has four volumes on this, but
But he has 23 experimental proofs of the existence of hydrinos. He says he has them in a bottle. He's had videos showing reactions with them. This sounds like the kind of thing that, you know, a good experimental physicist could take a look at, some of it has been published in refereed journals, and see if the evidence is good enough. I may have emailed Randy or he may have emailed me, or someone may have, I don't recall. I should look into interviewing him.
[164:22] I think he's a fascinating guy. I think it would be great to interview him. Now, Andrea Rossi will not want to be interviewed because he doesn't want to reveal any information about what he's doing. The Japanese effort, they have somebody who does public relations, a lady who gave a good talk at the last International Congress on Cold Fusion. Maybe you could, I'm sure that I have a feeling they would be happy to talk to you, or she would be happy to talk to you, since that's her job. And they
[164:52] They're going to, they want to market this thing worldwide, you know, and they think they're getting close to that. So those are two people I would suggest. Now I don't know them, you know, I don't have a, if I were friends with them, I could get you an interview with them. So you've got to break through the layers of protection. I would love to, but I'm just not going to do it if it's to promote them. If I'm an arm off of their marketing team, I don't care about that. I'll do it to understand the,
condensed matter physics in particular.
And the Japanese effort, well, the person to interview would be the physicist Arata, who came up with the evidence that convinced the Japanese to take this whole phenomenon seriously. I don't know if he's still alive. He was already an elder statesman, but I think he did those experiments with a younger collaborator who might still be alive. There's a department at Tohoku University that's working on this stuff. Okay. Well, I'll look into that. So Greg, before we get going, I want to ask you,
What are the different types of randomness? I know there are different kinds. There are lots of different kinds. There are several different definitions of randomness. I was just speaking with a Russian historian about this. The whole subject of program size complexity,
there were three of us who came up with this idea in the 60s. Remember, the computer had just begun. And if you're a pure mathematician, it's a sort of a natural question to want to look at the complexity of computations and prove theorems about it. Most people felt time was the right kind of complexity to look at. Most of the work on complexity has to do with time, runtime. Three of us thought that maybe program size was more fundamental. That was Andrei Kolmogorov in Russia, who was very old.
[167:00] There was Ray Solomonoff, a friend of Marvin Minsky. They're both gone, the two of them. Ray Solomonoff was interested in artificial intelligence and machine learning. Good friend of Marvin Minsky, a very unconventional guy, not with a university position, but a friend of Marvin Minsky. Minsky's a very bright guy. He doesn't have any stupid friends. And Marvin Minsky liked Solomonoff's work and he
included it in some of his review articles. And there was me, I was a teenager, right, at the Bronx High School of Science. Kolmogorov and I wanted to define randomness. We're pure mathematicians. Our most basic interest was to define lack of structure, because probability theory doesn't talk about that, you know, distinguishing a sequence of zeros and ones that has structure from one that is lawless. Solomonoff's motivation was induction, predicting the future, and machine learning.
He didn't define randomness. Wait, sorry. Shannon's work didn't even have to do with zeros and ones and which ones are more lawless or more chaotic than others? Yeah. I came up with this idea by reading Shannon and reading Turing at the same time and combining the two. But Shannon's information theory doesn't take an individual sequence of bits and say whether it has structure or not. It's a statistical theory. So what's random in Shannon's theory
is that you have an ensemble of all possible messages, and what's random is the distribution of probability over the ensemble of possibilities, so the information is the entropy of that distribution. The entropy is maximum when the distribution is uniform over all the possibilities, right? But you're not looking at an individual message, and I thought you should look at an individual sequence of zeros and ones, with no statistical ensemble that you embed it in.
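As a small illustration of the ensemble point just made, here is a minimal Python sketch (added for reference, not from the conversation): it computes the Shannon entropy of a distribution and checks that the uniform distribution maximizes it, while saying nothing about any individual message.

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a probability distribution over an ensemble of messages."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform distribution over 8 possible messages: entropy is maximal (3 bits).
print(shannon_entropy([1 / 8] * 8))  # 3.0

# A skewed distribution over the same 8 messages has lower entropy.
print(shannon_entropy([0.65, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05]))  # about 1.9

# Either way, the entropy is a property of the whole ensemble; it assigns no
# "randomness" to any single sequence of zeros and ones.
```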
You just look at that sequence of zeros and ones and ask, does it have structure or doesn't it? Does it obey a law or doesn't it? We wanted to define that notion. I wanted to define it because I was interested in incompleteness, and I had this intuitive feeling that this might have something to do with it. Now Kolmogorov wanted to define it because he was one of the creators of modern probability theory. Another guy I admire is Emile Borel, who gets less credit.
Borel sets, like the Heine-Borel theorem, the same Borel? Yes, the same Borel. He proved that almost all real numbers are normal, which means that the digits, and blocks of digits, are equidistributed in every base. He's a guy I admire. Kolmogorov had a more abstract formulation of probability theory, via measure theory, that was very convincing for pure mathematicians. Borel had a more down-to-earth
approach, but he did beautiful work. Beautiful work, Borel. Anyway, they're different kinds of minds. Anyway, Kolmogorov wanted it because he was one of the creators of modern probability theory. And he noticed that this notion of an unstructured sequence or random sequence, although you talk about randomness, doesn't really play a role in measure-theoretic probability theory.
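As a rough, hedged illustration of "structured versus lawless" for an individual string: true program-size complexity is uncomputable, but compressed size gives a crude upper bound, and this Python sketch (added for reference, not the speaker's) shows a periodic string compressing far better than random bytes.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of zlib-compressed data: a crude upper bound on description size."""
    return len(zlib.compress(data, 9))

structured = b"01" * 5000     # highly structured: a short program could print it
lawless = os.urandom(10000)   # random bytes: no visible structure to exploit

print("structured:", compressed_size(structured), "bytes")  # tiny
print("random:    ", compressed_size(lawless), "bytes")     # close to 10000

# Compression only ever gives an upper bound; failure to compress is evidence of
# (not proof of) lack of structure, which is the intuition behind defining
# randomness via program size.
```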
Okay, so these were three of us with different goals who came up with this same idea, basically, different versions of it, in the 60s. Now, for finite sequences, it's pretty straightforward to define lack of structure. It gets a little more difficult when you're talking about an infinite sequence, like a real number with an infinite number of bits. So this is where Martin-Löf comes in.
There's the question of how do you define an infinite sequence to be unstructured? And the bits of my halting probability are an infinite sequence, right? When you write it in binary. Well, you can sort of say that all the initial segments have to be unstructured, right? And then the whole thing, the infinite sequence, is unstructured. There's a problem with that, though. A certain amount of structure happens just by chance. It has to happen.
As Feller points out, there's the law of the iterated logarithm; you're going to get runs of the same bit, you know, and if you go out to n bits the runs are roughly log n long. There's a very beautiful theorem about this. So the complexity will dip, it has to dip, with probability one. So in the case of infinite sequences, it gets a little more delicate, and the original version that Kolmogorov proposed
doesn't work, and I proposed a definition of infinite sequence randomness that doesn't have that problem but has another problem, okay? It's too inclusive, it allows sequences that aren't random. And Kolmogorov's definition, demanding that every initial segment should be close to the maximum complexity, doesn't work, because with probability one this is going to fail, right? And you want the random sequences to come out with probability one. Okay, so Martin-Löf looked at this and said,
to define a random infinite sequence, you can't use program size. It didn't work. So he came up with a measure-theoretic definition, using constructive measure theory. That's the Martin-Löf random sequence. Okay. But then I went back and changed the definition of program size complexity. And then I got a neat definition, because my definition says exactly how much it should dip.
I changed to a program size complexity where n-bit strings, instead of having maximum complexity n, have maximum complexity n plus the information content of n. So then the dips turn out to be exactly that extra piece. So the way you define an infinite random sequence using program size is to say that the complexity of the initial segment always has to stay above n minus c, for all n. And that takes into account the dips, because the maximum possible complexity for an n-bit sequence,
with the self-delimiting program size complexity that Levin and I invented independently 10 years later, in the seventies, is n plus the program size complexity of n, which is roughly n plus log n. That's exactly how far it is going to dip: from n plus the program size complexity of n down to about n minus c. So you end up having actually three different definitions of an infinite random sequence. There's my definition talking about program size complexity, but not the original one from the sixties.
It's a better definition, what I call self-delimiting programs, and Levin came up with his own version of that in the 70s, as I did. And then there's Martin-Löf's definition, and then there's another definition that's very beautiful, done by Robert Solovay, the set theorist, who was briefly interested in this area. He and I were both at IBM Research for a while and we talked about all of this, and he said, okay, I'm going to think about this. Maybe it will help me to settle P equals NP.
And he did some beautiful work, but it didn't settle P versus NP, and he left the field, never bothered to publish. Is P equals NP shown to be a Gödel statement, by the way? I don't think so. Well, yes, it depends. You can show that it doesn't relativize. There are oracles for which P is equal to NP. If you're doing hypercomputation with an oracle,
you can make the oracle such that P is equal to NP, and you can make the oracle such that P is not equal to NP, by changing the oracle instead of allowing only normal computation. That's a theorem of, who was it? Was it Gill and Solovay, a long time ago? And that means that all the normal methods of proof can't really settle this question. But I don't know, I don't think it's shown to be
undecidable from the normal axioms. Is there a way to classify the complexity of logical systems? I don't know if logical system is the right word here, but the NAND gate is functionally complete. To construct any other gate, you just need a combination of NAND gates. But in quantum logic or in quantum computing, you need more than just one gate. You need maybe three or four. So then is that some way of saying quantum logic or quantum computing is more complex
than classical logic or classical computing? Can you use that as a measure? Susskind has proposed connecting quantum circuit complexity with some fundamental questions in physics, including black holes and whether information is lost or not. I don't understand quantum computation or the kind of physics that Susskind does.
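On the interviewer's NAND remark, a minimal Python check (added for reference, not from the conversation; the helper names are invented for illustration) that the other classical gates really can be built from NAND alone:

```python
from itertools import product

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

# Standard constructions of the basic gates from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Verify against the ordinary Boolean operators over all inputs.
for a, b in product((0, 1), repeat=2):
    assert not_(a)    == 1 - a
    assert and_(a, b) == (a & b)
    assert or_(a, b)  == (a | b)
    assert xor_(a, b) == (a ^ b)

print("NAND generates NOT, AND, OR, XOR on all inputs.")
```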
I think about classical computation. I can give you a historical comment, though. You know, von Neumann worked on pure math and also on quantum mechanics, right? He has a book which proposes the Hilbert space formulation of quantum mechanics. He was a student of Hilbert's. He invented the name Hilbert space, von Neumann did. Okay, so von Neumann thought,
if the logic of the world is really quantum logic rather than classical logic, I mean, if the world is quantum mechanical, not classical, maybe we should use a quantum logic to do pure mathematics rather than classical logic. And he and, was it Birkhoff, published a few papers on this, but they couldn't pull it off. It didn't seem to work. I don't know if there's an attempt to revive this work now with all the work on quantum computing. Maybe it's time to revisit
these issues; I've heard that some people are trying to do that. I'm not familiar with it, you know, I have to confess, I'm sort of a narrow specialist in my area. So I wanted to just, since you asked about Martin-Löf randomness, so there are three definitions of an infinite random sequence. There's one using program size that I came up with,
there's the one of Martin-Löf, which uses constructive measure theory, and there's one by Robert Solovay, which is related to the one by Martin-Löf. It also uses constructive measure theory, but from a mathematical point of view it's closer to what you want when you prove properties. It has to do with the Borel-Cantelli lemma, which is fundamental in probability theory. Did you mention that, or did you mention something else by Borel?
[177:32] Okay, so these are three different definitions, and the beautiful thing is that you can prove that they're all equivalent, even though they look very different. And that's always very good. You see, when you have a fundamental mathematical concept you're trying to capture, if people have proposed different definitions and it turns out you can prove they're all equivalent, it means you probably have really gotten it. The same thing happened with Turing machines.
There were different definitions of what is computable, many different ones in fact, and it was shown that they're all equivalent because each one of them could simulate the others, which was good. It meant that a fundamental idea had been captured. So, Martin-Löf then went on to work on other things, I think on intuitionistic logic or type theory. I had the pleasure of meeting him once in Stockholm. I never met Kolmogorov. I did meet Ray Solomonoff a few times.
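Before the conversation moves on, a compact restatement in symbols of the criterion described above (a sketch using standard prefix-complexity notation; the constants are schematic, not quoted from the speaker):

```latex
% K denotes the self-delimiting (prefix) program-size complexity discussed above.
% Maximum complexity of an n-bit string x, up to additive constants:
\[
  K(x) \;\le\; n + K(n) + O(1), \qquad K(n) \approx \log_2 n .
\]
% An infinite binary sequence X is random (equivalently Martin-Löf random, by the
% equivalence of the three definitions mentioned above) precisely when its initial
% segments never dip more than a fixed constant below their length:
\[
  \exists c \;\; \forall n \quad K(X_1 X_2 \cdots X_n) \;\ge\; n - c .
\]
```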
[178:25] So professor, what advice do you have for people who are getting into mathematics and computer science, maybe even physics, that is undergraduates? And then what advice do you have for people who are in the field? Yeah, well, I don't know how you guys can work in the current environment. The environment that I worked in wasn't as tough, but I survived only by hiding myself in an industrial lab and doing
[178:52] the research I really believed in as a hobby while I was doing practical hardware and software engineering. So I didn't have to get funding for my research. I wasn't at a university. I don't know how people manage nowadays. There's my young PhD student, Philippe Abraham, and it's a tough environment for him. I'm very upset by the way all the work on lattice enabled nuclear reactions was treated. We lost 30 years.
That could have been done in a year if it had had the proper funding and people had done what they should have done. And now we wouldn't have global warming. The situation is really getting desperate, right? With heat waves and stuff like that. And also, you know, electric cars are wonderful, but to make the batteries, the batteries are very polluting. They involve all kinds of stuff. This is going to have a big impact on the environment also.
[179:49] So if we had a cheap source of energy, low energy nuclear reactions, it would solve a lot of our problems. And it was just sociology of science or politics, in other words, that kept that from being done as it should have been. So I don't know. My advice is ignore the system, be against the system. You know, G.H. Hardy has a remark like that, which is very elitist, but I admire G.H. Hardy, wonderful pure mathematician.
He said, by definition, it's never worthwhile for someone who's first-rate to work on fashionable topics, because by definition there are plenty of people who are working on the fashionable topics. Right, right. You know, but he was living in an aristocratic environment where he was a member of the British elite. He wasn't an aristocrat, but he was on the fringes. You know, he was at Cambridge. He was able to do that, but it's very hard in the current environment,
[180:48] which means that the current environment is very inimical to real scientific progress and creativity and understanding. And I think that's just too bad. I think young people should be optimistic and they should be against the system. Now you can't, you can't be against the system. You have to try to ignore it as much as possible. So one way to do it, which is tough, is to have a day job, which earns you a living or work on fashionable topics.
and work on what you really believe in in secret. The guy who solved Fermat's Last Theorem, nobody knew what he was working on all that time, but he kept publishing papers or he would have lost his professorship. He had to keep publishing, and he probably cursed that he had to do this work just to stay in the game, but he managed to pull it off. As Sydney Brenner said, he has a bunch of friends who are all Nobel Prize winners,
He said none of them could have survived and done the work that made them famous in the current environment. This is a tremendous indictment of the current environment. So I don't know what's going to happen. One optimistic solution is to say that civilization will collapse and then a whole new world will begin and we'll have a chance to try to do things better. But that would probably be too cataclysmic. I think young people should be optimistic.
You have your blog. I managed to hide myself and do my work by some miracle. But it's not easy. It's not easy. Don't follow the fashion. You don't have to spend years learning the existing paradigms, because what's more important is to invent a new paradigm. I believe in paradigm shifts. I'm not interested in normal science, and I want to encourage people: all it takes is a new idea. You know, I had a new idea when I was 15.
Then I had to work on it many years, right? But all it takes is a new idea and then you have to work hard at it. I have a feeling that the next paradigm shift will come from the quote unquote fringes but will be verified by academia. Yeah, academia will then say they knew all along; they're already trying to change the story about cold fusion.
[183:07] Yeah. Anyway, what I say to young people is don't give up hope, be enthusiastic. You're young, you have energy, a lot of energy, you have your life ahead of you. Don't let them crush you to death. See, this is the same issue with people who, let's say there's something that the majority of people who call themselves skeptics believe is false. So I have a feeling that if
Decades later it comes out, oh, it turns out those millions of people were correct and the skeptics were incorrect. The skeptics would still say, yeah, but we were correct because at the time there was no evidence. So technically we're still correct. And now look, we've changed our mind because of new evidence. So we're still following the rational train of thought. But there's the other view, which is that if you're so wrong and you had assigned it such a low probability, like 0.0001%,
You know, one of the reasons... Einstein was not a wonderful mathematician, but he had an instinct for where something funny was happening that suggested new physics. And I'm sure there are a lot of places out there
where something funny is happening, and one of them is the dark matter, right? But it's not enough to realize that there's a hole. You have to come up with a suggestion for an explanation, something new. And as Einstein said, there's no systematic way to go from experiment to theory. It's a leap of imagination. You've got to be a little crazy to do a paradigm shift, because you have to go against what everybody believes is the case.
And it may be that you're just crazy and you're wrong. But if, yeah, but if you're right, the mafia will say, oh, we knew all along that this was a good idea, right? We had the idea ourselves earlier. And almost every idea historically can be traced back to some other idea. So you can find a lineage that leads you to a common branch. It's true. That's rewriting history. What is that called? Whig history, the history written by the winners.
Revisionist history? Revisionist history, that's right. We knew all along that cold fusion was right, and you can find it in the 20s, for example. And actually, I guess it will be a good sign if the establishment claims it, because that'll mean that cold fusion works, otherwise they wouldn't claim it. But still, I think it's dreadful what they did. Fleischmann, who was a wonderful electrochemist, he had to resign his professorship.
He and Pons had to flee the United States and go to Europe. And this could have saved the world if people had taken it seriously. But there was too much politics against it, of course. So the fact that the Japanese are taking this seriously... Well, first there was this wonderful physicist who found a very clean experiment producing helium. So that settles it. That's a nuclear signature. You know, heat is a messy thing to measure.
Calorimeters. But there's also the fact that the Japanese have no oil, and they tried using nuclear reactors, but it was a catastrophe. Yeah, I don't understand why they didn't tarnish nuclear energy as a whole, whether it's fusion or fission. Like, fission didn't work, and then they're like, well, let's try fusion, but that could also be explosive. Now the green movement, strangely enough, in Sweden
[186:44] was very unappreciative of lattice-enabled nuclear reactions. Why? I thought they would like it because you won't be burning fossil fuels anymore. No, they don't like it. They attacked it savagely. Why? Because if there's a wonderful source of nearly free energy that doesn't contaminate anything, then the human footprint on planet Earth will increase. And the green people in Sweden, they want the human population to go down. They want people
to have fewer factories, to have a smaller human footprint on planet Earth. So they were vicious in attacking Andrea Rossi. So what's driving it in that case, at least purportedly, is not clean energy, but less population? Yeah, I think they prefer that the planet should
go back to the way it looked before human beings were here, ideally, or as close to that as they can get. I think that's sort of the impulse, part of the impulse behind it. So it's true that if there's this wonderful source of nearly free energy that doesn't contaminate anything and it's not dangerous, doesn't produce any byproducts, that's going to transform the planet. And now it's going to help nations which are poor, right?
If you have a lot of energy, you can do desalination, right? Which means you can get water in places where there's only salt water, for example. This will make a lot of changes and may mean that the human population will be able to increase more and take over more of the planet. That may be why Elon Musk thinks we have to have a colony on Mars, right? Just in case we mess up things here, there'll be some human beings left elsewhere. And he may be right.
He's a pretty deep thinker. I'll say one thing. I think the future is going to be radically different than any of us can imagine. Not just because of political movements or geopolitical changes, but because of AI and also because, I think, LENR now is going to happen. It's going to happen because climate change is already becoming unbearable, right?
So all of a sudden there's going to be political backing. I see signs that that's already happening, that there's an attempt to portray LENR as something reasonable that should be explored, rather than trash science the way it was portrayed for 30 years. So that's going to be a very different future from our current one. And also medical knowledge is increasing enormously with molecular biology and things like that.
[189:28] Professor, I have to get going.
[189:45] Thanks a lot. This was a lot of fun. Keep up the good work. Thank you. Bye bye.
[190:13] The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people.
[190:25] You should also know that there's a remarkably active Discord and subreddit for theories of everything where people explicate toes, disagree respectfully about theories and build as a community our own toes. Links to both are in the description. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well.
Last but not least, you should know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in theories of everything and you'll find it. Often I gain from re-watching lectures and podcasts, and I read in the comments that, hey, TOE listeners also gain from replaying. So how about instead re-listening on those platforms?
[191:16] iTunes, Spotify, Google Podcasts, whichever podcast catcher you use. If you'd like to support more conversations like this, then do consider visiting Patreon.com slash Kurt Jaimungal and donating with whatever you like. Again, it's support from the sponsors and you that allow me to work on toe full time. You get early access to ad free audio episodes there as well. For instance, this episode was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough.
[193:08] Think Verizon, the best 5G network, is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store today and we'll give you a better deal. Now what to do with your unwanted bills? Ever seen an origami version of the Miami Bull?
[193:21] Jokes aside, Verizon has the most ways to save on phones and plans where you can get a single line with everything you need. So bring in your bill to your local Miami Verizon store today and we'll give you a better deal.
View Full JSON Data (Word-Level Timestamps)
{
  "source": "transcribe.metaboat.io",
  "workspace_id": "AXs1igz",
  "job_seq": 7618,
  "audio_duration_seconds": 11618.8,
  "completed_at": "2025-12-01T00:49:32Z",
  "segments": [
    {
      "end_time": 20.896,
      "index": 0,
      "start_time": 0.009,
      "text": " The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science they analyze."
    },
    {
      "end_time": 36.067,
      "index": 1,
      "start_time": 20.896,
      "text": " Culture, they analyze finance, economics, business, international affairs across every region. I'm particularly liking their new insider feature. It was just launched this month. It gives you, it gives me, a front row access to The Economist's internal editorial debates."
    },
    {
      "end_time": 64.531,
      "index": 2,
      "start_time": 36.34,
      "text": " Where senior editors argue through the news with world leaders and policy makers in twice weekly long format shows. Basically an extremely high quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines. As a toe listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount."
    },
    {
      "end_time": 95.52,
      "index": 3,
      "start_time": 66.254,
      "text": " There are two things that are absolutely true. Grandma loves you, and she would never say no to McDonald's. So treat yourself to a Grandma McFlurry with your order today. It's what Grandma would want. At participating McDonald's for a limited time. I think that biology is open-ended and endlessly creative. When you create a new level of reality, what goes on at the bottom level may not be visible at the top level, and you can start with many bottom levels and get to the same top level."
    },
    {
      "end_time": 118.234,
      "index": 4,
      "start_time": 96.425,
      "text": " Gregory Chaitin is a towering figure in the field of mathematical logic and complexity theory. Chaitin left formal education during high school, beginning his work in mathematical theory as a teenager. His contributions to algorithmic information theory include the development of Chaitin's incompleteness theorem, which builds on Gödel's incompleteness theorem. However, it uses less assumptions."
    },
    {
      "end_time": 145.879,
      "index": 5,
      "start_time": 118.234,
      "text": " Gödel's theorem requires the strength of arithmetic to prove that an infinite amount of mathematical facts can't be deduced from a finite set of axioms or technically a recursively axiomatizable set. Whereas Chaiten's approach reaches a similar conclusion though it relies on less assumptions and thus in some ways can be seen as more powerful. Chaiten's also famous for his constant called Chaiten's constant. There's a visual math episode on Chaiten's constant and I'll link that in the description as well."
    },
    {
      "end_time": 171.493,
      "index": 6,
      "start_time": 145.879,
      "text": " And if you'd like a definition, it's written on screen. Again, links to everything will be in the description. It's defined to be the halting probability represented by the symbol Omega, capital Omega. So how can this be understood? It's defined as the probability that a randomly selected program will stop running or halt. Chaitin's career positions include being a researcher at IBM Watson, a professor at the Federal University of Rio de Janeiro, and a member of the Institute for Advanced Studies."
    },
    {
      "end_time": 195.299,
      "index": 7,
      "start_time": 171.493,
      "text": " This episode delves into Chaitin's exploration of metabiology as well, which is a study intersecting biology and complexity theory. Physicists might discuss complexity theory in terms of chaos and randomness, but Chaitin's approach examines how randomness contributes to biological evolution using mathematical models and algorithmic mutations. You can think of metabiology as an abstracted theory of everything for biology."
    },
    {
      "end_time": 214.087,
      "index": 8,
      "start_time": 195.299,
      "text": " We also explore chaitin's views on cold fusion developments and his critique of the sociology of science or the culture of science particularly with regard to its resistance to paradigm shifts. My name is Kurt and on this podcast we explore theories of everything primarily from a physics perspective that's my background is math and physics."
    },
    {
      "end_time": 228.763,
      "index": 9,
      "start_time": 214.155,
      "text": " But as well as delving into consciousness, as well as artificial intelligence, if you like the subjects that are in this podcast, you should watch the David Walport podcast, which is also on algorithmic information theory. Enjoy this podcast with the inimitable Gregory Chaitin."
    },
    {
      "end_time": 258.677,
      "index": 10,
      "start_time": 229.309,
      "text": " Keep in mind, Gregory Chaitin did tell me that at times his children will be in the background and they may make noise, but luckily it doesn't impair the audio at all. It's a fantastic podcast. It's one of my favorite podcasts. I hope that you enjoy it just as much as I did. Professor, welcome to the theories of everything podcast. I'm super excited to have you on. I know we've spoken for I think a couple of months now, maybe even one year ago over email. So welcome. Pleasure to be here with you."
    },
    {
      "end_time": 285.111,
      "index": 11,
      "start_time": 259.428,
      "text": " Welcome. What is it that you've been working on? Yeah, well, my wife and I are about to go to Morocco. We're going to be in residence for one academic year at a new Institute for Advanced Study connected with a university, a new university called UM6P. And my project for there is to write a book. And I'm not sure what the title is."
    },
    {
      "end_time": 314.838,
      "index": 12,
      "start_time": 285.538,
      "text": " But the idea is to look at episodes of mathematics. It's maybe a tentative title is mathematics evolving, but I'm not covering the evolution of mathematics because that's an enormous broad subject. I'm trying to cover episodes and mathematics that have a big philosophical impact. For example, Cantor's theory of infinite sets, Gödel's incompleteness theorem, Turing's halting problem, the halting probability. And I also want to have a chapter on is it possible to mathematize biology?"
    },
    {
      "end_time": 344.155,
      "index": 13,
      "start_time": 315.162,
      "text": " Right. And also a chapter on why mathematics, you know, what makes you suspect what that the world mathematics is relevant to the world. And they're my funny answer. The usual answer is, for example, astronomy. The Sumerians were very good at predicting lunar eclipses, for example. But I think in our everyday life, the best answer is a crystal. I collect"
    },
    {
      "end_time": 372.022,
      "index": 14,
      "start_time": 344.428,
      "text": " Is Chaitin's incompleteness theorem going to make an appearance in it?"
    },
    {
      "end_time": 398.797,
      "index": 15,
      "start_time": 373.37,
      "text": " Yeah, well, maybe not the one you're thinking of. I want the halting probability to have a chapter. You mean the theorem that you can't prove, you can't establish upper bounds, lower bounds on the complexity of an individual object? That was the one that, is it Dave Walpert talked to you about? Oh, you watched that. That's great."
    },
    {
      "end_time": 426.118,
      "index": 16,
      "start_time": 399.445,
      "text": " Yeah, I saw that. Well, you made a clip of that part, right? Right, and then also Neil deGrasse Tyson. Yeah, I saw that he wasn't very receptive to discussing philosophy of mathematics. Well, he's an astronomer, right, or an astrophysicist. Correct. It's natural. Yeah. It's natural? Have you encountered people saying that you're too philosophical before? Well,"
    },
    {
      "end_time": 453.063,
      "index": 17,
      "start_time": 426.886,
      "text": " I think the math community is not particularly philosophical. They want to solve important problems, develop their theories. I think Goethe's Incompleteness Theorem is largely forgotten. It was very shocking at the time. I, as a young student, read essays by Hermann Weyl and von Neumann. They were deeply, profoundly shocked by Goethe's Incompleteness Theorem. You know, people are good at suppressing unpleasant"
    },
    {
      "end_time": 472.892,
      "index": 18,
      "start_time": 453.422,
      "text": " facts or like thinking about death so mathematicians don't think about Gödel incompleteness theorem and it doesn't seem that it has, I want to discuss that in my book maybe, it doesn't seem that it has a big impact on the work that most mathematicians are interested in. That's a controversial question to what extent."
    },
    {
      "end_time": 505.998,
      "index": 19,
      "start_time": 479.923,
      "text": " I devoted my life to thinking about incompleteness and trying to find new reasons for incompleteness and strengthen incompleteness results. So I bet that maybe this was important. Maybe I bet on the wrong horse, but certainly I had a lot of fun. Can you explain Chaitin's incompleteness theorem, the one about the lower bounds, as well as what Chaitin's constant is, and if there's a relationship between the two?"
    },
    {
      "end_time": 536.8,
      "index": 20,
      "start_time": 507.841,
      "text": " Well, there is, all of the proofs are related. I think the easiest incompleteness theorem to understand has to do with defining algorithmic randomness or algorithmic incompressibility of a finite string of bits. That's a string of bits that can't be produced from any computer program substantially smaller than it is in bits, right? So that's an algorithmically irreducible, finite string of bits."
    },
    {
      "end_time": 565.64,
      "index": 21,
      "start_time": 537.637,
      "text": " And when you develop the theory of program size complexity and randomness of finite bit strings, it turns out that most, most bit strings have very close to the maximum possible complexity. Most of them are irreducible. Most of them are random in this sense. And then if you ask, well, what if I want to prove that specific bit string, you know, you can just toss a coin, you get a bit string, which would very high probability."
    },
    {
      "end_time": 593.66,
      "index": 22,
      "start_time": 566.271,
      "text": " doesn't have any program substantially smaller than it is to calculate it. And you can even quantify that. You know how the, as you ask for the program to get smaller and smaller bit by bit, the number of bit strings that can be compressed by n bits goes down exponentially as n increases. So it's heavily bunched on the maximum possible complexity. The most"
    },
    {
      "end_time": 622.978,
      "index": 23,
      "start_time": 594.036,
      "text": " finite strings of bits, the n-bit strings, require programs very close to n-bits. And as you ask for programs that are k-bits smaller than the size of the string, it's roughly 2 to the minus k of all the possible bit strings, n-bit bit strings that can be compressed by k-bits. So as you make k bigger, the number of the strings that can be compressed that much goes down very rapidly. Okay, so"
    },
    {
      "end_time": 649.889,
      "index": 24,
      "start_time": 623.353,
      "text": " So if you just toss a coin say a thousand times you know that it's irreducible and you can quantify exactly you know the margin of how close it has to be the complexity to the maximum possible right the probabilities are easy to estimate. So what happens if you want to prove you want to have a specific example you want to prove that a specific big string"
    },
    {
      "end_time": 674.735,
      "index": 25,
      "start_time": 650.794,
      "text": " is algorithmically irreducible or close to it. It's an approximate notion. There's not a sharp cutoff. And so it's a little messy mathematically to talk about this. So the answer is you can't. The answer is if you have a formal axiomatic theory for mathematics that is in a certain sense n bits of algorithmic information,"
    },
    {
      "end_time": 699.019,
      "index": 26,
      "start_time": 675.333,
      "text": " You're not going to be able to prove that a bit string that is more than n bits long is algorithmically irreducible. Even though maybe. Even though the probability is enormously higher than it is. So this is something that has an enormously high probability and you can estimate, you can give good bounds on the probabilities, but it's something that is unprovable."
    },
    {
      "end_time": 725.486,
      "index": 27,
      "start_time": 699.531,
      "text": " So the number of bits of algorithmic information in a formal axiomatic theory, Hilbert thought there was a theory of everything for math, right? So this would have had a certain number of bits of algorithmic information. I can define that more precisely if you want. And it wouldn't be very large because the thought was that pure mathematics starts from a small group of axioms."
    },
    {
      "end_time": 752.688,
      "index": 28,
      "start_time": 726.032,
      "text": " If you're doing Peano arithmetic, it's a small group of axioms with symbolic logic. If you're using Zermelo-Frenkel set theory as your basis for mathematics, that would be another candidate for a theory of everything. It would be a little more complicated, but it's not very large because people don't believe in very complicated axioms. The basic principles have to be simple to be self-evident and convincing."
    },
    {
      "end_time": 782.073,
      "index": 29,
      "start_time": 753.302,
      "text": " Hilbert had hoped that there would be a theory of everything for all of pure math and that everyone could agree on it and this would give us absolute certainty because there would be a precise criterion for mathematical truth because if you have a formal axiomatic theory that contains all of math that everyone agrees on, if you have what you think is a proof formulated within this system in symbolic logic, there's an algorithm to check if the proof is correct or not, if it obeys the rules. So that becomes objective."
    },
    {
      "end_time": 811.015,
      "index": 30,
      "start_time": 782.602,
      "text": " And you can even imagine a program that runs through all possible, the tree of all possible proofs, producing all possible theorems. And that's Emile Post's way of looking at a formal axiomatic theory. It's just as far as he's concerned, and as far as I'm concerned, it's just an algorithm for generating all the theorems you can prove, right? And it's an algorithm that goes on forever. And the precise definition of the algorithmic information content of a formal axiomatic theory,"
    },
    {
      "end_time": 838.729,
      "index": 31,
      "start_time": 811.544,
      "text": " is the size and bits of the program that runs through all possible proofs, checking which ones are correct, the tree of all possible deductions from the axioms using the symbolic logic you're using, producing all the truths, all the provable theorems. You see, it's a calculation that never ends and the program wouldn't be terribly long. So the moment you have a, a bit string that's generated at random by tossing independent tosses of a fair coin,"
    },
    {
      "end_time": 863.951,
      "index": 32,
      "start_time": 839.36,
      "text": " If it's substantially larger in bits than the algorithmic formulation of the kind I just told you, of the formal axiomatic theory you're using to try to prove that this bit string is algorithmically irreducible, you can prove it's impossible. It's just impossible to prove. It's a very paradox kind of argument that with n bits of accents you can't prove that anything"
    },
    {
      "end_time": 893.404,
      "index": 33,
      "start_time": 865.06,
      "text": " needs substantially more than n bits to be calculated. You can't give individual examples. And you can prove that it's impossible? You can prove that the probability of it goes to zero? No, no, it's not the probability goes to zero. The probability is zero. You can't prove, you can't prove with a formal axiomatic theory that is n bits in the sense that I said that the algorithm that runs through all possible proofs, finding all the theorems,"
    },
    {
      "end_time": 922.995,
      "index": 34,
      "start_time": 894.155,
      "text": " is an n-bit algorithm, then you can't give an individual example of any finite object that needs more than that number of bits to be calculated. So this is the theorem that David Wolpert mentioned. So these bounds are actually applied to all possible formal axiomatic theories for which there's an algorithm to run through all possible proofs and get all the theorems."
    },
    {
      "end_time": 945.452,
      "index": 35,
      "start_time": 923.558,
      "text": " Gödel's original proof applies only to Peano arithmetic, you see. But already starting with Turing's paper of 1936, there's a more general approach that then Emil Post captured and worked on, and I'm following that approach. So these are pretty general incompleteness results."
    },
    {
      "end_time": 973.302,
      "index": 36,
      "start_time": 945.691,
      "text": " So in other words, if you want an individual, even though most bit strings are algorithmically irreducible, if the string is longer than the number of bits that it takes to formulate your axiomatic theory, you're never going to be able to prove that an individual string that has more bits than the number of bits in your theory is algorithmically irreducible. So your theorem doesn't rely on piano arithmetic?"
    },
    {
      "end_time": 1002.619,
      "index": 37,
      "start_time": 974.633,
      "text": " No, it applies to any formal axiomatic theory that is algorithmic, for which there's an algorithm to check if a proof is correct or not, that it's a mechanical procedure to check if a proof is correct or not, which implies that there's a mechanical procedure to generate all the theorems, all the consequences of the axioms. It's very slow. This algorithm would be very slow because you're running through the tree of all possible deductions from the axioms, but it would include every"
    },
    {
      "end_time": 1032.056,
      "index": 38,
      "start_time": 1003.148,
      "text": " every result that can be proven within your formal axiomatic theory and Hilbert thought there would be one for all of math that mathematicians could agree on and that would be a very concrete proof that mathematics provides absolute certainty that it's black or white which is what most mathematicians assumed until ghetto so this is a very general incompleteness result so another there are three sort of formulations of this that are roughly equivalent"
    },
    {
      "end_time": 1061.869,
      "index": 39,
      "start_time": 1032.637,
      "text": " One is that if you toss a coin, an independent toss of a fair coin, a number of times that's substantially larger than the bits of axioms of your formal axiomatic theory, even though it's extremely likely that this will give you an algorithmically irreducible unstructured random string, you won't be able to prove it from those axioms because there's more information in the random string than there is in the axioms you're trying to use to prove that it's random."
    },
    {
      "end_time": 1090.555,
      "index": 40,
      "start_time": 1062.176,
      "text": " Now another version is the result is you can't prove lower bounds. You can't exhibit any individual example of an object with an n-bit formal axiomatic theory. You can't exhibit any particular finite object to give a specific example of an object which provably requires to be calculated a program that is substantially larger than the number of bits"
    },
    {
      "end_time": 1119.514,
      "index": 41,
      "start_time": 1090.998,
      "text": " of axioms in your formal axiomatic theory. Now, I'm saying substantially, which is a little messy detail, but in the mathematics, there's a sharper formulation. I like to talk about elegant programs. You're fixing the programming language. An elegant program is one that has the property that no smaller program in the same language produces the same output. That's the most concise program in that particular"
    },
    {
      "end_time": 1149.94,
      "index": 42,
      "start_time": 1122.5,
      "text": " And here you get a very sharp result. If you want to prove that a program is elegant that has more than n bits, you need a formal axiomatic theory that has n bits. Otherwise, you can't prove it. You can't exhibit any object that is provably requires more than n bits if you have an n bit theory. There's a proof for this. I don't know if I should get into the proof."
    },
    {
      "end_time": 1180.162,
      "index": 43,
      "start_time": 1150.538,
      "text": " No, don't worry about the proof. It's going to be difficult to convey with words. Okay. The proof is very simple, but I would need a blackboard probably to explain it better. This is a very simple result compared to ghetto's incompleteness theorem. The proof is really very, very simple. So these are three different versions of essentially the same idea formulated in slightly different ways. The proof uses something called the Berry Paradox."
    },
    {
      "end_time": 1210.196,
      "index": 44,
      "start_time": 1180.674,
      "text": " You see, because if I could prove that a specific object requires very big programs and I'm proving this from a smaller number of bits, I can run through all the proofs in your formal axiomatic theory until I find the first object that provably requires a lot more bits than in your theory. And then what's happened is I've calculated that object, which supposedly needs a very large program, but I've calculated from a much smaller number of bits, which is the number of bits in your theory."
    },
    {
      "end_time": 1235.725,
      "index": 45,
      "start_time": 1211.305,
      "text": " You see, the proof is very, very simple. Now, as Grothendieck said, when you really understand something, the proof should be very short and very simple. So this, I think, but this is a different kind of inconvenience result than girls. It's rather different in character. Now, starting from this,"
    },
    {
      "end_time": 1264.701,
      "index": 46,
      "start_time": 1236.749,
      "text": " There is something called the halting probability. The omega number, it's a real number. It's a probability. It's between zero and one. You know, zero would be no computer program holds. You generate computer programs at random and one would be every computer program holds. And of course, the answer is somewhere in the middle. And if you write this number in binary, it is algorithmically irreducible. All the initial segments have very high complexity."
    },
    {
      "end_time": 1281.442,
      "index": 47,
      "start_time": 1265.538,
      "text": " It looks totally unstructured. It's incompressible. It's a number defined in pure mathematics that really cannot be distinguished from independent tosses of a fair coin. So in the world of pure mathematics, where truth is assumed to be black or white,"
    },
    {
      "end_time": 1310.913,
      "index": 48,
      "start_time": 1282.807,
      "text": " Hear that sound? That's the sweet sound of success with Shopify. Shopify is the all-encompassing commerce platform that's with you from the first flicker of an idea to the moment you realize you're running a global enterprise. Whether it's handcrafted jewelry or high-tech gadgets, Shopify supports you at every point of sale, both online and in person. They streamline the process with the Internet's best converting checkout, making it 36% more effective than other leading platforms."
    },
    {
      "end_time": 1336.971,
      "index": 49,
      "start_time": 1310.913,
      "text": " There's also something called Shopify Magic, your AI-powered assistant that's like an all-star team member working tirelessly behind the scenes. What I find fascinating about Shopify is how it scales with your ambition. No matter how big you want to grow, Shopify gives you everything you need to take control and take your business to the next level. Join the ranks of businesses in 175 countries that have made Shopify the backbone."
    },
    {
      "end_time": 1360.23,
      "index": 50,
      "start_time": 1336.971,
      "text": " of their commerce. Shopify, by the way, powers 10% of all e-commerce in the United States, including huge names like Allbirds, Rothy's, and Brooklyn. If you ever need help, their award-winning support is like having a mentor that's just a click away. Now, are you ready to start your own success story? Sign up for a $1 per month trial period at Shopify.com"
    },
    {
      "end_time": 1370.845,
      "index": 51,
      "start_time": 1360.23,
      "text": " Go to Shopify.com slash theories now to grow your business no matter what stage you're in Shopify.com slash theories."
    },
    {
      "end_time": 1391.63,
      "index": 52,
      "start_time": 1373.848,
      "text": " Razor blades are like diving boards. The longer the board, the more the wobble, the more the wobble, the more nicks, cuts, scrapes. A bad shave isn't a blade problem, it's an extension problem. Henson is a family-owned aerospace parts manufacturer that's made parts for the International Space Station and the Mars Rover."
    },
    {
      "end_time": 1420.111,
      "index": 53,
      "start_time": 1391.63,
      "text": " Now they're bringing that precision engineering to your shaving experience. By using aerospace-grade CNC machines, Henson makes razors that extend less than the thickness of a human hair. The razor also has built-in channels that evacuates hair and cream, which make clogging virtually impossible. Henson Shaving wants to produce the best razors, not the best razor business, so that means no plastics, no subscriptions, no proprietary blades, and no planned obsolescence."
    },
    {
      "end_time": 1436.476,
      "index": 54,
      "start_time": 1420.111,
      "text": " It's also extremely affordable. The Henson razor works with the standard dual edge blades that give you that old school shave with the benefits of this new school tech. It's time to say no to subscriptions and yes to a razor that'll last you a lifetime. Visit hensonshaving.com slash everything."
    },
    {
      "end_time": 1466.476,
      "index": 55,
      "start_time": 1436.476,
      "text": " If you use that code, you'll get two years worth of blades for free. Just make sure to add them to the cart. Plus 100 free blades when you head to H E N S O N S H A V I N G dot com slash everything and use the code everything. You have something that is a very good simulation of randomness of independent tosses of a fair coin in pure mathematics. So this is a sort of a worst case."
    },
    {
      "end_time": 1486.049,
      "index": 56,
      "start_time": 1466.971,
      "text": " You see, the original idea of Hilbert was that he wanted a theory of everything for all of math that all mathematicians could agree on, because there were paradoxes and controversies about how to do math, and Hilbert proposed that formalism was a good way to clarify this."
    },
    {
      "end_time": 1515.742,
      "index": 57,
      "start_time": 1486.766,
      "text": " So his idea was that all of mathematical truth, all of the infinite, if there were a theory of everything for all of math, all of mathematical truth, this infinite amount of mathematical results in the plutonic world of mathematics, of pure mathematics, could be compressed into a finite set of axioms and rows of inference. Right? Now the halting probability of omega is the opposite extreme. It's this infinite amount of information. It's a real number with an infinite number of bits."
    },
    {
      "end_time": 1543.456,
      "index": 58,
      "start_time": 1516.8,
      "text": " which is incompressible. It looks totally unstructured. If you want to determine the first n bits of the numerical value of the halting probability, you need an n-bit theory. If you want a computer program that will output the first n bits of the halting probability, you need a computer program that is n bits long. And similarly, if you want to prove what the bits are, you would need n bits of axioms and rules of inference to be able to determine n bits of the halting probability."
    },
    {
      "end_time": 1573.643,
      "index": 59,
      "start_time": 1543.831,
      "text": " So the halting probability is a sort of a worst case. It's a case where mathematical truth is completely irreducible. So it's the opposite extreme from Hilbert's idea of a theory of everything, which would be a finite amount of information, a finite set of axioms, a theory of everything would be a finite set of axioms in rules of inference that would give you all the infinite world of platonic ideas, all the provable results. And here is a case of an example"
    },
    {
      "end_time": 1599.172,
      "index": 60,
      "start_time": 1574.019,
      "text": " which is the exact opposite, where you have an infinite amount of mathematical information that can't be compressed at all. You see? So this is sort of, as a physicist friend of mine told me, Karl Svazic, this is sort of a nightmare for the rational mind because it's the opposite extreme for what Hilbert thought was going to be the case in pure math. It's something that's completely irreducible and compressible mathematical information."
    },
    {
      "end_time": 1627.073,
      "index": 61,
      "start_time": 1599.991,
      "text": " You see, because this whole thing probably can be defined more precisely, and then it's a specific real number, and its value is determined by its definition when you go through all the work. Have you heard of finiteism? Finiteism is the idea that the world of mathematics, you shouldn't allow infinite things in the world of mathematics, right? Yeah, well, I've heard of it."
    },
    {
      "end_time": 1657.91,
      "index": 62,
      "start_time": 1628.217,
      "text": " It's certainly a defendable point of view, but it destroys a lot of beautiful mathematics. Mathematics is, I don't know how to put it, it's sort of a fake world, maybe. Part of the problem with finiteism is where do you cut the line? How big is too big? It's sort of where you put the wall, but you see, for example, there's a very simple proof that there are infinitely many prime numbers that goes back to"
    },
    {
      "end_time": 1682.892,
      "index": 63,
      "start_time": 1658.439,
      "text": " Ancient Greece. And there you have infinity already, you know, zero, one, two, three, four, five. So mathematically, I agree that in our daily lives, we don't see infinity anywhere, right? Or maybe if you're religious, you think that God is infinite, but we're certainly not infinite, we human beings, but you can prove things about infinity in pure mathematics. That's part of the"
    },
    {
      "end_time": 1712.858,
      "index": 64,
      "start_time": 1683.507,
      "text": " The beauty of it, that this is this platonic world of mathematical ideas, even the simplest math, which is just 0, 1, 2, 3, 4, 5 addition and multiplication, you know, when you talk about numbers that are composite or prime, you have some very simple, beautiful proofs. The standard example that G. H. Hardy gives in his lovely little book called A Mathematician's Apology. When I was a student, it seemed lovely to me. I was reading it not that long after Hardy had written it. He wrote it"
    },
    {
      "end_time": 1737.073,
      "index": 65,
      "start_time": 1713.831,
      "text": " during the Second World War to cheer himself up. He was a pacifist. Well, you didn't have to be a pacifist to be very sad about what was going on in Europe during the Second World War. So he wrote this little book called The Mathematician's Apology. He's apologizing in the end a little aggressively for the fact that he likes pure mathematics, mathematics that has no applications. He's thinking of applications to war and destruction."
    },
    {
      "end_time": 1768.063,
      "index": 66,
      "start_time": 1739.804,
      "text": " He says one field of math which will never have its hands with blood is number theory or arithmetic, where you talk about prime numbers, for example, and prove there are infinitely many. Historically, that hasn't quite been the case because number theory is used in some schemes for cryptography, prime numbers and factorization. Probably Hardy would be unhappy about those practical applications of"
    },
    {
      "end_time": 1788.865,
      "index": 67,
      "start_time": 1768.507,
      "text": " I'm sorry, that was the first World War or the second? This was the second World War. Yeah, because the Germans were already working on the Enigma machine at the time. Yeah, but nobody knew about it. Artie was doing pure mathematics at Cambridge. He was, of course, very sad because his students mostly went to war, right?"
    },
    {
      "end_time": 1816.51,
      "index": 68,
      "start_time": 1789.599,
      "text": " And many of them didn't come back from the war. I'm not sure if this is when he had Ramanujan there. Ramanujan was a foreign national, so he wouldn't have been called into the armed forces. And maybe some of the students at Cambridge would have been upset that he wasn't fighting for England. But at that time, India was part of the Commonwealth. So I'm not sure how this all plays out. Anyway,"
    },
    {
      "end_time": 1841.527,
      "index": 69,
      "start_time": 1817.09,
      "text": " How is a theory of everything in physics different than a theory of everything in math? Well, in theory of everything in math from Gödel in 1931, we know is impossible. Now in physics, I think that a lot of physicists still hope to have a theory of everything for the physical world. But if the physical world doesn't contain infinity,"
    },
    {
      "end_time": 1871.323,
      "index": 70,
      "start_time": 1842.108,
      "text": " Then you don't get incompleteness results, for example. So Gödel's incompleteness theorem, as far as I can see, does not translate directly into saying there can't be a theory of everything for physics. For example, the halting problem, Turing's halting problem, which I love because my own work comes from Turing's work more than from Gödel's, is not realistic from the point of view of physics because you're talking about unbounded or infinite time of computation. You're asking if an algorithm will halt"
    },
    {
      "end_time": 1899.753,
      "index": 71,
      "start_time": 1872.261,
      "text": " in a finite amount of time, but it can be an absolutely monstrous amount of time. So this is, I think, unrealistic from the point of view of physics. So I don't think these mathematical results, in my opinion, translate that say that there is no theory of everything for pure math. It translates into saying there's no theory of everything for theoretical physics, unless you maintain that the world of mathematics is a subset of the"
    },
    {
      "end_time": 1931.374,
      "index": 72,
      "start_time": 1903.353,
      "text": " some people who think of that. Now, on the other hand, how could that argument be made? I could see the argument that physics is a subset of math. How would the other direction go? Well, the other direction would say God is a this is it's Pythagoreanism. God is a pure mathematician and the universe is built out of the ontological basis for the universe is mathematics, is pure mathematics. God is a pure mathematician and the world was created out of"
    },
    {
      "end_time": 1960.384,
      "index": 73,
      "start_time": 1932.261,
      "text": " out of pure mathematics. See, there's a problem. If you're looking for the fundamental ontology of the universe, you know, it's good to pick something solid, not try to build the universe out of marshmallow, for example. So one thing that's pretty solid is the world of pure mathematics. If you're sticking, say, to the natural numbers, the whole numbers, addition, multiplication, that's a pretty sharp, crystal clear foundation, perhaps."
    },
    {
      "end_time": 1990.094,
      "index": 74,
      "start_time": 1961.186,
      "text": " So that's one possible point of view. I personally find a little more attractive a philosophical thesis that says that maybe God is a computer programmer, not a pure mathematician. So instead of saying all is number, God is a mathematician, I would say all is algorithm, God is a computer programmer. So that's a Neopithagorean ontology, but you know, it's really up to physicists to look and see how the physical world"
    },
    {
      "end_time": 2019.036,
      "index": 75,
      "start_time": 1990.742,
      "text": " is built out of what? Sorry to interrupt. Would God still be a programmer if God had every possible program? Because then it's like a programmer selects from the set of all programs. But in Wolfram's view, there's something like the set, the Rulliad space or the space of all software, the space of all programs. Yeah. And he said, I don't understand the Rulliad very well, in spite of the fact that Steven is a good friend of mine. He's a very"
    },
    {
      "end_time": 2048.285,
      "index": 76,
      "start_time": 2019.497,
      "text": " Deep thinker and a very sharp guy. I'm sure he knows what he's talking about. But if I understand him properly, he's saying there's no need to know the laws of physics for this particular universe because all possible laws of physics are taking place at the same time. They're all entangled and the observer selects. The observer has an illusion that some particular laws are the ones that apply. But I think his thesis is that the rule yet is what really exists and then"
    },
    {
      "end_time": 2074.855,
      "index": 77,
      "start_time": 2048.78,
      "text": " You have all possible laws at the same time. This is an amazing new viewpoint that Stephen is working on with collaborators. But the traditional notion would be that God picks either a set of partial differential equations to create the world or he picks a particular software implementation that would generate the world. You know, Leibniz has a remark"
    },
    {
      "end_time": 2101.135,
      "index": 78,
      "start_time": 2075.299,
      "text": " As God cogitates and calculates so the world is made, this is not a brand new point of view. I guess you could claim it goes back to Pythagoras, at least, if not earlier. When Pythagoras said that the world is built out of mathematics, we didn't have modern physics, so there was already a lot of astronomical information"
    },
    {
      "end_time": 2128.695,
      "index": 79,
      "start_time": 2101.476,
      "text": " But the usual evidence for the idea that Pythagoras claims that the world is built out of mathematics, I believe, were musical instruments, for example, or the motions of the planets. I personally prefer crystals as the most obvious example of mathematical substructure of the world. And I've asked people, you know, is there a mention of crystals anywhere in the text that come to us from classical Greece, from ancient Greece?"
    },
    {
      "end_time": 2159.36,
      "index": 80,
      "start_time": 2129.36,
      "text": " And I never got a clear answer. I think you can put it all on a DVD. But as far as I know, nobody made this argument. I imagine they have mineral crystals in Greece or Turkey was then, but Magna Grecia was part of... Well, I believe that the platonic solids that were studied historically, the one that was studied the least or studied the latest is related to a crystal that's the most rare. Oh, so the Greeks..."
    },
    {
      "end_time": 2180.555,
      "index": 81,
      "start_time": 2159.735,
      "text": " Connected the platonic solids with crystal mineral crystals. I don't know if that is the case, but we can use this historical fact infer that Okay, that's possible. Well, that would be good, but you would have to ask Stephen or one of his collaborators your question again because Right, right, right. I think his stuff is fascinating, but I have to confess I don't"
    },
    {
      "end_time": 2202.705,
      "index": 82,
      "start_time": 2180.828,
      "text": " Fully understand it. Yeah. So there's a difference between a theory of everything and a delimiting theory of everything. And so a theory of everything is actually easy because you can say anything goes and there you are your theory of everything. Anything that can happen is going to happen, but it's difficult to have a delimiting theory of everything where you say, well, can we distinguish between what is and what isn't?"
    },
    {
      "end_time": 2232.585,
      "index": 83,
      "start_time": 2203.985,
      "text": " Yeah. Well, if you believe in some kind of multiverse, then the laws of this particular universe is just what our address, our postal address in the multiverse. Oh, that's a great way of putting it. It begins to sound a little trivial, right? Unimportant. Yeah. Well, I personally feel more comfortable with a more traditional point of view, according to which the universe is built out of laws of physics, one particular set of laws of physics."
    },
    {
      "end_time": 2261.493,
      "index": 84,
      "start_time": 2233.575,
      "text": " but maybe it is plastic and maybe it is anything goes that would be that would solve the problem of finding the laws of this universe well it changes it it becomes why if i understand steven's notion of the observer the the observer somehow selects depending on what the observer can observe that makes it look like a particular set of laws are operational whereas he believes i think that the rulliad"
    },
    {
      "end_time": 2291.749,
      "index": 85,
      "start_time": 2261.92,
      "text": " Is the fundamental ontology, which includes all possible time evolutions, all possible formal axiomatic theories. Also, by the way, Stephen has recently published a book called the physicalization of metamathematics because in his really, there isn't that much of a difference between a, an algorithmic world and a formal axiomatic theory. His approach is sufficiently general. It all sort of looks similar."
    },
    {
      "end_time": 2318.575,
      "index": 86,
      "start_time": 2291.988,
      "text": " You know, I told you this computer version of a formal axiomatic theory as an algorithm that generates, goes through all the tree of all possible proofs, generating all the theorems. And this kind of thing is also included in the roulette, if I understand the roulette properly. So this is an exciting development. I've known Stephen for many years and the fact that he's come up with digital models that he feels are"
    },
    {
      "end_time": 2348.746,
      "index": 87,
      "start_time": 2318.916,
      "text": " of the physical universe is wonderful news and he has a lot of young collaborators who are working with him on this. We'll have to see how it develops, but I don't understand it very well. Have you heard of David Walpart's no free lunch theorems? And if so, does it have any relationship to yours? I've tried to take a look at that and I really haven't been able to get the point. I'm sure it's a good piece of work, but somehow it doesn't resonate with my own thinking."
    },
    {
      "end_time": 2379.599,
      "index": 88,
      "start_time": 2349.701,
      "text": " Yeah, I don't know for what reason, so I can't really think of anything to say on that topic. Wolpert is a deep thinker and he seems to be one of the few people who appreciates some of the work I've done on incompleteness, which is unusual. Most people don't like it, especially not logicians. Yes. OK, so the idea, in other words, is, as I said, Grothendieck somewhere says that if you really understand something,"
    },
    {
      "end_time": 2408.507,
      "index": 89,
      "start_time": 2380.026,
      "text": " Proof should all be short and sort of obvious in retrospect when you really understand the subject. And he doesn't like proofs that require cleverness, you know, what is it called in French? Anyway, so there was a famous result he got that he didn't bother to publish because he thought the proof wasn't sufficiently natural. It was too long and it required some cleverness instead of his usual"
    },
    {
      "end_time": 2438.763,
      "index": 90,
      "start_time": 2408.916,
      "text": " So I would sort of agree and I think the work I've done on completeness sort of hits you in the face because originally Gödel's work looked very complicated. It was sort of like general relativity. You could say that there were 10 people on planet Earth who understood it, right? It's a very technical, very brilliant piece of work. But following Turing and then Post and introducing the idea of the size of programs, bits of information,"
    },
    {
      "end_time": 2463.131,
      "index": 91,
      "start_time": 2440.043,
      "text": " I think incompleteness hits you in the face. So I believe this to be the right formulation for thinking about incompleteness. Now the logic community will disagree. They find the idea of randomness abhorrent, for example. I mean, this is a logician's worst nightmare come true. They believe that everything that's true is true for a reason, as Leibniz would say."
    },
    {
      "end_time": 2493.592,
      "index": 92,
      "start_time": 2467.585,
      "text": " I sympathize with their point of view, I understand. I work with concepts that really come out of theoretical physics. People never enjoy when a field gets invaded by alien concepts that they don't really feel comfortable with. So, during the course of my life, I had many invitations to physics meetings with very good physicists, including, for example, the Solvay physics conferences."
    },
    {
      "end_time": 2520.435,
      "index": 93,
      "start_time": 2493.968,
      "text": " in Belgium that originally included Einstein and Madame Curie and Poincaré. I was invited to two of them, which was a real treat. And you could see photographs of the older meetings with this very distinguished group of physicists that created the modern physics. Yeah. So physicists feel much more comfortable with my work because I'm taking the idea of randomness"
    },
    {
      "end_time": 2549.753,
      "index": 94,
      "start_time": 2520.964,
      "text": " which is a fundamental idea in statistical physics. You know, now all physics is statistical physics. Yeah, this is what I was talking to Neil deGrasse Tyson about, that there may be a connection because you're dealing with randomness and randomness is inherent to quantum mechanics. At least we think so. Yeah, I think that even Stephen Hawking made a remark like that in The New Scientist once. It looks appealing. I don't know how to make a direct connection"
    },
    {
      "end_time": 2577.21,
      "index": 95,
      "start_time": 2550.316,
      "text": " The ideas certainly seem to resonate with each other, right? There's an empathy with them, but I don't see how to take my work and make it into quantum mechanics. But I agree that what is emerging is a point of view. Well, I would say not just randomness. You see, I'm working on algorithmic information, which is a classical notion. A normal Turing machine is a classical device."
    },
    {
      "end_time": 2605.896,
      "index": 96,
      "start_time": 2577.619,
      "text": " And physicists are now more interested in quantum computers, right, and qubits, quantum information. And there's a serious attempt on the part of many physicists to build the world out of quantum information, for example, to get space-time out of quantum information. And those are two different notions of information. And then if you look at molecular biology, Sydney Brenner says somewhere, the way you get molecular biology is you say,"
    },
    {
      "end_time": 2636.357,
      "index": 97,
      "start_time": 2606.493,
      "text": " I don't care about metabolism. I don't care where the cell gets its energy from. All I care about is information. This was the revolution, the change of viewpoint, the paradigm shift that gets to molecular biology. The energy will take care of itself. What's important is to look at how information is represented in the cell, how it acts, how the information goes around. This is also information. We have three different fields where"
    },
    {
      "end_time": 2665.759,
      "index": 98,
      "start_time": 2636.681,
      "text": " some notion of information, but also there's computer technology, right, which clearly is software. So I see this notion of information as a fundamental new paradigm that goes across fields, that goes across fields in different versions in biology, molecular biology, in quantum mechanics now, which is built out of qubits. It's a whole different way of looking at quantum mechanics. When I was a student, it was the Schrodinger equation. You know, that's how you started"
    },
    {
      "end_time": 2693.387,
      "index": 99,
      "start_time": 2666.049,
      "text": " Of course, in quantum mechanics, now it seems to me some people probably start with qubits, right? That's what everybody's interested in quantum computers. And of course, in algorithmic information theory, which is my hobby horse, you're using a notion of information to try to clarify or understand better incompleteness from 1931 and Gödel's work. And then of course, there's software, which is computer technologies built on"
    },
    {
      "end_time": 2721.425,
      "index": 100,
      "start_time": 2693.695,
      "text": " Hear that sound?"
    },
    {
      "end_time": 2748.2,
      "index": 101,
      "start_time": 2722.09,
      "text": " That's the sweet sound of success with Shopify. Shopify is the all-encompassing commerce platform that's with you from the first flicker of an idea to the moment you realize you're running a global enterprise. Whether it's handcrafted jewelry or high-tech gadgets, Shopify supports you at every point of sale, both online and in person. They streamline the process with the internet's best converting checkout, making it 36% more effective than other leading platforms."
    },
    {
      "end_time": 2774.309,
      "index": 102,
      "start_time": 2748.2,
      "text": " There's also something called Shopify Magic, your AI-powered assistant that's like an all-star team member working tirelessly behind the scenes. What I find fascinating about Shopify is how it scales with your ambition. No matter how big you want to grow, Shopify gives you everything you need to take control and take your business to the next level. Join the ranks of businesses in 175 countries that have made Shopify the backbone."
    },
    {
      "end_time": 2800.111,
      "index": 103,
      "start_time": 2774.309,
      "text": " of their commerce. Shopify, by the way, powers 10% of all e-commerce in the United States, including huge names like Allbirds, Rothy's, and Brooklyn. If you ever need help, their award-winning support is like having a mentor that's just a click away. Now, are you ready to start your own success story? Sign up for a $1 per month trial period at shopify.com slash theories, all lowercase."
    },
    {
      "end_time": 2827.807,
      "index": 104,
      "start_time": 2800.111,
      "text": " Or is bits of information, then it wouldn't have a physical representation. But normally you'll say you need a physical representation for information in a computer or anywhere else, and then information isn't fundamental. Well, I don't know about that. It's true that a computer has to have a power supply, right?"
    },
    {
      "end_time": 2858.217,
      "index": 105,
      "start_time": 2828.66,
      "text": " But to understand what a computer does, you really have to look at a different level. You have to look at the software. And the whole point of computers is to hide as much as possible the physical implementation, which changes every time there's a new generation of technology. But what doesn't change or changes much more slowly is the software environment. So it's really like a new emerging. I think it's a new emerging concept, even though some people will say, oh, there's no such thing as information."
    },
    {
      "end_time": 2878.302,
      "index": 106,
      "start_time": 2858.814,
      "text": " In the way that you're speaking about information, are you referring to it in a platonic sense or something different than that?"
    },
    {
      "end_time": 2907.773,
      "index": 107,
      "start_time": 2878.951,
      "text": " No, I think I'm referring to it in a platonic sense. So when you were referring to Pythagoras' view of mathematics earlier, does that stand in contrast to Platonism or is it a subset of Platonism or it evolved into Platonism? Maybe it evolved into Platonism. I don't know exactly what Pythagoras said, but we attribute to Pythagoras the notion that God is a mathematician, that the world is built out of pure mathematics."
    },
    {
      "end_time": 2935.128,
      "index": 108,
      "start_time": 2908.063,
      "text": " But that's the ontological basis. This was something that people would say, I don't know, maybe 1900, 1910, 1920, 1930. Maybe nobody talks in terms of God anymore. I grew up reading all this stuff, you know, in the 1950s and early 60s, and you could still find essays which talked in these terms. Einstein is always talking about God, but Einstein's notion of God, as he says,"
    },
    {
      "end_time": 2959.258,
      "index": 109,
      "start_time": 2935.776,
      "text": " is not a personal God who worries about individuals but is Spinoza's God that is the laws of the universe sort of. So this is a more abstract view of God. So if originally the Pythagorean point of view would be the world is built out of number, out of pure mathematics, or perhaps it's built out of partial differential equations, those happen more often in current"
    },
    {
      "end_time": 2995.009,
      "index": 110,
      "start_time": 2965.64,
      "text": " There's a fascinating medallion that Leibniz designed that I think was never struck, actually."
    },
    {
      "end_time": 3024.684,
      "index": 111,
      "start_time": 2995.384,
      "text": " until Stephen Wolfram had it made and gave it to me as a 60th birthday present. There's a medallion that Leibniz wanted. His patron was a succession of dukes. Was it Brunswick? It was one of the German duchies right before the unification of Germany. Okay, Leibniz was interested in everything and he was exchanging mail with a Jesuit who was in Beijing."
    },
    {
      "end_time": 3054.224,
      "index": 112,
      "start_time": 3025.145,
      "text": " trying to see if they could convert the Chinese to Catholicism, or at least trying to see connections between Chinese philosophy and Western philosophy. The Jesuits, you know, are the intellectuals of the Catholic Church. They're all brilliant. There were many Jesuit astronomers. And so this correspondent, a Jesuit priest in Beijing, told Leibniz about the I Ching,"
    },
    {
      "end_time": 3084.957,
      "index": 113,
      "start_time": 3055.401,
      "text": " which is used for a divination. What is it? Is it five yes or no alternatives that you use for? You toss a coin five times or something like that and then you can make a divination. This is called the i-chain, I believe. And Leibniz immediately realized that you could represent numbers just in base two, in binary notation, which apparently some other people had also figured out. But Leibniz went a little further"
    },
    {
      "end_time": 3114.155,
      "index": 114,
      "start_time": 3085.64,
      "text": " And he sort of, if I understand his medallion, he's sort of saying that you could create the whole world out of zero and one. And in our digital world, you are sort of creating the whole world out of zero and one. You know, you have video, you have audio, you have software, everything is zeros and ones underneath. And Leibniz goes a little further. And in the medallion, it says in Latin something like the one has created everything out of nothing."
    },
    {
      "end_time": 3144.206,
      "index": 115,
      "start_time": 3114.787,
      "text": " And it's a pun. The one is God and everything out of nothing is zero. So from one and zero, you create the world. So he has examples of binary arithmetic addition and multiplication. And there are also images of the sun, the moon, and maybe the Godhead with light streaming from it. So it does seem that this is the idea that perhaps the universe is actually just built out of zeros and ones. Now, this is a tremendous anticipation of our digital technology."
    },
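A minimal sketch in Python of the point being made here, that any whole number can be written using only 0 and 1, with addition and multiplication carrying over unchanged in base two. The helper name to_binary is invented purely for illustration.

```python
# Minimal sketch: any whole number can be written using only 0 and 1,
# and arithmetic works the same way in base two.

def to_binary(n: int) -> str:
    """Return the base-2 representation of a non-negative integer."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))
        n //= 2
    return "".join(reversed(bits))

for n in range(6):
    print(n, "->", to_binary(n))   # 0 -> 0, 1 -> 1, 2 -> 10, 3 -> 11, ...

a, b = 0b101, 0b11                 # 5 and 3, written in binary
print(bin(a + b), bin(a * b))      # 0b1000 (eight) and 0b1111 (fifteen)
```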
    {
      "end_time": 3172.602,
      "index": 116,
      "start_time": 3144.667,
      "text": " You could say that instead of calling zero and one bits, you should call them Leibniz's, right? This is a very forceful statement of digital philosophy or what is it called? I think in France. Digital physics? No, you say numerique. I don't know. Well, anyway, so this is from the late 1600s. Leibniz had built a calculating machine and"
    },
    {
      "end_time": 3202.79,
      "index": 117,
      "start_time": 3173.353,
      "text": " He had realized that it might be simpler if you wanted to do a hardware implementation of arithmetic using zeros and ones, binary, instead of using decimal, which of course we all know is what won, right? Modern computers internally are using bistable systems and they're using binary notation. So this seems to be the earliest statement I can think of that maybe the ontological basis of everything is"
    },
    {
      "end_time": 3230.998,
      "index": 118,
      "start_time": 3203.353,
      "text": " While we're being metaphorical, have you heard the case made that not only is mathematics somehow ontological, but that beauty is somehow more ontological so that the more beautiful the math is, the more true it is?"
    },
    {
      "end_time": 3261.408,
      "index": 119,
      "start_time": 3231.63,
      "text": " Because you can have inelegant math like you mentioned with growth index saying that a proof that's substantial isn't as beautiful as one that's compressed. Well, that's short. It has to be sort of obvious. Once you understand something properly, it should be almost obvious once you put it in the right context. You shouldn't need any tricks in the proof. There's a saying that says that your definitions should be such that your proofs are short. But anyway, so about this beauty in mathematics, it's something I never thought about."
    },
    {
      "end_time": 3287.978,
      "index": 120,
      "start_time": 3261.63,
      "text": " Yeah, that's definitely true. Well, beauty is a beautiful concept, right? It certainly motivated me to study mathematics as a young student. But it's a very slippery notion, you know, if you apply it, for example, to sexual beauty, for example, beautiful females, or whatever it is, you find beautiful or beautiful mountains or beautiful landscapes."
    },
    {
      "end_time": 3318.234,
      "index": 121,
      "start_time": 3288.336,
      "text": " I don't know, it seems to be involved with biology beauty and it depends on the cultures and it depends on the century you're in and what part of the world you've grown up in. I think the cultural context is important. I don't know, with mathematics maybe we think we pure mathematicians fool ourselves and think that we're somehow dealing more directly with beauty in its primordial form. But if you look at the history of mathematics, the kind of mathematics people"
    },
    {
      "end_time": 3347.398,
      "index": 122,
      "start_time": 3319.172,
      "text": " do changes over the centuries. What Euler regarded as a proof would not be regarded as a proof nowadays, for example. I personally find Euler's papers very beautiful because he gives his whole train of thought. I struggled through one in Latin once, but he also has papers in French, which I can sort of read, mathematical French. And it's very beautiful. When you're reading Euler's papers, you think you could have done it."
    },
    {
      "end_time": 3371.118,
      "index": 123,
      "start_time": 3348.677,
      "text": " But only Euler could have done it. He presents the whole train of thought and it sort of drops into his hands very often. So I don't know. I think this notion of beauty is important. It certainly motivates. It's very important for motivation. Even if it's the case that what we think of as beautiful is predicated on our biology, you make the case. Or on fashion."
    },
    {
      "end_time": 3401.049,
      "index": 124,
      "start_time": 3371.578,
      "text": " Or on fashion. Well, you also make the case that the biology may be predicated in math with meta biology or in computer science algorithmic information. Can we talk about meta biology? Okay. Well, sure. Happy to talk about it. This is more tentative work that I haven't fully worked out, but some young researchers have picked up the torch. Maybe I should mention their names. Otherwise, maybe I'll forget to mention them. Hector, Hector Zaneil is"
    },
    {
      "end_time": 3430.572,
      "index": 125,
      "start_time": 3401.357,
      "text": " one person who's continuing to work in this area or related areas and also Felipe Abraham that got a doctorate with me. He's my only doctoral student, in fact, in Brazil. Okay, well, there is the question that always bothered me philosophically is that, you know, the world of physics seems to be deeply mathematical, right? The Schrodinger equation, Einstein's field equations,"
    },
    {
      "end_time": 3458.148,
      "index": 126,
      "start_time": 3431.442,
      "text": " It's deeply mathematical, mostly partial differential equations nowadays, I guess. Anyway, but biology is not that way, right? Biology is a million pound marshmallow. And obviously we're very interested in biology. We're biological organisms, right? And we're very interested in understanding biology better, maybe so we can cure diseases or prevent diseases, which would be even better. There is this thing that always bothered me."
    },
    {
      "end_time": 3485.213,
      "index": 127,
      "start_time": 3458.524,
      "text": " The pure math doesn't seem to enlighten us about biology as much as it enlightens us about the physical world. And there are some mathematical theories in biology, of course. There's what used to be called, what is it called? Biophysics? Game theory? No, yes, but evolution seems to be the most important idea in biology. The most central idea is Darwin's theory of evolution, right?"
    },
    {
      "end_time": 3510.998,
      "index": 128,
      "start_time": 3485.742,
      "text": " The question is, how do you explain the diversity, well, the creation of life is important, but also how do you explain the diversity of life forms across this planet? And the explanation for that is supposed to be Darwin's theory of evolution or the modern versions of it, right, which combine Mendelian genetics and something called, I think it's called population genetics."
    },
    {
      "end_time": 3537.022,
      "index": 129,
      "start_time": 3511.561,
      "text": " which the modern synthesis is based on three legs, right? One is Darwin's original theory, which people rejected initially. But then when you combined it with Mendelian genetics, and then you combine it with some nice, elegant mathematics called population genetics, looking at how gene frequencies change in a population in response to selective pressures,"
    },
    {
      "end_time": 3565.06,
      "index": 130,
      "start_time": 3537.381,
      "text": " This was sort of the three legs of what I think was called the modern synthesis. Now, modern synthesis, I don't know how modern it was. Maybe it was the 1930s. But this was the rebirth of Darwin's theory, which people thought was sort of ridiculous at first. They didn't like the idea that everything comes from randomness. You know, it had been God that had created organisms."
    },
    {
      "end_time": 3589.548,
      "index": 131,
      "start_time": 3565.469,
      "text": " And then all of a sudden random God is replaced by randomness. And I think the initial reaction was largely negative. I don't know. I used to like reading plays by George Bernard Shaw and he would comment about the reception of because he lived through Darwin's theory and the initial reactions to it. And he has a play called Back to Methuselah."
    },
    {
      "end_time": 3617.637,
      "index": 132,
      "start_time": 3590.128,
      "text": " Which is almost unstageable because it has five acts occurring. It's like science fiction over thousands of years, and it's talking about the evolution of the human race. Going back from the garden of Eden, a garden of Eden to human beings that are godlike. And I don't know if this would be five hours or more to perform. It's been done very rarely. But I think it's based on his reading of Darwin's theory of evolution, right?"
    },
    {
      "end_time": 3646.954,
      "index": 133,
      "start_time": 3617.892,
      "text": " which he talks about in an enormous prologue. His plays were often accompanied by prologues or epilogues that were as long as the play, which were very didactic and where Shaw was expressing his opinion on everything. Well, anyway, I'm sorry, I'm wandering around too much. So anyway, in my opinion, you know, there be, yeah, I'm sorry about this. There are physicists who try to understand life on the basis of energy and things like that, concepts that come from physics."
    },
    {
      "end_time": 3676.374,
      "index": 134,
      "start_time": 3647.346,
      "text": " and i think that just doesn't work i think you need to look look at a higher level of abstraction and i think the first paper that i find on this topic of the mathematics of biology is a piece of work by von neumann which is not his self-reproducing computer in a cellular automata world that's very well known it's an earlier paper from i don't remember in the 40s i think a talk that he wrote up as a paper and"
    },
    {
      "end_time": 3705.964,
      "index": 135,
      "start_time": 3676.783,
      "text": " It's very courageously says that there are natural automata and artificial automata. Artificial automata are computers, which were just being invented, and natural automata are biological organisms. It seems to me that what this paper is saying that the fundamental concept of biology is software, and it's also the fundamental concept of computer technology. This idea of software is what explains the success of computers as a technology,"
    },
    {
      "end_time": 3730.93,
      "index": 136,
      "start_time": 3706.305,
      "text": " But it's also what explains the plasticity of the biosphere because it's also based on software. So the idea would be my interpretation of Darwin's of Von Neumann's paper from the 40s is that nature discovered software before we did just until Watson and Crick. This was before Watson and Crick, by the way, a little bit before, and it inspired"
    },
    {
      "end_time": 3760.469,
      "index": 137,
      "start_time": 3731.323,
      "text": " Sydney Brenner, a lot of the people who created molecular biology, which says that information is the most important thing to understand in biology, were physicists who were inspired by a little book by Schrodinger called What is Life. Now, Sydney Brenner, whom I have the privilege of chatting with on a few occasions, shared an office with Crick at Cambridge. He's one of the creators of molecular biology, and he was inspired by Van Norman's paper. He was in South Africa and his friend was Seymour Papert."
    },
    {
      "end_time": 3786.51,
      "index": 138,
      "start_time": 3760.828,
      "text": " was interested in computers and Papert told Brenner, who was a chemist, about this paper of annoyance and as a result of that Brenner decided to go to Cambridge eventually and he ended up sharing an office with Crick and Brenner is the gentleman who said, you know, to hell with metabolism, to hell with energy, we're going to create a new field by concentrating on information in the cell and in biology."
    },
    {
      "end_time": 3816.459,
      "index": 139,
      "start_time": 3787.09,
      "text": " You have to forget about it. So this is the fundamental idea, I think. And it's an idea that doesn't exist in physics, the idea of software. This is a new concept. And what I've tried to do in microbiology is go one step. Okay. Now, then von Neumann comes up with this cellular automata model, cellular automata world, and he constructs an organism that can replicate itself, right? But it can't move by the way in this world, you can't"
    },
    {
      "end_time": 3845.435,
      "index": 140,
      "start_time": 3817.142,
      "text": " Translate yourself. The only way to move is to make a copy of yourself and then make another copy. Yeah. So I think now when I was doing this, just keep saying he was working for the U S atomic energy commission. He had been involved in Los Alamos. Everybody was horrified by the atom bomb. There's a whole movie about this now on open minor, right? And just, and so von Neumann was a member of the atomic energy commission."
    },
    {
      "end_time": 3873.422,
      "index": 141,
      "start_time": 3846.032,
      "text": " and was involved in decisions, I think, of horrible things like targeting, I don't know. So to keep himself sane, he did a big crossword puzzle. He's very bright and he outlined this solid automata world and a self-reproducing automata. But I'm going to criticize one of my heroes. This was just a crossword puzzle he was doing because the real work he was doing at that time was not mathematically significant and not up to"
    },
    {
      "end_time": 3903.712,
      "index": 142,
      "start_time": 3873.916,
      "text": " But Neumann is a thinker. But it's the wrong level to think about biology mathematically. Because if you have to create a whole world that functions and you have an organism that is worked out in all detail in this cellular automata world, that's too big a job. I mean, it may be fun to do a simulation, you know, of evolution in some toy world on a computer. But if you want to prove a theorem, that's too low, too level. I'm interested in proving theorems."
    },
    {
      "end_time": 3932.466,
      "index": 143,
      "start_time": 3907.312,
      "text": " That's too low-level a formulation of biology, and you're never going to be able to come up with a whole physical world in which you can have an organism that you can prove is going to evolve. That's too big a problem, and it's like creating the whole world."
    },
    {
      "end_time": 3959.889,
      "index": 144,
      "start_time": 3932.79,
      "text": " So you have to think about this problem at the right level to be able to prove any theorems. So the way I thought might be a model, a simplified model of biology that would be mathematically amenable goes like this. If you say that DNA is the software of life and that the basic idea, mathematical idea of biology is software, as von Neumann says, I believe, and I agree with that,"
    },
    {
      "end_time": 3989.753,
      "index": 145,
      "start_time": 3960.896,
      "text": " You know, it's a popular thing to say DNA is the software of life, right? Ventner says that, for example, in his book. Okay. So if you take that idea seriously, it's going to be very tough to have a mathematical theory of evolution where you're dealing with DNA. DNA is very complicated language, you know, to run a DNA program, you need a cell. It's going to be, you can maybe can have systems biology, right? Doing good simulations, but you're not going to be able to prove any theorems because the system is too complicated and messy. So,"
    },
    {
      "end_time": 4015.555,
      "index": 146,
      "start_time": 3989.991,
      "text": " I said, instead of looking at natural software, which is DNA, and trying to look how it evolves and see if you can prove theorems, I said, let's look at artificial software. Von Neumann talks about natural development and artificial development. Let's look at artificial software. Artificial software being any kind of software? Would be a computer program. And let's make random mutations in the computer program and see if we can"
    },
    {
      "end_time": 4045.52,
      "index": 147,
      "start_time": 4016.698,
      "text": " Prove theorems about the way it evolves. So that's the idea of Metabology. It's one step removed from normal biology in an attempt to get something simple enough that you can actually prove theorems. So instead of subjecting natural software, which is DNA to random mutations and trying to prove that evolution will occur under certain circumstances, my suggestion was to look at something when removed, which is to take artificial software, which is a"
    },
    {
      "end_time": 4074.548,
      "index": 148,
      "start_time": 4045.657,
      "text": " some computer programming language, subject it to mutations and see if you can random mutations and see if you can, you have to put in selection somehow and see if you can prove theorems about evolution taking place. And this is tractable mathematically. I have a sketch of a very simplified version of this where I can actually prove some theorems. So this is my suggestion for how to do a mathematical biology that gets at the, maybe gets at the fundamental question of"
    },
    {
      "end_time": 4102.722,
      "index": 149,
      "start_time": 4075.06,
      "text": " The ideal would be to prove that evolution can explain the diversity of the biosphere we see here. That would be wonderful, but I don't think it can be done. I don't think it's possible. I don't think mathematics can deal with a messy, complicated million pound marshmallow like that. But if you reformulate the problem, one step removed from biology and you're making changes, random changes in some computer programming language that you pick,"
    },
    {
      "end_time": 4131.732,
      "index": 150,
      "start_time": 4103.251,
      "text": " And then you subjected somehow to selective pressure in a toy model where I don't have a whole physical universe. I don't have a cellular automata world. You know, I'm formulating the problem more abstractly. Then I can actually prove theorems. You get a little theory that I call metabiology. And I published a little book about this called proving Darwin, which is not really what the book does. The subtitle is making biology mathematical. Well, it's an attempt. It's an embryonic theory."
    },
    {
      "end_time": 4155.657,
      "index": 151,
      "start_time": 4132.432,
      "text": " that I hope points in a direction, in that direction. The title is not an academic book. It's actually a maybe over mathematically sophisticated popular science book, but the title had to be catchy in the hope of maybe selling copies. If it had been published as an academic book, it would have had a more tentative and more modest title."
    },
    {
      "end_time": 4183.695,
      "index": 152,
      "start_time": 4156.288,
      "text": " So metabiology is going along the lines of thinking, okay, it's messy to think about embodiment. Let's uncouple that. It's messy to think about metabolism. It's messy to think about the environment. So there's this continual uncoupling of in order to work with what's most general. However, evolution requires a fitness function. Yet survival of the fittest implies you're fit for something or fit relative to something. Right. Yeah. And metabiology has a fitness function."
    },
    {
      "end_time": 4208.046,
      "index": 153,
      "start_time": 4183.968,
      "text": " Well, first of all, I also do another piece of simplification is I don't deal with populations. I only have one organism at a time and you subject it to a modification and if it gets fitter, it replaces the original organism. It's simple. There's no population. There's just one organism. So it ends up being a random walk in software space. Well, it's a hill climbing random walk in software space because you have this one organism that's being mutated."
    },
    {
      "end_time": 4238.183,
      "index": 154,
      "start_time": 4208.387,
      "text": " And if the mutation made it fitter, then that becomes the organism. So that's a step in software space. So this is sort of a random walk in software space, which is a nice new phrase. I don't think that anybody ever thought of it. Okay. So software space, the space of all software, the space of all possible programs? Of all possible organisms, because the organisms are just software. Oh, they're only DNA in essence. There are no bodies, there are no populations. I'm trying to make this simple enough that I can actually"
    },
    {
      "end_time": 4266.647,
      "index": 155,
      "start_time": 4238.524,
      "text": " Proof theorems and there is an environment sort of this model actually needs an oracle for the halting problem. This is the environment. This is where new information comes from. That corresponds to the environment actually in this toy model and you can prove some theorems about the rate of evolution depending on how you evolve. So I look at three regimes in this toy world. Well, one of them is"
    },
    {
      "end_time": 4296.101,
      "index": 156,
      "start_time": 4266.92,
      "text": " What is creationism called? Intelligent design. Intelligent design is if you always pick the best mutation, God knows how, that will have the organism improving fitness as quickly as possible. And that's not the real world, obviously. But that evolves very rapidly. So that shows you the most rapid evolution that's possible in this model. Then the worst case, the slowest way of evolving is if you have no memory and you're just"
    },
    {
      "end_time": 4320.725,
      "index": 157,
      "start_time": 4296.237,
      "text": " You basically have to search through the entire space of all possible organisms. Well, in one case, you get n bits of information in time n, that's intelligent design, because each mutation is just the one you need to get one more bit of information from the environment. Now, if you're doing a random walk, you're doing exhaustive search in the space of all possible organisms,"
    },
    {
      "end_time": 4348.387,
      "index": 158,
      "start_time": 4321.22,
      "text": " It's going to take you a time that goes exponentially to get in bits of information in your organism. It's going to be two to the end on the order of two to the end. So in one case, it goes up linearly to get in bits of information in your organism. That's if somehow you picked out the best possible mutation. But if you pick mutations at random and you don't remember the previous organism, you just pick another organism. So that's exhaustive search and that's the worst case and it'll take you"
    },
    {
      "end_time": 4377.739,
      "index": 159,
      "start_time": 4348.746,
      "text": " sort of the order of two to the n steps to get n bits of information in your organism. Now, how about real Darwinian evolution, where you're making a random change in an organism and then you stay with the new organism, then you make a random change in that. So that has memory. And how does that compare with the best case and the worst case? It gives you an idea. And the answer, surprisingly enough, is that to get n bits of information from an oracle for the halting problem,"
    },
    {
      "end_time": 4407.995,
      "index": 160,
      "start_time": 4378.217,
      "text": " It takes time that grows as n squared. N to the third. Yeah, nearly n squared. Yeah, nearly n squared. Yeah, good guess. I think it's n plus epsilon actually. I think n to the third works, but actually it can be pretty close to n squared, which is pretty fast. It's not as fast as, yeah, now this is not a practical model because I am using a Oracle for the halting problem. This is where the new information comes from. This is the environment."
    },
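A compact restatement of the three regimes just described, with the growth rates as stated in the conversation (constants omitted; this is only a summary, not a derivation):

```latex
% Time T(n) for the organism to acquire n bits of halting-problem information:
\begin{align*}
\text{intelligent design (best mutation each step):} \quad & T(n) \sim n \\
\text{exhaustive search (random, no memory):}        \quad & T(n) \sim 2^{n} \\
\text{cumulative random evolution (hill climbing):}  \quad & n^{2} \lesssim T(n) \lesssim n^{3}
\end{align*}
```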
    {
      "end_time": 4437.398,
      "index": 161,
      "start_time": 4408.37,
      "text": " Can you briefly explain what an Oracle is? Oh, an Oracle. This is a less known masterpiece paper by Turing. Everyone knows the 3637 paper, right? Where the halting problem comes from about computable numbers and the fact that most numbers are uncomputable. But there's a 1938 paper, which I don't find very interesting except for the fact that he introduces the idea of an Oracle in this paper. So what an Oracle is,"
    },
    {
      "end_time": 4467.398,
      "index": 162,
      "start_time": 4437.824,
      "text": " is the idea of taking the holding problem. There's no algorithm to solve the holding problem, right? So what happens if you take a normal computer and you give it an oracle for the holding problem? That is to say, the computer has this other unit, this magical unit on the side. Whenever you want to, you can send it a program and ask it, does it hold or not? And the oracle will say yes or no. This is unphysical. This is unrealistic because this violates the holding problem."
    },
    {
      "end_time": 4492.858,
      "index": 163,
      "start_time": 4468.08,
      "text": " Yes, this is unrealistic physically. This is a mathematical fantasy of computers that are more powerful than ordinary computers. Is this called hypercomputation or is this unrelated? This may be. This may be what it is. And sorry to interrupt, but in this analogy with the intelligent design, is the God, quote unquote, being analogized to an oracle? No, no."
    },
    {
      "end_time": 4522.773,
      "index": 164,
      "start_time": 4494.65,
      "text": " intelligent design there is no intelligent design in my model what it is is what happens if you see the normally the mutations are picked at random right that's the idea but what happens if instead of being at random every time you you pick a mutation to subject your software organism to you pick the best mutation that will make it in order to do that you need an oracle though no yeah you couldn't do that it's not algorithmic"
    },
    {
      "end_time": 4545.367,
      "index": 165,
      "start_time": 4522.961,
      "text": " This is the fastest possible evolution in this model. That's a non-algorithmic step also in this model. In order to prove theorems here, I have a formulation which could not be simulated on a computer because it requires non-computable steps and it also requires an oracle for the halting problem."
    },
    {
      "end_time": 4569.36,
      "index": 166,
      "start_time": 4548.507,
      "text": " the environment you have in this model. The organisms are getting information from the environment, which is an oracle for the halting problem. So anyway, the idea of this hyper computer, I guess maybe it's called, this is a normal Turing machine to which you add something that can't exist in this physical world, which would be a unit on the side."
    },
    {
      "end_time": 4599.838,
      "index": 167,
      "start_time": 4570.213,
      "text": " which will solve the halting problem for you. If you give it a program, it'll say this program also, this program never holds. So this is a notion of computation, which is more powerful than normal computation. Now, guess what? You could iterate this. You, because this hyper computer, this computer with an Oracle for halting problem also has a halting problem, which it can solve. This is the problem of whether an algorithm which can use an Oracle for the halting problem eventually stops or not. And even with an Oracle for the halting problem,"
    },
    {
      "end_time": 4625.06,
      "index": 168,
      "start_time": 4600.06,
      "text": " You can answer this. So then you would need an oracle for the halting problem of computers with an oracle for the halting problem. So you get like this tower of Cantor's infinities. Yes. And this is actually called the Turing degrees sort of. Another name for it is the arithmetical hierarchy. It's a hierarchy of more and more powerful computers and only the ones at the bottom"
    },
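A conceptual sketch in Python, just to make the interface being described concrete: the oracle is a magic unit that answers halting questions, and the next level up is the halting problem for machines that are themselves allowed to consult that oracle. Nothing here is implementable; the names HaltingOracle and jump_halts are invented for illustration.

```python
from typing import Protocol

# Conceptual sketch only: Turing's oracle machine is an ordinary computer plus
# a magic unit on the side that answers halting questions. By Turing's theorem
# no real implementation of `halts` can exist; the interface just makes the
# idea concrete.

class HaltingOracle(Protocol):
    def halts(self, program: str) -> bool:
        """Answer whether the given program ever halts (on empty input)."""
        ...

def jump_halts(oracle: HaltingOracle, oracle_program: str) -> bool:
    """The next level up (the 'jump'): does a program that may consult
    `oracle` ever halt? Even a machine equipped with `oracle` cannot decide
    this, which is what generates the tower of ever stronger oracles, the
    Turing degrees / arithmetical hierarchy mentioned above."""
    raise NotImplementedError("deciding this needs an oracle one level higher")
```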
    {
      "end_time": 4654.36,
      "index": 169,
      "start_time": 4625.503,
      "text": " on the zero level are sort of realizable physically although even the normal Turing machine is a bit of a mathematical fantasy because it has no limitation on the time or the amount of storage as long as it's finite it will go on presumably so they're already impractical as the Turing machine the halting problem is an impractical question because if you ask if a program halts in a specific amount of time you just run it for that amount of time and see it's only when you ask if it halts"
    },
    {
      "end_time": 4682.978,
      "index": 170,
      "start_time": 4654.838,
      "text": " with unbounded, given unbounded time that you get an undecidable question. And are your organisms in your meta biology models, do they have access to unbounded computation? Yeah, that's a good question. Well, they have access to an Oracle for the halting problem. In the form of the environment? Yeah, that sort of corresponds to the environment. I never said what the fitness is. It's just the number produced. I'm looking at programs that calculate a number, a whole number."
    },
    {
      "end_time": 4709.002,
      "index": 171,
      "start_time": 4683.302,
      "text": " and the bigger than the number of the fit of the organism. It's a very simple measure of fitness. Sorry, can you repeat that? The organisms want to calculate a very big number. Okay. The bigger the number, they calculate it. They're finite calculations. It's like they want to solve a math problem. Yeah. The math problem is to name a very big number by calculating it."
    },
    {
      "end_time": 4736.493,
      "index": 172,
      "start_time": 4709.275,
      "text": " So that's called the busy beaver problem, which actually is equivalent to the halting problem. So it's a very simple minded measure of fitness. And that's why it's a hill climbing random walk in software space because you, you only take a random step if it increases the fitness, if the resulting program calculates a bigger number than your current organism. So that's a very simple model. Now there's some problems with this model. It's a,"
    },
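A purely illustrative, fully computable caricature of this hill-climbing random walk. It is not Chaitin's actual model, which uses arbitrary programs, algorithmic mutations, and a halting oracle; here organisms are tiny arithmetic expressions, fitness is the number they compute, and a mutation is kept only if it increases that number. All names and parameters below are invented for the sketch.

```python
import random

# Toy caricature of the hill-climbing random walk in "software space":
# an organism is a tiny arithmetic expression, its fitness is the whole
# number it computes, and a mutated organism replaces the current one only
# if it computes a bigger number. The real model uses arbitrary programs
# and an oracle for the halting problem; this bounded stand-in only
# illustrates the shape of the walk.

ALPHABET = "0123456789+*"

def fitness(program: str):
    """Fitness = the non-negative whole number the expression evaluates to,
    or None if the mutated text is not a usable organism."""
    if not program or len(program) > 40 or "**" in program:
        return None                      # keep the toy bounded
    try:
        value = eval(program, {"__builtins__": {}})
        return value if isinstance(value, int) and value >= 0 else None
    except Exception:
        return None                      # stands in for "this mutation is no good"

def mutate(program: str) -> str:
    """A crude point-style mutation: change, insert, or delete one character."""
    chars = list(program)
    pos = random.randrange(len(chars))
    roll = random.random()
    if roll < 0.4:
        chars[pos] = random.choice(ALPHABET)
    elif roll < 0.8:
        chars.insert(pos, random.choice(ALPHABET))
    else:
        del chars[pos]
    return "".join(chars)

organism, best = "1", 1
for step in range(20000):                # the hill-climbing random walk
    candidate = mutate(organism)
    f = fitness(candidate)
    if f is not None and f > best:       # keep the mutation only if it is fitter
        organism, best = candidate, f

print("final organism:", organism, "fitness:", best)
```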
    {
      "end_time": 4766.92,
      "index": 173,
      "start_time": 4737.312,
      "text": " platonic model because some mutations may not give you a program at all. They may never halt. Another question is, what do you take as considered to be mutation? That was something that I worked on in this theory. You know, the normal notion of mutations in molecular biology are sort of point mutations, right? There's a indel that's inserting another base or deleting a base. They're very localized changes."
    },
    {
      "end_time": 4795.964,
      "index": 174,
      "start_time": 4767.449,
      "text": " in a DNA strand. And the problem with this is I couldn't sort of make a mathematical theory where I do the same kind of mutations on my software organisms, but it was horrendously ugly. The proofs were terrible. As you said, in pure math, you change the definition if the proofs are ugly or don't go through at all, right? So it turned out that I got a very nice theory by allowing a very powerful notion of mutation, which I allow algorithmic mutations."
    },
    {
      "end_time": 4826.323,
      "index": 175,
      "start_time": 4796.323,
      "text": " So an algorithm mutation is a global mutation that takes as input the software organism that is your current organism. Remember, I don't have a population. I have one organism at a time and it produces a new software organism. Now there you already have a problem because if you pick an algorithm at random, it may be that it never outputs a new software organism. So I have that problem that I have to define away from this model."
    },
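To make the contrast with point mutations concrete, here is a hedged sketch of what an algorithmic mutation looks like in this setting: a whole-program transformation that takes the current organism (its source code) as input and returns a new organism. In the actual model the mutation is an arbitrary, randomly chosen program, and the ones that never produce an output have to be filtered away, which is exactly where the halting oracle enters; the two example mutations below trivially always halt, so they dodge that issue. The names are invented for illustration, matching the toy hill-climber sketched earlier.

```python
import random
from typing import Callable, Optional

Organism = str                                # an organism is just source code
Mutation = Callable[[Organism], Optional[Organism]]

# Two toy algorithmic mutations: global rewrites of the whole organism, as
# opposed to flipping or inserting a single character (a point mutation).
# In the real model the mutation is itself a randomly chosen program, and
# mutations that never output anything must be discarded, the step that
# smuggles in an oracle for the halting problem. These examples always halt.

def double_it(org: Organism) -> Optional[Organism]:
    return "(" + org + ")*2"                  # wrap and double: a global rewrite

def append_random_digit(org: Organism) -> Optional[Organism]:
    return org + "+" + random.choice("123456789")

MUTATIONS = [double_it, append_random_digit]

# Usage with the toy hill-climber above: pick one at random each step,
# e.g. candidate = random.choice(MUTATIONS)(organism).
```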
    {
      "end_time": 4856.749,
      "index": 176,
      "start_time": 4827.449,
      "text": " Randomness coming in here in the form of the algorithm that mutates the organism. And then this organism thereby inherits some randomness because then you have to run that software, which is the organism. Well, how does it learn? That's the question. Where does it get more complexity from? With TD Early Pay, you get your paycheck up to two business days early, which means you can grab last second movie tickets in 5D Premium Ultra with popcorn."
    },
    {
      "end_time": 4885.145,
      "index": 177,
      "start_time": 4858.166,
      "text": " Well, the way it works is it's a little funny. See, I did this more than 10 years ago. I'm trying to remember."
    },
    {
      "end_time": 4913.08,
      "index": 178,
      "start_time": 4885.674,
      "text": " There's the question of where is the information coming in that makes the organisms more sophisticated? Well, actually, it turns out that one place they can come from is you have to eliminate mutations that don't give you a new organism. You see, if you pick an algorithm at random and you give it as input your current organism, it may be that it runs forever and never gives you a new organism. So those guys have to be eliminated."
    },
    {
      "end_time": 4942.261,
      "index": 179,
      "start_time": 4913.865,
      "text": " And that's pretty powerful, actually. That kind of thing amounts to having access to an Oracle for the halting problem. So this is how the organisms get new information from their environment. Is it because you have to say, well, let's select from the set of all programs that halt? Yeah, because if you're dealing with a global algorithmic mutation rather than a localized mutation like in real"
    },
    {
      "end_time": 4968.831,
      "index": 180,
      "start_time": 4944.104,
      "text": " that you've picked at random an algorithm and you give it as input your current organism and you started running and it goes on forever and never gives you a new organism so you've got to by definition say that you're not going to allow this so this is actually amounts to having an oracle for the halting problem you're eliminating mutations that never give you a new organism"
    },
    {
      "end_time": 4997.637,
      "index": 181,
      "start_time": 4969.155,
      "text": " and using that the organisms can get the software organisms can get more and more information so they get n bits of information about the holding problem in time roughly n squared n cubed whereas if you pick the best possible mutation you could get one bit each time you do a mutation if you pick the the best possible mutation to try you could get one bit of information from the oracle and every time you try a mutation"
    },
    {
      "end_time": 5025.64,
      "index": 182,
      "start_time": 4998.507,
      "text": " But that would require the intelligence of picking the right mutation. Mutations are picked at random, right? So that's not as good as intelligent design, shall we say. There's no intelligent design in this system. You're just trying to look at the fastest they could evolve and the slowest they could evolve and get an idea for what happens when you have Darwinian style evolution, how it compares."
    },
    {
      "end_time": 5054.974,
      "index": 183,
      "start_time": 5026.391,
      "text": " And the answer is it's surprisingly fast. I was surprised, maybe even a little disappointed. Why? Why disappointed? You wanted more work? No, if the model collapses, becomes trivial, if you can evolve with Darwinian evolution essentially as fast as you do by picking the best mutation, this wouldn't mean the model is very unrealistic biologically. I would have been unhappy."
    },
    {
      "end_time": 5085.401,
      "index": 184,
      "start_time": 5055.435,
      "text": " Well, just like you mentioned that there's the modern synthesis, there's also something that's more modern than that called extended evolutionary synthesis, which incorporates niche construction and epigenetics and stress induced mutations. And I think multi-level selection is the rebranding of group selection, but it incorporates lots of stuff. Like, look, the world is way more complex than traditional evolutionary theory would predict. No, I agree."
    },
    {
      "end_time": 5109.206,
      "index": 185,
      "start_time": 5085.93,
      "text": " And I'm sure there's going to be an extended version of that. So I don't see it as being a problem that your theory gets to something near optimal. Okay. Okay. Well, that's kind of you. Yeah. Well, the question is how much of this you can make into a little mathematical theory where you can prove theorems. You know, once you make evolution more realistic,"
    },
    {
      "end_time": 5132.022,
      "index": 186,
      "start_time": 5109.753,
      "text": " If I had a body with a metabolism as well as the DNA of my little organisms, I would be in trouble. If I had a population, instead of just one organism, I would be in trouble, although I think actors in you. Why is that the case? Well, because the mathematics becomes a mess and I don't know how to prove any theorems. It may be my fault. You have to keep it simple enough."
    },
    {
      "end_time": 5162.841,
      "index": 187,
      "start_time": 5134.531,
      "text": " Sorry, professor. I have a question. You can say Greg. Sorry, Greg. I have a question. In physics, there's something called effective theories, which says that it could be that the underlying"
    },
    {
      "end_time": 5191.834,
      "index": 188,
      "start_time": 5163.524,
      "text": " Now, okay, so there's turbulence, which says that you need to know the low level details in order to predict something at the higher level. But then there are renormalizable theories where it turns out that the low level details don't matter much. So could it be the case that there's something like a renormalization of metabiology or some effective field theory where even though it's much more complicated at the bottom, like it's much more messy for the mathematicians to work on, there's something that's renormalizable that becomes tractable for the mathematician."
    },
    {
      "end_time": 5221.937,
      "index": 189,
      "start_time": 5192.534,
      "text": " Yeah, that's a very good analogy. That's equivalent to Sydney Brenner saying, I don't care about metabolism or energy. You know, the cell will somehow manage that. I just want to know about information. And that's it's it's that that sort of gets you molecular biology. That's a similar idea. But what goes on underneath may really not be important to understand the system the same way that you can understand to understand what a computer is doing."
    },
    {
      "end_time": 5248.643,
      "index": 190,
      "start_time": 5222.671,
      "text": " The engineering, the physical implementation of the Boolean algebra, the circuits, doesn't show through in the software. The software level wants to hide all of that, which is sort of similar to renormalization and the idea that what goes on at the bottom level may not be visible at the top level and you can start with many bottom levels and get to the same top level."
    },
    {
      "end_time": 5272.568,
      "index": 191,
      "start_time": 5249.087,
      "text": " right? I don't remember what's that called, but you were referring to that, right? Effective field theory. Yeah, so I feel very sympathetic to that idea. I don't know what the right name for all of this is in this different context. It's when you create a new level of reality, what goes on at lower levels may be unimportant at that level. You have to look at the concepts that are"
    },
    {
      "end_time": 5300.794,
      "index": 192,
      "start_time": 5273.985,
      "text": " One nice thing about it is the normal version of Darwinian evolution says you want to be well adjusted"
    },
    {
      "end_time": 5329.821,
      "index": 193,
      "start_time": 5301.493,
      "text": " In 1950s in American schools, kids had to be well adjusted, right? The same with the normal view of biological organisms. And once they get well adjusted, they stop evolving unless the environment changes due to some cataclysm like volcanoes or a meteorite hitting the earth. So I think that's a bad metaphor. I prefer, in my little model, you have open-ended evolution."
    },
    {
      "end_time": 5359.104,
      "index": 194,
      "start_time": 5330.623,
      "text": " The system can continue to be a better and better mathematician. So my wife has a little philosophy paper talking about the human self-image depending on which model of biology you pick. And the notion that what you want to be is well adjusted, you know, that you adapt to your environment and then you stop, evolution stagnates, is a picture that she finds, I also find, less attractive than the notion"
    },
    {
      "end_time": 5387.381,
      "index": 195,
      "start_time": 5359.77,
      "text": " that evolution is open-ended and creative. And is this related to Gödel's incompleteness? Yeah, I see it as being related, but that's because I'm sort of, I'm sort of using the mathematics having to do with Gödel incompleteness theorem and Turing's work on the holding problem. I'm trying to make it into a sort of a toy version of biology. You see the same way that no formal axiomatic theory is a theory of everything. So you can always have better and better theories."
    },
    {
      "end_time": 5415.247,
      "index": 196,
      "start_time": 5388.268,
      "text": " And I sort of take that and I sort of convert it into a statement in biology, which says the organism can keep evolving forever, you know, that the system is open-ended. So at the level that I'm looking at biology, the two problems are very similar. Actually, it's more similar to Turing's halting problem, but it's a similar idea to incompleteness, right?"
    },
    {
      "end_time": 5441.203,
      "index": 197,
      "start_time": 5416.135,
      "text": " So I think that's a more attractive version. Actually, the version that Dawkins popularized of evolution is the selfish gene. And so selfish genes are, I think it's a very ugly idea. You know, sexuality is very unselfish. When you have a child, you throw away half of your genome."
    },
    {
      "end_time": 5470.213,
      "index": 198,
      "start_time": 5442.108,
      "text": " the husband and the wife or whatever the couple is. Why do you see it as throwing away half of your genome rather than actually preserving half of your genome because it's going to go on? OK, OK, but you're not selfish. It's not like a gene that says I want to take over, I want to survive, you see. So I in my view of biology is that biology is creative. I don't think it's selfish and I don't think you want to be well adjusted to the environment."
    },
    {
      "end_time": 5493.66,
      "index": 199,
      "start_time": 5470.828,
      "text": " I think that biology is open-ended and endlessly creative. That's because I like creativity, so this is my personal bias, so I come up with a little biological model that reflects my love of creativity. Now if you talk about, I'm not sure about what I'm about to say, but I've heard that the view of biology in Japan, for example, is completely different."
    },
    {
      "end_time": 5521.493,
      "index": 200,
      "start_time": 5494.292,
      "text": " In the United States, everybody believes that everything is selfish and organisms are competing, which is one possible point of view. In Japan, everybody thinks that everybody's cooperating because if you look at the human body, it has billions of cells, which originally were independent organisms, but now they're cooperating to create a higher level organism. So if you are in a society which believes in cooperation and social cooperation rather than"
    },
    {
      "end_time": 5549.957,
      "index": 201,
      "start_time": 5524.309,
      "text": " in to get rid of each other and be the success. Then you make a theoretical biology in line with your view. So my little toy model of biology stresses creativity because incompleteness really is creativity. I view Gödel's Incompleteness Theorem not as a limitation of mathematics."
    },
    {
      "end_time": 5578.302,
      "index": 202,
      "start_time": 5550.52,
      "text": " and Turing's work, not as a limitation, it was taken pessimistically. I view it as saying that mathematics is creative and will always be creative. You see, a formal axiomatic theory for all of math would give you absolute certainty, but it would also be a cemetery, you see. Right. So you change the viewpoint and you say, get an incompleteness is really good news. It's not bad news. It means our children and our grandchildren will always have new things to discover, new things to do."
    },
    {
      "end_time": 5603.643,
      "index": 203,
      "start_time": 5579.531,
      "text": " that it's not a closed system, it's an open system. And that connects pure mathematics, in my opinion, to biology at some very fundamental level, maybe, but not to real everyday working molecular biology, working with DNA. This is an attempt to sort of try to find the basic math, extract the basic mathematical ideas from biology."
    },
    {
      "end_time": 5632.227,
      "index": 204,
      "start_time": 5604.411,
      "text": " So another way to say what I'm trying to do is, Von Neumann had this brilliant insight, I think, in that paper where he says that the key idea of computer technology and of biosphere is software, which gives you the plasticity of the biosphere and the successfulness. So I'm just trying to go one step further and see if I can get to a little theory about doing random mutations on software. It's just an attempt to"
    },
    {
      "end_time": 5661.664,
      "index": 205,
      "start_time": 5632.756,
      "text": " carry the ball mathematically one step, one little step further. Now, I agree that metabology doesn't seem to have much connection with real biology, but actually my wife suggested that it might and work by Hector Zanile and his collaborators has furthered this suspicion. You see, the question is, I'm not using point mutations. I'm using these global algorithmic mutations, which seem very powerful and very unrealistic."
    },
    {
      "end_time": 5692.022,
      "index": 206,
      "start_time": 5662.278,
      "text": " So my wife said, oh, well, one should go back to real biology and see if there's anything like that in the biosphere. You know, this may be a hint from pure math about what's going on. There is a piece of work by actor Zanile and a few collaborators whose names escape me that I think was published in the Proceedings of the Royal Society or something where they are looking, they're doing simulations of evolution roughly along the lines that I'm talking about. They're much more practical."
    },
    {
      "end_time": 5721.647,
      "index": 207,
      "start_time": 5693.063,
      "text": " They're not molecular biologists by any means. They are using ideas related to algorithmic information theory and Turing's work and all of that, but they are looking at algorithmic mutations rather than point mutations in their model and they're doing simulations. I don't know if they were with populations. Maybe they were not individual. I don't remember. Anyway, it's very good work and the result they found was that algorithmic"
    },
    {
      "end_time": 5739.531,
      "index": 208,
      "start_time": 5722.005,
      "text": " What I was referring to before about extended evolutionary synthesis, it goes by the name EES, it was because"
    },
    {
      "end_time": 5769.121,
      "index": 209,
      "start_time": 5739.991,
      "text": " I forget, I think it's Stephen Jay Gould and then this guy named Piglucci. I'll find the person's name and put it on screen. It's not punctuated equilibrium, right? It's much more than that. Well, epigenetics, but epigenetics says that the information is not all in the gene, right? If I'm not mistaken. Sorry, it was developed because the indels and the single nucleotide polymorphisms were insufficient to explain the observed amount of complexity with traditional evolutionary thinking."
    },
    {
      "end_time": 5793.763,
      "index": 210,
      "start_time": 5769.735,
      "text": " There's nothing that's like mystical or fanciful about this. It's like niche construction, like I mentioned, and so known mechanism, multi-level selection. There is room for these global changes. Yeah, that would be interesting to see if the stuff I'm suggesting looks analogous in some ways to this work. But my model is very simple. There's no epigenetics in my model. Everything is in the DNA, you know?"
    },
    {
      "end_time": 5819.582,
      "index": 211,
      "start_time": 5794.411,
      "text": " But the truth is we're both, as you said, the fundamental motivation for this new version of Darwinism and for my own work is a suspicion that point mutations aren't enough, right? So in that sense we're analogous. We were both worried about that problem. It doesn't seem to explain how fast things evolve, right?"
    },
    {
      "end_time": 5839.121,
      "index": 212,
      "start_time": 5820.452,
      "text": " And it doesn't seem to explain the major transitions in evolution, like, for example, from single cell to multicellular or the Cambrian explosion, for example, is very astonishing, maybe from the point of view of conventional Darwinism. I'm not sure if that's a reasonable statement or not."
    },
    {
      "end_time": 5866.391,
      "index": 213,
      "start_time": 5839.872,
      "text": " Yeah. In this comparison that you have with the open system of biology. So to contrast that, generally we think of a fitness function and you just get to the minimum. Then once you get to the minimum, you stay there unless the fitness function changes. So is that what you were saying? No, well, my fitness function doesn't have a minimum. Organisms get fitter and fitter and fitter without limit. The fitness is a number and the bigger the number, the fitter the organism."
    },
    {
      "end_time": 5894.582,
      "index": 214,
      "start_time": 5868.302,
      "text": " This is how I avoid, what do you call it, this trap of the well-adjusted child in 1950s schools, American schools, or the selfish gene. The way I avoid it is the fitness function, you don't minimize it, you maximize it, and it can be as big as you want. This is how you get open-endedness out of this little model of biological evolution, you see."
    },
    {
      "end_time": 5923.865,
      "index": 215,
      "start_time": 5895.35,
      "text": " to make a parallel between physics again. In physics, when you have an open system, it means you're open again relative to something else. You're a subsystem of something else. So what is this something else in this case? And what is the border in between it that allows an exchange? Well, the environment, the organisms are getting more information from the environment in this little model. And what corresponds to the environment is an oracle for the halting problem."
    },
    {
      "end_time": 5954.616,
      "index": 216,
      "start_time": 5924.804,
      "text": " which is an infinite amount of information. So does this require a philosophical commitment to Platonism? Well, this model is non-computable, so it's not even an algorithmic model. It involves software, but there are steps in it that require an oracle for different things. So in that sense, it's a platonic model. You see,"
    },
    {
      "end_time": 5981.937,
      "index": 217,
      "start_time": 5955.265,
      "text": " The problem is I wanted to get the simplest. Okay. What is life? Okay. Now one definition. I was reading books by John Minard Smith, whom I had the pleasure of meeting by the way, North of the Arctic Circle, an Abisko, Sweden. He's a fun guy. He was already retired and we would go off and drink beer. There wasn't much to do North of the Arctic Circle. So we would go off and drink beer and talk."
    },
    {
      "end_time": 6011.971,
      "index": 218,
      "start_time": 5982.602,
      "text": " Anyway, Minard Smith has two books written with a chemist, a Hungarian chemist called Erzszaf Marie. And I think both books have a chapter or a section at the beginning where they ask, what is life? And their definition is, well, you might think that a flame is alive, right? Because it reproduces itself, because more things catch on fire. But they say a flame doesn't evolve. It doesn't incorporate more information."
    },
    {
      "end_time": 6039.599,
      "index": 219,
      "start_time": 6012.619,
      "text": " So it would be a system that by evolution gets more complicated and evolves. So I took that as my definition of life in my little toy model. In other words, it may look like a vicious circle because my definition of life in this little model is a system that evolves by Darwinian evolution. So that sounds like you're supposing that Darwinian evolution works to start with."
    },
    {
      "end_time": 6069.684,
      "index": 220,
      "start_time": 6040.247,
      "text": " But it's not really a vicious circle because you want to find the simplest Pythagorean or Platonic life form that provably evolves by natural selection. So that would be, it's not the individual organism that's alive in this point of view. It's a system that evolves by natural selection that's alive. And you want to find the simplest system you can find in the world of pure mathematics that you can prove evolves."
    },
    {
      "end_time": 6100.009,
      "index": 221,
      "start_time": 6070.162,
      "text": " That's fine. When you were talking about that there's always something new to discover in this book."
    },
    {
      "end_time": 6125.538,
      "index": 222,
      "start_time": 6100.572,
      "text": " Is it a guarantee that what's new to discover is of an interesting class? And the reason I say that is there's, for instance, a video game called No Man's Sky. And what it is is it's a procedurally generated world. And so there's an infinite amount for all intents and purposes, an infinite amount of planets. But once you've visited 10, you've pretty much visited them all. So yes, there is something new to do, but it's not relevant. It's not alluring."
    },
    {
      "end_time": 6152.705,
      "index": 223,
      "start_time": 6126.084,
      "text": " Future Kurt here editing himself in for clarity. I was being facetious. No Man's Sky is a wonderful game. I've never seen a developer turnaround of games quality like I have with this with Hello Games. It's unprecedented. Thank you to the development team. Sean Murray, if you're watching, I'd love to have you on the podcast. Great work to all of you. Back to the show. Well, that's that's a very good question. For example, Gettler's Incompetence Theorem, you can argue that the theorem"
    },
    {
      "end_time": 6186.749,
      "index": 224,
      "start_time": 6158.831,
      "text": " is not of any interest to anybody. So, for example, people can say, I don't care about proving that finite bit strings are algorithmically irreducible or have high program size complexity. This is not something a normal mathematician wants to do in their everyday work on the problems that interest them. And you can also say regarding the halting probability or the omega number, it may be true that the bits are unknowable,"
    },
    {
      "end_time": 6214.906,
      "index": 225,
      "start_time": 6187.108,
      "text": " in sort of the worst possible way, they're algorithmically irreducible. They show that the world of the phonic world of pure mathematics has infinite information content or infinite complexity. But what do I care about the bits of the halting probability? You know, it doesn't come up in, I don't know, algebraic geometry, for example, or the work that real mathematicians are passionate about and working on all the time."
    },
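For reference, the two results being alluded to can be stated in a standard form (this formulation is supplied here for context, not quoted from the conversation):

```latex
% Chaitin's incompleteness theorem, in one standard formulation: for any
% consistent, recursively axiomatizable formal system F there is a constant
% c_F such that F cannot prove any statement of the form
%     K(s) > c_F
% for a specific bit string s, even though all but finitely many strings
% satisfy it.  Here K(s) is the program-size (Kolmogorov-Chaitin) complexity
% of s.  The halting probability, over a universal prefix-free machine U, is
\Omega \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|},
% and any such system F can determine at most finitely many bits of Omega.
```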
    {
      "end_time": 6239.821,
      "index": 226,
      "start_time": 6215.691,
      "text": " You can argue that it's open-ended, but not in any interesting way. And let me give another example of this. I said Hilbert's program was destroyed by Gettle in 1931. But for all practical purposes, there are now formal systems where you can formalize real mathematics, the kind of real mathematics that real mathematicians do."
    },
    {
      "end_time": 6267.739,
      "index": 227,
      "start_time": 6240.35,
      "text": " There are some mathematicians who were doing very complicated, very good mathematicians who were doing very complicated proofs. And they wanted some assurance that the proofs were correct. And they took these proofs and formalized and put them into a system now which could check the proofs. So for all practical purposes, there are now theories of everything for all the mathematics that people are doing now."
    },
    {
      "end_time": 6294.019,
      "index": 228,
      "start_time": 6268.37,
      "text": " And this is a tremendous engineering achievement. This is very good work building on software techniques and logic and everything like that. And it's true that these systems are incomplete according to Gödel's incompleteness theorem, but it doesn't matter because people can do all the mathematics that real mathematicians want to do now. So these are very powerful systems for formalizing proofs."
    },
    {
      "end_time": 6324.394,
      "index": 229,
      "start_time": 6294.411,
      "text": " People have done things like, I think, checking the proof of the four color theorem, if I'm not mistaken, and other kinds of results by formalizing them in these systems. And some of the mathematicians who've done this said they got a deeper insight into the problem by formalizing it. So instead of complaining about the work of inserting a proof into this special language and breaking it into pieces that the proof checker"
    },
    {
      "end_time": 6348.302,
      "index": 230,
      "start_time": 6324.957,
      "text": " that comes with all this software could check were correct. You have to break the proof down into pieces small enough that the software can say, okay, I got that. That's correct. Let's go on to the next step instead of complaining about this because these systems are getting smarter and smarter. So they can, you can take, leave bigger and bigger holes in the proof, take bigger steps. Yeah."
    },
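As a toy illustration of what breaking a proof into machine-checkable steps looks like, here is a hypothetical example in Lean 4 (not one of the formalizations mentioned above); each calc step is small enough for the checker to confirm on its own before moving on.

```lean
-- Hypothetical toy example: a trivial fact about natural numbers, written as
-- two explicit steps that the proof checker verifies one at a time.
example (a b c : Nat) : (a + b) + c = c + (b + a) := by
  calc
    (a + b) + c = (b + a) + c := by rw [Nat.add_comm a b]
    _           = c + (b + a) := by rw [Nat.add_comm (b + a) c]
```

Large formalization efforts work the same way, just at enormous scale, with the system filling in bigger and bigger gaps automatically.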
    {
      "end_time": 6378.268,
      "index": 231,
      "start_time": 6348.677,
      "text": " And for all practical purposes, Hilbert was right. There are now very good formalizations of all of working mathematics at this time. So that's in contrast, that's an interesting philosophical tension with Gödel's Completeness Theorem, which says that you can't do a theory of everything for all of math, but you might be able to do a theory of everything for all the math that in practice, mathematicians are doing now. And that, in fact, has happened for all practical purposes, right? F, A, P, P."
    },
    {
      "end_time": 6406.049,
      "index": 232,
      "start_time": 6378.729,
      "text": " So from a philosophical point of view, that's interesting tension, I think. And it's sort of like programming a program, because you've got to put your proof into a logical language that is the system you're working with. And you have to break the proof into steps that are small enough that the software can check that they're correct, a step in the proof. This is actually a practical tool now. So in a sense,"
    },
    {
      "end_time": 6432.432,
      "index": 233,
      "start_time": 6406.613,
      "text": " Hilbert's program has succeeded, has been carried out, that part of it at least. There were a lot of historical aspects to the original proposal of Hilbert, which are of less interest nowadays, I think. But the basic idea is what I've been trying to explain. So it may be that the system is open-ended, but not in a way that people care about that much."
    },
    {
      "end_time": 6461.971,
      "index": 234,
      "start_time": 6433.114,
      "text": " On the other hand, real mathematics is progressing. It's progressing very well. The century of Gödel's incompleteness theorem, the 1900s, was a century of tremendous progress in mathematics in contrast to the negative incompleteness result. So that's the question of whether the results that you can't get are interesting results or whether there are artificial things that you constructed to show incompleteness."
    },
    {
      "end_time": 6489.872,
      "index": 235,
      "start_time": 6462.483,
      "text": " And that's a very good question. That's a very good question. And I don't have a good answer. That would be a topic. If I could live another lifetime and continue to want to do mathematics, that would be a topic. Would another way of saying that be that, look, what's remarkable isn't incompleteness, but rather the completeness that we have. So despite the incompleteness and given that there's an infinite amount of statements that we can't decide whether they're true or untrue."
    },
    {
      "end_time": 6519.548,
      "index": 236,
      "start_time": 6490.708,
      "text": " Why is it that the overlap between what we're interested in and then those statements is remarkably low? Why is it that there's so much progress like in every single domain of math? It's a good question, but it's also a fact that, for example, if you try to do research on an area which is too hard, you never publish papers, you don't get a doctorate and you never become a professor. So the map of current mathematics is a map of what our current methods are able to achieve."
    },
    {
      "end_time": 6545.879,
      "index": 237,
      "start_time": 6520.009,
      "text": " So that's why it looks like everything pretty much is achievable. If that's the case, then we're in a bit of a thorny position philosophically, because it means that math isn't something that's so dispassionately abstract, but rather it's influenced by the real world, influenced by empiricism. Well, it's also influenced by fashion. If you look at popular mathematical"
    },
    {
      "end_time": 6576.118,
      "index": 238,
      "start_time": 6551.374,
      "text": " That changes the function of time. And I think if we met Martians, their mathematics might be different, concerned with different things than ours. For example, the ancient Greeks preferred geometry as the basis for mathematics. And now I think arithmetic is considered to be more fundamental. That was the arithmetization of analysis, which was a topic of the 1800s."
    },
    {
      "end_time": 6605.094,
      "index": 239,
      "start_time": 6576.288,
      "text": " Now, I think you've asked a good question and I don't have an answer, but I think it would be interesting to take a look at Stephen Wolfram's recent work because he has a book, as I said, on the physicalization of metamathematics. All formal axiomatic theories are part of the rule he had, as well as all possible algorithmic models of physics, worlds that can be algorithmic. And he, in fact, his approach is more empirical."
    },
    {
      "end_time": 6635.64,
      "index": 240,
      "start_time": 6605.674,
      "text": " I try to prove theorems and Stephen has this wonderful tool that he's created, right? Mathematic or the Wolfram language or whatever, tremendous software. And he does things like looking at all the theorems in Gluclidean geometry and all the connections between them. He does graphs, he does statistics. So he's taken a real look at the way he's looked at different axiomatizations, for example, for Boolean algebra, Boolean logic,"
    },
    {
      "end_time": 6654.206,
      "index": 241,
      "start_time": 6636.51,
      "text": " and found surprisingly, I think, one axiom that gives you everything that nobody had ever seen. So he's trying to get a statistics of all possible of how formal axiomatic theories work. And he's interested in the question of why a theorem is interesting. He's"
    },
    {
      "end_time": 6680.162,
      "index": 242,
      "start_time": 6656.271,
      "text": " So how do you tell the difference between that and a theorem that is deeply meaningful that will interest you and mathematicians? And Stephen has some good ideas about that. If you look at the graph of all possible connections between theorems where you're proving one theorem from another and how many steps it takes to go,"
    },
    {
      "end_time": 6711.169,
      "index": 243,
      "start_time": 6683.933,
      "text": " I think one of the things he says is that a theorem that is non-trivial, that's interesting, is sort of in a key spot, connecting many things. Edward Witten said something similar. So Edward Witten of physics was asked, what makes a theorem beautiful? And he was put on the spot. He said he can't articulate it. It's like asking, why is the song beautiful? But he said, if he had to say something right here after thinking about it only for 10 seconds,"
    },
    {
      "end_time": 6741.067,
      "index": 244,
      "start_time": 6712.073,
      "text": " It would be that you get more out of it than you put in, which sounds like something to do with complexity or compression. Yeah. Well, Stephen actually looks at these graphs of relationship between mathematical statements in a theory. He does looks at statistical aspects of the graphs and different theories. And I know he has discussed this question of what is the difference between a mathematical assertion in a theory that is interesting and the ones that aren't."
    },
    {
      "end_time": 6770.247,
      "index": 245,
      "start_time": 6741.613,
      "text": " So I recommend taking a look at this whole new immense body of work. Stephen is publishing, you know, a 400 page book once a year. Yeah, I don't know. He's in a tremendously fertile period. I'm happy to see this. For many years, he was involved with building this software stack, right? Which is all from language, all from alpha, which is a remarkable achievement. It's a non-human AI, but it is an AI in a sense, right?"
    },
    {
      "end_time": 6785.282,
      "index": 246,
      "start_time": 6770.862,
      "text": " But it's not a human kind of AI. But then recently he's gotten back to doing basic research and he's produced a remarkable series of books. So I think maybe you'll find him discussing"
    },
    {
      "end_time": 6812.858,
      "index": 247,
      "start_time": 6787.568,
      "text": " Yeah, I've spoken to him a couple of times, off air and on air as well, and I'll likely be speaking to him in a few months. So I'll talk to him about that. By which time he may have a whole new theory to talk about. He's gone through a tremendously productive period. It's wonderful. It's wonderful to see. It's my understanding that what happens at Wolfram Research, the Wolfram Physics Project, stays in the Wolfram's Physics Project. And what I mean by that is that"
    },
    {
      "end_time": 6838.831,
      "index": 248,
      "start_time": 6812.858,
      "text": " There's not much academic talk about what Steven produces. This is from my perception from the outside and also speaking to a few people. And I'm not sure if that's true. And if it's true, I'm not sure why. If it's because, well, Wolfram is operating from a space outside academia. Yeah, well, I think I think he has some younger collaborators who need to worry perhaps about publishing for their future. I met in Morocco"
    },
    {
      "end_time": 6875.35,
      "index": 249,
      "start_time": 6845.623,
      "text": " His name is Hatem. It's true that Stephen is operating outside the normal academic environment. I think part of it is that he can't be bothered. If they had to publish refereed papers in traditional journals, it would slow them down tremendously. And since he relies on getting a feel for the data by looking at graphs of different mathematical theories and connections. Well, in a normal journal published on paper, this is the past, of course, you can't fit"
    },
    {
      "end_time": 6902.244,
      "index": 250,
      "start_time": 6875.828,
      "text": " All of that, right? So Stephen's way of working is not like my way of working. I like to prove theorems and I like to keep it simple. Stephen attacks the full complexity of an issue, looks at many cases, draws graphs, draws pictures that give you an intuitive feel for what's happening in each case. And that way you're building empirical data and you're building intuition of how these systems function. That's the way of empirical scientists functions."
    },
    {
      "end_time": 6931.169,
      "index": 251,
      "start_time": 6902.722,
      "text": " Although the world he's looking at is a world of computational world rather than the physical world. So that's his modus operandi. And there's also the problem that he's moving so fast that who would referee his papers. I think I need to change the earbuds. Sure. We'll take a small break. I don't know if this is correct, but it's my understanding that Kurt Gödel didn't see his incompleteness theorem as a hindrance, but rather he saw it as indicating that"
    },
    {
      "end_time": 6955.589,
      "index": 252,
      "start_time": 6931.596,
      "text": " mathematicians don't proceed logically or rationally when they're coming up with their proofs or theorems. They have some connection to the platonic world that that manifests through intuition or creative sparks. Is that correct? I think so. What happened with Gödel is that some of his papers were even typeset and he proofread them and he went through different versions and"
    },
    {
      "end_time": 6986.049,
      "index": 253,
      "start_time": 6956.135,
      "text": " finally decided not to release them for publication. He does have places where if you look through the collected works which I've done, I don't remember which part was published and which part were versions of papers that were never published but were really in pretty final form and sometimes he has several versions that all look very good. He does state that he believes that his theorem is not an obstacle to the progress of mathematics because as you said"
    },
    {
      "end_time": 7015.572,
      "index": 254,
      "start_time": 6986.749,
      "text": " He's saying something like a mathematician closes his eyes and thinks in the dark and somehow manages to perceive the platonic world of ideas. He thinks we're not a formal axiomatic theory. You know, the human mind is not a formal axiomatic theory. And he believes that through mathematical intuition, in some sense, mathematicians connect with this world of ideas. I believe that's his philosophical position. And maybe the reason"
    },
    {
      "end_time": 7041.613,
      "index": 255,
      "start_time": 7016.101,
      "text": " He didn't release some of these papers for publication is because I view ghetto as a philosopher who would only publish papers which are not controversial because he would prove philosophical results with with mathematical arguments so no one could argue with him. He does he has very small output for a mathematician because he doesn't do papers on on"
    },
    {
      "end_time": 7068.695,
      "index": 256,
      "start_time": 7042.312,
      "text": " on topics that are not deep deep topics philosophically so that means he has very little output for a pure mathematician with his brains you know very smart broad mathematician he has a wonderful paper on rotating universes in general relativity for example in the arrow of time that he wrote for his friend einstein so but so i think he was"
    },
    {
      "end_time": 7098.643,
      "index": 257,
      "start_time": 7068.968,
      "text": " to publish papers where he couldn't actually give a mathematical proof for assertions he made. And perhaps for that reason, these remarks you made about the fact that he didn't, yeah, I think, I don't remember all of this, but I think you can make the case, maybe Rebecca Goldstein makes this case in her book, that Gödel was deeply misunderstood and that he viewed the point of his theorem as liberating, as different from what people"
    },
    {
      "end_time": 7123.166,
      "index": 258,
      "start_time": 7098.916,
      "text": " thought it meant. I've forgotten some of this, I'm sorry to say, or I could give you a more coherent answer, but I would agree with what you said. Yeah, so in other words, if Gödel's incompleteness theorem wasn't there, wasn't true, then math is this fixed tree. So what Gödel done, rather than putting shackles on, was remove the shackles from math."
    },
    {
      "end_time": 7144.343,
      "index": 259,
      "start_time": 7123.609,
      "text": " Right, but that also pulls the rug out from mathematical certainty. It's not completely black or white, because if you don't have one fixed theory of everything for pure math, you can start arguing about which axioms to accept. Math could split up into different communities that go off in different directions with axioms that contradict each other, like non-Euclidean geometries, for example."
    },
    {
      "end_time": 7174.002,
      "index": 260,
      "start_time": 7144.889,
      "text": " That actually happened. There was intuitionistic mathematics. There was a mathematician whose name escapes me in the Netherlands. Brouwer? Brouwer, maybe. And Hilbert and Brouwer really hated each other. Hilbert kicked Einstein out of the Board of Editors of one of his magazines because he wanted to get Brouwer out of the Board of Editors and he couldn't think of how to do that except by eliminating the Board of Editors altogether."
    },
    {
      "end_time": 7197.193,
      "index": 261,
      "start_time": 7174.377,
      "text": " So this was a serious controversy and Hilbert's proposal was intended in fact to refute the Brouwer's position. So in intuitionist logic is there no such thing as girdles and completeness then? Because you have to construct every statement and the girdle statements are unconstructible? That's a very good question."
    },
    {
      "end_time": 7220.043,
      "index": 262,
      "start_time": 7197.995,
      "text": " I'm not an expert on Gödel incompetence because I've gone off in a different direction, right? More the Turing's direction. But I can tell you the historical motivation for Hilbert's program. It's not just coming up with a theory of everything for all of pure mathematics that Gödel refutes."
    },
    {
      "end_time": 7249.053,
      "index": 263,
      "start_time": 7220.742,
      "text": " The theories of everything that Hilbert wanted to study and for people to agree on and take as the foundations of mathematics were theories that allowed non-constructive proofs, which Brouwer did not allow. So the effort, the attempt was to show that these non-constructive proofs wouldn't lead to a contradiction using only constructive reasoning outside the system. This was the"
    },
    {
      "end_time": 7277.125,
      "index": 264,
      "start_time": 7249.445,
      "text": " original purpose for the meta mathematics, to which Brouwer replied, just because a policeman doesn't stop one, it doesn't mean that somebody didn't commit a crime. So this was part of the motivation for Hilbert's proposal of formalism that people forget about nowadays. In retrospect, I don't see this as the central issue. For me, the central issue is, is there incompleteness or not? But that was only part of the"
    },
    {
      "end_time": 7306.032,
      "index": 265,
      "start_time": 7277.363,
      "text": " set of issues that historically were in play at the time. The real history is always more complicated, right? I'm giving what I see as the central issue because it led to my work. And you asked a good question. I know that the meta-mathematical arguments, which were supposed to convince mathematicians that the non-constructive formal axiomatic theory that Hilbert would have come up with, hopefully, but didn't,"
    },
    {
      "end_time": 7336.391,
      "index": 266,
      "start_time": 7306.886,
      "text": " was okay in spite of allowing non-constructive arguments. In metamathematics, you weren't supposed to use non-constructive arguments. So that way Brouwer would have, they were also informal, they were done outside the system in words. This was to convince Brouwer that it was okay to use non-constructive arguments because it wouldn't lead you to a contradiction. This was a fight between the two of them. Now in practice, constructive mathematics is the computer. You know, now"
    },
    {
      "end_time": 7365.145,
      "index": 267,
      "start_time": 7337.466,
      "text": " Because of computers, a lot of mathematics is computational and very constructive. But you pay a price. For example, sometimes it's much easier to prove that, say, a partial differential equation has a solution than to give you an efficient way of calculating the solution. So that's part of the reason that mathematicians still like to use non-constructive arguments. And there are places where the arguments have to be non-constructive."
    },
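A small, hypothetical illustration of the price being described, in a much simpler setting than a partial differential equation: the intermediate value theorem guarantees non-constructively that a continuous function with a sign change has a root, but actually locating that root takes separate computational work, for example by bisection.

```python
# Hypothetical illustration: the existence of a root is guaranteed by a sign
# change (intermediate value theorem), but finding it is a separate, purely
# computational task.
def bisect(f, a, b, tol=1e-12):
    """Locate a root of f in [a, b], assuming f(a) and f(b) differ in sign."""
    fa, fb = f(a), f(b)
    assert fa * fb <= 0, "need a sign change for the existence guarantee"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b, fb = m, fm      # the root lies in the left half
        else:
            a, fa = m, fm      # the root lies in the right half
    return 0.5 * (a + b)

# Example: x**3 - 2 changes sign on [1, 2], so it has a root there (the cube
# root of 2); bisection does the work of actually computing it.
print(bisect(lambda x: x**3 - 2, 1.0, 2.0))   # ~1.2599210498948732
```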
    {
      "end_time": 7394.684,
      "index": 268,
      "start_time": 7365.52,
      "text": " My own work, since I'm dealing with things which are un-computable, this stuff, I guess, Brouwer would have said doesn't exist, you know, that the halting probability doesn't exist. It's not a computable real number. But that's the whole point. What I'm trying to do is find things that are non-constructive and un-computable, but are just across the border from what you can calculate and that clearly has meaning."
    },
    {
      "end_time": 7424.394,
      "index": 269,
      "start_time": 7395.384,
      "text": " because it has algorithmic meaning, computational meaning. So I try to find the boundary that's as close as possible. So you just get into trouble by going across the boundary. So that's the halting probability. It almost looks computable. You can calculate the limit from below. One of my books, I have a list program that calculates the halting problem in the limit from below of infinite time, but it converges non-computably slowly. You see, so the halting probability is almost a computable real number."
    },
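A toy illustration of that limit-from-below computation (not Chaitin's LISP program): the real Omega is defined over a universal prefix-free machine, but here the programs are short instruction strings for a trivial machine, drawn by coin-flipping, and each program contributes its probability to the running total once it is seen to halt within the current time bound. The approximation only ever goes up; in the real case there is no computable bound on how slowly it converges.

```python
# Toy sketch: a monotone lower approximation of a halting probability.
from itertools import product

OPS = ["INC", "DBL", "LOOP"]   # 'LOOP' diverges, standing in for non-halting

def halts(program, time_bound):
    """Return True if the toy program is seen to halt within time_bound steps."""
    steps = 0
    for op in program:
        if op == "LOOP":
            return False       # never halts, no matter how long we wait
        steps += 1
        if steps > time_bound:
            return False       # not yet seen to halt at this time bound
    return True

def omega_lower_bound(max_length, time_bound):
    """Sum the probability of every toy program observed to halt so far.

    A random program is drawn by repeatedly flipping a fair coin: stop with
    probability 1/2, otherwise append one of the 3 ops uniformly.  A program
    of length n therefore has probability (1/2)**(n+1) * (1/3)**n.
    """
    total = 0.0
    for n in range(max_length + 1):
        weight = 0.5 * (1.0 / 6.0) ** n
        for program in product(OPS, repeat=n):
            if halts(program, time_bound):
                total += weight
    return total

# The lower bounds never decrease as the time bound grows.
for t in (1, 2, 4, 8):
    print(f"time bound {t}: lower bound {omega_lower_bound(4, t):.6f}")
```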
    {
      "end_time": 7453.746,
      "index": 270,
      "start_time": 7424.872,
      "text": " There's no regulator of convergence. So it's just across the border from, from what you can calculate in theory and what you can't. So it's an attempt to find the boundary, you know, now Cantor has much more abstract things. His very large infinite sets, you know, you have amazingly. I see you see them as far across the mountains. Yeah. They're almost like theology. Yeah."
    },
    {
      "end_time": 7477.995,
      "index": 271,
      "start_time": 7453.951,
      "text": " Not that there aren't wonderful mathematicians doing, a friend of mine, Robert Solovey was one of them. There's a, is he an Israeli mathematician? I can't recall his name, who's just incredibly powerful. It's beautiful work. There's a small community of really good mathematicians working on abstract set theory. They've decided to add a new axiom called projected determinacy."
    },
    {
      "end_time": 7501.783,
      "index": 272,
      "start_time": 7478.575,
      "text": " I think this is great stuff, but I regard it a little bit as mathematical theology. You know, it's far off in Never Neverland. It's great work. I have nothing against it, but I'm working at a much lower level, very near to what's computable, you see, because I'm using the notion. So instead of having the hierarchy of bigger and bigger infinities and amazingly big infinities that you have in abstract set theory,"
    },
    {
      "end_time": 7531.34,
      "index": 273,
      "start_time": 7502.637,
      "text": " I was talking to you about the hierarchy of a Turing machine, its oracle, then a Turing machine with an oracle for the halting problem and its oracle and you get like that more and more powerful Turing machines of which only the ones at the bottom level sort of correspond to what we can calculate in this world. And the other ones are increasingly mathematical fantasies as you go higher up. So all this is in it. But in a way, the world of pure mathematics is beautiful and simple because it's unreal."
    },
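In standard notation, the hierarchy being described is the sequence of Turing jumps, where each level is the halting problem for machines equipped with an oracle for the level below (this notation is supplied here for reference, not quoted from the conversation):

```latex
% X' denotes the Turing jump of X: the halting problem for Turing machines
% with an oracle for X.  <_T is strict Turing reducibility, so each level is
% strictly more powerful than the one below; only the bottom level, plain
% Turing machines, corresponds to what we can actually compute.
\emptyset \;<_T\; \emptyset' \;<_T\; \emptyset'' \;<_T\; \emptyset''' \;<_T\; \cdots
```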
    {
      "end_time": 7559.667,
      "index": 274,
      "start_time": 7531.903,
      "text": " Von Neumann has a remark that pure mathematics is easy, real life is hard. And I think that's actually a deep remark. It's not just a cocktail party joke. And part of the reason that pure mathematics is beautiful is because it is an ideal world. It's maybe in some ways a fantasy world. So it has beautiful, simple properties. In the real world, that's tougher."
    },
    {
      "end_time": 7584.787,
      "index": 275,
      "start_time": 7560.06,
      "text": " Now it's possible that the world of your mathematics is really real in the most real possible sense if you state that maybe that's the fundamental ontology, the substrate out of which the world is built, which is a possible position. I don't know out of what the world is built, but physicists use a lot of mathematics, right? But the mathematics changes as a function of time."
    },
    {
      "end_time": 7615.265,
      "index": 276,
      "start_time": 7585.776,
      "text": " Presumably, if we had a deep understanding of physics, we would know what the world is made out of. One possibility would be information. I don't know. At this point in time, that's like pre-Socratic philosophy, where by pure thought, you say maybe the world is fire, maybe the world is all unity, maybe it's not, you know, all these different attitudes. Everything has changed or everything is... Before we move to cold fusion, what is it that drives you to operate on the perimeter?"
    },
    {
      "end_time": 7645.503,
      "index": 277,
      "start_time": 7615.759,
      "text": " between computability, uncomputability, or contradictory statements, paradoxes, and completeness. What is it that you sit around your house, you go for walks, and then you start thinking about them by nature? Do you have a predilection for it? That's a very good question. Well, I can't give you a full answer, but I'll tell you that as a child, I'm self-taught. I don't have a university degree. I just have two honorary doctorates, right?"
    },
    {
      "end_time": 7673.968,
      "index": 278,
      "start_time": 7646.032,
      "text": " 20 books that I published. Yeah. So I'm an amateur mathematician. I earned a living as a computer programmer at IBM doing, working on hardware and software design. So as a child, I used to read a lot on my own on many, many topics. And at first I was very interested in astronomy. I built a telescope, a ground, a reflecting telescope in the basement of what was then the Hayden Planetarium."
    },
    {
      "end_time": 7702.449,
      "index": 279,
      "start_time": 7674.582,
      "text": " Another topic I was interested in was physics. And in physics, you know, the two most mysterious topics, there was relativity theory, but there was general relativity, curve, space, time, gravity, and there was quantum mechanics. So I read a lot about this as a, as a child, as a teenager. And somehow I was interested in the most fundamental and mysterious things. I don't know why it's my personality. And then I heard about Goethe's incompleteness theorem."
    },
    {
      "end_time": 7731.681,
      "index": 280,
      "start_time": 7703.2,
      "text": " I was 11 years old. It was Gödel's Proof, that little book published in 1958 by Nagelin Newman. I got to take it out of the adult section even though I was 11 at the time of the public library in New York. And it fascinated me because it looked to me like a topic in pure math that was as deep as general relativity or quantum mechanics. It was as revolutionary as general relativity or quantum mechanics was in physics."
    },
    {
      "end_time": 7760.708,
      "index": 281,
      "start_time": 7732.875,
      "text": " I sort of became obsessed with trying to understand Goethe's complete theorem, but also I was interested in computers. Computers were just beginning. There was a book called Giant Brains or Giant Electronic Brains. I found books, one of the first books that showed examples of computer programs and I was computer programming in high school, which was unusual then."
    },
    {
      "end_time": 7790.503,
      "index": 282,
      "start_time": 7761.271,
      "text": " on mainframes. Now probably everybody in high school programs, right? But at the time I did it, it was unusual and I got to use mainframes. I was lucky at Columbia University when I was in high school. So the computer also fascinated me. Artificial intelligence I've never worked on and has had an interesting history that I've been following from outside. I've talked with Marvin Minsky about artificial intelligence and"
    },
    {
      "end_time": 7817.483,
      "index": 283,
      "start_time": 7791.22,
      "text": " Like everybody else, I'm amazed to see what neural nets have now accomplished. I used to talk with John McCarthy and Marvin Minsky about artificial intelligence, and they thought it would be based on logic, right? On formal logic, on imitating the kind of reasoning that mathematicians do. And of course, Minsky and Papert wrote a book which was bad for the funding for neural network research."
    },
    {
      "end_time": 7843.217,
      "index": 284,
      "start_time": 7817.875,
      "text": " And amazingly enough, the engineering approach, that doesn't surprise me. I've seen similar things at IBM, but a brute force engineering approach triumphed. I saw that in translation, language translation from one language to another. There were approaches which tried to be based on deep knowledge of syntax and semantics, and there were approaches which were more brute force engineering, like taking government documents"
    },
    {
      "end_time": 7876.049,
      "index": 285,
      "start_time": 7850.913,
      "text": " So it's been amazing to see all this happen and I'm sorry I don't remember why I'm talking about this. I lost the thread."
    },
    {
      "end_time": 7901.903,
      "index": 286,
      "start_time": 7876.288,
      "text": " About what inspires you in the direction? Oh, what inspired me? Well, I found the computer fascinating and I didn't like Ghetto's original proof. I mean, I could follow it step by step, but it seemed to me, you know, too tricky the way Grothendieck would say it. It was too much a trick. It was too clever by halves, the English would say. I thought"
    },
    {
      "end_time": 7924.906,
      "index": 287,
      "start_time": 7902.278,
      "text": " something as important as incompleteness needed to have a deeper proof. You need a deeper understanding. And as Karuthindig would say, once you see the right context to think about a problem, it looks trivial, right? The proof is very short because you discover the natural context for thinking about it, which I think I've done. People can argue that it's the wrong context, but it's certainly one"
    },
    {
      "end_time": 7960.401,
      "index": 288,
      "start_time": 7930.896,
      "text": " Now whether incompleteness is fundamental or not is a question because for all practical purposes it may not be. So this question of creativity and mathematics I think is a deep question. You asked it. So this I guess in a previous age maybe I would have been a tried to be a magician or a Kabbalist or something. You know this business of searching for deep understanding I sort of naturally gravitate to"
    },
    {
      "end_time": 7989.531,
      "index": 289,
      "start_time": 7961.323,
      "text": " The more philosophical, deeper questions. So that's my personality, I guess. I was like that already as a child and as a teenager. I mean, I came up with the idea of looking at program size at the age of 15, you know, and I published my first paper on it was published in the Journal of the ACM. When I was still a teenager, I think I was 19. It had been submitted before. At that time, the Journal of the ACM was the only"
    },
    {
      "end_time": 8018.78,
      "index": 290,
      "start_time": 7990.674,
      "text": " only journal on theoretical computer science. Well, I can say a little more. You see, in the fifties, when I grew up in the United States, there were a lot of refugees from Hitler in the U.S. Herman Weill, John von Neumann, Kurt Gödel, and I would read all their essays, all their books. And these gentlemen came from a European background where philosophy and physics were viewed as all part of the same thing, right?"
    },
    {
      "end_time": 8038.524,
      "index": 291,
      "start_time": 8019.36,
      "text": " Classical music also, I know nothing about classical music. You have a philosophical attitude, it's very clear. That's not exactly favored in the current environment. Shut up and calculate, which I think is a bad prescription for fundamental advances. You said it was suicidal to be shut up and calculate is suicidal."
    },
    {
      "end_time": 8065.862,
      "index": 292,
      "start_time": 8039.155,
      "text": " Yeah, well, you can, you know, there's normal science and there are paradigm shifts, as Kyon said. I'm not interested in normal science. That's the kind of thing that can be done in an industrial lab, for example, for new technology. You feel like it's incremental and can be cranked out. Yeah, exactly. They shut up and calculate because you basically have a theory and you just need to work out a case, especially one that is useful for practical applications. But like that, you're never going to do revolutionary work."
    },
    {
      "end_time": 8094.343,
      "index": 293,
      "start_time": 8066.049,
      "text": " And by the way, universities are also conservative places. The current environment is tough. Sidney Brenner, a remarkable guy, worked with Craig, one of the creators of Molecular Biology Nobel Prize. He has a lot of friends who won Nobel Prizes in, I guess it's, I don't know where it is, medicine, if you're doing molecular biology. And he said none of them could have done the work that got them the Nobel Prize in the current environment with funding and"
    },
    {
      "end_time": 8124.514,
      "index": 294,
      "start_time": 8095.503,
      "text": " Research grants and referee reports and all of that. You have to remember the Cavendish had a blanket funding for the whole lab. And then the head of the Cavendish who was Bragg, I think it was the son of the Bragg who did a diffraction of something or other by crystals, some kind of waves by crystals to expose atomic structure, X-ray diffraction. Maybe it was invented by the father."
    },
    {
      "end_time": 8154.104,
      "index": 295,
      "start_time": 8125.333,
      "text": " Anyway, he was the head of the, whatever it was called, that lab at Cambridge that Watson and Crick did their work. And the funding was for the whole lab for several years. People didn't have to waste their time promising what they would do in advance, submitting research grants and everything. So I think the current bureaucracy is really dangerous. You know, I'm sort of disappointed that there hasn't been"
    },
    {
      "end_time": 8182.005,
      "index": 296,
      "start_time": 8154.292,
      "text": " In my view, really fundamental advances in physics for a century. The last previous advances, it seems to me, were general relativity and quantum mechanics. Those were really big ruptures in our understanding of the physical world. So is it that we know everything? I don't think so. But I think with the current environment, the sociology of science makes it difficult for... Can I talk about a little bit about stuff that I think is revolutionary? Please."
    },
    {
      "end_time": 8211.732,
      "index": 297,
      "start_time": 8183.08,
      "text": " This kind of stuff attracts me. There's a crazy guy called Randall Mills, who has a little company near Princeton, New Jersey. It's called something like, I don't know, Black Light or Brilliant Light. Anyway, he studied his... Yeah, Brilliant Light Power. Exactly. You know a lot about this. OK, so he was doing sort of practical work in devices, applications in medicine, you know,"
    },
    {
      "end_time": 8238.524,
      "index": 298,
      "start_time": 8212.534,
      "text": " Bright guy, farm boy, brilliant, brilliant. And then he has a car accident that almost kills him. And he decides he's not going to waste his time on practical applications. He's going to go after fundamental things. So he goes to MIT and studies physics. He goes to MIT and one of his professors was called, I think, Herman House. And Herman House has a theory for how free electron lasers work. They work much better than I thought."
    },
    {
      "end_time": 8262.551,
      "index": 299,
      "start_time": 8239.002,
      "text": " and he found novel solutions of Maxwell's equations. I don't have time to talk about this anyway. I don't understand it that well. Okay, so Randall Mills is a student there and he says, oh, I like this theory that was very good for looking at how free electron lasers do so well. Let me apply it to the hydrogen atom."
    },
    {
      "end_time": 8292.363,
      "index": 300,
      "start_time": 8263.336,
      "text": " Remember, quantum mechanics is an attempt to explain the fact that the hydrogen atom is stable, right, because the electron should radiate and should spiral into the proton and goodbye hydrogen atom. So you invent quantum mechanics. Well, Randall Mills finds solutions of Maxwell's equation where the electron, instead of sort of being in orbit, a particle in orbit around the proton, is more like a sheet or like a sphere. And anyway, I don't understand it in detail, but it doesn't radiate."
    },
    {
      "end_time": 8321.084,
      "index": 301,
      "start_time": 8292.602,
      "text": " These are solutions of Maxwell's equations where you don't have the energy radiated away. This was the professor, his professor at MIT from an house. So then he says, ah, well, maybe he goes a little too far. He says, maybe quantum mechanics wasn't necessary because you can get a stable hydrogen atom just using classical physics. Well, that of course, no physicist would agree with, right? Because quantum mechanics seems to be very successful."
    },
    {
      "end_time": 8350.316,
      "index": 302,
      "start_time": 8321.51,
      "text": " And I have to admit I'm skeptical, although I'm interested in this crazy stuff. But the most interesting thing is, he says, according to his theory, there should be forms of the hydrogen atom that are below the normal ground state. That is to say, with the electron closer to the proton than is allowed in normal quantum physics. And he says, this is the dark matter. The dark matter is the most common substance in the universe, which is hydrogen."
    },
    {
      "end_time": 8380.196,
      "index": 303,
      "start_time": 8351.527,
      "text": " It's just a more stable form of hydrogen that doesn't radiate. That's why it's dark. And he actually has, I'm not a physicist, he has 23 different kinds of proof of the existence of this stuff. He calls them hydrinos. They're hydrogen atoms below the normal ground state. He thinks he can put it in a bottle. And coming from all this crazy theory, he has a device"
    },
    {
      "end_time": 8406.92,
      "index": 304,
      "start_time": 8380.657,
      "text": " which creates hydrinos, takes hydrogen and drops them into this lower state. It involves, I don't know, plasma physics or something and it generates a lot of light, very powerful light. That's why it's called Brilliant Light Power, this company. So starting with this crazy theory, which you know any sane physicist would be very skeptical about, he comes up with a lot of experimental evidence of the existence of these hydrinos, he calls them, which it would be"
    },
    {
      "end_time": 8434.241,
      "index": 305,
      "start_time": 8407.142,
      "text": " I asked physicists to take a good look at it. I was curious to see the reaction and nobody did in Mexico. I gave a talk on this. Obviously people were very skeptical, but I said, there's all this experimental work that's been published. It's been refereed. Please take a look at it. I mean, I even understood one of the results. I did some chromatography experiments, paper chromatography, liquid, when I was a child following articles in Scientific American Amateur Scientist."
    },
    {
      "end_time": 8462.995,
      "index": 306,
      "start_time": 8435.162,
      "text": " The hydrinos migrate too quickly. They migrate faster than hydrogen. So they look like they're smaller, right? So that was one experiment where I understand a little bit the idea. So anyway, so he has this practical device which seems to generate a lot of very powerful light and then with photogotetics he gets electricity out of it. So I think this is remarkable."
    },
    {
      "end_time": 8493.097,
      "index": 307,
      "start_time": 8463.439,
      "text": " And I think that the idea that maybe the dark matter should be a more stable form of hydrogen than exists according to normal physics would explain why there's so much dark matter. It's just hydrogen and hydrogen is the most common element. And the fact that he has an actual device and investors have been funding this work, I think is just amazing. Now there's a problem with sociology of science. This is the whole field of cold fusion. I think now"
    },
    {
      "end_time": 8520.52,
      "index": 308,
      "start_time": 8493.49,
      "text": " A better name for it is lattice enabled nuclear reactions or low energy nuclear reactions. There are a number of little companies doing this kind of stuff now and I've seen efforts to change the attitude, the public attitude toward this. I've been seeing news releases in Europe that are trying to redeem cold fusion and say it's a viable field. A lot needs to be done, but it's alive and it's"
    },
    {
      "end_time": 8550.811,
      "index": 309,
      "start_time": 8520.981,
      "text": " It's for real. Anyway, the one group that I've been following is in Japan. Clean Planet is the name of their company. And why is this being done in Japan with government funding and university participation? Whereas in the States for many years, this was considered to be trash science. Well, one of the reasons is the Japanese have no petroleum and they had a disaster with that nuclear reactor, right? The tsunami. And they have very good scientists and very good technologists."
    },
    {
      "end_time": 8576.561,
      "index": 310,
      "start_time": 8551.067,
      "text": " So there was a physicist there, I think his name was Arata, a very good physicist. They were skeptical. The government didn't start funding this and a university didn't get involved. The Japanese MIT, Stokoku University, didn't get involved until this skepticism was shown to be a mistake. There was a physicist called Arata who did a very clean experiment"
    },
    {
      "end_time": 8605.367,
      "index": 311,
      "start_time": 8577.346,
      "text": " And instead of the Fleischerman and Pons electrochemistry, you see what happens is you take a metal lattice like palladiums is a sponge for hydrogen, right? And you can do it now with nickel. The original experiment used palladium and deuterium, but now people can do this with nickel and ordinary hydrogen. And what happens is if somehow you populate the metal lattice with every interstice with a hydrogen atom in it, then funny things start happening."
    },
    {
      "end_time": 8635.128,
      "index": 312,
      "start_time": 8605.964,
      "text": " Because it looks like the metal lattice shields the normal effects, which would normally you would need an awful lot of energy to get to hydrogen atoms to combine into alien electromagnetic repulsion. But there seems to be some kind of a shielding effect when you have hydrogen in a metal lattice under certain conditions that are complicated, which was why the original experiments couldn't be repeated. So anyway, the version by errata is"
    },
    {
      "end_time": 8661.749,
      "index": 313,
      "start_time": 8635.947,
      "text": " is in a solid lattice not in a not electrochemical and I think it involves nano layers of nickel and copper and hydrogen is somehow pushed through the system I don't know how but it was a very clean experiment and you know the problem is with cold fusion the evidence that something unusual was happening was excess heat anomalous heat more heat than"
    },
    {
      "end_time": 8695.879,
      "index": 314,
      "start_time": 8667.619,
      "text": " had to be taken place. But heat is a messy thing to measure, right? Cholera emitters is really primitive stone age stuff. But if you get helium out, if you put hydrogen in and get helium out, then it's clear that some nuclear reaction is happening of some new sort, right? And Erada got helium out of his system. So at that point, the whole Japanese community, the government, the universities, everybody"
    },
    {
      "end_time": 8725.606,
      "index": 315,
      "start_time": 8696.305,
      "text": " started saying oh this is for real and we are a country without petroleum and our nuclear reactor was a disaster we need a new source of energy desperately and they had very good scientists and very good technology people Japan is a powerhouse right so the government started funding research in this area and there's a little department in Tohoku University which is the MIT of Japan the Tokyo University is the Harvard should we say Japan working on this there's"
    },
    {
      "end_time": 8754.548,
      "index": 316,
      "start_time": 8725.947,
      "text": " Clean Planet, a small company and these people already have a prototype commercial device that generates heat using this and they've made deals with Mitsubishi for international distribution. Mitsubishi has global distribution and they've also made a deal with the largest manufacturer of boilers for industrial purposes in Japan. I don't remember the name."
    },
    {
      "end_time": 8780.981,
      "index": 317,
      "start_time": 8754.906,
      "text": " They have a website which is supposed to be giving live data from a reactor at one of their research labs, but so far has nothing there. It says, coming live soon. I would love to see this. This is a pretty serious effort. Now, nobody talks about it outside of Japan because this was supposed to be junk science. If you were a physicist, I would mention this work to"
    },
    {
      "end_time": 8808.114,
      "index": 318,
      "start_time": 8781.51,
      "text": " physicists. And that would be the end of the conversation. They would say I was crazy and didn't want to talk to me anymore. Right. So I lost an appointment to a physics department because I mentioned it to somebody or a joint worker with a physics department. So, so anyway, this is a problem of sociology of science. Anyway, it turns out that Pons and Fleischman, Fleischman was one of the top electrochemists in the world at the time. He had been the head of the electrochemical society."
    },
    {
      "end_time": 8838.609,
      "index": 319,
      "start_time": 8809.616,
      "text": " Their interpretation of the experiment had flaws, but the experiment worked. It was very hard to repeat. It wasn't clear why sometimes it worked and sometimes it didn't. And 30 years of work by a community of eccentrics who've been basically doing this on their own without funding in secret have sort of clarified the conditions under which these things happen. And actually, Bonson-Fleichman weren't the first people. There were experiments in the 20s that"
    },
    {
      "end_time": 8876.852,
      "index": 321,
      "start_time": 8855.128,
      "text": " Never published."
    },
    {
      "end_time": 8911.561,
      "index": 322,
      "start_time": 8884.292,
      "text": " important physicist. So there have been a number of people who've seen things like this in the 20s. So anyway, the Japanese effort seems to me as far as I know, the effort that's most advanced, but there are a number of groups and Google has done some research in this area and seems to be there is a website in Europe that has information about energy and clean energy and things like that. I don't know who funds it, but lately they've been running a whole series of articles about"
    },
    {
      "end_time": 8940.776,
      "index": 323,
      "start_time": 8911.749,
      "text": " Low energy nuclear reactions or lattice-enabled nuclear reactions, I think is a better term. Talking about its history, the fact that there is something there, but work needs to be done. There is no good theoretical understanding of why it works, but there seems to be pretty clear empirical evidence. And this is too promising to drop without further investigation because it may be a clean source of energy, of tremendous energy, and it doesn't produce amazingly enough these nuclear reactions."
    },
    {
      "end_time": 8969.326,
      "index": 324,
      "start_time": 8941.63,
      "text": " Whatever the mechanism is, it isn't completely known. They don't produce radiation and they don't produce radioactive products and they certainly don't produce carbon dioxide because it's not combustion. So this would be a green source of energy. And if you're doing it with nickel, the original experiments were palladium, which is expensive, and deuterium, which is heavy hydrogen, right? No good for a practical technology."
    },
    {
      "end_time": 8995.009,
      "index": 325,
      "start_time": 8969.77,
      "text": " But now it's being done with nickel, with nickel and hydrogen. Nickel is a very common element. So what's the caveat here? Like what's the but? Yeah. So this is an example of the sociology of science. There's tremendous funding for tokamaks, right? And some of the people who supposedly tried to duplicate the Ponsman and Fleisch experiment to kill it as quickly as possible were associated with this."
    },
    {
      "end_time": 9022.841,
      "index": 326,
      "start_time": 8996.101,
      "text": " And once you have thousands of physicists working on fusion reactors and tremendous funding in many nations, and plus you have oil companies, it's tough. But it looks to me like it was a tremendous mistake not to continue working on this area. But it's an example of sociology of science. As Aristotle said, humans are political animals. There were two Nobel Prize physicists who right away got interested in this phenomenon because, of course,"
    },
    {
      "end_time": 9053.114,
      "index": 327,
      "start_time": 9023.353,
      "text": " A good physicist looks for a place where the theories don't work, a some new phenomenon. One of them was, I think, was Schwinger, Julian Schwinger, and he immediately wrote a paper on the topic and it was discarded by the journal he sent it to without even sending it out for refereeing. It was a journal of the American Physical Society and he resigned. He was a fellow of the American Physical Society, a Nobel Prize winner. He resigned saying this will be the death of science, you know, if stuff is dismissed. I think this was"
    },
    {
      "end_time": 9082.449,
      "index": 328,
      "start_time": 9053.507,
      "text": " Could this have been Julian Schwinger? Quantum electrodynamics. It was a Nobel Prize winner. I'm not sure it was him. So, but you know, humans are... Hugh Price also talks about this. Hugh Price, the philosopher Hugh Price, he calls it the reputation track, which is essentially the opposite of stigma. That is that a professor goes into what brings them prestige and they do so unconsciously most of the time and they follow along."
    },
    {
      "end_time": 9112.363,
      "index": 329,
      "start_time": 9083.353,
      "text": " graduate students are told look you don't want to work on the hardest problems or the most interesting problems you want to work on the tractable problems the solvable ones so that you can have a history of publishing so that you can get a postdoc then get a professorship in which case you also want to have a history of publishing in order to get eligible for this is crap though this will ensure that no really deep revolutionary work gets done I had a conversation with who is this wonderful physicist at Stanford"
    },
    {
      "end_time": 9139.138,
      "index": 330,
      "start_time": 9113.439,
      "text": " We were drinking beer in Arizona once and reminiscing. It turns out we're both from New York City. We went to public schools in New York City, high schools in New York City. So we were chatting and he said when he was a kid, he and his fellow physics graduate students, all were revolutionaries. They were subversive."
    },
    {
      "end_time": 9167.09,
      "index": 331,
      "start_time": 9139.974,
      "text": " They didn't want to listen to their professors. They were going to do something new and revolutionary. And now he says the students come up to him and they ask him for a problem to work on. They don't have their own problem to work on. They ask him to tell them a good problem and then they work on it and never change. They have to keep grinding out papers on this topic. And he said he was very disappointed by this. But the current"
    },
    {
      "end_time": 9196.903,
      "index": 332,
      "start_time": 9167.432,
      "text": " system, the bureaucracy, the whole thing makes this happen. I think the reason that there hasn't been that much fundamental work in physics since the creation of quantum mechanics at the level of the creation of quantum mechanics was a tremendous rupture, tremendous revolution. I think it doesn't mean that the physical universe doesn't have wonderful surprises for us. I think a lot of it reflects big science and the sociology of science. Interesting, big science. It's the first time I've heard that."
    },
    {
      "end_time": 9223.985,
      "index": 333,
      "start_time": 9197.483,
      "text": " Yeah, well, you have to remember, it was used in the 50s. You have to remember that studying nuclear physics in the 30s was like studying, writing poetry in ancient Greek. It had no practical applications. Very few people were interested in this topic. But after you build an atom bomb and nuclear reactors, this is big science. And von Neumann, who knew the history of the church, of the Catholic Church, he had a photographic memory."
    },
    {
      "end_time": 9253.166,
      "index": 334,
      "start_time": 9224.514,
      "text": " He said the transition which was happening in science was similar transition that happened in the history of Catholic Church. I would have to find this, this quote. So anyway, this has been very bad for scientific creativity and Huey Price gives practical reasons to this. Even though this is low probability, it could save the planet and the human race. So there should be some money invested and they've been investing billions in Tokamaks, right? For 30 years."
    },
    {
      "end_time": 9278.66,
      "index": 335,
      "start_time": 9253.353,
      "text": " They could invest a few hundred million a year in Cold Fusion, but it didn't happen that way. But the tide is turning, I think. I think the tide is turning. You're aware of this stuff, which is great. And I don't know if you're aware of... Maybe you didn't hear about this Japanese effort. No. You heard about Brilliant Light Power. Yeah. So the word is getting out. And I think..."
    },
    {
      "end_time": 9307.295,
      "index": 336,
      "start_time": 9279.343,
      "text": " Maybe Google is funding this. I've seen a number of press releases that don't have an authorship under them in Europe, which are trying to rehabilitate ColdFusion and say it's a legitimate field and they're startups. And it's true, we don't understand the mechanisms yet, but it's promising and it should be worked on. I've seen a whole series of press releases, one of which said Google as the authorship, but the others didn't indicate who was writing the article."
    },
    {
      "end_time": 9331.169,
      "index": 337,
      "start_time": 9307.875,
      "text": " My impression, and this comes from my background in math and physics in university, so just in the math and physics domain, for why there isn't much of a paradigm shift in physics in the past 50 years or so, is because, well firstly we're talking about high energy physics, not just physics in general, like condensed matter physics has so much that's new, but it's not fundamentally paradigm shifting like quantum mechanics was."
    },
    {
      "end_time": 9356.34,
      "index": 338,
      "start_time": 9331.493,
      "text": " One of the reasons is that you have to become so well versed in physics in order to make a change. Like you have to understand the landscape in order to propose something new. And doing so takes up so much time that by the time you're done studying it, you become ingrained and you can't think outside of it. And you're also encouraged not to while you're learning. Yeah, I agree. I mean, that's why I'm a college dropout."
    },
    {
      "end_time": 9379.599,
      "index": 339,
      "start_time": 9356.8,
      "text": " That's why I have this podcast. So we're similar. Yeah, to get a large overview of a rock of different fields and to explore them in depth. That is with rigor, hopefully, but in order to make connections between the fields. Absolutely. Universities are basically conservative institutions and enormous bureaucracies that administer scientific research now, you know, guarantee that no really"
    },
    {
      "end_time": 9408.677,
      "index": 340,
      "start_time": 9380.009,
      "text": " Fundamentally novel stuff can get funded or it can get published. The referees will never accept it. So this is not the way to do good science. But I think the politicians don't want revolutionary science because it scares them. They prefer to have control over the scientific community. They don't want another development as scary as the atom bomb. That may be a reason for controlling everything, but it's very bad for science to advance."
    },
    {
      "end_time": 9437.329,
      "index": 341,
      "start_time": 9409.377,
      "text": " You know, I think that Randall Mills is clearly very brilliant. I think he's going too far. He thinks quantum mechanics was not necessary and he can do everything with his approach. Well, I can't follow his approach in detail, but I'm a little skeptical. Maybe if there's very clear experimental evidence as there seems to be that hydrinos exist, then clearly he's onto something. But maybe"
    },
    {
      "end_time": 9455.572,
      "index": 342,
      "start_time": 9437.568,
      "text": " His ideas can be combined with quantum mechanics into a richer theory. He's lucky that he came up with a practical device because he never would have gotten funding to do all the research he's been doing. The fact that he starts out presenting his stuff by saying that he's"
    },
    {
      "end_time": 9486.032,
      "index": 343,
      "start_time": 9456.903,
      "text": " decided that quantum mechanics is a mistake and he's found another path is a little suicidal, right? But it makes it fascinating from a scientific point of view. You know, he's gotten spectrums out that he says are the spectrum of the solar corona. There's the question of war. There's not only the question of the dark matter. There's a question of why is the solar corona so hot? Where does all that energy come from? And he says it's from hydrogen dropping into hydrinos, which is the dark matter."
    },
    {
      "end_time": 9514.718,
      "index": 344,
      "start_time": 9486.886,
      "text": " And you know, there's so much dark matter. The most common thing in the universe is hydrogen, right? So if it's all hydrogen, it makes sense. So I think that Randall Mills deserves a chance. Yeah, the people should really listen to him. I find it absolutely fascinating. Now the stuff on low energy nuclear actors, the lattice enabled nuclear reactions, I think is better."
    },
    {
      "end_time": 9542.363,
      "index": 345,
      "start_time": 9516.254,
      "text": " It's not as fundamentally new science because condensed matter is a very complicated environment. As somebody on the Physics Nobel Prize committee who looked at some of the work in this area said, there's a lot of phenomenological stuff in condensed matter physics and there is room for this new phenomenon to take place maybe."
    },
    {
      "end_time": 9571.63,
      "index": 346,
      "start_time": 9542.637,
      "text": " So he took it seriously. He died. He was an elder statesman, which is why he could make a statement like this without destroying his career. He already had had a wonderful career. The fact that in a nickel lattice, if you populate almost every interstice with a hydrogen atom, somehow some of the hydrogens can turn into helium. This would not seem to be fundamentally new science."
    },
    {
      "end_time": 9601.715,
      "index": 347,
      "start_time": 9571.869,
      "text": " the way Randall Mills' hydrinos are. Now, technologically, it could be just wonderful. And that's what the Clean Planet Group is betting on with government funding and university support and also now industrial partnerships in Japan. But I have to say that Randall Mills fascinates me more because that would be very interesting. And he starts with a piece of work done by his professor"
    },
    {
      "end_time": 9631.63,
      "index": 348,
      "start_time": 9604.838,
      "text": " The Japanese group seems to be interesting engineering, whereas what Randall Mills is talking about or Randy Mills is interesting physics for myself. I agree. I have the same point of view. Now there's another crazy guy. I don't know whether to take him seriously in this field or not. He's called Andrea Rossi. He has something called the energy catalyzer. I think one can be justifiably skeptical about his work."
    },
    {
      "end_time": 9660.469,
      "index": 349,
      "start_time": 9632.108,
      "text": " Because Randall Mills has an organization that people have invested, I don't know, $140 million in. It's a serious organization. And the Japanese effort is involved with universities, industry, the government. That's a serious thing. But this guy, Andrew Ressie, seems to keep all the cards close to his chest. He's doing it all on his own. It's a little crazy. But he claims now he had more conventional low-energy or lattice-enabled nuclear reactions, much closer to Ponce inflation already 10 years ago."
    },
    {
      "end_time": 9690.162,
      "index": 350,
      "start_time": 9660.811,
      "text": " But now he claims he's getting energy out of the vacuum, you know, out of quantum fluctuations in the vacuum. He has a new device. I don't know whether one should take him seriously or not, because he doesn't really publish. He doesn't reveal very much information. One demo that he did was was just wrong. He had a light bulb, supposedly. Well, anyway, he claims now he's going to have an electric car where he's going to be"
    },
    {
      "end_time": 9718.78,
      "index": 351,
      "start_time": 9690.503,
      "text": " running the charging up the battery with his his device that supposedly gets electricity. He gets electricity directly, not heat or not light. He's going to keep the battery of the car charged while it's going round and round the racetrack. He claims he's going to do a demo in October, but you know, he usually doesn't want to reveal enough information for people to see whether they should take him seriously or not. The poor guy has been working on this for a long time. He's not so young."
    },
    {
      "end_time": 9747.944,
      "index": 352,
      "start_time": 9719.48,
      "text": " There's certainly justification for a lot of skepticism. There is a paper he published, I don't know, ARXIV or something, someplace like that. It's not in a referee journal talking about explaining how he thinks he can extract energy from the vacuum. I don't know enough physics to judge if this paper is nonsense or makes any sense. So, but this is just indicate that there's all this fascinating stuff going on out there outside the system."
    },
    {
      "end_time": 9777.329,
      "index": 353,
      "start_time": 9748.882,
      "text": " Because the system can't do crazy stuff. And if you don't allow some stuff that is clearly crazy, in the future, one of these crazy people like Randall Mills, Andrea Rossi, well, the Japanese effort is more solid engineering. But if you don't allow stuff that is going to be wrong, you won't find new stuff that is right. You know, you have to allow, as one friend of mine put it at IBM, if a lot of your research projects"
    },
    {
      "end_time": 9806.357,
      "index": 354,
      "start_time": 9777.773,
      "text": " Don't fail. You're not being aggressive enough. You're not doing real research, but now you can't fail. You have to announce in advance what you're going to do to get the research grant, right? So, so I think we have a serious problem in the sociology of science and I agree with you in price. You know, the dark matter was clearly giving us a picture that we're missing something big because this is supposed to be most of the matter in the universe and we don't understand what it is."
    },
    {
      "end_time": 9833.729,
      "index": 355,
      "start_time": 9806.732,
      "text": " So clearly something big is missing in our picture of the physical universe, right? And the only proposal I've heard, well, I'm not a physicist, right, or an astrophysicist, but the stuff by Randy Mills, I find fascinating. I'm not in a position to judge his reworking of all of physics using classical physics. He has four volumes on this, but"
    },
    {
      "end_time": 9861.425,
      "index": 356,
      "start_time": 9834.019,
      "text": " But he has 23 experimental proofs of the existence of hydrinos. He says he has them in a bottle. He's had videos showing reactions with them. This sounds like the kind of thing that, you know, a good experimental physicist could take a look at and see. Well, some of it has been published in referee journals, see if the evidence is good enough. I may have emailed Randy or he may have emailed me or someone may have like, I don't recall. I should look into interviewing him."
    },
    {
      "end_time": 9891.766,
      "index": 357,
      "start_time": 9862.739,
      "text": " I think he's a fascinating guy. I think it would be great to interview him. Now, Andrea Rossi will not want to be interviewed because he doesn't want to reveal any information about what he's doing. The Japanese effort, they have somebody who does public relations, a lady who gave a good talk at the last International Congress on Cold Fusion. Maybe you could, I'm sure that I have a feeling they would be happy to talk to you, or she would be happy to talk to you, since that's her job. And they"
    },
    {
      "end_time": 9920.913,
      "index": 358,
      "start_time": 9892.176,
      "text": " They're going to, they want to market this thing worldwide, you know, and they think they're getting close to that. So those are two people I would suggest. Now I don't know them, you know, I don't have a, if I were friends with them, I could get you an interview with them. So you've got to break through the layers of protection. I would love to, but I'm just not going to do it if it's to promote them. If I'm an arm off of their marketing team, I don't care about that. I'll do it to understand the,"
    },
    {
      "end_time": 9937.773,
      "index": 359,
      "start_time": 9921.203,
      "text": " Condensed Matter Physics in particular."
    },
    {
      "end_time": 9968.473,
      "index": 360,
      "start_time": 9938.473,
      "text": " And the Japanese effort, well, the person to interview would be the physicist Arata, who came up with the evidence that convinced the Japanese to take seriously this whole phenomenon. I don't know if he's still alive. He was already another statesman, but I think he did those experiments with a younger collaborator who might still be alive. There's a department at Tohoku University that's working on this stuff. Okay. Well, I'll look into that. So Greg, before we get going, I want to ask you,"
    },
    {
      "end_time": 9989.155,
      "index": 361,
      "start_time": 9968.899,
      "text": " What are the different types of randomness? I know they're different kinds. There are lots of different kinds. There are several different definitions of randomness. I was just speaking with a Russian historian about this. The whole subject of program size complexity"
    },
    {
      "end_time": 10019.616,
      "index": 362,
      "start_time": 9989.77,
      "text": " There were three of us who came up with this idea in the 60s. Remember, the computer had just begun. And if you're a pure mathematician, it's a sort of a natural question to want to look at complexity of computations and prove theorems about it. Most people felt time was the right kind of complexity to look at. Most of the work on complexity has to do with time, runtime. Three of us thought that maybe program size was more fundamental. That was Andrey Komogorov, who was very old. He was my age in Russia."
    },
    {
      "end_time": 10049.309,
      "index": 363,
      "start_time": 10020.213,
      "text": " There was Ray Solomonoff, a friend of Marvin Minsky. They're both gone, the two of them. Ray Solomonoff was interested in artificial intelligence and machine learning. Good friend of Marvin Minsky, a very unconventional guy, not with a university position, but a friend of Marvin Minsky. Minsky's a very bright guy. He doesn't have any stupid friends. And Marvin Minsky liked Solomonoff's work and he"
    },
    {
      "end_time": 10078.353,
      "index": 364,
      "start_time": 10049.582,
      "text": " included it in some of his review articles. And there was me, I was a teenager, right, at the Bronx guy School of Science. Kolmogorov and I wanted to define randomness. We're pure mathematicians. Our most basic interest was to define lack of structure, because probability theory doesn't talk about that, you know, distinguishing a sequence of zeros and ones that has structure from one that is lawless. Solomon's motivation was induction, predicting the future and machine learning."
    },
    {
      "end_time": 10108.234,
      "index": 365,
      "start_time": 10079.428,
      "text": " He didn't define randomness. Wait, sorry. Shannon's work didn't even have to do with zeros and ones and which ones are more lawless or more chaotic than others? Yeah. I came up with this idea by reading Shannon and reading Turin at the same time and combining the two. But Shannon's information theory doesn't take an individual sequence of bits and say whether it has structure or not. It's a statistical theory. So what's random in Shannon's theory"
    },
    {
      "end_time": 10132.381,
      "index": 366,
      "start_time": 10108.473,
      "text": " Is you have an ensemble of all possible messages and what's random is the distribution of probability over the ensemble of possibilities so the information is. The entropy is maximum when the distribution is uniform over all the possibilities right but you're not looking at an individual message and i thought you should look at an individual sequence of zeros and ones you don't have a statistical ensemble that you embedded in."
    },
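To make the contrast concrete, here is a minimal sketch of the distinction being drawn (my own illustration, not code from the episode): Shannon's entropy is a property of the probability distribution over an ensemble of messages, while the program-size view asks how compressible one particular string is. Since program-size (Kolmogorov) complexity is uncomputable, zlib-compressed length is used below only as a crude, computable stand-in for the length of a short description.

```python
# Sketch: ensemble entropy vs. the structure of an individual sequence.
import math
import random
import zlib

def shannon_entropy_bits(p_one: float) -> float:
    """Entropy in bits/symbol of a Bernoulli source; maximal at p_one = 0.5."""
    if p_one in (0.0, 1.0):
        return 0.0
    return -(p_one * math.log2(p_one) + (1 - p_one) * math.log2(1 - p_one))

n = 100_000
structured = "01" * (n // 2)                               # perfectly lawful
lawless = "".join(random.choice("01") for _ in range(n))   # typical coin flips

# Both strings have (about) half zeros and half ones, so the matching Shannon
# source assigns 1 bit per symbol to each of them...
print(shannon_entropy_bits(0.5))                 # 1.0

# ...but as individual objects they differ enormously in how short a
# description suffices; the compression proxy makes the gap visible.
print(len(zlib.compress(structured.encode())))   # tiny compared with n
print(len(zlib.compress(lawless.encode())))      # roughly n/8 bytes
```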
    {
      "end_time": 10161.084,
      "index": 367,
      "start_time": 10132.722,
      "text": " You just look at that sequence of zeros and ones and ask does it have structure or doesn't it? Does it obey our law or doesn't it? We wanted to define that notion. I wanted to define it because I was interested in completeness and I had this intuitive feeling that this might have something to do. Now Kolmogorov wanted to define it because he was one of the creators of modern probability theory. Another guy I admire is Emile Borel who gets less credit."
    },
    {
      "end_time": 10189.224,
      "index": 368,
      "start_time": 10161.288,
      "text": " Borel sets like the Heine-Borel theorem, the same Borel? Yes, the same Borel. He proved that almost all real numbers are normal, which means that digits are equidistributed in every base or blocks of digits. He's a guy I admire. Kolmogorov had a more abstract formulation of probability theory that was very convincing for pure mathematicians via measure theory. Borel had a more down-to-earth"
    },
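For reference, the property Borel proved holds for almost every real number can be stated precisely; this is the standard textbook definition of normality, added here as a gloss rather than a quotation.

```latex
% N(w, x, n) = number of occurrences of the digit block w among the first n
% base-b digits of x. Borel's theorem: Lebesgue-almost every real number is
% normal in every base simultaneously (absolutely normal).
\[
  x \text{ is normal in base } b
  \iff
  \forall k \ge 1 \;\; \forall w \in \{0, 1, \dots, b-1\}^{k} :\;
  \lim_{n \to \infty} \frac{N(w, x, n)}{n} = b^{-k}.
\]
```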
    {
      "end_time": 10212.91,
      "index": 369,
      "start_time": 10189.531,
      "text": " But it did beautiful work. Beautiful work, Borel. Anyway, they're different kinds of minds. Anyway, Kolmogorov wanted it because he was one of the creators of modern probability theory. And he noticed that this notion of unstructured sequence or random sequence, although you talk about randomness, it doesn't really play a role in measure theoretic probability theory."
    },
    {
      "end_time": 10238.524,
      "index": 370,
      "start_time": 10213.404,
      "text": " Okay, so these were three of us with different goals that came up with this same idea, basically, different versions of it in the 60s. Now, for finite sequences, it's pretty straightforward to define lack of structure. It gets a little more difficult when you're talking about an infinite sequence, like a real number with an infinite number of bits. So this is where Martin-Loff comes in."
    },
    {
      "end_time": 10263.968,
      "index": 371,
      "start_time": 10239.718,
      "text": " There's the question of how do you define an infinite sequence to be unstructured? And the bits of my halting probability are infinite sequence, right? When you write it in binary. Well, you can sort of say that all the initial segments have to be unstructured, right? And then the whole thing is the infinite sequence structure. There's a problem with that though. A certain amount of structure happens just by chance. It has to happen."
    },
    {
      "end_time": 10291.664,
      "index": 372,
      "start_time": 10265.247,
      "text": " As Feller points out, there's a law of either logarithm, you have to get, you're going to get runs of the same bit. You know, if you go out and it's roughly log n, there's a very beautiful theorem about this. So the complexity will dip, it has to dip with probability one. So in the case of infinite sequences, it gets a little more delicate and the original version that Kolmogorov proposed"
    },
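A quick empirical check of the point about runs, assuming the roughly log n remark refers to the longest run of equal bits in n fair coin flips (this snippet is my own illustration, not the speaker's):

```python
# Longest run of identical bits in n random flips is typically about log2(n).
import math
import random
from itertools import groupby

def longest_run(bits):
    """Length of the longest block of consecutive equal elements."""
    return max(len(list(group)) for _, group in groupby(bits))

for n in (1_000, 10_000, 100_000):
    flips = [random.randint(0, 1) for _ in range(n)]
    print(n, longest_run(flips), round(math.log2(n), 1))
```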
    {
      "end_time": 10320.725,
      "index": 373,
      "start_time": 10292.193,
      "text": " doesn't work and I propose a definition of infinite sequence randomness that doesn't have that problem but has another problem, okay? It's too inclusive, it allows sequences that aren't random and Kolmogorov's definition demanding that every initial sequence should be close to the maximum complexity doesn't work because with probability one this is going to fail, right? And you want the random sequences to come out with probability one. Okay, so Martenloff looked at this and said,"
    },
    {
      "end_time": 10347.705,
      "index": 374,
      "start_time": 10321.049,
      "text": " to define the random infinite sequence, you can't use program size. It didn't work. So he came up with a measure, the definition using constructive measure theory. That's the Martin-Loff random sequence. Okay. But then I went back and changed the definition of program size complexity. And then I got a neat definition because my definition says exactly how much it should dip."
    },
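For readers who want the precise statement, this is the usual constructive-measure-theory formulation attributed to Martin-Löf, given here in textbook form rather than as a quotation:

```latex
% A Martin-Lof test is a uniformly computably enumerable sequence (U_n) of
% open subsets of Cantor space with mu(U_n) <= 2^{-n}; a sequence passes the
% test if it avoids the intersection of all the U_n.
\[
  x \text{ is Martin-L\"of random}
  \iff
  x \notin \bigcap_{n} U_n \quad \text{for every Martin-L\"of test } (U_n).
\]
```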
    {
      "end_time": 10377.534,
      "index": 375,
      "start_time": 10347.961,
      "text": " I changed to a program size complexity where n bit strings instead of having n bit complexity maximum they have n plus the information content of n. So then the dips turn out to be exactly that extra piece. So the way you define an infinite random sequence using program size is to say that the complexity of the initial segment has to always stay above n minus c for all n. And that takes into account the dips because the maximum possible complexity for n bit sequence"
    },
    {
      "end_time": 10405.794,
      "index": 376,
      "start_time": 10377.978,
      "text": " with self-delimiting program size complexity that I and Levin invented independently 10 years later in the seventies. That's exactly how far down it is going to dip from N plus the program size complexity of N, which is roughly log N down to about N and minus C. So you end up having actually three different definitions of an infinite random sequence. There's my definition talking about program size complexity, but not the original one from the sixties."
    },
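In standard algorithmic-information-theory notation, the criterion being described comes out as follows; the O(1) bookkeeping is the usual one from the literature and is not spelled out in the conversation. K denotes self-delimiting (prefix) program-size complexity, and c is a constant depending on the sequence but not on n.

```latex
% First line: maximal prefix complexity of an n-bit string s.
% Second line: the randomness criterion for an infinite sequence x = x_1 x_2 ...
\begin{align*}
  \max_{|s| = n} K(s) &= n + K(n) + O(1), \qquad K(n) \approx \log_2 n, \\
  x \text{ is random} &\iff \exists c \;\forall n:\; K(x_1 x_2 \cdots x_n) \ge n - c .
\end{align*}
```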
    {
      "end_time": 10435.879,
      "index": 377,
      "start_time": 10406.118,
      "text": " It's a better definition that I call self-delimiting programs and Levin came up with his own version of that in the 70s as I did. And then there's Martin Love's definition and then there's another definition that's very beautiful done by Robert Solovey, the set theorist who was briefly interested in this area. He and I were both at IBM Research for a while and we talked about all of this and he said, okay, I'm going to think about this. Maybe it will help me to settle P equals NP."
    },
    {
      "end_time": 10465.486,
      "index": 378,
      "start_time": 10436.34,
      "text": " And he did some beautiful work, but it didn't settle PNP and he left the field, never bothered to publish. Is P equals NP shown to be a girdle statement, by the way? I don't think so. Well, yes, it depends on, it depends on, you can show that it doesn't relativize. There are oracles for which P is equal to NP. If you're doing hypercomputation with an oracle,"
    },
    {
      "end_time": 10490.026,
      "index": 379,
      "start_time": 10466.067,
      "text": " You can make the oracle such that P is equal to NP and you can make the oracle such that P is not equal to NP by changing the oracle instead of allowing normal computation. That's a theorem of who was it? Was it Gill and Salovey a long time ago? And that means that all the normal messes of proof can't really settle this question. But I don't know, I don't think it's shown to be"
    },
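The result being recalled here is the relativization theorem usually credited to Baker, Gill, and Solovay (1975); stating it explicitly may help, since the names come through only approximately in the audio.

```latex
% There are oracles relative to which the answer goes both ways, so any proof
% technique that relativizes (works unchanged when every machine is given the
% same oracle) cannot settle P versus NP.
\[
  \exists A:\; \mathsf{P}^{A} = \mathsf{NP}^{A}
  \qquad \text{and} \qquad
  \exists B:\; \mathsf{P}^{B} \neq \mathsf{NP}^{B}.
\]
```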
    {
      "end_time": 10519.633,
      "index": 380,
      "start_time": 10491.203,
      "text": " Undecidable from the normal. Is there a way to classify complexity of logical systems? I don't know if logical system is the right word for here, but the NAND gate is functionally complete. To construct any other gate, you just need a combination of NAND gates. But in quantum logic or in quantum computing, you need more than just one gate. You need maybe three or four. So then is that some way of saying quantum logic or quantum computing is more complex"
    },
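The functional completeness of NAND that the question leans on can be checked in a few lines; this is a generic illustration, not code discussed in the episode.

```python
# NOT, AND, and OR built from NAND alone, verified on all Boolean inputs.
def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def not_(a: int) -> int:          # NOT a   = a NAND a
    return nand(a, a)

def and_(a: int, b: int) -> int:  # a AND b = NOT (a NAND b)
    return nand(nand(a, b), nand(a, b))

def or_(a: int, b: int) -> int:   # a OR b  = (NOT a) NAND (NOT b)
    return nand(nand(a, a), nand(b, b))

for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
print("NAND generates NOT, AND, and OR")
```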
    {
      "end_time": 10547.568,
      "index": 381,
      "start_time": 10520.333,
      "text": " than classical logic computing. Can you use that as a measure? Susskind has proposed connecting quantum circuit complexity with some fundamental questions in physics, including cave or black holes and whether information is lost or not. I don't understand quantum computation or the kind of physics that Susskind does."
    },
    {
      "end_time": 10575.657,
      "index": 382,
      "start_time": 10547.944,
      "text": " I think about classical computation. I can give you a historical comment, though. You know, von Neumann worked on pure math and also on quantum mechanics, right? He has a book which proposes the Hilbert space formulation of quantum mechanics. He was a student of Hilbert's. He invented the name Hilbert space and von Neumann did. Okay, so von Neumann thought"
    },
    {
      "end_time": 10606.203,
      "index": 383,
      "start_time": 10576.254,
      "text": " If the logic of the world is really quantum logic rather than classical logic, I mean, if the world is quantum mechanical, not classical, maybe we should use a quantum logic to do pure mathematics rather than classical logic. And he and was it Berkoff or something published a few papers on this, but they couldn't pull it off. It didn't seem to work. I don't know if there's an attempt to revive this work now with all the work on quantum computing. Maybe it's time to revisit."
    },
    {
      "end_time": 10626.425,
      "index": 384,
      "start_time": 10606.766,
      "text": " These issues, I've heard that some people are trying to do that. I'm not familiar with, you know, I have to confess, I'm sort of a narrow specialist in my area. So I wanted to just, since you asked about Martin Lowe's randomness, so there are three definitions of an infinite random sequence. There's one using program size that I came up with."
    },
    {
      "end_time": 10652.108,
      "index": 385,
      "start_time": 10626.681,
      "text": " there's the one of martinloff which uses constructive measure theory and there's a one by robert salovey which is related to the one by martinloff it also uses construction measure theory but from a mathematical point of view is closer to what you want to prove properties of it has to do with the borel cantelli lemma which is fundamental in probability theory did you mention that or you mentioned something else by borel"
    },
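The Borel-Cantelli flavor of Solovay's definition is easiest to see written out; this is the standard formulation from the literature, not a quotation, and it is provably equivalent to Martin-Löf randomness, which is the equivalence described just below.

```latex
% x is Solovay random iff for every computably enumerable sequence (U_n) of
% open sets whose measures have a finite sum, x lies in only finitely many U_n
% (compare the Borel-Cantelli lemma).
\[
  \sum_{n} \mu(U_n) < \infty
  \;\Longrightarrow\;
  x \in U_n \ \text{for at most finitely many } n .
\]
```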
    {
      "end_time": 10676.015,
      "index": 386,
      "start_time": 10652.534,
      "text": " Okay, so these are three different definitions, and the beautiful thing is that you can prove that they're all equivalent, even though they look very different. And that's always very good. You see, when you have a fundamental mathematical concept you're trying to capture, if people have proposed different definitions and it turns out you can prove they're all equivalent, it means you probably have really gotten it. The same thing happened with Turing machines."
    },
    {
      "end_time": 10705.23,
      "index": 387,
      "start_time": 10676.271,
      "text": " There were different definitions of what is computable, many different in fact, and it was shown that they're all equivalent because each one of them could simulate the other, which was good. It meant that a fundamental idea had been captured. So, Martenloff then went on to work on other things, I think, on intuitionistic logics or type theory. I had the pleasure of meeting him once in Stockholm. I never met Kolmogorov. I did meet Ray Salmonov a few times."
    },
    {
      "end_time": 10732.551,
      "index": 388,
      "start_time": 10705.503,
      "text": " So professor, what advice do you have for people who are getting into mathematics and computer science, maybe even physics, that is undergraduates? And then what advice do you have for people who are in the field? Yeah, well, I don't know how you guys can work in the current environment. The environment that I worked in wasn't as tough, but I survived only by hiding myself in an industrial lab and doing"
    },
    {
      "end_time": 10760.674,
      "index": 389,
      "start_time": 10732.927,
      "text": " the research I really believed in as a hobby while I was doing practical hardware and software engineering. So I didn't have to get funding for my research. I wasn't at a university. I don't know how people manage nowadays. There's my young PhD student, Philippe Abraham, and it's a tough environment for him. I'm very upset by the way all the work on lattice enabled nuclear reactions was treated. We lost 30 years."
    },
    {
      "end_time": 10789.019,
      "index": 390,
      "start_time": 10761.459,
      "text": " That could have been done in a year if it had the proper funding, people had done what they should have done. And now we wouldn't have global warming. The situation is really getting desperate, right? With heat waves and stuff like that. So and also we wouldn't have, you know, electrical cars are wonderful, but to make the batteries, the batteries are very polluting. They involve all kinds of stuff. This is going to have a big impact on the environment also."
    },
    {
      "end_time": 10818.183,
      "index": 391,
      "start_time": 10789.48,
      "text": " So if we had a cheap source of energy, low energy nuclear reactions, it would solve a lot of our problems. And it was just sociology of science or politics, in other words, that kept that from being done as it should have been. So I don't know. My advice is ignore the system, be against the system. You know, G.H. Hardy has a remark like that, which is very elitist, but I admire G.H. Hardy, wonderful pure mathematician."
    },
    {
      "end_time": 10848.08,
      "index": 392,
      "start_time": 10819.036,
      "text": " He said, by definition, it's never worthwhile for someone whose first rate to work on fashionable topics, because by definition, there are plenty of people who are working on the fashionable topics. Right, right. You know, but he, he was living in an aristocratic environment where he was a member of the British elite. He wasn't an aristocrat, but he was on the fringes. You know, he was at Cambridge. He was able to, to, to do that, but it's very hard in the, in the current environment."
    },
    {
      "end_time": 10877.108,
      "index": 393,
      "start_time": 10848.933,
      "text": " which means that the current environment is very inimical to real scientific progress and creativity and understanding. And I think that's just too bad. I think young people should be optimistic and they should be against the system. Now you can't, you can't be against the system. You have to try to ignore it as much as possible. So one way to do it, which is tough, is to have a day job, which earns you a living or work on fashionable topics."
    },
    {
      "end_time": 10907.654,
      "index": 394,
      "start_time": 10877.858,
      "text": " and work on what you really believe in in secret. The guy who solved Fermat's Last Theorem, nobody knew what he was working on all the time, but he kept publishing papers or he would have lost his professorship. He had to keep publishing and he probably cursed that he had to do this work just to stay in the game, but he managed to pull it off. As Sydney Branger said, he has a bunch of friends who are all Nobel Prizes,"
    },
    {
      "end_time": 10935.145,
      "index": 395,
      "start_time": 10907.875,
      "text": " He said none of them could have survived and done the work that made them famous in the current environment. This is a tremendous indictment of the current environment. So I don't know what's going to happen. One optimistic solution is to say that civilization will collapse and then a whole new world will begin and we'll have a chance to try to do things better. But that would probably be the cataclysmic. I think young people should be optimistic."
    },
    {
      "end_time": 10963.285,
      "index": 396,
      "start_time": 10935.862,
      "text": " You have your blog. I managed to hide myself and do my work by some miracle. But it's not easy. It's not easy. Don't follow the fashion. You don't have to spend years learning the existing paradigms because what's more important is to invent a new paradigm. I believe in paradigm shifts. I'm not interested in normal science and I want to encourage all it takes is a new idea. You know, I had a new idea when I was 15."
    },
    {
      "end_time": 10982.927,
      "index": 397,
      "start_time": 10963.541,
      "text": " Then I had to work on it many years, right? But all it takes is a new idea and then you have to work hard at it. I have a feeling that the next paradigm shift will come from the quote unquote fringes but will be verified by academia. Yeah, academia will then say they knew all along, they're already trying to change the story about gold fusion."
    },
    {
      "end_time": 11010.145,
      "index": 398,
      "start_time": 10987.278,
      "text": " Yeah. Anyway, what I say to young people is don't give up hope, be enthusiastic. You're young, you have energy, a lot of energy, you have your life ahead of you. Don't let them crush you to death. See, this is the same issue with people who, let's say there's something that the majority of people who call themselves skeptics believe is false. So I have a feeling that if"
    },
    {
      "end_time": 11036.459,
      "index": 399,
      "start_time": 11010.503,
      "text": " Decades later it comes out, oh, it turns out those millions of people were correct and the skeptics were incorrect. The skeptics would still say, yeah, but we were correct because at the time there was no evidence. So technically we're still correct. And now look, we've changed our mind because of new evidence. So we're still following the rational train of thought. But there's the other view, which is that if you're so wrong and you had such a low problem, like you signed it with 0.0001%,"
    },
    {
      "end_time": 11062.756,
      "index": 400,
      "start_time": 11036.954,
      "text": " You know, one of the reasons Einstein was not a wonderful mathematician, but he had an instinct for where something funny was happening that suggested new physics. And I'm sure there are a lot of places out there"
    },
    {
      "end_time": 11091.067,
      "index": 401,
      "start_time": 11064.258,
      "text": " where one of them is the dark matter, right? But it's not enough to realize that there's a hole. You have to come up with a suggestion for an explanation, something new. And as Einstein said, there's no systematic way to go from experiment to theory. It's an act of leap of imagination. You've got to be a little crazy to do a paradigm shift because you have to go against what everybody believes is the case."
    },
    {
      "end_time": 11118.063,
      "index": 402,
      "start_time": 11091.92,
      "text": " And it may be that you're just crazy and you're wrong. But if, yeah, but if you're right, the mafia will say, oh, we knew all along that this was a good idea, right? We had the idea ourselves earlier. And almost every idea historically can be traced back to some other idea. So you can find a linear that leads you to a common branch. It's true. That's rewriting history. What is that called? The wig history, the history by the winners."
    },
    {
      "end_time": 11145.282,
      "index": 403,
      "start_time": 11118.268,
      "text": " Revisionist history? Revisionist history, that's right. We knew all along that cold fusion was right, and you can find it in the 20s, for example. And actually, I guess it will be a good sign if the establishment claims it, because that'll mean that cold fusion works, otherwise they wouldn't claim it. But still, I think it's dreadful what they did. Fleischman, who was a wonderful electrochemist, he had to resign his professorship."
    },
    {
      "end_time": 11175.043,
      "index": 404,
      "start_time": 11145.776,
      "text": " He and Pons had to flee the United States and go to Europe. And this could have saved the world if people had taken it seriously. But there was too much politics against it, of course. So the fact that the Japanese are taking this seriously. Well, first there was this wonderful physicist who found a very clean experiment producing helium. So that's got it. That's a nuclear signature. You know, heat is a messy thing to measure."
    },
    {
      "end_time": 11203.746,
      "index": 405,
      "start_time": 11175.913,
      "text": " Calimeter calorimeters, but there's also the fact that the Japanese have no oil and they tried Using nuclear reactors, but it was a catastrophe Yeah, I don't understand why they didn't tarnish nuclear energy as a whole whether it's fusion or fission Like that something fission didn't work and then they're like, well, let's try fusion, but that could also be explosive Now the green movement strangely enough in Sweden"
    },
    {
      "end_time": 11233.746,
      "index": 406,
      "start_time": 11204.548,
      "text": " was very unappreciative of lattice-enabled nuclear reactions. Why? I thought they would like it because you won't be burning fossil fuels anymore. No, they don't like it. They attacked it savagely. Why? Because if there's a wonderful source of nearly free energy that doesn't contaminate anything, then the human footprint on planet Earth will increase. And the green people in Sweden, they want the human population to go down. They want people"
    },
    {
      "end_time": 11259.206,
      "index": 407,
      "start_time": 11234.206,
      "text": " to have fewer factories, to have a smaller human footprint on planet Earth. So they were vicious in attacking Andrea Rossi. So what's driving in that case, at least purportedly, is not clean energy, but less population? Yeah, I think they prefer that the planet should"
    },
    {
      "end_time": 11286.254,
      "index": 408,
      "start_time": 11259.855,
      "text": " Go back to the way it looked before human beings were here, ideally, or as close to that as they can get. I think that's sort of the impulse, part of the impulse behind that. So it's true that if there's this wonderful source of nearly free energy that doesn't contaminate anything and it's not dangerous, doesn't produce anybody, that's going to transform the planet. And now it's going to help nations which are poor, right?"
    },
    {
      "end_time": 11314.377,
      "index": 409,
      "start_time": 11286.459,
      "text": " If you have a lot of energy, you can desalination, right? Which means you can get water from places where there's only salt water, for example. This will make a lot of changes and may mean that the human population will be able to increase more and take over more of the planet. That may be why Elon Musk thinks we have to have a colony on Mars, right? Just in case we mess up things here, there'll be some human beings left elsewhere. And he may be right."
    },
    {
      "end_time": 11340.811,
      "index": 410,
      "start_time": 11315.265,
      "text": " He's a pretty deep, he's a pretty deep, he's a pretty deep thinker. I'll say one message. I think the future is going to be radically different than any of us can imagine. Not just because political movements or geopolitical changes, but because of AI and also because of, I think LENR now is going to happen. It's going to happen because climate change is already becoming unbearable, right?"
    },
    {
      "end_time": 11367.619,
      "index": 411,
      "start_time": 11341.254,
      "text": " So all of a sudden there's going to be political backing. I see signs that that's already happening, that there's an attempt to portray LNR as something reasonable that should be explored rather than trash science the way it was portrayed for 30 years. So that's going to be a very different future from our current future. And also medical knowledge is increasing enormously with molecular biology and things like that."
    },
    {
      "end_time": 11385.06,
      "index": 412,
      "start_time": 11368.183,
      "text": " Professor, I have to get going."
    },
    {
      "end_time": 11412.858,
      "index": 413,
      "start_time": 11385.64,
      "text": " Thanks a lot. This was a lot of fun. Keep up the good work. Thank you. Bye bye."
    },
    {
      "end_time": 11425.606,
      "index": 414,
      "start_time": 11413.234,
      "text": " The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people."
    },
    {
      "end_time": 11455.469,
      "index": 415,
      "start_time": 11425.674,
      "text": " You should also know that there's a remarkably active Discord and subreddit for theories of everything where people explicate toes, disagree respectfully about theories and build as a community our own toes. Links to both are in the description. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well."
    },
    {
      "end_time": 11476.442,
      "index": 416,
      "start_time": 11455.811,
      "text": " Last but not least, you should know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in theories of everything and you'll find it. Often I gain from re-watching lectures and podcasts and I read that in the comments, hey, toll listeners also gain from replaying. So how about instead re-listening on those platforms?"
    },
    {
      "end_time": 11505.708,
      "index": 417,
      "start_time": 11476.442,
      "text": " iTunes, Spotify, Google Podcasts, whichever podcast catcher you use. If you'd like to support more conversations like this, then do consider visiting Patreon.com slash Kurt Jaimungal and donating with whatever you like. Again, it's support from the sponsors and you that allow me to work on toe full time. You get early access to ad free audio episodes there as well. For instance, this episode was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough."
    }
  ]
}
