Richard Borcherds (Fields Medalist) on the Monster Group, String Theory, Self Studying and Moonshine
March 17, 2021
1:40:36
Transcript
The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science they analyze.
They analyze culture, finance, economics, business, and international affairs across every region. I'm particularly liking their new Insider feature, which was just launched this month. It gives you, it gives me, front-row access to The Economist's internal editorial debates,
where senior editors argue through the news with world leaders and policymakers in twice-weekly long-format shows. Basically an extremely high-quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines. As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
Theoretically, it's absolutely wonderful. It's this wonderfully consistent theory with all sorts of surprising coincidences that all just happen to work out. It's behaving as if it were the ultimate theory of life, the universe and everything. Unfortunately, it just doesn't tie up with experiment yet.
Today we cover E8, quantum field theory, and Ed Witten with today's guest, Richard Borcherds, a professor of mathematics at the University of California, Berkeley, and a Fields Medalist. Just so you know, a Fields Medal is the mathematical equivalent of the Nobel Prize. This is part two of our conversation. The first part you can find by clicking the link in the description, and it covers string theory, the monstrous moonshine conjecture, and Richard's work schedule and habits. In this podcast, we delve even deeper into those topics.
In mathematics, there was a conjecture by John Conway suggesting that there's more than a superficial connection between the monster group and the j-function. This is what's called the monstrous moonshine conjecture, and this is in part what Richard Borcherds won the Fields Medal for.
In fact, there are two highly cited papers, one called "Monstrous Moonshine and Monstrous Lie Superalgebras," and the second called "Vertex Algebras, Kac-Moody Algebras, and the Monster." At some point in the next few months, I would love to tackle these papers and do an explanation video in the same vein as the Crash Course on Physics,
which you can see here (there's a thumbnail somewhere), and you can click the link in the description to see it. If you have a vote for which paper in particular, that is, the Lie superalgebra paper or the vertex operator algebra paper, then let me know in the comment section. My name is Curt Jaimungal. I have a background in mathematical physics, and this podcast, Theories of Everything, is dedicated to the exploration of theories of everything
from a theoretical physics perspective, as well as to exploring the role consciousness has with respect to the fundamental laws of nature. Each sponsor, as well as the patrons, improves the quality of the videos drastically: it improves the depth, it improves the frequency, and it goes toward paying the staff, for instance, someone who's editing this full-time right now, and an operations manager. In that vein, I want to thank today's sponsor, Brilliant. If you're familiar with TOE, you're familiar with Brilliant, but for those who don't know, Brilliant is a place where you go to learn math, science, and engineering
through these bite-sized interactive learning experiences. For example, and I keep saying this, I would like to do a podcast on information theory, particularly with Chiara Marletto, a student of David Deutsch who puts forward a theory of everything called constructor theory, which is heavily contingent on information theory. So I took their course on random variable distributions and knowledge and uncertainty.
It would be unnatural to define it in any other manner. Visit brilliant.org slash toe, that is T-O-E, to get 20% off the annual subscription, and I recommend that you don't stop before four lessons. I think you'll be greatly surprised at the ease with which you can now comprehend subjects you previously had a difficult time grokking. At some point, I'll also go through the courses and give a recommendation in order.
If you'd like to support the Theories of Everything podcast and help assist, that is, the improved video quality, the depth, the frequency of the podcast, and paying the staff, then do consider going to patreon.com slash CurtJaimungal. The link is on screen as well as in the description. There's a custom amount as well as already delineated tiers. Either way, your viewership is thank you enough. Enjoy this part two with Richard Borcherds.
Professor, since last we spoke, which was approximately two years ago, what's happened with you? What's new? Absolutely nothing exciting that hasn't happened to everybody else. I mean, I guess it's not news to hear we were having some fun with COVID. Other than that, not much has been going on. So what's new? What have you been working on? That's a secret.
The problem is that if I tell everybody, it will merely confirm that I've gone senile or insane or something, so I'm keeping quiet about it unless it works. Am I to surmise from that that you're working on some grand conjecture that's unsolved, and that if you were to say so, people would say that you shouldn't be tackling it, something like that? Something like that, yeah. It's probably going to fail, and whatever, I've got tenure, so it doesn't matter.
Great, great. Now, speaking of that, do you find that your colleagues who get tenure tend to not be as creative or not take as many risks? This is something I've heard colloquially in physics and math, but I'm not sure if, well, I've just heard this. I don't think there's a single good answer; it just varies so much from person to person. Yeah, there are some people who get tenure and promptly stop doing anything, and there are others who get tenure and use this to
you know, spend five or ten years working on a really hard problem, and there are some who get tenure and are really productive afterwards. Is what you're working on related to physics? Probably not. Okay. Is it possible to introduce the audience to what the monstrous moonshine conjecture is? I can give the very brief version.
So the classification of simple groups is the longest theorem in mathematics, 10,000 pages, and one of the simple groups it classifies is the monster, which is a really incredibly complicated simple group. I think people sometimes say the number of elements in it is about the number of atoms in the planet Jupiter.
and it lives in a space of dimension 196,883; in other words, it's rotations of a 196,883-dimensional object. Meanwhile, there's the theory of modular functions in mathematics, which has absolutely nothing to do with finite group theory, and there's this function called the elliptic modular function, and one of its coefficients is 196,884.
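Both numbers can be checked by machine. Below is a sketch (my addition, not from the transcript) that computes the first q-expansion coefficients of the elliptic modular function from the standard identity j = E4^3 / Delta, using plain integer power-series arithmetic:

```python
# Compute the first coefficients of j(q) = E4(q)^3 / Delta(q)
# and check McKay's observation: the q^1 coefficient is 196884,
# one more than 196883 = 47*59*71, the dimension of the monster's
# smallest faithful representation.
N = 6  # truncate all power series at q^N

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inverse(a):
    # power-series inverse; assumes a[0] == 1
    inv = [0] * N
    inv[0] = 1
    for k in range(1, N):
        inv[k] = -sum(a[j] * inv[k - j] for j in range(1, k + 1))
    return inv

def sigma3(m):
    return sum(d ** 3 for d in range(1, m + 1) if m % d == 0)

# Eisenstein series E4 = 1 + 240 * sum_{n>=1} sigma_3(n) q^n
E4 = [1] + [240 * sigma3(m) for m in range(1, N)]

# D = Delta / q = prod_{n>=1} (1 - q^n)^24
D = [1] + [0] * (N - 1)
for n in range(1, N):
    factor = [0] * N
    factor[0], factor[n] = 1, -1
    for _ in range(24):
        D = mul(D, factor)

# j = q^{-1} * c where c = E4^3 / D, so c[k] is the q^(k-1) coefficient of j
c = mul(mul(E4, E4), mul(E4, inverse(D)))
print(c[:3])                     # [1, 744, 196884]
assert c[2] - 1 == 47 * 59 * 71  # McKay: 196884 = 196883 + 1
```

The series identity is exact, so the truncation order only limits how many coefficients come out, not their correctness.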
And John McKay noticed that these two numbers differed by one, which seemed to be a weird coincidence. But when he told people about it, they basically dismissed it as being a random fluke, because finite groups have nothing to do with modular functions, no conceivable relation. And if you write down a lot of numbers, some of them are going to be pretty close just by coincidence. So everybody dismissed this as a fluke, except it turned out not to be because
The elliptic modular function has other coefficients and these turn out to be dimensions of other representations of that monster and so on. So Monstrous Moonshine is asking why does the monster simple group have anything to do with elliptic modular functions? What's the term moonshine about?
Moonshine was just a term invented by John Conway, who was really good at inventing cool, catchy terminology. In fact, the monster was originally supposed to be called the friendly giant, but the term monstrous moonshine was so catchy that the term monster took over from it.
So there's the monster group, then there's elliptic modular forms, and the monstrous moonshine conjecture is just a conjecture that says there's some relationship between the monster group and then these forms? Yeah, that's pretty much it. So the problem is to explain why the monster is related to modular forms or modular functions.
So in math, what does it mean to explain why? Do you mean to just give a proof that there is a relationship, like that number is not just a coincidence? Is that what is meant by explain why, or do you have to explain something else? Yeah, well, the problem is a proof is rather trivial. I mean, proving that 196,883 plus one is 196,884 can, I think, be left as an exercise for the reader. So explaining it is
It's a rather vague term. What you want is some sort of, you know, you want to find some sort of mathematical structure that maybe is acted on by the monster group and at the same time has the elliptic modular function coefficients appearing in it. And such a structure was found by Frenkel, Lepowsky, and Meurman. So they constructed
There are different groups. There's A_n, and then there's B_n, and then there are some exceptional ones like the E series, all the way up to E8.
Yeah, exactly. The people who named these were not very imaginative. And I think physicists call one of them SL. The point is that these correspond to something that physicists care about, except the E's; for at least the past few decades there's been some effort to find the physics of E8. Now, basically what I'm wondering is, is there a relationship between E8 and the monster group?
Yes, there is moonshine for the monster group. There are also some rather similar moonshine phenomena for the E8 group, and in fact you can find similar sorts of things going on for almost any of these groups of type A_n, B_n, and so on that you were mentioning. So some sort of moonshine phenomenon seems to happen for most but not all of the
known simple groups. There are still a handful of groups which we don't know of any very convincing moonshine. Can you explain what E8 is? And imagine that you're talking to upper-year undergrad slash first, second-year graduates. I can try. So the usual Lie groups you come across first are things like orthogonal groups, so just rotations in three-dimensional space.
and you can do something very similar in four or five dimensions, so you can find orthogonal groups of rotations in any dimension, and that gives you one series of groups, and they're sort of bounded in some sense; the technical term is compact. And mathematicians like Cartan and Killing managed to classify all the compact
Lie groups about a hundred years ago, and as well as the orthogonal groups you get other things like unitary groups and symplectic groups, and then there are five funny ones left over, which are called G2, F4, E6, E7, and E8, and it's a bit difficult to explain where they come from. I mean, orthogonal groups are kind of easy; that's just rotations. Groups like G2,
It's quite tricky to find something they're the symmetries of. So G2 is the symmetries of something called the Cayley numbers, and E8 is a bit odd because the only thing it seems to be the symmetries of is E8 itself. There doesn't seem to be anything simpler than E8 that E8 is the symmetries of, which makes it really difficult to understand, because you can't reduce it to something smaller.
So the group of rotations in three dimensions is just a three-dimensional group, which is not too bad, whereas E8 is 248 dimensions, which means if you want to write its elements down explicitly, you need matrices that are 248 by 248, which is rather too much to do by hand. Is it possible then to develop some visual intuition as to what E8 is?
You can try. I mean, mathematicians have come up with all sorts of ways of drawing pictures of groups by using things called root systems and Dynkin diagrams. So while E8 has dimension 248, which is really too much to handle, it has something called a rank, which is only eight, which is much easier to handle. I mean, you can almost draw a picture of that. In fact, mathematicians have this way of drawing things called Dynkin diagrams, where you
draw a little graph with a point for each unit of rank, so to speak. So for E8 the graph only has eight nodes, and you can actually draw that. So what is the application of E8 to physics? That's a very controversial question. First of all, people have tried coming up with grand unified theories where you
extend the group of the standard model to some bigger group and these are theoretically very compelling and natural but unfortunately none of them actually seem to work. Sort of very frustrating that they ought to work but don't. The other thing you can do is mess around with string theory and the E8 group turns up quite a lot very naturally in string theory and super string theory but
Unfortunately we have the same problem that string theory and supersymmetry really ought to
explain physics but no one has actually managed to do so in a convincing way yet. Why do you say that it ought to? So why do you say that it looks so convincing but then it doesn't? Theoretically it's absolutely wonderful. It's this wonderfully consistent theory with all sorts of surprising coincidences that all just happen to work out and it sort of very much looks as if it is
It's behaving as if it were the ultimate theory of life, the universe and everything. Unfortunately, it just doesn't tie up with experiment yet. I mean, it sort of predicts all sorts of supersymmetric particles and things like that, that just haven't appeared in experiments. It's incredibly frustrating.
There are different flavors of string theory; one of them is called SO(32), and another is called E8 x E8. If E8 wasn't complex enough, then there's E8 x E8, which is more complicated. Why is it E8 x E8? Why does that have something to do with string theory, but not E8 alone? It's something to do with... you need something 16-dimensional for some of the string theories. Roughly speaking, there's interesting stuff going on in 26 dimensions with string theory,
and there's interesting stuff going on in 10 dimensions with superstring theory, and the difference between these numbers is 16. And my understanding is that means you want a group of rank 16 to sort of splice them together. And the two natural ways of doing that are the groups you mentioned: E8 times E8, which has rank 16, and SO(32), which also has rank 16.
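The rank bookkeeping here can be sketched numerically. The root counts below are standard facts about E8 and SO(32) (whose root system is D16), added as background rather than read from the transcript:

```python
# Rank and root-count bookkeeping behind the two heterotic gauge groups.
from math import comb

bosonic_dim = 26      # bosonic string
superstring_dim = 10  # superstring
rank_needed = bosonic_dim - superstring_dim
assert rank_needed == 16

# E8 has 240 roots: 4*C(8,2) = 112 of the form +-e_i +- e_j, plus
# 2^7 = 128 half-integer roots with an even number of minus signs.
e8_roots = 4 * comb(8, 2) + 2 ** 7
assert e8_roots == 240
assert e8_roots + 8 == 248          # dim E8 = roots + rank

# SO(32) has root system D16: roots +-e_i +- e_j in 16 dimensions.
so32_roots = 4 * comb(16, 2)
assert so32_roots == 480 == 2 * e8_roots       # same number of roots
assert so32_roots + 16 == 2 * (e8_roots + 8)   # both have dimension 496
```

Both candidate groups end up with rank 16 and the same dimension, 496, which is part of why exactly these two show up.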
When we last spoke, I recall you telling me quantum field theory gives you a headache, and so you abandoned it. Why? Why does it give you a headache? I just couldn't figure it out. I spent several years trying to understand it and just couldn't. I mean, first of all, there's this problem with the philosophy of quantum mechanics that everybody knows, you know, there's all these paradoxes where
Quantum mechanics gives a fantastically good explanation of what's going on, as long as you're only looking at two or three electrons or something. But as soon as you start looking at large objects like cats or chairs or whatever, it gives results that don't seem to make sense. I mean, a cat is not a superposition of several different cats in different places.
And I've never been able to figure this out, and as far as I know, nobody else has ever been able to figure it out either. So you mean to say that the physical interpretation as to what the heck is going on, what these symbols are representing, that's what you had trouble with? That's one of the many things I had trouble with. The other problem is quantum field theory involves something called Feynman integration, where you're integrating over an infinite-dimensional space. And the trouble is,
if you look at this too closely, it doesn't actually make sense. No one has ever been able to give a really satisfactory definition of what a Feynman integral is for four-dimensional quantum field theory. I mean, people have managed to do this in two dimensions and sort of a little bit in three dimensions, but in four dimensions, no one has ever managed to figure out how to make rigorous sense of this.
Physicists don't care, because you can do it in a sort of unrigorous way and you get results that agree with experiment to unbelievable precision. You know, you're talking about 10 or 12 significant figures of precision, which is really convincing that you're doing something right.
If you look at the Feynman integral, first you define it by integrating over something n times, and then you take the limit as n approaches infinity, and you call that the actual Feynman integral. Anytime you see the Feynman integral, just remember that's a stand-in for the limit as n approaches infinity, and any calculations that we do with computers are always with a finite-dimensional n, so there is no problem; there's only a problem if you try to extend it to infinity. So what do you say to that?
That's more or less correct. We can define finite-dimensional approximations to the Feynman integral, but as you say, taking the limit as something tends to infinity is something we don't know how to do. The trouble is, if you don't take a limit as things go to infinity, you run into other problems. For instance, you might find that
your theory isn't invariant under the symmetries of special relativity or something like that. So you can avoid one problem by not taking a limit, but then other problems turn up instead.
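The "define at finite n, then take the limit" pattern has a simple finite-dimensional analogue in the Lie-Trotter product formula, the same limiting device used to construct path integrals from time slices. A toy sketch with 2x2 matrices (my illustration, not from the transcript):

```python
# Lie-Trotter: exp(A+B) = lim_{n->inf} (exp(A/n) exp(B/n))^n.
# Pure-Python 2x2 matrices as tuples of rows; no dependencies.

def mat_mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def mat_add(X, Y):
    return tuple(tuple(X[i][j] + Y[i][j] for j in range(2)) for i in range(2))

def mat_scale(c, X):
    return tuple(tuple(c * X[i][j] for j in range(2)) for i in range(2))

def expm(X, terms=30):
    # matrix exponential by Taylor series: sum_k X^k / k!
    result = ((1.0, 0.0), (0.0, 1.0))
    term = ((1.0, 0.0), (0.0, 1.0))
    for k in range(1, terms):
        term = mat_scale(1.0 / k, mat_mul(term, X))
        result = mat_add(result, term)
    return result

def mat_pow(X, n):
    result = ((1.0, 0.0), (0.0, 1.0))
    for _ in range(n):
        result = mat_mul(result, X)
    return result

A = ((0.0, 1.0), (0.0, 0.0))  # A and B do not commute,
B = ((0.0, 0.0), (1.0, 0.0))  # so exp(A)exp(B) != exp(A+B)

exact = expm(mat_add(A, B))
errors = []
for n in (1, 10, 100, 1000):
    step = mat_mul(expm(mat_scale(1.0 / n, A)), expm(mat_scale(1.0 / n, B)))
    approx = mat_pow(step, n)
    errors.append(max(abs(approx[i][j] - exact[i][j])
                      for i in range(2) for j in range(2)))

# The error shrinks roughly like 1/n: each finite n is well defined,
# and the object of interest is the n -> infinity limit.
assert errors[0] > errors[1] > errors[2] > errors[3]
assert errors[3] < 1e-2
```

Here the limit exists and is harmless; Borcherds' point is that for the four-dimensional Feynman integral nobody knows how to control the analogous limit rigorously.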
Do you have any intuition as to why it's difficult to combine general relativity with quantum mechanics? Do you see the problem being merely a mathematical one, a physical one, a lack of experimental data? Do you have an intuition? What the heck is underlying this problem? I have no idea. I mean, I thought about it a little bit and got nowhere. I'm going to have to pass on that due to lack of competence on my part.
How about this? What have you heard that's most convincing while you're in the field? About the relation between quantum mechanics and general relativity. Well, the only thing that looks at all promising so far is string theory or super string theory, which as I said, unfortunately, although it's the most promising, it's still not really all that promising. I mean, as I said, it's still theoretically, it looks very nice. If you try and connect it with experiment,
Some people have said that that's a problem, because there's a whole field of people working with a supposed connection to physics, and they're getting grant money, and they're also largely in charge of what gets published and doesn't get published in the field of high-energy physics, and it's unproven,
and it's so theoretical, and it also conflicts with reality, because they require anti-de Sitter space or supersymmetry, which hasn't been observed. Do you see that as a problem, like a sociological problem? Well, yeah, I mean, there does seem to be a big fuss going on in physics between people who do string theory and people who don't. I sort of don't really know enough about what's going on to comment very much.
Is there a similar problem happening in mathematics? Like, is there a similar divide somewhere? Mathematics, we very rarely get such divides. Almost all the time, everybody agrees on whether something is correct or not. I mean, the only exception I can think of is a few years ago, there was this possible proof of the ABC conjecture. As far as I know, people still haven't come to an agreement on whether this proof is correct or not.
In math, sometimes you get a professor who's an intuitionist who says you can't have proofs by contradiction and you must construct.
So is that a similar divide, or is that just, well, them being persnickety? No, not at all. Actually, the difference between intuitionism and classical mathematics is really very, very small. Roughly speaking, you can turn any mathematical argument into an intuitionistic argument just by putting a double negation in front of something. You say...
Why do you say roughly? Where's the outlying case? Where's the nuance in that?
Well, you can get into other arguments about whether power sets, whether you can quantify over the power set of an infinite set or something like that. I mean, it's not quite true that you can turn any correct mathematical arguments into an intuitionistic one by putting a double negation in front of it. But if you just confine yourself to simple statements about the integers, it's pretty much true.
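Borcherds' double-negation remark can be stated precisely for propositional logic as Glivenko's theorem, and the caveat he mentions is exactly that this simple form fails beyond propositional logic. A sketch in notation (my addition, not from the transcript):

```latex
% Glivenko's theorem (propositional logic): classical provability
% is intuitionistic provability of the double negation.
\vdash_{\mathrm{cl}} \varphi
  \quad\Longleftrightarrow\quad
\vdash_{\mathrm{int}} \neg\neg\varphi
% Example: excluded middle is not intuitionistically provable,
% but its double negation is.
\nvdash_{\mathrm{int}} \; p \lor \neg p,
  \qquad
\vdash_{\mathrm{int}} \; \neg\neg\,(p \lor \neg p)
% For predicate logic (quantifiers over infinite domains, power sets)
% Glivenko fails, and one needs the finer Gödel-Gentzen negative
% translation applied to each subformula.
```

This is why "simple statements about the integers" behave well while quantification over power sets does not.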
I have a question from Norman Wildberger, who's a professor of math. I've always challenged analysts to concretely and completely evaluate the sum of three real numbers, pi, e, and the square root of 2, and I haven't received what I feel is a satisfactory answer. Is not our collective inability to come up with a precise value for this explicit computation a dagger through the claim that we have a proper theory of arithmetic with real numbers?
I don't understand what would count as an answer to that question. I mean, we can add them up perfectly well in the sense that, you know, you can write down an explicit computer program that will give you a rational approximation to the sum with any precision you want. I mean, we can't tell you whether the number is rational or not, for example; we expect it's irrational, but we can't prove it. But there's
No problem in calculating this number to any precision you like. I'm just not sure what else you would mean by knowing what the sum is. Have you heard much from other mathematicians who say only what we can compute is real in a sense and even the real numbers themselves are thus not real? Let's stop talking about irrational numbers and just deal with rational numbers and try to build mathematics from that.
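The "explicit computer program" Borcherds alludes to is easy to sketch. Here is one possible version using only Python's standard decimal module, with Machin's formula for pi; the 50-digit precision target is an arbitrary choice, and any other precision works the same way:

```python
# Rational/decimal approximation of pi + e + sqrt(2) to any precision.
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant digits

def arctan_inv(x):
    # arctan(1/x) for integer x > 1, by the alternating Taylor series
    term = Decimal(1) / x
    total = term
    k = 0
    x2 = x * x
    while abs(term) > Decimal(10) ** -45:
        term = -term / x2
        k += 1
        total += term / (2 * k + 1)
    return total

# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)

# e = sum_{k>=0} 1/k!
e, term, k = Decimal(0), Decimal(1), 0
while term > Decimal(10) ** -55:
    e += term
    k += 1
    term /= k

total = pi + e + Decimal(2).sqrt()
print(str(total)[:14])  # 7.274088044421
```

Raising `getcontext().prec` (and the series cutoffs) gives as many correct digits as you like, which is exactly the sense in which the sum is computable.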
Yeah, sure. This is a perfectly valid branch of mathematics. It's usually called, you know, constructive mathematics or something like that. And it's very closely related to intuitionism that you mentioned earlier. And, you know, you want to eliminate all talk of things that are too infinite, like power sets of integers and so on that don't seem to make sense. And
Probably most mathematics could sort of be turned into constructive mathematics if you wanted. The trouble is doing so just becomes very clumsy. I mean it's hard enough to do mathematics if you allow yourself to use infinite sets and making things even more difficult doesn't appeal to most people. I mean I guess it's sort of like
Doing constructive mathematics is a bit like trying to climb Everest without oxygen. I mean, you can say maybe climbing Everest without oxygen is better than climbing it with oxygen, but most people have quite enough trouble climbing it with oxygen. Thank you very much. So I'd say most mathematicians don't bother being constructive because it just makes things unnecessarily difficult. But I think, again, most mathematicians would agree that if you have a constructive proof, it is in some sense better than a non-constructive one.
So there's constructive mathematics and then that's in contrast to what is called classical mathematics? Probably. I mean, I don't think people usually distinguish it too much. I mean, quite often what happens is we will first find a non-constructive proof of something and then later on try and improve it to a constructive proof. Has there ever been a case where there's a theorem in, let's call it classical mechanics, sorry, classical mathematics?
And so has there ever been a case where you prove something in classical mathematics and then it turns out to be false in constructive mathematics? No. Is there the possibility that that can exist, or, like you mentioned, is it very unlikely? Unless something really bizarre happens, like the foundations of mathematics turning out to be unexpectedly inconsistent. But I think this is in relation to what I said, that
Very roughly, constructive or intuitionistic mathematics is just classical mathematics, except you say things are not disprovable rather than saying they are provable. In particular, if intuitionistic mathematics isn't contradictory, then you expect classical mathematics to be non-contradictory. Well, very roughly.
And vice versa, so it would go the other way around. Yeah, and anything provable in intuitionistic mathematics is more or less automatically provable in classical mathematics. Again, with some slight caveats because the definitions are slightly different. So then in the analogy with the Mount Everest, it would be that, hey, you can climb Mount Everest without oxygen, but we can still get to all the same places and all the places you can get, I can get and vice versa.
Yeah, exactly. I mean, if you can get somewhere without oxygen, then you can get there with oxygen. It's just easier. Is it all right if I ask you a couple more physics questions? I know it's been a while since you've been in there, but how about this? So Feynman called renormalization a dopey process. There's been much work on renormalization since Feynman's day. But regardless, I'm curious if you agree with that. Is there something inherently incorrect about renormalization physically or mathematically?
No, mathematically it's just a large area. There are some parts of renormalization which are well understood. It's like if you're trying to focus in with a microscope on a small area; then every time you double the magnification you might have to adjust the focus or something,
and one version of renormalization is just doing that systematically. And then there are some cases of renormalization where we don't really know how to do this. So this is the problem you run into in some quantum field theories: in high dimensions you seem to have to renormalize things repeatedly, meaning, you know, you sort of adjust the focus or something, but we don't know how to prove that you can always do that.
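The microscope analogy has an exactly solvable toy example the transcript doesn't mention: decimating every second spin of the one-dimensional Ising chain renormalizes the coupling K to atanh(tanh^2 K) at each doubling of the length scale. A sketch of "adjusting the focus systematically":

```python
# One renormalization step for the 1D Ising chain: integrating out
# every second spin leaves the remaining spins with a new coupling
# K' = atanh(tanh(K)^2). Iterating = repeatedly "adjusting the focus".
import math

def decimate(K):
    return math.atanh(math.tanh(K) ** 2)

K = 1.0
for step in range(6):
    print(f"length scale x{2 ** step}: K = {K:.6f}")
    K = decimate(K)

# The coupling flows to 0: the 1D chain is disordered at any T > 0.
assert 0 < K < 0.01
```

In this toy case the renormalization step can be done exactly and provably forever; the hard quantum field theory cases are ones where nobody can prove the analogous step always goes through.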
So apologies that I jump around so much, but you give extremely concise answers. Unlike most guests, you give an answer such that I no longer have a follow-up. So I'm like, all right, I'll just move on to the next one then. So the next question is, why are symmetries in physics described by groups instead of a more general groupoid?
Yeah, I'm slightly baffled by this. Well, I mean, there are some things that are described by groupoids; like, if you're taking the fundamental group of something, it's really a fundamental groupoid. But I guess maybe one reason why they're mostly described by groups is that a group is pretty much defined to be the collection of symmetries of something. So groupoids are a
little bit more general. I mean, a groupoid would be something like the symmetries of a collection of different objects, very roughly speaking. So if you have a cube, you would just talk about the symmetries of a cube, and there are 24 or 48 symmetries. But maybe if you've got a cube and a dodecahedron, then you've got the symmetries of a dodecahedron and the symmetries of a cube, and you could think of
You know, you would think about having a groupoid, which is just a sort of union of the symmetries of a cube and the symmetries of a dodecahedron if you wanted. So you do sort of get groupoids, but in a way that's so trivial that nobody usually bothers with them. Speaking of groups, going back to E8, would it be possible with the pen and paper that you have to show again to an upper undergraduate or lower year graduate?
how one constructs the E8 lattice, or how one constructs E8? So firstly, what's the difference between the E8 lattice and the E8 group? Well, the E8 lattice is just a lattice in eight dimensions. I mean, do you want me to use pencil and paper and show how to actually construct that? So there's the Leech lattice. Yeah. Is that the same as the E8 lattice?
The Leech lattice is related in that the E8 lattice lives in eight dimensions, the Leech lattice lives in 24 dimensions, and it's constructed sort of like the E8 lattice, only more so. What do you mean? Well, you start off by copying the construction of the E8 lattice and then you do a little twiddly bit at the end to make it slightly more complicated. All right, I see. So it's E8 plus a bit. Yeah.
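One standard construction of the E8 lattice, along the lines being asked about here (this particular coordinate description is my addition, not read from the transcript): take all vectors in eight dimensions whose coordinates are either all integers or all half-odd-integers, with an even coordinate sum. Working with doubled vectors w = 2v keeps everything in exact integer arithmetic:

```python
# Enumerate the 240 shortest nonzero vectors ("roots") of the E8 lattice.
from itertools import product

def doubled_vector_in_e8(w):
    # w = 2v for v in the E8 lattice: coordinates all even or all odd,
    # and sum(v) even, i.e. sum(w) divisible by 4.
    all_even = all(x % 2 == 0 for x in w)
    all_odd = all(x % 2 == 1 for x in w)
    return (all_even or all_odd) and sum(w) % 4 == 0

# norm(v) == 2 means the squares of w = 2v sum to 8
roots = [w for w in product(range(-2, 3), repeat=8)
         if sum(x * x for x in w) == 8 and doubled_vector_in_e8(w)]

print(len(roots))  # 240: 112 of type +-e_i +- e_j, 128 half-integer ones
# dim of the E8 group = number of roots + rank = 240 + 8 = 248
assert len(roots) + 8 == 248
```

The Leech lattice construction Borcherds gestures at starts from 24-dimensional analogues of this lattice and then applies the extra "twiddly bit" he mentions.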
Speaking on the topic of groupoids, some have wondered if there are any interesting applications of Lie groupoids and Lie algebroids, specifically with regard to differential geometry or physics. Can you speak to some applications? I'm going to pass on that, because I don't even know what a Lie algebroid is, and I can't offhand think of any useful applications of Lie groupoids.
I mean, the problem is groupoids aren't actually that far from groups. A groupoid is sort of like a group, but there's a little bit of extra bookkeeping, and most of the time you can just do the bookkeeping by hand if you want, so you can get away without using groupoids. What are your thoughts on this trend in math toward categorification? So, for instance, Hopf algebras, topological field theory, and the whole enterprise of nLab.
I'm afraid I don't actually know all that much about it. I mean, the trouble is whenever I try and learn about it, my head starts spinning. I mean, people doing categorification, you start off with categories, and I can cope with categories because I know what they look like, and then they start using things called 2-categories, which make me rather nervous, because a
2-category is like a category except the collection of morphisms between two objects is also a category, and this is already getting confusing. And then you get to things called 3-categories and 4-categories, and you go all the way up to infinity categories, and I'm frankly already lost at 2-categories. And all my attempts at learning the definition of an infinity category, and I've tried several times, the next day everything has just gone clean out of my mind. So
I don't know, you probably have to start learning this stuff when you're five years old or something in order to be able to do it.
To you, when you say, hey, I've tried to learn infinity categories and it's not sticking or it doesn't make sense for whatever reason, what does that learning look like? Are you just referring to I'm reading books? Are you referring to I'm also talking to people who are in the field and I'm asking them this question? Like, what does it mean when you say you've tried? Mostly looking at books and reading the definition and looking at some examples. And I sort of understand the definition perfectly well for five minutes after reading it. But the next day I
Okay, so you're able to understand it, it just doesn't stay. It just doesn't stick for some reason. My mind just has a sort of aversion to these things. I see. I think the problem is I don't actually have a use for it. If I had a use for higher category theory, I'd probably be much more motivated to actually learn them.
So there's an infinity category, and I assume that's just the cardinality of the natural numbers. Are there categories of higher cardinality? I don't think so, although I wouldn't put it past someone to try inventing them. But in all the ones I've seen, infinity is really, as you say, just the infinity of the natural numbers.
One time, I think I was watching one of your lectures, or maybe from a conversation before, and you said that I have no idea why the sporadic groups are there. And then I was wondering, like, why do the sporadic groups need an explanation? Like, what would an explanation look like? This goes back to our earlier question of the moonshine conjecture, like, what does it mean to explain? To me, like, why is it not as arbitrary as saying, I don't know why the number 7300 exists? Or why are there five platonic solids?
Ah, well, we would like to understand them. So if you take the compact Lie groups, we have a classification of them that you mentioned earlier, you know, there's A and B and C and
and things like that. And then we've got a very simple explanation of why this list turns up, that they more or less correspond to finite reflection groups, and we know how to classify finite reflection groups and understand them very easily. And this allows us to give a single uniform construction of all the compact Lie groups.
But there's nothing like that for the sporadic groups. I mean, we can sort of copy what we did for compact groups, and from that we find most of the finite simple groups. We find all the ones of Lie type, but then there are these 26 left over which just don't fit into this pattern, and it's very frustrating, because what we would like is some uniform way to construct all the finite simple groups.
And what really worries me is that maybe we already have the correct way to understand them, which is to go through this 20,000-page proof of the classification, and we're simply too stupid to understand why this is the correct explanation. Is there something special about lower dimensions, like one, two, three, and four in particular? Yeah,
everything goes wrong in lower dimensions. So in particular, what happens is that in low dimensions two groups that ought to be different turn out to be really the same group. So for instance, if you take something called the projective special linear group, you can, in two dimensions, take that over the field with four elements or the field with five elements, and these are usually totally different groups.
But for the special linear group in two dimensions these happen to be the same; they're both the simple group of order 60. And in low dimensions the whole area of finite simple groups is littered with these accidental coincidences where two seemingly different groups turn out to be the same, and this seems to be connected with the sporadic groups, because
sporadic groups are often associated with two small simple groups unexpectedly turning out to be isomorphic to each other.
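The order-60 coincidence described above is easy to check numerically. Here is a minimal sketch using the standard order formula for PSL(2, q); the formula itself is textbook group theory, not something stated in the conversation:

```python
from math import factorial, gcd

def psl2_order(q):
    """Order of PSL(2, q): q(q - 1)(q + 1) / gcd(2, q - 1)."""
    return q * (q - 1) * (q + 1) // gcd(2, q - 1)

# The accidental coincidence: PSL(2, 4) and PSL(2, 5) have the same order...
print(psl2_order(4), psl2_order(5))   # 60 60
# ...which is also the order of the alternating group A5 = 5!/2
print(factorial(5) // 2)              # 60
```

All three groups are in fact isomorphic, which is exactly the kind of low-dimensional accident Borcherds is describing; the order count is just the easiest symptom of it to verify.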
Is there more than just group theory where there's something special about the lower dimensions? So for instance, with the Poincare conjecture, it was proved first for the higher dimensions, and then dimension four was proved, and then three turned out to be exceptionally difficult. Is this a trend, where the lower dimensions seem to be more difficult when proving some famous theorem or some relationship?
Well, it's certainly true, as you say, that in many other mathematical subjects, in the small dimensions up to about four or five, there are all sorts of weird things going on that don't happen in higher dimensions. And the case you mentioned is very unusual in that the low-dimensional case is actually harder than the high-dimensional cases. Rather obviously, it's nearly always the high-dimensional cases that are harder. The Poincare conjecture is this weird exception.
Can you list some examples where the harder case is the larger dimensions? Well, almost anything other than the Poincare conjecture. So, for instance, in algebraic geometry, if you want to classify varieties in n dimensions, we sort of understand curves, the one-dimensional things, reasonably well; surfaces we sort of know a fair amount about...
Okay, so then that's just an outlier, the Poincare Conjecture, because I was thinking if there are more coincidences in the lower dimensions, then that should give you more play-doh to play with in your proof. It should be easier in the lower dimensions. Yeah, most of the time it is easier.
In geometry, there's this weird problem in low dimensions in that there isn't enough room to untie complications. I mean, for instance, if you've got a knot in three dimensions, it just stays a complicated knot. If you move to four dimensions, you can undo it by moving the threads into the fourth dimension. And this is partly why
Three and four dimensions are particularly difficult. You don't have enough room to...
But I guess in one and two dimensions, there's not enough room to have tangles and knots. And in high dimensions, five dimensions and above, there's so much room that if you've got a tangle, you can just undo it. But three and four dimensions are particularly difficult because there's enough room to have tangles, but not enough room to undo the tangles once you've got them. We're going to jump around a bit here. This comes from the audience. Which results of Grothendieck's do you find particularly astonishing?
Well, the funny thing about Grothendieck is he mostly didn't prove single theorems. I mean, he did sometimes. The Grothendieck-Riemann-Roch theorem is a pretty stunning theorem by anybody's standards. But what he did most was sort of develop these huge theories rather than prove individual hard theorems. I mean, he said this himself explicitly that
You know, you kind of crack hard problems just by... He had this metaphor of a rising sea, where the islands just sort of slowly disappear as the sea rises. Grothendieck said that? He said something like that, I think. I can't remember exactly what. So he came up with all these fantastic tools like étale cohomology and
and schemes and so on. For each of these you know he wrote thousands and thousands of pages just working out all their basic properties and somehow when you finish doing that you sometimes just suddenly find that various problems have been solved. Is he one of the greatest mathematicians? Yes. What makes a great mathematician?
It's hard to give a precise definition of that. How about to you? Forget about, it's not an objective statement, just to yourself. Well, the ultimate thing is just proving a very hard problem that everyone's been trying to prove for years, like Perelman proving the Poincare conjecture or Wiles proving Fermat's Last Theorem. So that's the really clear-cut case where, you know, there are these problems that have been open for
decades or centuries that everyone's trying to prove and, you know, finally proving them is a good indication. But, as I was saying, Grothendieck is a little bit different, because what he was mainly doing was developing the tools, developing the tools used for solving these problems. So, Deligne's proof of the Riemann hypothesis for finite fields and Wiles's work on Fermat's Last Theorem were both
They made very heavy use of all the ideas that Grothendieck introduced. So you mentioned Perelman. If you were able to speak to Perelman, what question would you ask him? Or a set of questions? Well, the problem is, I mean, he's working in
low-dimensional differential geometry, which I simply don't really know very much about, so I wouldn't waste his time by asking him questions. I'd have to learn a lot of differential geometry before I got to the point at which I could ask him a sensible question. What question would you ask Grothendieck? Again, I sort of don't feel worthy of asking him questions. He's just
To be honest, Professor, I don't feel worthy of asking you questions. Well, I don't know, Grothendieck is... I mean... And the trouble is, any question I ask Grothendieck is probably already answered somewhere in his writings. So really, I should read, I should learn more of his writings before asking technical questions about them. Okay, then how about this?
Which mathematician that you cannot speak to, for whatever reason, they speak a different language or they've gone missing or they're no longer with us, what mathematician would you like to ask a question to and what question would that be? Well, I think the same answer as before. I mean, I don't really usually have particular questions I want to ask mathematicians about.
The trouble is when you're learning the mathematics that somebody like Grothendieck is doing, you don't learn Grothendieck's mathematics by asking a question or two or three questions or something. You have to spend years reading his articles and papers and so on. Single questions simply don't make very much difference. You mentioned the Riemann-Roch theorem earlier. Can you explain what that is to the audience and then what the Grothendieck-Riemann-Roch theorem is?
Well, yeah, so the Riemann-Roch theorem would be, well, the simplest case of it would be suppose you've got the complex numbers and you want to know how many rational functions can you find with zeros at certain points and poles at various other points.
And that particular question is rather trivial to solve, since rational functions on the complex plane are so easy and well understood. Then you can ask the same question for a more complicated algebraic curve. So you can say, suppose I've got an algebraic curve, like a cubic curve, and I want to know how many functions have zeros at certain points and maybe poles at other points. And the Riemann-Roch theorem sort of answers that question. Well,
yeah, it gives you about as good an answer to that question as you could want. And the version Grothendieck and Hirzebruch proved was generalizing this from one-dimensional curves to higher-dimensional algebraic varieties.
So again, you can think of it as being, very roughly, if you've got a high-dimensional algebraic variety, you want to know how many functions have zeros at certain places and poles at other places, and the Grothendieck-Riemann-Roch theorem will give you information about this. An algebraic variety is a generalization of a surface or a manifold? Pretty much, yes. You write down some equations and
The set of common zeros of these equations is just an algebraic variety. So the simplest example, you would just write down one equation, say x to the n plus y to the n equals z to the n, and that will give you a two dimensional variety in a three dimensional space.
When learning differential geometry, there's this emphasis on intrinsic geometry, the intrinsic approach, where you don't embed it into a higher dimensional space and write an equation for it. So what I'm hearing about algebraic varieties is you write the equation. Does that mean that there's something extrinsic or you choose a basis? No, that was the old way of doing algebraic geometry in the 19th century, where you would embed the variety into Euclidean space or something like that.
Nowadays, you don't do that. As you say, just as in differential geometry, you try and do intrinsic geometry, where you study a variety without embedding it into space. That's just a little bit more abstract and harder to think about. Speaking of what's abstract and difficult to think about, another quote from you the last time we spoke was, Ed Witten is terrifying. So I imagine you mean in terms of his ability or what he knows or his work ethic or a combination of it. But I want to ask you, what did you mean,
if you can recall, like, why is Ed Witten a terrifying physicist slash mathematician? Well, he just seems to produce, I mean, he's just so much more productive than me, you know, he produces this endless stream of papers, each of which is 50 or 100 pages long. And many of them have incredible new ideas. And then there were the Seiberg-Witten invariants, for example, where he
used some sort of weird black magic from physics to come up with these new mathematical invariants for manifolds that mathematicians hadn't noticed, and that wiped out a lot of hard mathematical problems. And that paper on the Seiberg-Witten invariants was a huge advance in mathematics, and it was just a sort of tiny incidental spin-off from something
something else he was working on. And this is something that he's repeated multiple times, so it's not just luck? He's repeated it multiple times. He came up with the Witten invariants of three-manifolds and he's done the Seiberg-Witten invariants of four-manifolds, and, you know, he's not even working in that area. He's working on physics, and as a sort of
incidental minor spin-off, without really trying, he's wiping out these major problems in mathematics. Have you ever had the opportunity to work with Witten? No.
There's some stories about John von Neumann where people were working on some equation or some problem, then he comes in the room, just overhearing, and he says, oh, it's this. And then they're thinking, how the heck did you solve it? And he said, oh, I either solved it months ago or years ago, or I just came up with the solution now. And I'm curious if you have some similar story about a mathematician, could be Witten, could be someone else. I had that happen once with Simon Norton.
I was discussing some problem about finite fields with somebody else in the common room. And, you know, we'd spent 10 minutes or so and weren't really quite sure how to do it. And Simon Norton happened to be walking past and I asked him as he was walking past and basically without breaking step, he told us the answer to it and carried on out of the room. That was a kind of a bit humiliating. And you were a professor at this time or you were undergrads together?
Now, speaking of great mathematicians, what is it about John Conway that made him so great? I mean, one of the things that really impressed me about him is he seemed to make really major contributions in a whole lot of completely unrelated fields. I mean, he worked
He found these new sporadic groups, which, you know, only a handful of people have ever found one. He did a lot of work in knot theory; the Conway polynomial was named after him. He more or less invented a large area of game theory related to surreal numbers.
And there were also several other things he worked on that I don't actually know anything about. I mean, I have this vague idea he also worked on finite automata or something like that. So he was working in several different, almost completely unrelated fields. Yeah, he even did things in set theory. So there's this weird problem related to the axiom of choice for finite sets that he applied the classification of finite groups to, for example.
So it's really quite unusual for someone to be working in what must be almost half a dozen unrelated areas of mathematics. Is that something that you envy or you just respect or you admire? Like, do you wish you could do that, or you don't care, you're happy with what you're doing, you're just in awe that other people can do that? Well, it's just kind of awesome that someone can excel at several different things.
There's something called geometric Langlands, and I've heard this expressed as a unifying theory of math in the same way that there's an attempt to find a unified theory of physics. And then there's also local Langlands. So firstly, I want to know if you can explain to the audience, please, what is geometric Langlands and its relationship to a potential unified theory of math? The answer to that is very simple. No, I can't explain it. The Langlands program is something you can spend your entire life trying to figure out
what it is. And I spent months or years trying to figure out what's going on. I would say I still have no idea what is going on. But very roughly speaking, it says that there's a very close connection between modular forms and the Galois group of the rational numbers.
So this is a bit like moonshine. Moonshine said there was this bizarre connection between finite simple groups and modular forms. Langlands program says there is this bizarre connection between Galois groups and modular forms. And like moonshine, there doesn't seem to be any connection at all between them. And back in the 19th century, people sort of discovered a connection between
Galois groups and one-dimensional modular forms, except they didn't call them one-dimensional modular forms. That's called class field theory, and that was already considered to be the jewel in the crown of number theory. And Langlands noticed that this was just the one-dimensional case of an amazing correspondence between
the Galois group of the rational numbers, which you can think of as being the symmetries of all algebraic numbers, and modular forms. And this is really bizarre. I mean, as late as the 1960s, I think that there's a story that someone mentioned the Shimura-Taniyama conjecture to André Weil, which is
very roughly a sort of special case of the Langlands conjecture. And Weil supposedly just sort of dismissed this; you know, there obviously wasn't any serious connection between elliptic curves and modular forms. So according to the story, even someone like André Weil, who was an expert in this area, didn't notice there was anything going on. So the Langlands conjecture says that you can sort of describe
The Langlands program is like the moonshine conjecture in that it's
vague. It's saying there's a relationship between these two. Why? But it's not like you can solve the Langlands program. It's not like a theorem that you prove. It's more an area of math that you work on. It's a bit of both. I mean, one of the problems with the Langlands program is actually even finding out exactly what it says is rather hard. So it says there's a relation between representations of the Galois group and
You mentioned class field theory. Do you mind explaining what that is?
Yeah, you can think of that as being the one-dimensional easy case of the Langlands program. So what it does is it says that one-dimensional representations of the Galois group of the rational numbers, which is essentially just telling you what the abelian bit of the
Galois group of the rational numbers is, and it says roughly that these correspond to one-dimensional modular forms. Well, one-dimensional modular forms are kind of also easy to describe. These you can think of as being, more or less, you know, representations of the integers modulo n under multiplication, or something like that.
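The group of residues modulo n under multiplication is concrete enough to compute directly. A small sketch; the identification with a Galois group is standard class field theory (the Galois group of the cyclotomic field Q(zeta_n) over Q is isomorphic to (Z/nZ)^x), which is background rather than anything spelled out in the conversation:

```python
from math import gcd

def unit_group(n):
    """The multiplicative group (Z/nZ)^x: residues coprime to n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

# For n = 12 the group has phi(12) = 4 elements; in the Kronecker-Weber
# circle of ideas this is also the Galois group of Q(zeta_12) over Q.
print(unit_group(12))   # [1, 5, 7, 11]
```

Its one-dimensional representations are the Dirichlet characters mod n, which is one way to make "one-dimensional modular forms" concrete.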
So the easiest special case of it would just be saying that any abelian extension of the rational numbers can be described in terms of taking roots of unity. This is something called the Kronecker-Weber theorem, which is already a rather hard theorem. What's the relationship between geometric Langlands and the local Langlands program?
Are there several others? I've only heard of geometric and then local. Are there more? Local Langlands conjecture is a special case where instead of working over the rational numbers, you work over a local field. The simplest example of local field is just the real numbers.
You may think the real numbers are more complicated than the rational numbers, but if you're doing arithmetic, the real numbers are a whole lot easier than the rational numbers to deal with. So the Langlands conjecture for the real numbers is more or less the classification of representations of real Lie groups. So this is the easiest case of the Langlands program, and it's already incredibly difficult; classifying all the infinite-dimensional representations of all
semisimple Lie groups is really hard. And there are also other things called local fields, like p-adic fields and so on, which again are much easier than the rational numbers to deal with, and there's a sort of analogue of the Langlands conjectures for those, called the local Langlands. Geometric Langlands I don't really know much about.
My impression is this is something like doing the Langlands conjecture except, instead of using the rational numbers, you might use something like the field of functions on a complex curve or something like that, and you try and find an analogue of the Langlands conjectures for that. Do you know what Higgs bundles are? No. I've vaguely heard of them turning up in
Langlands conjectures over function fields, but I have no idea what they are or how they're used. There's a question about class field theory and what the best way for a beginner to learn it would be. They ask, is it best through local p-adic methods or direct methods? Well, class field theory over local fields is indeed easier. If you want to learn class field theory,
there's a book by Cassels and Fröhlich, which is as good a place as any to start. And that's got a very nice article by Serre on class field theory over local fields, which is a very short and clear account of it. You mentioned that you love the writings of Serre. Yeah, he's not only a great mathematician, but he's a great expositor as well. Have you written a book, by the way, a textbook?
Too lazy, no. Is there something that you wish other people who write textbooks would adopt from Serre? Well, if they could adopt his somehow very clear writing style. I think he puts in all the details, even if they seem to be unnecessary, for example. So
All too often authors will miss out details that they think are obvious, and maybe are obvious if you're an expert, but kind of throw you if you're a beginner. So there's a tendency to put in details even if they seem a little bit obvious and boring, which is very useful if they turn out to be not as obvious and boring as the author thinks.
So you prefer over-explaining, because I can always skip pages if I don't... You can always, yeah, if you over-explain, the reader can always skip it. What is the j-function? Probably the elliptic modular function. The reason I'm asking is that there's a question about the connection between the monster group, the j-function, and vertex operator algebras. Like, how does one make that connection in string theory? And they're asking about your work, so I want to just
take some time to explain what the j-function is. You've already explained what the monster group is, and then I'm going to ask about vertex operator algebras. Then I'm going to ask, what the heck is their relationship to string theory? How did you find that? So, the j-function. Well, the j-function is another name for the elliptic modular function. You can think of it as roughly being the simplest function on the upper half plane that's invariant under a certain group. And what are vertex operator algebras?
Vertex operator algebra is this rather complicated algebraic structure, so most algebraic structures like groups and rings have one or two operations, so if you're talking about a ring it's got addition and multiplication and these satisfy some identities. Vertex operator algebra has an infinite number of operators and these satisfy some rather complicated identities, so it's sort of an
What precipitated you to find a connection between the monster group J functions and vertex operator algebras in string theory? How did that come about?
Long complicated roundabout story. I think the connection to string theory is probably really due to Igor Frenkel. So Igor Frenkel had this rather astonishing discovery that you could use string theory to
produce upper bounds for a certain Lie algebra related to the Leech lattice. And I came across some notes describing Frenkel's ideas, and that's how I learned about the connection to string theory. So the connection to string theory was found by Frenkel, not by me. And I spent some months kind of fiddling around with Frenkel's ideas, trying to figure out what was going on.
You know, doing hundreds and hundreds of pages of calculation and eventually noticed there seemed to be these operators satisfying various identities and that's essentially what a vertex algebra is. It's something with operators satisfying the identities that I was messing around with. Is that generally how you get a handle on a new piece of mathematics is you perform hundreds and hundreds of calculations with it or is something different?
But that's often how people first find something new. You spend weeks or months or years just doing a lot of trial and error, trying to figure out what's going on, and gradually you notice there are various patterns appearing. Frenkel is someone that you've mentioned plenty in this talk, but also in your papers as references. So can you tell the audience the significance of Frenkel's work?
Well, Frenkel is again one of these people who've had lots of ideas in almost completely unrelated areas, and he's had a lot of extraordinary ideas, and I think he quite often doesn't seem to get the credit for them that he should have done. I mean, I noticed, for example,
There are these things called basic hypergeometric functions which satisfy some terrifyingly complicated identities. What are they called again? Sorry, repeat that. I think basic hypergeometric functions. Okay. The word basic is kind of misleading. They're not basic at all. They're horrifyingly complicated. And Igor Frenkel came up with this wonderful method of explaining these gruesome identities in terms of
some infinite-dimensional algebras, and this is just one of several extraordinary ideas he's come up with that almost nobody seems to know about. I think the problem is he comes up with ideas that are so complicated that people just don't understand them, and so they don't get well known. But the relation of the monster to
the no-ghost theorem in string theory originates with an idea of Igor Frenkel. He was the one who noticed that the no-ghost theorem in string theory can be used to prove things in mathematics about Lie algebras. And what is the no-ghost theorem? The no-ghost theorem says, very roughly, that 26 is a critical dimension for string theory.
So if you're writing down the equations of string theory, you can write them down in any number of dimensions. But in 26 dimensions, something very weird happens. So in quantum mechanics, you have something called a Hilbert space, and the lengths of vectors in Hilbert space have to be positive real numbers, otherwise it doesn't make sense.
And if you try and construct this Hilbert space in string theory, and if you go above 26 dimensions, then these vectors no longer have lengths that are real numbers, and it doesn't make sense, and you can't really do quantum mechanics. And the No-Ghost theorem tells you that in 26 dimensions and below, all the vectors have lengths that are real numbers, so you can do quantum mechanics.
So,
It turns out to be exactly right for dealing with the monster, because of the monster's relation to the Leech lattice. The Leech lattice is 24-dimensional, and 24 dimensions is very close to 26 dimensions, and this turns out to be critical in explaining moonshine.
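As an aside on the j-function mentioned earlier: its q-expansion can be computed with a few lines of power series arithmetic. This sketch assumes the standard identity j = E4^3 / Delta, with E4 the weight-4 Eisenstein series and Delta the discriminant; that identity is textbook material rather than anything stated in the conversation. The coefficient 196884 = 196883 + 1 of q is the numerical coincidence behind moonshine (196883 is the dimension of the smallest nontrivial representation of the monster).

```python
N = 6  # number of q-series terms to keep

def mul(a, b):
    """Multiply two power series (coefficient lists), truncated at degree N-1."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inverse(a):
    """Series inverse when a[0] == 1 (integer coefficients)."""
    inv = [0] * N
    inv[0] = 1
    for n in range(1, N):
        inv[n] = -sum(a[k] * inv[n - k] for k in range(1, n + 1))
    return inv

def sigma3(n):
    """Sum of cubes of divisors of n."""
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

# Eisenstein series E4 = 1 + 240 * sum sigma3(n) q^n
E4 = [1] + [240 * sigma3(n) for n in range(1, N)]

# Delta / q = product over n >= 1 of (1 - q^n)^24
D = [1] + [0] * (N - 1)
for n in range(1, N):
    factor = [0] * N
    factor[0] = 1
    factor[n] = -1
    for _ in range(24):
        D = mul(D, factor)

# j = E4^3 / Delta, so j[k] is the coefficient of q^(k-1)
j = mul(mul(mul(E4, E4), E4), inverse(D))
print(j[:4])  # [1, 744, 196884, 21493760]
```

The printed values are the coefficients of q^(-1), q^0, q^1, q^2, matching the well-known expansion j = 1/q + 744 + 196884 q + 21493760 q^2 + ...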
What happens with those other two dimensions, the 26 minus 24? Those other two dimensions... in order for the no-ghost theorem to work, if you want to fiddle around with the no-ghost theorem and the Leech lattice, you need to stick a little two-dimensional spacetime onto the Leech lattice,
and that gives you 26 dimensions. You mentioned that above 26 dimensions, you're no longer dealing with real numbers as the norms in the Hilbert space. Yeah. So what are you dealing with? Complex numbers or octonions or something else? No, no, no. You can construct something that's sort of like a Hilbert space. The only problem is that some vectors have negative norm. In other words, the inner product of a vector with itself might be negative, which doesn't make sense in Euclidean space.
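A negative self inner product is easy to exhibit with a toy indefinite bilinear form. This tiny sketch only illustrates the phenomenon of negative "length squared"; it is not a model of the actual string-theoretic state space:

```python
def norm_sq(v, signature):
    """Self inner product <v, v> for a diagonal metric with the given signs."""
    return sum(s * x * x for s, x in zip(signature, v))

# Euclidean signature (+, +): lengths squared are always >= 0
print(norm_sq([2.0, 1.0], [1, 1]))    # 5.0
# Indefinite (Lorentzian) signature (-, +): "length squared" can be negative
print(norm_sq([2.0, 1.0], [-1, 1]))   # -3.0
```

Vectors with negative norm like this are the "ghosts" of the no-ghost theorem's name; the theorem says that in 26 dimensions and below they decouple from the physical states.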
Have you delved much into quaternions, or in particular octonionic models of physics? I've looked very briefly at them for about five minutes and kind of gave up. Lots of people seem to get excited about them and they never really seem to lead anywhere.
So they don't lead anywhere, meaning, like, what do you mean that they don't produce a prediction, or what? Well, as far as I know, no one has ever really managed to do anything very convincing relating octonions to physics. I mean,
every five years someone comes up with a new theory of everything involving octonions, and they all seem to just fizzle out. Basically, what I'm getting at is that I want to know what is meant by saying it doesn't amount to anything. So it doesn't amount to anything in that it doesn't produce a specific prediction, or is it just that the field gets excited initially and then leaves it alone, because they feel, for whatever reasons, maybe sociologically, I don't know. So what is meant exactly by saying it's not interesting?
Well, as far as I know, none of them have actually worked yet. I mean, no one has actually managed to produce, as you say, no one has managed to produce a physical prediction that has been tested using an E8 model, sorry, using a model using the octonions. There's also loop quantum gravity and string theory, which also don't have
predictions, but then what's different about them maybe is that they produce interesting mathematics and interesting coincidences and so forth, whereas with octonions there's not that. Well, yeah, I mean, the octonions are a perfectly good piece of mathematics; you can do some things with octonions, like construct the E8 Lie algebra with them. It's just, my impression is, it never really seems to lead to anything
really new and unexpected. String theory, by the way, does produce tested predictions. The only problem is these predictions aren't physical predictions, they're mathematical predictions. So it will produce, you know, it will produce predictions about things like numbers of curves on varieties and so on, and you can go off and test these
There's the AdS/CFT correspondence. Now, is that a similarly ill-defined relationship, like the moonshine conjecture is not precisely defined, it's just saying there exists a relationship, or is the AdS/CFT correspondence extremely specific?
I have no idea. This correspondence is something that is on my list of things I really ought to find out about but haven't got round to yet, so I'm going to plead ignorance on that one.
Well, my next question was going to be about the role vertex operator algebras may have to play in the AdS/CFT correspondence, so I will cross that off my list. Again, I will pass on that due to complete ignorance. I think the first time we spoke, I asked you, how is it that you spend your day? I'm going to re-ask you that same question.
Generally speaking, how is it that you spend your day? You wake up and then you do so-and-so. Is it the same each day, and do you go to bed at the same time? It's not very interesting. It's the same every day. Every morning I realize what was wrong with what I did yesterday, and every afternoon I try and come up with a new solution to the problem. So you work pretty much each day? Yeah.
Most of the time just finding out what's wrong with the previous ideas I had. You just keep on coming up with ideas and 99% of them don't work and every now and then you make a little bit of progress. You wake up with the realization that your ideas from the previous day had holes in them or you then in the morning working on them you realize that? All too often I realize what's wrong with them when I wake up. Somehow while you're asleep your mind seems to
be subconsciously picking up the errors you've made. And generally the errors that you make are of what kind? I don't think that, I mean... So here's an example, like a calculation error, you multiply 12 times 13 incorrectly.
It wouldn't be calculation errors, it's sort of you optimistically guess what's going on and think, ah yes, if everything worked out like this then I would be able to solve my problem like that and you get very excited about it because everything seems to be fitting together and then when you go and check the details you find out that your optimistic assumptions do not actually hold and you have to try something else.
There's this mathematician, and I'm blanking on the name, but he had a book about how mathematicians think. And it's a famous book, or How Mathematicians Solve Problems. It's from the 1900s, mid 1900s. Possibly Hadamard. Yes, that's right. Have you read that book? No, I think I was thinking I ought to have a look at it once. But again, it's one of the many things I never got around to.
Can you give us an example of an error that you made that you feel like, oh, yeah, I shouldn't have made that error? A mathematical error. Yes. Yeah, yeah. Not talking about investment advice. That too. Mathematical error.
I seem to make mathematical errors all the time. I mean, I sort of noticed whenever I give a lecture, I always make two or three errors during the lecture and so on. But I actually found some mathematical errors were actually quite useful. I mean, I remember there's one theorem I only proved because I made an error: I made an error in proving some result and was then able to use this result to classify lattices in some dimension, and then went back and discovered that my proof of this result was just completely wrong. I thought I'd proved two things were equal, and all I'd done was prove that one of them was less than or equal to the other. If I hadn't made this error, I would never have gone on to use this result to classify lattices, and once I'd classified the lattices I could go back and actually prove the result I'd been assuming. So somehow making an error made the result easy enough for me to complete it, and once I'd completed it, I could then go back and fix the error.
You incorrectly proved something that turned out to be true, and then you used that true statement to prove something else fantastic, but then later realized, OK, yeah, yeah, yeah. And then, yeah. And once I proved everything, I was then able to go back and fix the error. But without making the initial error, it would probably have just been too difficult for me to do. So maybe making simplifying errors can actually be useful sometimes. Do you have any other examples of errors that ended up being salutary, producing something positive?
I can't think of any off the top of my head. I mean, most errors are just sort of stupid and annoying, like you make a sign error somewhere or something like that. I remember when we spoke last, you said you were particularly embarrassed about how often you make errors in lectures, because, well, you just don't want to make them, and no one likes to make an error, especially not publicly.
And I'm curious, is that the case? Like, are you embarrassed? Because there's also the case, like, you have a Fields Medal, you're allowed to make certain errors. We understand, you know, we know that you understand the material. So why would that embarrass you? It just does. YouTube videos are the worst. Whenever I make a YouTube video, I always get these comments pointing out these really dumb, stupid errors I made. It got so bad, I had to stop reading the comments.
Ah, okay. Yeah, here's something. I'm curious about a time, fairly recently, let's say in the past few months, where you've forgotten something basic, and then you're like, oh, I should have known that. Like, for instance, what is arccos? Oh, what the heck is that? Oh yeah, that's what it is. Like, something basic. In an effort to humanize this god-like Fields Medalist that's in front of the audience right now, what's an error that you've made that is fairly basic? It happens
all the time. When I'm lecturing, I've noticed this before: someone can ask me some question that I think about, and I haven't got a clue what the answer is, and then at the end of the lecture it suddenly becomes obvious. I think it's something to do with the fact that the higher parts of your brain just close down under stress, and you sort of make stupid errors and are unable to fix them if there's any sort of pressure.
I have a question from a friend of mine, a fellow mathematician, and his name is Julian Prito. He says, you said that you're not good at time management, but surely there's modesty in that, as you've attained an unbelievable level of success that most other professional mathematicians don't achieve. So what do you attribute this success to?
In particular, how do you block out time for research without being distracted by admin work like responding to emails or even life tasks like grocery shopping? Asking my advice about time management is kind of like asking my advice about dating. I think you can probably get better advice from other people. As I said, I'm just not good at time management.
How to deal with administrative tasks? Well, one theory I've heard from somebody else is the best way to deal with administrative tasks is to be so bad at them that nobody ever asks you to do them. I think the idea is that if someone asks you to do the washing up, you should be careful to break a couple of dishes every time and they'll soon stop asking you to wash up.
So then what do you attribute your success to? I don't know, probably luck as much as anything. I was in the right place at the right time. I mean, I just happened to be working on the monster at the right time. And by incredible luck, there was something interesting to find there. And my feeling is, you know, that there are
100 similar problems I could have been working on, and for 99 of them there wouldn't have been anything interesting to discover, and I just happened to be lucky enough to be working in the one area where there was this interesting thing waiting to be discovered. Do you think that the pure mathematics community needs more funding from public or private sources? And if so, what can professors and graduate students do to contribute to the financial health of their discipline?
I must admit I actually don't really know very much about funding of mathematics. My vague impression is there's enough funding for people who actually want to do math research to earn a reasonably comfortable living.
Can you reflect back on your own mathematical life? What are the major milestones, let's say, from childhood till now? Were you always interested in math? And then what was, let's say, some significant event, besides the Fields Medal, much prior to that?
Well, when I was a teenager, between about 10 and 15, I think I was more into chess than mathematics. I had this sort of fantasy I was going to be world chess champion or something like that. And then sometime when I was 15 or 16, it suddenly dawned on me that I was nowhere near as good at chess as I'd been dreaming about and lost interest in it. Do you remember your rating, by the way?
I'm not sure I ever really had a rating. So when you were a teenager, you wanted to be a chess master, and then the realization came upon you that, okay, maybe this is not what I want to do, or what I should do, or what I'm cut out for, compared to other people in this domain, and then what happened? Well, I mean, I'd also been interested in mathematics and kind of ended up doing that instead.
Partly by default, I just wasn't much good at anything else. I mean, I've never been much good at things like languages or sports or anything like that. What advice would you give your former self, let's say at the undergrad level, knowing what you know now, what would you change?
Sorry, nothing very obvious springs to mind. I'm not very good at giving people advice. The only thing I can think of is I remember there was something in the Hitchhiker's Guide to the Galaxy where there was this old woman who was asked for advice, and she said, well, here are some books where I've written down every decision I've ever made in my life. All you have to do is, whenever you want to make a decision, look through these books and do the opposite of whatever I did.
What's an example of a professional blunder you've made that you're able to talk about?
I don't know. I don't seem to be very good at thinking of mistakes I have made. I think there must be some sort of psychological block about this. Certainly one mistake I've made several times is simply not writing things up and publishing them properly.
I mean, I'm very lazy about writing up and publishing. As you know, you were asking whether I've written a book, and most mathematicians at my level have often written several books, and I'm just too lazy to do this.
Is it pure laziness, or is it that you feel like if you published, it would get critiqued, and you would rather avoid that? Is there something else in addition to laziness? It's partly laziness and partly perfectionism. Whenever I start writing a book, I always notice how bad it is and throw it away and try and start again. Are there results that are out there that you said, oh man, I came up with that two years ago, I should have published that? We don't have to be specific about the results, but has that happened multiple times?
Yeah, yeah, I mean, there have been plenty of times people have published results that I sort of was thinking maybe I should have done that a few years ago, but I was too lazy. Is another reason that you don't publish because you feel like the result is not significant enough? Well, there's a bit of that as well. I mean, there are certainly a lot of papers where you kind of wonder why the person bothered publishing it, because it doesn't really do anything. There are large numbers of papers that are little more than homework exercises for grad students. Do you not only wonder why did you publish this, but why did that get accepted? Like, why was that even allowed to be published? Not only why did the person think it was publishable, but why did it end up getting published? Yeah, yeah, yeah, sure. I mean, there are lots. I mean, if you go to a math library, you'll find there are
dozens of bookshelves full of journals, full of papers. Most of these papers have probably never been read by anybody other than the referee and the author, probably not even by the referee. Before I get to a last question by Julian Prito, I want to know: you mentioned you can't talk about what you're working on right now. You prefer to keep it private. Could you give us a tease? Like, what area of mathematics is it in? Is it in algebraic topology? Is it in so-and-so?
No, if I even mention the area, this will confirm I've gone insane. So the area itself gives it away. Well, OK, that's a tease itself. I'll figure it out. This question comes from Julian Prito. In your last interview with Borcherds, Kurt, you asked if the monster group was connected to physics, and Borcherds said no. It is true that there's no evidence for it. However, can you ask Richard Borcherds if he's aware of attempts to make linkages like Witten's, and what he thinks of them?
We will never know until we have experiments, but I wonder what the feeling is. So hold on, I can give a bit more of an explanation. So he says: there is at present no evidence of the monster group in physics, but does your gut feeling tell you that there is one? That the monster group does have a special place in the universe that will one day be discovered? In particular, what do you think about Witten's proposal relating the monster to the number of quantum states of a black hole of minimum size in 3D gravity? This was a paper by Witten in 2007.
If you'd like me to go over this 2007 "Three-Dimensional Gravity Revisited" paper by Edward Witten in the same style as the crash course on physics, then let me know in the comments section, as I'm debating whether or not to cover this paper or the "Vertex Algebras, Kac-Moody Algebras, and the Monster" paper by Richard Borcherds. Let me know which you prefer. Yeah, well, I mean, that's an interesting paper. I just.
Whether it's physics or not depends very much on what you mean by physics. I'd say it's a paper sort of inspired by physics, but it's not physics in the sense that it's something you can go and do experiments in our actual world about. Whether the monster eventually turns up in a grand theory of life, the universe and everything, well, I mean, all I can say is I hope it will, but at the moment there's no strong evidence for it. And the other problem is it's so big. I mean, it only fits into a 196,884-dimensional space, which makes it really hard to tie up with three- or four-dimensional space-time.
It could be a symmetry that is not a space-time symmetry, but some other type of symmetry. Well, yeah, maybe, you know, there's some, you can fantasize about maybe four-dimensional space-time has to have some vector bundle of some huge dimension lying over it, which has the monster tied up in it somehow. But yeah, I mean, it's possible. This is all completely speculative.
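As an aside for readers who want to check the numbers behind this exchange, McKay's coincidence can be verified directly: the coefficient of q in the elliptic modular function j is 196,884 = 196,883 + 1 (the dimension of the monster's smallest faithful representation, plus one). The following is a minimal sketch, not anything from the conversation itself; it assumes only the standard formulas j = E4^3/Δ, E4 = 1 + 240 Σ σ3(n) q^n, and Δ = q Π(1 - q^n)^24, computed with truncated integer power series.

```python
# Check McKay's observation: the q^1 coefficient of the elliptic modular
# function j is 196884 = 196883 + 1.  We compute j = E4^3 / Delta as
# truncated integer power series in q, using only standard formulas.

N = 6  # number of series coefficients to keep

def sigma3(n):
    """Sum of cubes of the divisors of n."""
    return sum(d**3 for d in range(1, n + 1) if n % d == 0)

def mul(a, b):
    """Multiply two truncated power series given as coefficient lists."""
    c = [0] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

# Eisenstein series E4 = 1 + 240 * sum_{n>=1} sigma3(n) q^n
E4 = [1] + [240 * sigma3(n) for n in range(1, N)]

# Delta = q * prod_{n>=1} (1 - q^n)^24; 'prod' below holds Delta / q.
prod = [1] + [0] * (N - 1)
for n in range(1, N):
    factor = [0] * N
    factor[0], factor[n] = 1, -1
    for _ in range(24):
        prod = mul(prod, factor)

# j = E4^3 / Delta, so the coefficients of q*j are E4^3 divided by 'prod'.
# Series division works term by term because prod[0] == 1.
num = mul(mul(E4, E4), E4)
j = [0] * N
for i in range(N):
    j[i] = num[i] - sum(j[k] * prod[i - k] for k in range(i))

# j[i] is the coefficient of q^(i-1): j(q) = 1/q + 744 + 196884 q + ...
print(j[:4])  # [1, 744, 196884, 21493760]
```

The later coefficients continue the pattern Borcherds describes: 21,493,760 and beyond also decompose into small sums of dimensions of monster representations, which is what monstrous moonshine explains.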
I see your gut is leaning toward no. Not even leaning toward "I don't know," leaning toward no. Well, leaning towards, I mean, "I don't know." Predicting what's going to happen in math or physics in a few hundred years' time, I'm not even going to try. Sure. And now the last question, which was sent as a chat: what do you think about Dustin Clausen's and Peter Scholze's attempt at inventing condensed mathematics, that is, replacing topological spaces with Stone spaces, which are homeomorphic to compact projective limits of discrete topological spaces, in order to reduce analytic slash algebraic topological problems to pure algebra?
Um, that's easy. I simply don't know enough about it to comment on it. I mean, it looks like an interesting idea, but I'm just not qualified to say anything about it. Professor, thank you so much for spending an hour and a half with me. I appreciate that. Again, it's always fun to speak with you. Oh, yeah, great talking to you too.
Even if I have to pass on most of your questions. Well, it's just fun to connect, and it's an honor. If you eliminate all the times I was just dithering and failing to answer questions. Sure. It's probably not very interesting to watch me looking puzzled for five minutes and then saying I don't know.
Yeah, well, I feel embarrassed when I am flummoxed at coming up with the question. And I also feel bad when I change course so drastically, like I'll ask you a math question, then I'll ask you a personal question, then I'll ask you about water. And then I feel a bit bad that it's so disconnected. But it's good practice for me. Yeah, what do you mean? Well, having lots of random questions that I have to think of answers to on the spur of the moment is...
You're more comfortable in the mathematical domain, you don't particularly like to introspect.
...I find myself unable to think of anything. Oh, you mean closing down, like you get a bit stressed, a bit too much pressure? Maybe something to do with that. You're asking things like what milestones were in my life, and I just couldn't think of anything at all. It's just completely blanking out. I've noticed in the past I also have this problem that half the time I just sort of sit there
blankly saying nothing because I can't think of anything. What are the sorts of questions that you enjoy being asked? Well, technical mathematics questions I can usually answer, although the problem is I probably tend to get too technical. I mean, I start using words like isomorphism and variety and I'm not quite sure how most of your audience handles this.
Well, luckily for us, we have an audience that's fairly technical, graduate level, and there are some professors and PhDs. Okay, grad students can probably cope with isomorphisms and varieties. Yeah. Okay. Well, thank you, professor. Thank you. Oh, yeah. Well, thanks for the
The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked on that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc.
It shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well. If you'd like to support more conversations like this, then do consider visiting theoriesofeverything.org. Again, it's support from the sponsors and you that allows me to work on TOE full time. You get early access to ad-free audio episodes there as well. Every dollar helps far more than you may think. Either way, your viewership is generosity enough. Thank you.
▶ View Full JSON Data (Word-Level Timestamps)
{
"source": "transcribe.metaboat.io",
"workspace_id": "AXs1igz",
"job_seq": 11840,
"audio_duration_seconds": 5863.03,
"completed_at": "2025-12-01T02:27:20Z",
"segments": [
{
"end_time": 20.896,
"index": 0,
"start_time": 0.009,
"text": " The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science they analyze."
},
{
"end_time": 36.067,
"index": 1,
"start_time": 20.896,
"text": " Culture, they analyze finance, economics, business, international affairs across every region. I'm particularly liking their new insider feature. It was just launched this month. It gives you, it gives me, a front row access to The Economist's internal editorial debates."
},
{
"end_time": 64.514,
"index": 2,
"start_time": 36.34,
"text": " Where senior editors argue through the news with world leaders and policy makers in twice weekly long format shows. Basically an extremely high quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines. As a toe listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount."
},
{
"end_time": 82.602,
"index": 3,
"start_time": 66.237,
"text": " Theoretically, it's absolutely wonderful. It's this wonderfully consistent theory with all sorts of surprising coincidences that all just happen to work out. It's behaving as if it were the ultimate theory of life, the universe and everything. Unfortunately, it just doesn't tie up with experiment yet."
},
{
"end_time": 114.241,
"index": 4,
"start_time": 84.701,
"text": " Today we cover E8 quantum field theory and Ed Whitten with today's guest, Richard Borchards, a professor of mathematics at the University of Berkeley and a fields medalist. Just so you know, a fields medal is the mathematical equivalent of the Nobel Prize. This is part two of our conversation. The first part you can find by clicking the link in the description and it covers string theory, the monstrous moonshine conjecture, Richard's work schedule slash habits. And in this podcast, we delve even deeper into those topics."
},
{
"end_time": 130.247,
"index": 5,
"start_time": 114.241,
"text": " In mathematics, there was a conjecture by John Conway suggesting that there's more than a superficial connection between the monster group and the J function. This is what's called the monstrous moonshine conjecture, and this is in part what Richard Borchers won the Fields Medal for."
},
{
"end_time": 148.404,
"index": 6,
"start_time": 130.247,
"text": " In fact, there are two highly cited papers, one called The Monstrous Moonshine and Monstrous Li Super Algebras, and the second is called Vertex Algebras, Kakmudi Algebras, and The Monster. At some point in the next few months, I would love to tackle these papers and do an explanation video in the same vein as the Crash Course on Physics,"
},
{
"end_time": 169.735,
"index": 7,
"start_time": 148.404,
"text": " which you can see here there's a thumbnail somewhere and you can click the link in the description to see if you have a vote for which paper in particular that is the lee super algebra paper or the vertex operator algebra paper then let me know in the comment section my name is kurt jaimungal i have a background in mathematical physics this podcast is called theories of everything is dedicated to the exploration of theories of everything"
},
{
"end_time": 199.206,
"index": 8,
"start_time": 169.735,
"text": " from a theoretical physics perspective but as well as exploring the role consciousness has to the fundamental laws of nature each sponsor as well as the patrons improves the quality of the videos drastically it improves the depth it improves the frequency and it goes toward paying the staff for instance someone who's editing this full time right now and then we have an operations manager in that vein i want to thank today's sponsor brilliant if you're familiar with toe you're familiar with brilliance but for those who don't know brilliance is a place where you go to learn math science and engineering"
},
{
"end_time": 220.93,
"index": 9,
"start_time": 199.206,
"text": " Through these bite sized interactive learning experiences, for example, and I keep saying this, I would like to do a podcast on information theory, particularly Chiara Marletto, which is David Deutsch's student has a theory of everything that she puts forward called constructor theory, which is heavily contingent on information theory. So I took their course on random variable distributions and knowledge and uncertainty."
},
{
"end_time": 237.688,
"index": 10,
"start_time": 221.254,
"text": " In order"
},
{
"end_time": 259.514,
"index": 11,
"start_time": 237.688,
"text": " It would be unnatural to define it in any other manner. Visit brilliant.org slash toe, that is T-O-E, to get 20% off the annual subscription, and I recommend that you don't stop before four lessons. I think you'll be greatly surprised at the ease at which you can now comprehend subjects you previously had a difficult time grokking. At some point, I'll also go through the courses and give a recommendation in order."
},
{
"end_time": 281.852,
"index": 12,
"start_time": 259.514,
"text": " If you'd like to support the Theories of Everything podcast and help assist, that is, the improved video quality, the depth, the frequency of the podcast, and paying the staff, then do consider going to patreon.com slash KurtGymungle. The link is on screen as well as in the description. There's a custom amount as well as already delineated tiers. Either way, your viewership is thank you enough. Enjoy this part two with Richard Borchards."
},
{
"end_time": 303.575,
"index": 13,
"start_time": 282.227,
"text": " Professor, since last we spoke, which was approximately two years ago, what's happened with you? What's new? Absolutely nothing exciting that hasn't happened to everybody else. I mean, I guess it's not news to hear we were having some fun with COVID. Other than that, not much has been going on. So what's new? What have you been working on? That's a secret."
},
{
"end_time": 331.493,
"index": 14,
"start_time": 303.985,
"text": " The problem is that if I tell everybody it will merely confirm that I've gone senile or insane or something so I'm keeping quiet about it unless it works. Am I to surmise from that you're working on some grand conjecture that's unsolved and that if you were to say well people like that you shouldn't be tackling that, that's something like that? Something like that, yeah. It's probably going to fail and whatever. I've got tenure so it doesn't matter."
},
{
"end_time": 358.148,
"index": 15,
"start_time": 331.817,
"text": " Great, great. Now, speaking of that, do you find that your colleagues who get tenure, they tend to not be as creative or not take as many risks? This is something I've heard colloquially in physics and math, but I'm not sure if, well, I've just heard this. I don't think there's a single good answer that just varies so much from person to person. Yeah, there are some people who get tenure and promptly stop doing anything, and there are others who get tenure and use this to"
},
{
"end_time": 387.978,
"index": 16,
"start_time": 358.916,
"text": " You know, spend five or ten years working on a really hard problem and there are some who get tenure and are really productive afterwards. Is what you're working on related to physics? Probably not. Okay. Is it possible to introduce the audience to what the monstrous moonshine conjecture theory is? I can give the very brief version."
},
{
"end_time": 415.776,
"index": 17,
"start_time": 388.456,
"text": " So the classification of simple groups is the longest theorem in mathematics, 10,000 pages, and one of the simple groups it classifies is the monster, which is a really incredibly complicated simple group. I think people sometimes say the number of elements in it is about the number of atoms in the planet Jupiter."
},
{
"end_time": 444.309,
"index": 18,
"start_time": 416.51,
"text": " and it lives in space of dimension one nine six eight eight three in other words it's rotations of a one nine six eight eight three dimensional object. Meanwhile there's the theory of modular functions in mathematics which has absolutely nothing to do with finite group theory and there's this function called the elliptic modular function and one of the coefficients is one nine six eight eight four"
},
{
"end_time": 474.701,
"index": 19,
"start_time": 444.906,
"text": " And John McKay noticed that these two numbers differed by one, which seemed to be a weird coincidence. But when he told people about it, they basically dismissed it as being a random fluke, because finite groups have nothing to do with modular functions, no conceivable relation. And if you write down a lot of numbers, some of them are going to be pretty close just by coincidence. So everybody dismissed this as a fluke, except it turned out not to be because"
},
{
"end_time": 499.002,
"index": 20,
"start_time": 475.009,
"text": " The elliptic modular function has other coefficients and these turn out to be dimensions of other representations of that monster and so on. So Monstrous Moonshine is asking why does the monster simple group have anything to do with elliptic modular functions? What's the term moonshine about?"
},
{
"end_time": 521.766,
"index": 21,
"start_time": 500.043,
"text": " Moonshine was just a term invented by John Conway, who was really good at inventing cool, catchy terminology. In fact, the monster was originally supposed to be called the friendly giant, but the term monstrous moonshine was so catchy that the term monster took over from it."
},
{
"end_time": 548.234,
"index": 22,
"start_time": 523.37,
"text": " So there's the monster group, then there's elliptic modular forms, and the monstrous moonshine conjecture is just a conjecture that says there's some relationship between the monster group and then these forms? Yeah, that's pretty much it. So the problem is to explain why the monster is related to modular forms or modular functions."
},
{
"end_time": 574.241,
"index": 23,
"start_time": 548.933,
"text": " So in math, what does it mean to explain why? Do you mean to just give a proof that there is a relationship, like that number is not just a coincidence? Is that what it's meant by explain why or you have to explain something else? Yeah, well, the problem is a proof is rather trivial. I mean, proving that one nine six eight eight three plus one is one nine six eight eight four is I think can be left as an exercise for the reader. So explaining it is"
},
{
"end_time": 602.602,
"index": 24,
"start_time": 575.009,
"text": " It's a rather vague term. What you want is some sort of, you know, you want to find some sort of mathematical structure that maybe is acted on by the monster group and at the same time has the elliptic modular function coefficients appearing in it. And such a structure was found by Franco-Lipowski and Merman. So they constructed"
},
{
"end_time": 621.169,
"index": 25,
"start_time": 602.841,
"text": " There are different groups. There's AN, and then there's BN, and then there's some exceptional ones like E, all the way up to E8."
},
{
"end_time": 641.732,
"index": 26,
"start_time": 621.578,
"text": " Yeah, exactly. The people who named these were not very imaginative. And I think physicists call one of them SL. The point is that these correspond to something that physicists care about, except the E's, which now there's, at least for the past few decades, there's some effort to find the physics of E8. Now, basically what I'm wondering is, is there a relationship between E8 and the monster group?"
},
{
"end_time": 669.411,
"index": 27,
"start_time": 643.336,
"text": " Yes, there is moonshine for the monster group. There is also some rather similar moonshine phenomena for the E8 group and in fact you can find similar sorts of things going on for almost any of these groups of type AN, BN and so on that you were mentioning. So some sort of moonshine phenomena seems to happen for most but not all of the"
},
{
"end_time": 698.797,
"index": 28,
"start_time": 669.753,
"text": " known simple groups. There are still a handful of groups which we don't know of any very convincing moonshine. Can you explain what E8 is? And imagine that you're talking to upper-year undergrad slash first, second-year graduates. I can try. So the usual Lie groups you come across first are things like orthogonal groups, so just rotations in three-dimensional space."
},
{
"end_time": 723.78,
"index": 29,
"start_time": 699.343,
"text": " and you can do something very similar in four or five dimensions so you can find orthogonal groups of rotations in any dimensional space and that gives you one series of groups which and they're sort of bounded in some sense the technical term is compact and mathematicians like Cartan and Killing managed to classify all the compact"
},
{
"end_time": 752.193,
"index": 30,
"start_time": 724.241,
"text": " the groups about a hundred years ago and as well as the orthogonal groups you get other things like unitary groups and symplectic groups and then there are five funny ones left over which are called G2, F4, E6, E7 and E8 and it's a bit difficult to explain where they come from. I mean orthogonal groups are kind of easy, that's just rotations. Groups like G2"
},
{
"end_time": 780.026,
"index": 31,
"start_time": 753.166,
"text": " It's quite tricky to find something they're the symmetries of, so G2 is symmetries of something called the Cayley numbers and E8 is a bit odd because the only thing it seems to be the symmetries of is E8 itself, so it doesn't seem to be anything simpler than E8 that E8 is the symmetries of which makes it really difficult to understand because you can't reduce it to something smaller."
},
{
"end_time": 808.302,
"index": 32,
"start_time": 780.913,
"text": " So the group of rotations in three dimensions is just a three-dimensional group, which is not too bad, whereas E8 is 248 dimensions, which means if you want to write its elements down explicitly, you need matrices that are 248 by 248, which is rather too much to do by hand. Is it possible then to develop some visual intuition as to what E8 is?"
},
{
"end_time": 837.978,
"index": 33,
"start_time": 808.831,
"text": " You can try. I mean, mathematicians have come up with all sorts of ways of drawing pictures of groups by using things called root systems and Dynkin diagrams. So while E8 has dimension 248, which is really too much to handle, it has something called a rank, which is only eight dimensional, which is much easier to handle. I mean, you can almost draw a picture of that. In fact, mathematicians have this way of drawing things called Dynkin diagrams, where you"
},
{
"end_time": 861.169,
"index": 34,
"start_time": 838.968,
"text": " You draw a little graph with a point for each rank, so to speak. So for E8 the graph only has eight nodes and you can actually draw that. So what is the application of E8 to physics? That's a very controversial question. First of all people have tried coming up with grand unified theories where you"
},
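That eight-node diagram can actually be checked by machine. The sketch below (an editorial illustration, not something from the conversation; the Bourbaki node labelling is an assumption) builds the E8 Cartan matrix from its Dynkin diagram and verifies that its determinant is 1, which is why the E8 lattice is unimodular:

```python
from fractions import Fraction

# E8 Dynkin diagram (Bourbaki labelling, assumed): a chain 1-3-4-5-6-7-8
# with node 2 attached to node 4.
edges = [(1, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (2, 4)]

n = 8
C = [[Fraction(2) if i == j else Fraction(0) for j in range(n)] for i in range(n)]
for a, b in edges:
    C[a - 1][b - 1] = C[b - 1][a - 1] = Fraction(-1)

def det(M):
    """Determinant by Gaussian elimination over exact rationals."""
    M = [row[:] for row in M]
    d = Fraction(1)
    for col in range(len(M)):
        pivot = next(r for r in range(col, len(M)) if M[r][col] != 0)
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            d = -d
        d *= M[col][col]
        for r in range(col + 1, len(M)):
            f = M[r][col] / M[col][col]
            for c in range(col, len(M)):
                M[r][c] -= f * M[col][c]
    return d

print(det(C))  # 1: the E8 root lattice is unimodular
```

The analogous determinants for A_n, D_n, E6 and E7 are n+1, 4, 3 and 2, so E8 is the only one whose root lattice is self-dual.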
{
"end_time": 889.053,
"index": 35,
"start_time": 861.886,
"text": " extend the group of the standard model to some bigger group and these are theoretically very compelling and natural but unfortunately none of them actually seem to work. Sort of very frustrating that they ought to work but don't. The other thing you can do is mess around with string theory and the E8 group turns up quite a lot very naturally in string theory and super string theory but"
},
{
"end_time": 894.804,
"index": 36,
"start_time": 889.582,
"text": " Unfortunately we have the same problem that string theory and supersymmetry really ought to"
},
{
"end_time": 921.732,
"index": 37,
"start_time": 895.35,
"text": " explain physics but no one has actually managed to do so in a convincing way yet. Why do you say that it ought to? So why do you say that it looks so convincing but then it doesn't? Theoretically it's absolutely wonderful. It's this wonderfully consistent theory with all sorts of surprising coincidences that all just happen to work out and it sort of very much looks as if it is"
},
{
"end_time": 942.449,
"index": 38,
"start_time": 923.2,
"text": " It's behaving as if it were the ultimate theory of life, the universe and everything. Unfortunately, it just doesn't tie up with experiment yet. I mean, it sort of predicts all sorts of supersymmetric particles and things like that, that just haven't appeared in experiments. It's incredibly frustrating."
},
{
"end_time": 971.971,
"index": 39,
"start_time": 943.746,
"text": " There are different flavors of string theory: one of them is called SO(32), and another is called E8 x E8. As if E8 were not complicated enough, there is E8 x E8, which is more complicated still. Why is it E8 x E8? Why does that have something to do with string theory, but not E8 alone? It's something to do with... you need something 16-dimensional for some of the string theories. Roughly speaking, there's interesting stuff going on in 26 dimensions with string theory,"
},
{
"end_time": 999.633,
"index": 40,
"start_time": 972.363,
"text": " And there's interesting stuff going on in 10 dimensions with superstring theory, and the difference between these numbers is 16. And my understanding is that means you want a group of rank 16 to sort of splice them together. And the two natural ways of doing that work are the groups you mentioned, E8 times E8, which is rank 16, and SO(32), which is also rank 16."
},
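The arithmetic behind this remark is easy to spell out (an editorial illustration; the dimension-496 coincidence is the standard anomaly-cancellation fact, not something stated in the conversation):

```python
# Bosonic string theory lives in 26 dimensions, the superstring in 10;
# the mismatch of 16 is the rank of the internal gauge group.
assert 26 - 10 == 16

rank_e8, dim_e8 = 8, 248
rank_so32 = 32 // 2          # SO(2n) has rank n
dim_so32 = 32 * 31 // 2      # dim SO(n) = n(n-1)/2

assert 2 * rank_e8 == 16 and rank_so32 == 16
# A further coincidence exploited by anomaly cancellation:
# both heterotic gauge groups have dimension 496.
assert 2 * dim_e8 == 496 and dim_so32 == 496
print("both candidates have rank 16 and dimension 496")
```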
{
"end_time": 1026.288,
"index": 41,
"start_time": 1001.51,
"text": " When we last spoke, I recall you telling me quantum field theory gives you a headache, and so you abandoned it. Why? Why does it give you a headache? I just couldn't figure it out. I spent several years trying to understand it and just couldn't. I mean, first of all, there's this problem with the philosophy of quantum mechanics that everybody knows, you know, there's all these paradoxes where"
},
{
"end_time": 1049.343,
"index": 42,
"start_time": 1026.647,
"text": " Quantum mechanics gives a fantastically good explanation of what's going on, as long as you're only looking at two or three electrons or something. But as soon as you start looking at large objects like cats or chairs or whatever, it gives results that don't seem to make sense. I mean, a cat is not a superposition of several different cats in different places."
},
{
"end_time": 1079.991,
"index": 43,
"start_time": 1050.998,
"text": " And I've never been able to figure this out, and as far as I know, nobody else has ever been able to figure it out either. So you mean to say that the physical interpretation as to what the heck is going on, what are these symbols representing, that's what you had trouble with? That's one of the many things I had trouble with. The other problem is quantum field theory involves something called Feynman integration, where you're integrating over an infinite dimensional space. And the trouble is,"
},
{
"end_time": 1109.36,
"index": 44,
"start_time": 1080.845,
"text": " If you look at this too closely, it doesn't actually make sense. No one has ever been able to give a really satisfactory definition of what a Feynman integral is for four-dimensional quantum field theory. I mean, people have managed to do this in two dimensions and sort of a little bit in three dimensions, but for four dimensions, no one has ever managed to figure out how to make rigorous sense of this."
},
{
"end_time": 1127.551,
"index": 45,
"start_time": 1109.855,
"text": " Physicists don't care, because you can do it in a sort of unrigorous way and you get results that agree with experiment to unbelievable precision. You know, you're talking about 10 or 12 significant figures of precision, which is really convincing that you're doing something right."
},
{
"end_time": 1157.551,
"index": 46,
"start_time": 1128.268,
"text": " If you look at the Feynman integral, first you define it by integrating over something n times, and then you take the limit as n approaches infinity, and then you call that the actual Feynman integral. Any time you see the Feynman integral, just remember that's a stand-in for the limit as n approaches infinity. And any calculations that we do with computers are always with a finite dimensional n, so there is no problem; there's a problem only if you try and extend it to infinity. So what do you say to that?"
},
{
"end_time": 1184.036,
"index": 47,
"start_time": 1157.824,
"text": " That's more or less correct. We can define finite dimensional approximations to the Feynman integral, but as you say, taking the limit as something tends to infinity is something we don't know how to do. The trouble is if you don't take a limit as things go to infinity, you run into other problems. For instance, you might find that the"
},
{
"end_time": 1200.691,
"index": 48,
"start_time": 1185.333,
"text": " Your theory isn't invariant under the symmetries of special relativity or something like that. So you can avoid one problem by not taking a limit, but then other problems turn up instead."
},
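The finite-n approximation being discussed can be made concrete with a toy example (an editorial illustration only, not from the conversation): for a Euclidean harmonic oscillator the n-point lattice path integral is Gaussian, so it can be evaluated exactly mode by mode, and one can watch the expectation value of x squared approach the continuum answer 1/(2*omega) as n grows:

```python
import numpy as np

def lattice_x2(n, T, omega=1.0):
    """<x^2> for a harmonic oscillator from an n-point Euclidean
    lattice path integral with periodic boundary conditions.
    Being Gaussian, it reduces to a sum over Fourier modes."""
    a = T / n                      # lattice spacing
    k = np.arange(n)
    # eigenvalues of the discretised quadratic action
    lam = (2.0 / a) * (1.0 - np.cos(2.0 * np.pi * k / n)) + a * omega**2
    return float(np.sum(1.0 / lam) / n)

# the n -> infinity limit should approach the continuum answer 0.5
for n in (100, 1000, 10000):
    print(n, lattice_x2(n, T=20.0))
```

Here the limit happens to exist and is well behaved; the point in the conversation is that for realistic four-dimensional theories nobody knows how to control the analogous limit.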
{
"end_time": 1229.377,
"index": 49,
"start_time": 1202.312,
"text": " Do you have any intuition as to why it's difficult to combine general relativity with quantum mechanics? Do you see the problem being merely a mathematical one, a physical one, a lack of experimental data? Do you have an intuition? What the heck is underlying this problem? I have no idea. I mean, I thought about it a little bit and got nowhere. I'm going to have to pass on that due to lack of competence on my part."
},
{
"end_time": 1259.991,
"index": 50,
"start_time": 1230.128,
"text": " How about this? What have you heard that's most convincing while you're in the field? About the relation between quantum mechanics and general relativity. Well, the only thing that looks at all promising so far is string theory or super string theory, which as I said, unfortunately, although it's the most promising, it's still not really all that promising. I mean, as I said, it's still theoretically, it looks very nice. If you try and connect it with experiment,"
},
{
"end_time": 1279.923,
"index": 51,
"start_time": 1260.691,
"text": " Some people have said that that's a problem, because there's a whole field of people working with a supposed connection to physics, and they're getting grant money, and they're also largely in charge of what gets published and doesn't get published in the field of high energy physics, and it's unproven"
},
{
"end_time": 1306.203,
"index": 52,
"start_time": 1280.418,
"text": " and it's so theoretical, and it also conflicts with reality, because they require anti-de Sitter space or supersymmetry, which hasn't been observed. Do you see that as a problem, like a sociological problem? Well, yeah, I mean, there does seem to be a big fuss going on in physics between people who do string theory and people who don't. I sort of don't really know enough about what's going on to comment very much."
},
{
"end_time": 1336.63,
"index": 53,
"start_time": 1306.715,
"text": " Is there a similar problem happening in mathematics? Like, is there a similar divide somewhere? In mathematics, we very rarely get such divides. Almost all the time, everybody agrees on whether something is correct or not. I mean, the only exception I can think of is a few years ago, there was this possible proof of the ABC conjecture. As far as I know, people still haven't come to an agreement on whether this proof is correct or not."
},
{
"end_time": 1356.493,
"index": 54,
"start_time": 1337.312,
"text": " In math, sometimes you get a professor who's an intuitionist who says you can't have proofs by contradiction and you must construct."
},
{
"end_time": 1382.466,
"index": 55,
"start_time": 1357.056,
"text": " So is that a similar divide or is that just, well, they're being persnickety? No, not at all. Actually, the difference between intuitionism and classical mathematics is really very, very small. Roughly speaking, you can turn any mathematical argument into an intuitionistic argument just by putting a double negation in front of it."
},
{
"end_time": 1400.794,
"index": 56,
"start_time": 1383.029,
"text": " Why do you say roughly? Where's the outlying case? Where's the nuance in that?"
},
{
"end_time": 1432.944,
"index": 57,
"start_time": 1405.333,
"text": " Well, you can get into other arguments about whether power sets, whether you can quantify over the power set of an infinite set or something like that. I mean, it's not quite true that you can turn any correct mathematical arguments into an intuitionistic one by putting a double negation in front of it. But if you just confine yourself to simple statements about the integers, it's pretty much true."
},
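The double-negation remark is, in the propositional case, Glivenko's theorem; the predicate-logic version uses the Godel-Gentzen translation. A sketch of the standard statements (editorial addition, standard logic rather than anything said in the conversation):

```latex
% Glivenko (propositional logic): classical provability of $\varphi$
% is equivalent to intuitionistic provability of its double negation:
\vdash_{\mathrm{cl}} \varphi \quad\Longleftrightarrow\quad \vdash_{\mathrm{int}} \neg\neg\varphi .
% For predicate logic one uses the G\"odel--Gentzen negative
% translation $\varphi \mapsto \varphi^{N}$, defined recursively, e.g.
(\varphi \lor \psi)^{N} = \neg(\neg\varphi^{N} \land \neg\psi^{N}),
\qquad
(\exists x\,\varphi)^{N} = \neg\forall x\,\neg\varphi^{N},
% and then $\vdash_{\mathrm{cl}} \varphi \iff \vdash_{\mathrm{int}} \varphi^{N}$.
```

The caveats Borcherds raises next (quantifying over power sets and so on) are exactly where this simple picture needs qualification.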
{
"end_time": 1461.101,
"index": 58,
"start_time": 1433.626,
"text": " I have a question from Norman Wildberger, who's a professor of math. I've always challenged analysts to concretely and completely evaluate the sum of three real numbers, pi, e, and the square root of 2, and I haven't received what I feel is a satisfactory answer. Is not our collective inability to come up with a precise value for this explicit computation a dagger through the claim that we have a proper theory of arithmetic with real numbers?"
},
{
"end_time": 1488.899,
"index": 59,
"start_time": 1461.357,
"text": " I don't understand what would be an answer to that question. I mean, we can add them up perfectly well, in the sense that you can write down an explicit computer program that will give you a rational approximation to the sum with any precision you want. I mean, we can't tell you whether the number is rational or not, for example. We believe it's irrational, but we can't prove it. But there's"
},
{
"end_time": 1514.787,
"index": 60,
"start_time": 1489.667,
"text": " No problem in calculating this number to any precision you like. I'm just not sure what else you would mean by knowing what the sum is. Have you heard much from other mathematicians who say only what we can compute is real in a sense and even the real numbers themselves are thus not real? Let's stop talking about irrational numbers and just deal with rational numbers and try to build mathematics from that."
},
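The explicit computer program Borcherds alludes to is easy to exhibit (an editorial illustration; the pi() routine is adapted from the recipe in Python's decimal module documentation):

```python
from decimal import Decimal, getcontext

def pi():
    """pi to the current precision (recipe from the decimal docs)."""
    getcontext().prec += 2
    three = Decimal(3)
    lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n + na, na + 8
        d, da = d + da, da + 32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s

def e():
    """e = sum of 1/k! to the current precision."""
    getcontext().prec += 2
    k, fact, lasts, s = 0, 1, 0, Decimal(1)
    while s != lasts:
        lasts = s
        k += 1
        fact *= k
        s += Decimal(1) / fact
    getcontext().prec -= 2
    return +s

getcontext().prec = 50   # any precision you want
total = pi() + e() + Decimal(2).sqrt()
print(total)  # 7.2740880444219335...
```

Raise `getcontext().prec` and the same three lines give arbitrarily many correct digits, which is the sense in which the sum is computable even though its irrationality is unproven.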
{
"end_time": 1543.029,
"index": 61,
"start_time": 1515.162,
"text": " Yeah, sure. This is a perfectly valid branch of mathematics. It's usually called, you know, constructive mathematics or something like that. And it's very closely related to intuitionism that you mentioned earlier. And, you know, you want to eliminate all talk of things that are too infinite, like power sets of integers and so on that don't seem to make sense. And"
},
{
"end_time": 1571.596,
"index": 62,
"start_time": 1545.555,
"text": " Probably most mathematics could sort of be turned into constructive mathematics if you wanted. The trouble is doing so just becomes very clumsy. I mean it's hard enough to do mathematics if you allow yourself to use infinite sets and making things even more difficult doesn't appeal to most people. I mean I guess it's sort of like"
},
{
"end_time": 1601.596,
"index": 63,
"start_time": 1571.971,
"text": " Doing constructive mathematics is a bit like trying to climb Everest without oxygen. I mean, you can say maybe climbing Everest without oxygen is better than climbing it with oxygen, but most people have quite enough trouble climbing it with oxygen. Thank you very much. So I'd say most mathematicians don't bother being constructive because it just makes things unnecessarily difficult. But I think, again, most mathematicians would agree that if you have a constructive proof, it is in some sense better than a non-constructive one."
},
{
"end_time": 1631.544,
"index": 64,
"start_time": 1602.278,
"text": " So there's constructive mathematics and then that's in contrast to what is called classical mathematics? Probably. I mean, I don't think people usually distinguish it too much. I mean, quite often what happens is we will first find a non-constructive proof of something and then later on try and improve it to a constructive proof. Has there ever been a case where there's a theorem in, let's call it classical mechanics, sorry, classical mathematics?"
},
{
"end_time": 1662.056,
"index": 65,
"start_time": 1632.398,
"text": " And so has there ever been a case where you prove something in classical mathematics and then it turns out to be false in constructive mathematics? No. Is there the possibility that that can exist or, like you mentioned, that it's very unlikely? Unless something really bizarre happens, like the foundations of mathematics turning out to be unexpectedly inconsistent. But I think this is in relation to what I said that"
},
{
"end_time": 1687.398,
"index": 66,
"start_time": 1662.688,
"text": " Very roughly, constructive or intuitionistic mathematics is just classical mathematics, except you say things are not disprovable rather than saying they are provable. In particular, if intuitionistic mathematics isn't contradictory, then you expect classical mathematics to be non-contradictory. Well, very roughly."
},
{
"end_time": 1714.582,
"index": 67,
"start_time": 1687.534,
"text": " And vice versa, so it would go the other way around. Yeah, and anything provable in intuitionistic mathematics is more or less automatically provable in classical mathematics. Again, with some slight caveats because the definitions are slightly different. So then in the analogy with the Mount Everest, it would be that, hey, you can climb Mount Everest without oxygen, but we can still get to all the same places and all the places you can get, I can get and vice versa."
},
{
"end_time": 1743.456,
"index": 68,
"start_time": 1715.503,
"text": " Yeah, exactly. I mean, if you can get somewhere without oxygen, then you can get there with oxygen. It's just easier. Is it all right if I ask you a couple more physics questions? I know it's been a while since you've been in there, but how about this? So Feynman called renormalization a dopey process. There's been much work on renormalization since Feynman's day. But regardless, I'm curious if you agree with that. Is there something inherently incorrect about renormalization physically or mathematically?"
},
{
"end_time": 1768.831,
"index": 69,
"start_time": 1744.889,
"text": " No, mathematically it's just a large area. There are some parts of renormalization which are well understood. It's like if you're trying to focus in with a microscope on a small area: every time you double the magnification you might have to adjust the focus or something."
},
{
"end_time": 1795.93,
"index": 70,
"start_time": 1769.36,
"text": " And one version of renormalization is just doing that systematically. And then there are some cases of renormalization where we don't really know how to do this. This is the problem you run into in some quantum field theories in high dimensions: you seem to have to renormalize things repeatedly, meaning you sort of adjust the focus or something, but we don't know how to prove that you can always do that."
},
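The systematic version of "adjusting the focus" can be seen in the simplest textbook case (an editorial illustration, not an example from the conversation): decimating every other spin of the one-dimensional Ising model gives an exact recursion for the coupling, and iterating it is one renormalization step per halving of resolution:

```python
import math

def decimate(K):
    """One block-spin step for the 1D Ising model: integrating out
    every other spin yields a new coupling with tanh(K') = tanh(K)**2.
    This is the classic exactly solvable renormalization example."""
    return math.atanh(math.tanh(K) ** 2)

K = 1.0
for step in range(10):
    K = decimate(K)
    print(step, K)
# the coupling flows toward the K = 0 (disordered) fixed point
```

In one dimension every step is under complete control; the difficulty Borcherds describes is that in higher-dimensional quantum field theories nobody can prove the analogous repeated adjustment always works.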
{
"end_time": 1819.718,
"index": 71,
"start_time": 1796.647,
"text": " So apologies that I jump around so much, but you give extremely concise answers. Unlike most guests, you give an answer such that I no longer have a follow-up for. So I'm like, all right, I'll just move on to the next one then. So the next question is, why are symmetries in physics described by groups instead of a more general groupoid?"
},
{
"end_time": 1852.108,
"index": 72,
"start_time": 1824.104,
"text": " Yeah, I'm slightly baffled by this. Well, I mean, there are some things that are described by groupoids, like if you're taking the fundamental group of something, it's really a fundamental groupoid. But I guess maybe one reason why they're mostly described by groups is that a group is pretty much defined to be the collection of symmetries of something. So groupoids are"
},
{
"end_time": 1881.203,
"index": 73,
"start_time": 1852.841,
"text": " little bit more general. I mean, a groupoid would be something like the symmetries of a collection of different objects, very roughly speaking. So if you have a cube, you would just talk about the symmetries of a cube, and there are 24 or 48 symmetries. But maybe if you've got a cube and a dodecahedron, then you've got the symmetries of a dodecahedron and the symmetries of a cube, and you could think of"
},
{
"end_time": 1909.855,
"index": 74,
"start_time": 1882.108,
"text": " You know, you would think about having a groupoid, which is just a sort of union of the symmetries of a cube and the symmetries of a dodecahedron if you wanted. So you do sort of get groupoids, but in a way that's so trivial that nobody usually bothers with them. Speaking of groups, going back to E8, would it be possible with the pen and paper that you have to show again to an upper undergraduate or lower year graduate?"
},
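The cube-plus-dodecahedron groupoid can be sketched in a few lines (an editorial illustration; the permutations are toy stand-ins, not actual polyhedral symmetry groups): each morphism is tagged with the object it acts on, and composition is only defined when the objects match, which is all that separates this groupoid from a plain group:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Sym:
    """A symmetry of one particular object, tagged with that object."""
    obj: str
    perm: Tuple[int, ...]   # permutation of the object's vertex labels

def compose(f: Sym, g: Sym) -> Sym:
    # in the disjoint-union groupoid, f o g exists only within one object
    if f.obj != g.obj:
        raise ValueError("undefined composite: different objects")
    return Sym(f.obj, tuple(f.perm[i] for i in g.perm))

cube_rot = Sym("cube", (1, 2, 3, 0))               # toy 4-cycle
dodec_rot = Sym("dodecahedron", (1, 2, 3, 4, 0))   # toy 5-cycle

print(compose(cube_rot, cube_rot))   # defined: same object
try:
    compose(cube_rot, dodec_rot)     # undefined in the groupoid
except ValueError as err:
    print(err)
```

As Borcherds says, the extra bookkeeping is so light that one can usually just carry it by hand.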
{
"end_time": 1937.875,
"index": 75,
"start_time": 1910.862,
"text": " How one constructs the E8 lattice or how one constructs E8? So firstly, what's the difference between the E8 lattice and then the E8 group? Well, the E8 lattice is just an eight dimensional, it's just a lattice in eight dimensions. I mean, I could draw, I mean, do you want me to use pencil and paper and show how to actually construct that? So there's the Leech lattice. Yeah. Is that the same as the E8 lattice?"
},
{
"end_time": 1966.852,
"index": 76,
"start_time": 1938.234,
"text": " The Leech lattice is related, so that the E8 lattice lives in eight dimensions, the Leech lattice lives in 24 dimensions, and it's constructed sort of like the E8 lattice only more so. What do you mean? Well, you start off by copying the construction of the E8 lattice and then you do a little twiddly bit at the end to make it slightly more complicated. All right, I see. So it's E8 plus a bit. Yeah."
},
{
"end_time": 1996.101,
"index": 77,
"start_time": 1967.398,
"text": " Speaking on the topic of groupoids, some have wondered if there are any interesting applications of Lie groupoids and Lie algebroids, specifically with regard to differential geometry or physics. Can you speak to some applications? I'm going to pass on that, because I don't even know what a Lie algebroid is and I can't offhand think of any useful applications of Lie groupoids."
},
{
"end_time": 2023.353,
"index": 78,
"start_time": 1997.944,
"text": " I mean, the problem is groupoids aren't actually that far from groups. A groupoid is sort of like a group, but there's a little bit of extra bookkeeping, and most of the time you can just do the bookkeeping by hand if you want, so you can get away without using groupoids. What are your thoughts on this trend in math for categorification? So, for instance, Hopf algebras, topological field theory, and the whole enterprise of nLab."
},
{
"end_time": 2056.681,
"index": 79,
"start_time": 2027.5,
"text": " I'm afraid I don't actually know all that much about it. I mean the trouble is whenever I try and learn about it my head starts spinning. I mean, people doing categorification, you start off with categories, and I can cope with categories because I know what they look like, and then they start using things called 2-categories, which make me rather nervous, because"
},
{
"end_time": 2086.715,
"index": 80,
"start_time": 2056.783,
"text": " a 2-category is like a category except the collection of morphisms between two objects is also a category, and this is already getting confusing, and then you get to things called 3-categories and 4-categories, and you go all the way up to infinity categories, and I'm frankly already lost at 2-categories. All my attempts at learning the definition of an infinity category, and I've tried several times, the next day everything has just gone clean out of my mind. So"
},
{
"end_time": 2094.224,
"index": 81,
"start_time": 2087.671,
"text": " I don't know, you probably have to start learning this stuff when you're five years old or something in order to be able to do it."
},
{
"end_time": 2121.681,
"index": 82,
"start_time": 2095.333,
"text": " To you, when you say, hey, I've tried to learn infinity categories and it's not sticking or it doesn't make sense for whatever reason, what does that learning look like? Are you just referring to I'm reading books? Are you referring to I'm also talking to people who are in the field and I'm asking them this question? Like, what does it mean when you say you've tried? Mostly looking at books and reading the definition and looking at some examples. And I sort of understand the definition perfectly well for five minutes after reading it. But the next day I"
},
{
"end_time": 2147.466,
"index": 83,
"start_time": 2121.954,
"text": " Okay, so you're able to understand it, it just doesn't stay. It just doesn't stick for some reason. My mind just has a sort of aversion to these things. I see. I think the problem is I don't actually have a use for it. If I had a use for higher category theory, I'd probably be much more motivated to actually learn them."
},
{
"end_time": 2172.09,
"index": 84,
"start_time": 2148.507,
"text": " So there's an infinity category, and I assume that's just the regular cardinality of the natural number infinity. Are there other infinities of higher cardinality categories? I don't think so, although I wouldn't put it past someone to try inventing them. But all the ones I've seen infinity is really, as you say, it's just the infinity of the natural numbers."
},
{
"end_time": 2197.602,
"index": 85,
"start_time": 2172.534,
"text": " One time, I think I was watching one of your lectures, or maybe from a conversation before, and you said that I have no idea why the sporadic groups are there. And then I was wondering, like, why do the sporadic groups need an explanation? Like, what would an explanation look like? This goes back to our earlier question of the moonshine conjecture, like, what does it mean to explain? To me, like, why is it not as arbitrary as saying, I don't know why the number 7300 exists? Or why are there five platonic solids?"
},
{
"end_time": 2209.889,
"index": 86,
"start_time": 2198.234,
"text": " In part, we would like to understand them. So if you take the compact Lie groups, we have a classification of them that you mentioned earlier, you know, there's A and B and C and"
},
{
"end_time": 2237.278,
"index": 87,
"start_time": 2210.742,
"text": " and things like that. And then we've got a very simple explanation of why this list turns up, that they more or less correspond to finite reflection groups, and we know how to classify finite reflection groups and understand them very easily. And this allows us to give a single uniform construction of all the compact Lie groups."
},
{
"end_time": 2265.896,
"index": 88,
"start_time": 2238.183,
"text": " But there's nothing like that for the sporadic groups. I mean, we can sort of copy what we did for compact groups, and from that we find most of the finite simple groups. We find all the ones of Lie type, but then there are these 26 left over which just don't fit into this pattern, and it's very frustrating, because what we would like is some uniform way to construct all the finite simple groups."
},
{
"end_time": 2291.476,
"index": 89,
"start_time": 2267.193,
"text": " And what really worries me is that maybe we already have the correct way to understand them, which is to go through this 20,000 page proof of the classification, and we're simply too stupid to understand why this is the correct explanation. Is there something special about lower dimensions, like one, two, three, and four in particular? Yeah."
},
{
"end_time": 2320.077,
"index": 90,
"start_time": 2292.346,
"text": " Everything goes wrong in lower dimensions. So in particular, what happens is in low dimensions two groups that ought to be different turn out to be really the same group. So for instance, if you take something called the projective special linear group, in two dimensions you can take that over the field with four elements or the field with five elements, and these are usually totally different groups,"
},
{
"end_time": 2348.183,
"index": 91,
"start_time": 2320.794,
"text": " but for the special linear group in two dimensions these happen to be the same, they're both the simple group of order 60, and in low dimensions the whole area of finite simple groups is littered with these accidental coincidences where two seemingly different groups turn out to be the same, and this seems to be connected with the sporadic groups because"
},
{
"end_time": 2358.456,
"index": 92,
"start_time": 2349.172,
"text": " sporadic groups are often associated with two small simple groups unexpectedly turning out to be isomorphic to each other."
},
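The order-60 coincidence is easy to verify numerically (an editorial illustration; the order formula for PSL(2, q) is standard, and the brute-force check over the 5-element field is an added cross-check, not something done in the conversation):

```python
from itertools import product
from math import gcd

def psl2_order(q):
    """|PSL(2, q)| = q(q^2 - 1) / gcd(2, q - 1), the standard formula."""
    return q * (q * q - 1) // gcd(2, q - 1)

# The accidental coincidence: PSL(2,4) and PSL(2,5) both have order 60
# (both are isomorphic to the alternating group A5).
print(psl2_order(4), psl2_order(5))   # 60 60

# Cross-check |SL(2,5)| by brute force over the field with 5 elements:
sl25 = sum(1 for a, b, c, d in product(range(5), repeat=4)
           if (a * d - b * c) % 5 == 1)
print(sl25 // 2)  # quotient by the centre {I, -I} gives 60
```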
{
"end_time": 2380.179,
"index": 93,
"start_time": 2358.865,
"text": " Is there more than just in group theory that there's something special about the lower dimensions? So for instance, with the Poincare conjecture, it was proved first for the higher dimensions, then dimension four was proved, and then three turned out to be exceptionally difficult. Is this a trend where the lower dimensions seem to be more difficult to prove some famous theorem or some relationship?"
},
{
"end_time": 2411.459,
"index": 94,
"start_time": 2382.176,
"text": " Well, it's certainly true, as you say, that in many other mathematical subjects, in the small dimensions up to about four or five, there are all sorts of weird things going on that don't happen in higher dimensions. And the case you mentioned is very unusual in that the low dimensional case is actually harder than the high dimensional cases. Rather obviously, it's nearly always the high dimensional cases that are harder. The Poincare conjecture is this weird exception."
},
{
"end_time": 2436.425,
"index": 95,
"start_time": 2412.108,
"text": " Can you list some examples where the harder case is the larger dimensions? Well, almost anything other than the Poincare conjecture. So, for instance, in algebraic geometry, if you want to classify varieties in n dimensions, we sort of understand curves, the one-dimensional things, reasonably well, and surfaces we sort of know a fair amount about"
},
{
"end_time": 2466.766,
"index": 96,
"start_time": 2437.073,
"text": " Okay, so then that's just an outlier, the Poincare Conjecture, because I was thinking if there are more coincidences in the lower dimensions, then that should give you more play-doh to play with in your proof. It should be easier in the lower dimensions. Yeah, most of the time it is easier."
},
{
"end_time": 2494.548,
"index": 97,
"start_time": 2467.21,
"text": " In geometry, there's this weird problem in low dimensions in that there isn't enough room to untie complications. I mean, for instance, if you've got a knot in three dimensions, it's just got a complicated knot. If you move to four dimensions, you can undo it by moving the threads into four dimensions. And this is partly why"
},
{
"end_time": 2502.244,
"index": 98,
"start_time": 2495.111,
"text": " Three and four dimensions are particularly difficult. You don't have enough room to..."
},
{
"end_time": 2530.35,
"index": 99,
"start_time": 2503.268,
"text": " But I guess in one and two dimensions, there's not enough room to have tangles and knots. And in high dimensions, five dimensions and above, there's so much room that if you've got a tangle, you can just undo it. But in three and four dimensions, they're particularly difficult because there's enough room to have tangles, but not enough room to undo the tangles once you've got them. We're going to jump a bit around here. This comes from the audience. Which results of growth do you find particularly astonishing?"
},
{
"end_time": 2560.52,
"index": 100,
"start_time": 2532.346,
"text": " Well, the funny thing about Grothendieck is he mostly didn't prove single theorems. I mean, he did sometimes. The Grothendieck-Riemann-Roch theorem is a pretty stunning theorem by anybody's standards. But what he did most was sort of develop these huge theories rather than prove individual hard theorems. I mean, he said this himself explicitly that"
},
{
"end_time": 2584.753,
"index": 101,
"start_time": 2560.896,
"text": " You know, you kind of crack hard problems just by... He had this metaphor of a rising sea, where the islands just sort of slowly disappear as the sea rises. Grothendieck said that? He said something like that, I think. I can't remember exactly what. So he came up with all these fantastic tools like étale cohomology and"
},
{
"end_time": 2612.483,
"index": 102,
"start_time": 2585.486,
"text": " and schemes and so on. For each of these you know he wrote thousands and thousands of pages just working out all their basic properties and somehow when you finish doing that you sometimes just suddenly find that various problems have been solved. Is he one of the greatest mathematicians? Yes. What makes a great mathematician?"
},
{
"end_time": 2642.108,
"index": 103,
"start_time": 2613.063,
"text": " It's hard to give a precise definition of that. How about to you? Forget about, it's not an objective statement, just to yourself. Well, the ultimate thing is just proving a very hard problem that everyone's been trying to prove for years, like Perelman proving the Poincare conjecture or Wiles proving Fermat's Last Theorem. So that's the really clear-cut case where, you know, there are these problems that have been open for"
},
{
"end_time": 2672.756,
"index": 104,
"start_time": 2642.807,
"text": " decades or centuries that everyone's trying to prove, and, you know, finally proving them is a good indication. But, as I was saying, Grothendieck is a little bit different, because what he was mainly doing was developing the tools used for solving these problems. So, Deligne's proof of the Riemann hypothesis for finite fields and Wiles's work on Fermat's last theorem were both"
},
{
"end_time": 2699.428,
"index": 105,
"start_time": 2673.404,
"text": " They made very heavy use of all the ideas that Grothendieck introduced. So you mentioned Perelman. If you were able to speak to Perelman, what question would you ask him? Or a set of questions? Well, just the problem is, I mean, he's working in"
},
{
"end_time": 2729.343,
"index": 106,
"start_time": 2700.026,
"text": " low-dimensional differential geometry, which I simply don't really know very much about, so I wouldn't waste his time by asking him questions. I'd have to learn a lot of differential geometry before I got to the point at which I could ask him a sensible question. What question would you ask Grothendieck? Again, I sort of don't feel worthy of asking him questions. He's just"
},
{
"end_time": 2759.787,
"index": 107,
"start_time": 2729.821,
"text": " To be honest, Professor, I don't feel worthy of asking you questions. Well, I don't know, growth and dick is. I mean, I. And the trouble is, any question I ask growth and dick is probably already answered somewhere in his writings. So really, I should be read. I should learn more about his writings before asking technical questions about it. OK, then how about this?"
},
{
"end_time": 2786.271,
"index": 108,
"start_time": 2760.35,
"text": " Which mathematician that you cannot speak to, for whatever reason, they speak a different language or they've gone missing or they're no longer with us, what mathematician would you like to ask a question to and what question would that be? Well, I think the same answer as before. I mean, I don't really usually have particular questions I want to ask mathematicians about."
},
{
"end_time": 2816.152,
"index": 109,
"start_time": 2786.766,
"text": " The trouble is when you're learning the mathematics that somebody like Grothendieck is doing, you don't learn Grothendieck's mathematics by asking a question or two or three questions or something. You have to spend years reading his articles and papers and so on. Single questions simply don't make very much difference. You mentioned the Riemann-Roch theorem earlier. Can you explain what that is to the audience and then what the Grothendieck-Riemann-Roch theorem is?"
},
{
"end_time": 2841.852,
"index": 110,
"start_time": 2819.718,
"text": " Well, yeah, so the Riemann-Roch theorem would be, well, the simplest case of it would be suppose you've got the complex numbers and you want to know how many rational functions can you find with zeros at certain points and poles at various other points."
},
{
"end_time": 2871.51,
"index": 111,
"start_time": 2842.927,
"text": " And that particular question is rather trivial to solve. I mean, since rational functions and the complex plane are so easy and well understood. Then you can ask the same question for a more complicated algebraic curve. So you can say suppose I've got an algebraic curve like a cubic curve and I want to know how many functions have zeros at certain points and maybe poles at other points. And the Riemann-Roch theorem sort of answers that question. Well,"
},
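The genus-zero case described above ("rather trivial to solve" on the complex plane) can be made concrete: given finite lists of desired zeros and poles, a rational function with exactly those is just a quotient of linear factors. A minimal sketch, with function names that are my own and not from the conversation:

```python
from fractions import Fraction

def rational_fn(zeros, poles):
    # On the complex plane (the genus-zero case), prescribing zeros and
    # poles is trivial: take the product of (x - z) over the zeros
    # divided by the product of (x - p) over the poles.
    def f(x):
        num = 1
        for z in zeros:
            num *= x - z
        den = 1
        for p in poles:
            den *= x - p
        return Fraction(num, den)
    return f

f = rational_fn(zeros=[1, 2], poles=[3])
assert f(1) == 0 and f(2) == 0  # the prescribed zeros
# at x = 3 the denominator vanishes, so f has its prescribed pole there
```

On a curve of higher genus no such easy recipe exists, which is the gap the Riemann-Roch theorem quantifies.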
{
"end_time": 2893.916,
"index": 112,
"start_time": 2872.875,
"text": " Yeah, it gives you about as good an answer to that question as you want. And the version Grothendicken-Hitzebrook solved was generalizing this from one-dimensional curves to higher-dimensional algebraic varieties."
},
{
"end_time": 2921.903,
"index": 113,
"start_time": 2896.169,
"text": " So again, you can think of it as being very roughly, if you've got a high dimensional algebraic variety, you want to know how many functions of zeros at certain places and poles at other places. And the Grothendieck-Riemann rock theorem will give you information about this. An algebraic variety is a generalization of a surface or a manifold? Pretty much, yes. You write down some equations and"
},
{
"end_time": 2941.749,
"index": 114,
"start_time": 2922.415,
"text": " The set of common zeros of these equations is just an algebraic variety. So the simplest example, you would just write down one equation, say x to the n plus y to the n equals z to the n, and that will give you a two dimensional variety in a three dimensional space."
},
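The Fermat equation x^n + y^n = z^n mentioned above is easy to play with numerically. A hedged sketch (the helper names are mine, not from the conversation): for n = 2 the variety carries the classical parametrization by Pythagorean triples, illustrating the idea that a variety is just the common zero set of its defining equations.

```python
def on_fermat_variety(n, x, y, z):
    # The variety is the set of points where the defining polynomial vanishes.
    return x**n + y**n - z**n == 0

def pythagorean_triple(m, n):
    # Classical parametrization of integer points on x^2 + y^2 = z^2.
    return (m * m - n * n, 2 * m * n, m * m + n * n)

# Every parametrized point lies on the n = 2 Fermat variety:
assert all(on_fermat_variety(2, *pythagorean_triple(m, n))
           for m in range(2, 20) for n in range(1, m))
```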
{
"end_time": 2971.459,
"index": 115,
"start_time": 2942.739,
"text": " When learning differential geometry, there's this emphasis on intrinsic geometry, the intrinsic approach, where you don't embed it into a higher dimensional space and write an equation for it. So what I'm hearing about algebraic varieties is you write the equation. Does that mean that there's something extrinsic or you choose a basis? No, that was the old way of doing algebraic geometry in the 19th century, where you would embed the variety into Euclidean space or something like that."
},
{
"end_time": 3001.067,
"index": 116,
"start_time": 2971.8,
"text": " Nowadays, you don't do that. As you say, just as in differential geometry, you try and do intrinsic geometry where you study a variety without embedding it into space. That's just a little bit more abstract and harder to think about. Speaking about what's abstract and difficult to think about, another quote from you the last time we spoke was, Ed Whitten is terrifying. So I imagine you mean in terms of his ability or what he knows or his work ethic or a combination of it. But I want to ask you, what did you mean"
},
{
"end_time": 3030.111,
"index": 117,
"start_time": 3001.698,
"text": " If you can recall, like, why is Ed Witten a terrifying physicist slash mathematician? Well, he just seems to produce, I mean, he's just so much more productive than me, you know, he produces this endless stream of papers, each of which is 50 or 100 pages long. And many of them have incredible new ideas. And then there was the Slyberg-Witten invariance, for example, where he"
},
{
"end_time": 3059.155,
"index": 118,
"start_time": 3031.22,
"text": " used some sort of weird black magic from physics to come up with these new mathematical invariants for manifolds that mathematicians hadn't noticed and that wiped out a lot of hard mathematical problems. And that paper on the side by Witten and Vance was a huge advance in mathematics and it was just a sort of tiny incidental spin-off from something"
},
{
"end_time": 3084.599,
"index": 119,
"start_time": 3059.821,
"text": " something else he was working on. And this is something that he's repeated multiple times so it's not just luck? He's repeated it multiple times. He came up with the other Witten Invents with three manifolds and he's done the Cyber Witten Invents with four manifolds and you know he's not even working in that area. He's working on physics and a sort of"
},
{
"end_time": 3096.51,
"index": 120,
"start_time": 3085.009,
"text": " incidental minor spin-offs without really trying his wiping out these major problems in mathematics. Have you ever had the opportunity to work with Whitten? No."
},
{
"end_time": 3121.254,
"index": 121,
"start_time": 3097.415,
"text": " There's some stories about John von Neumann where people were working on some equation or some problem, then he comes in the room just overhearing and he says, oh, it's this. And then they're thinking, how the heck did you solve? They said, oh, I either solved it months ago or years ago, or I just came up with a solution now. And I'm curious if you had some similar story about a mathematician could be Witten, could be someone else. I had that once happened with Simon Norton."
},
{
"end_time": 3148.643,
"index": 122,
"start_time": 3121.715,
"text": " I was discussing some problem about finite fields with somebody else in the common room. And, you know, we'd spent 10 minutes or so and weren't really quite sure how to do it. And Simon Norton happened to be walking past and I asked him as he was walking past and basically without breaking step, he told us the answer to it and carried on out of the room. That was a kind of a bit humiliating. And you were a professor at this time or you were undergrads together?"
},
{
"end_time": 3178.746,
"index": 123,
"start_time": 3149.07,
"text": " Now, speaking of great mathematicians, what is it about John Conway that made him so great? I mean, one of the things that really impressed me about him is he seemed to make really major contributions and a whole lot of completely unrelated fields. I mean, he worked"
},
{
"end_time": 3201.032,
"index": 124,
"start_time": 3178.968,
"text": " He found these new sporadic groups which you know only a handful of people have ever found one. He did a lot of work in knop theory enough to get Conway polynomial was named after him. He more or less invented a large area of game theory related to surreal numbers"
},
{
"end_time": 3231.596,
"index": 125,
"start_time": 3201.766,
"text": " And there were also several other things he worked on that I don't actually know anything about. I mean, as this vague idea, he also worked on finite automata or something like that. So he was working in several different, almost completely unrelated fields. Yeah, he even did things in set theory. So there's this weird problem related to the action of choice of finite sets that he applied the classification of finite groups to, for example."
},
{
"end_time": 3261.8,
"index": 126,
"start_time": 3232.927,
"text": " So it's really quite unusual for someone to be working in, it must be almost half a dozen unrelated areas of mathematics. Is that something that you envy or you just respect or you admire? Like, do you wish you could do that or you don't care, you're happy with what you're doing, you just are in awe that other people can do that? Well, it's just kind of awesome that someone can do, you know, can excel at several different, different things."
},
{
"end_time": 3291.954,
"index": 127,
"start_time": 3263.131,
"text": " There's something called geometric Langlands, and I've heard this expressed as a unifying theory of math in the same way that there's an attempt to find a unified theory of physics. And then there's also local Langlands. So firstly, I want to know if you can explain to the audience, please, what is geometric Langlands and its relationship to a potential unified theory of math? The answer to that is very simple. No, I can't explain it. The Langlands program is something you can spend your entire life trying to figure out"
},
{
"end_time": 3318.882,
"index": 128,
"start_time": 3292.654,
"text": " what it is. And I spent months or years trying to figure out what's going on. I would say I still have no idea what is going on. But very roughly speaking, it says that there's a very close connection between modular forms and the Galois group of the rational numbers."
},
{
"end_time": 3346.664,
"index": 129,
"start_time": 3319.394,
"text": " So this is a bit like moonshine. Moonshine said there was this bizarre connection between finite simple groups and modular forms. Langlands program says there is this bizarre connection between Galois groups and modular forms. And like moonshine, there doesn't seem to be any connection at all between them. And back in the 19th century, people sort of discovered a connection between"
},
{
"end_time": 3372.978,
"index": 130,
"start_time": 3347.159,
"text": " Galois groups in one-dimensional modular forms, except they didn't call them one-dimensional modular forms. That's called class field theory, and that was already considered to be the jewel in the crown of number theory. And Langlands noticed that this was just the one-dimensional case of an amazing correspondence between"
},
{
"end_time": 3402.568,
"index": 131,
"start_time": 3373.66,
"text": " the Galois group of the rational numbers. You would think of this as being the symmetries of all algebraic numbers and modular forms. And this is really bizarre. I mean, as late as the 1960s, I think that there's a story that someone mentioned the Schumer and Tanayama conjectures to André Wey, which is"
},
{
"end_time": 3429.94,
"index": 132,
"start_time": 3402.892,
"text": " very roughly a sort of special case of the Langlands conjecture. And Veille supposedly just sort of dismissed this as, you know, there obviously wasn't any serious connection between elliptic curves and modular forms. So according to the story, even someone like André Veille, who was an expert in this area, didn't notice there was anything going on. So the Langlands conjecture says that you can sort of describe"
},
{
"end_time": 3459.428,
"index": 133,
"start_time": 3430.247,
"text": " The Langlands program is like the moonshine conjecture in that it's"
},
{
"end_time": 3487.79,
"index": 134,
"start_time": 3459.821,
"text": " vague. It's saying there's a relationship between these two. Why? But it's not like you can solve the Langlands program. It's not like a theorem that you prove. It's more an area of math that you work on. It's a bit of both. I mean, one of the problems with the Langlands program is actually even finding out exactly what it says is rather hard. So it says there's a relation between representations of the Galois group and"
},
{
"end_time": 3511.203,
"index": 135,
"start_time": 3488.456,
"text": " You mentioned class field theory. Do you mind explaining what that is?"
},
{
"end_time": 3536.92,
"index": 136,
"start_time": 3511.834,
"text": " Yeah, you can think of that as being the one-dimensional easy case of the Langlands program. So what it does is it says that one-dimensional representations of the Galois group of the rational numbers, which is essentially just telling you what the abelian bit of the"
},
{
"end_time": 3566.254,
"index": 137,
"start_time": 3537.261,
"text": " of the rational numbers is, and it says roughly these correspond to one-dimensional modular forms. Well, one-dimensional modular forms are kind of also easy to describe. These you can think of as being, what would be the easy way to describe it, have been more or less, you know, representations of the integers modulo n under multiplication or something like that."
},
{
"end_time": 3595.401,
"index": 138,
"start_time": 3567.619,
"text": " So the easiest special case of it would just be saying that any abelian extension of the rational numbers can be described in terms of taking roots of unity. This is something called the Kronecker-Weber theorem, which is already a rather hard theorem. What's the relationship between geometric Langlands and the local Langlands program?"
},
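The Kronecker-Weber statement that abelian extensions come from roots of unity has a tiny numerically checkable instance: for a prime p ≡ 1 (mod 4), the quadratic Gauss sum built out of p-th roots of unity equals √p, so Q(√p) sits inside the cyclotomic field Q(ζ_p). A sketch of the check (my illustration, not something from the interview):

```python
import cmath
import math

def quadratic_gauss_sum(p):
    # Sum of exp(2*pi*i*k^2/p) over k = 0, ..., p-1, built purely out of
    # p-th roots of unity.
    return sum(cmath.exp(2j * math.pi * k * k / p) for k in range(p))

# For p = 5 (where p ≡ 1 mod 4) the sum is the real number sqrt(5),
# exhibiting sqrt(5) inside the cyclotomic field Q(zeta_5).
g = quadratic_gauss_sum(5)
assert abs(g - math.sqrt(5)) < 1e-9
```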
{
"end_time": 3615.52,
"index": 139,
"start_time": 3595.657,
"text": " Are there several others? I've only heard of geometric and then local. Are there more? Local Langlands conjecture is a special case where instead of working over the rational numbers, you work over a local field. The simplest example of local field is just the real numbers."
},
{
"end_time": 3645.93,
"index": 140,
"start_time": 3616.476,
"text": " You may think the real numbers are more complicated than the rational numbers, but if you're doing arithmetic, the real numbers are a whole lot easier than the rational numbers to deal with. So the Langlands conjecture for the real numbers is more or less the classification of representations of all the groups. So this is the easiest case of the Langlands program and it's already incredibly difficult in classifying all infinite dimensional representations of all"
},
{
"end_time": 3675.401,
"index": 141,
"start_time": 3646.561,
"text": " semi-simple e-groups is really hard and there are also other things called local fields like piadic fields and so on which again are much easier than the rational numbers to deal with and there's a sort of analogue of the Langlands conjectures for those called the local Langlands. Geometric Langlands I don't really know much about"
},
{
"end_time": 3705.776,
"index": 142,
"start_time": 3676.186,
"text": " My impression is this is something like doing Langlands conjecture except instead of using the rational numbers you might use something like the field of functions on a complex curve or something like that and you try and find an analog of the Langlands conjectures for that. Do you know what Higgs bundles are? No. I've vaguely heard them turning off in"
},
{
"end_time": 3736.049,
"index": 143,
"start_time": 3706.493,
"text": " Lanyon's conjectures over function fields, but I have no idea what they are or how they're used. There's a question about class field theory and what the best way for a beginner to learn it would be. They ask is it best through local periodic methods or direct methods? Well, class field theory over local fields is indeed easier. If you want to learn class field theory,"
},
{
"end_time": 3764.172,
"index": 144,
"start_time": 3736.8,
"text": " There's a book by Castles and Froehlich, which is as good a place as any to start. And that's got a very nice article by Serres on class field theory over local fields, which is a very short and clear account of it. You mentioned that you love the writings of Serres. Yeah, he's not only a great mathematician, but he's a great expositor as well. Have you written a book, by the way, a textbook?"
},
{
"end_time": 3791.118,
"index": 145,
"start_time": 3765.606,
"text": " Too lazy, no. Is there something that you wish other people who write textbooks would adopt from Ser? Well, if they could adopt his, his somehow very clear writing style. I think he puts in all details, even if they seem to be unnecessary, for example. So"
},
{
"end_time": 3816.476,
"index": 146,
"start_time": 3791.852,
"text": " All too often authors will miss out details that they think are obvious, and maybe are obvious if you're an expert, but kind of throw you if you're a beginner. So there's a tendency to put in details even if they seem a little bit obvious and boring, which is very useful if they turn out to be not as obvious and boring as the author thinks."
},
{
"end_time": 3846.425,
"index": 147,
"start_time": 3816.852,
"text": " So you prefer over explain to me because I can always skip pages if I don't... You can always, yeah, if you over explain the reader can always skip it. What is the J function? Probably the elliptic modular function. The reason I'm asking is that there's a question about the connection between the monster group J function and vertex operator algebras. Like how does one make that connection in string theory? And they're asking about your work so I want to just"
},
{
"end_time": 3872.261,
"index": 148,
"start_time": 3846.681,
"text": " Take some time to explain what is the J-function. You've already explained what the monster group is and then I'm going to ask about vertex operator algebra. Then I'm going to ask what the heck is their relationship to string theory? How did you find that? So J-function. Well, J-function is another name for the elliptic modular function. You can think of it as roughly being the simplest function on the upper half plane that's invariant under a certain group. And what are vertex operator algebras?"
},
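The q-expansion of the J function can be computed exactly from the standard identity j = E4^3 / Δ; the resulting coefficient 196884 of q, one more than 196883, the dimension of the smallest nontrivial monster representation, is the numerical coincidence that started moonshine. A minimal integer-arithmetic sketch (the formulas are standard; the code itself is my illustration):

```python
N = 6  # number of q-expansion coefficients to compute

def sigma3(n):
    # Sum of the cubes of the divisors of n.
    return sum(d**3 for d in range(1, n + 1) if n % d == 0)

def mul(a, b):
    # Product of two power series truncated at q^N.
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# Eisenstein series E4 = 1 + 240 * sum_{n>=1} sigma3(n) q^n.
E4 = [1] + [240 * sigma3(n) for n in range(1, N)]
E4cubed = mul(mul(E4, E4), E4)

# Delta = q * prod_{n>=1} (1 - q^n)^24; compute the eta-product factor.
eta24 = [1] + [0] * (N - 1)
for n in range(1, N):
    factor = [0] * N
    factor[0], factor[n] = 1, -1
    for _ in range(24):
        eta24 = mul(eta24, factor)

# Invert the eta-product series so dividing by Delta becomes a product.
inv = [0] * N
inv[0] = 1
for k in range(1, N):
    inv[k] = -sum(eta24[i] * inv[k - i] for i in range(1, k + 1))

# j = q^{-1} * E4^3 / eta-product: the list starts at the q^{-1} coefficient.
j = mul(E4cubed, inv)
# j = q^{-1} + 744 + 196884 q + 21493760 q^2 + ...
```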
{
"end_time": 3899.838,
"index": 149,
"start_time": 3872.807,
"text": " Vertex operator algebra is this rather complicated algebraic structure, so most algebraic structures like groups and rings have one or two operations, so if you're talking about a ring it's got addition and multiplication and these satisfy some identities. Vertex operator algebra has an infinite number of operators and these satisfy some rather complicated identities, so it's sort of an"
},
{
"end_time": 3927.295,
"index": 150,
"start_time": 3900.367,
"text": " What precipitated you to find a connection between the monster group J functions and vertex operator algebras in string theory? How did that come about?"
},
{
"end_time": 3954.394,
"index": 151,
"start_time": 3930.316,
"text": " Long complicated roundabout story. I think the connection to string theory is probably really due to Igor Frenkel. So Igor Frenkel had this rather astonishing discovery that you could use string theory to"
},
{
"end_time": 3984.428,
"index": 152,
"start_time": 3955.026,
"text": " produce upper bounds on a certain Lie algebra relation with the Liech lattice. And I came across some notes describing Frenkel's ideas. And that's how I learned about the connection to string theory. So the connection to string theory was found by Frenkel, not by me. And I spent some months kind of fiddling around with Frenkel's ideas, trying to figure out what is going on."
},
{
"end_time": 4013.422,
"index": 153,
"start_time": 3984.991,
"text": " You know, doing hundreds and hundreds of pages of calculation and eventually noticed there seemed to be these operators satisfying various identities and that's essentially what a vertex algebra is. It's something with operators satisfying the identities that I was messing around with. Is that generally how you get a handle on a new piece of mathematics is you perform hundreds and hundreds of calculations with it or is something different?"
},
{
"end_time": 4039.531,
"index": 154,
"start_time": 4013.78,
"text": " But that's often how people first find something new. You spend weeks or months or years just doing a lot of trial and error, trying to figure out what's going on. And granted, you notice there are various patterns appearing. Frankel is someone that you mentioned plenty in this talk, but also in your papers as references. So can you tell the audience the significance of Frankel's work?"
},
{
"end_time": 4060.452,
"index": 155,
"start_time": 4040.128,
"text": " Well, Franklin is again one of these people who've had lots of ideas in almost completely unrelated areas, and he's had a lot of extraordinary ideas, and I think he quite often doesn't seem to get the credit for them he should have done. I mean, I noticed, for example,"
},
{
"end_time": 4090.811,
"index": 156,
"start_time": 4062.039,
"text": " There are these things called basic hypergeometric functions which satisfy some terrifyingly complicated identities. What are they called again? Sorry, repeat that. I think basic hypergeometric functions. Okay. The word basic is kind of misleading. They're not basic at all. They're horrifyingly complicated. And Igor Frenkel came up with this wonderful method of explaining these gruesome identities in terms of"
},
{
"end_time": 4119.548,
"index": 157,
"start_time": 4091.34,
"text": " some infinite dimensional algebras and this is just one of several extraordinary ideas he's come up with that almost nobody seems to know about. I think the problem is he comes up with ideas that are so complicated that people just don't understand them and so they don't get well known. But the relation of the monster to"
},
{
"end_time": 4149.531,
"index": 158,
"start_time": 4120.06,
"text": " True to the no ghost theorem and string theory originates with an idea of Igor Frenkel. He was the one who noticed that the no ghost theorem and string theory can be used to prove things in mathematics about the algebras. And what is the no ghost theorem? The no ghost theorem says very roughly that 26 is a critical dimension for string theory."
},
{
"end_time": 4178.609,
"index": 159,
"start_time": 4149.838,
"text": " So if you're writing down the equations of string theory, you can write them down in any number of dimensions. But in 26 dimensions, something very weird happens. So in quantum mechanics, you have something called a Hilbert space, and the lengths of vectors in Hilbert space have to be positive real numbers, otherwise it doesn't make sense."
},
{
"end_time": 4205.555,
"index": 160,
"start_time": 4179.343,
"text": " And if you try and construct this Hilbert space in string theory, and if you go above 26 dimensions, then these vectors no longer have lengths that are real numbers, and it doesn't make sense, and you can't really do quantum mechanics. And the No-Ghost theorem tells you that in 26 dimensions and below, all the vectors have lengths that are real numbers, so you can do quantum mechanics."
},
{
"end_time": 4223.131,
"index": 161,
"start_time": 4206.92,
"text": " So,"
},
{
"end_time": 4241.101,
"index": 162,
"start_time": 4223.899,
"text": " It turns out to be exactly right for dealing with the monster because the monstrous relation of the leech lattice, the leech lattice is 24 dimensions and 24 dimensions is very close to 26 dimensions and this turns out to be critical in explaining moonshine."
},
{
"end_time": 4268.251,
"index": 163,
"start_time": 4241.852,
"text": " What happens with those other two dimensions, the 26 minus 24? Those other two dimensions, you have to stick on a little two-dimensional. In order for the no-ghost theorem to work, you need to stick on a little two-dimensional space-time to something. If you want to fit around with the no-ghost theorem and the leach lattice, you need to stick on a little two-dimensional space-time to the leach lattice"
},
{
"end_time": 4298.626,
"index": 164,
"start_time": 4268.66,
"text": " And that gives you 26 dimensions. You mentioned that above 26 dimensions, you're no longer dealing with real numbers as the norm of Hilbert space. Yeah. So what are you dealing with? Complex numbers or octonions or something else? No, no, no. You can construct something that's sort of like a Hilbert space. The only problem is that the vectors have negative norm. In other words, the inner product of a vector with itself might be negative, which doesn't make sense in Euclidean space."
},
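The negative-norm phenomenon described above is exactly what happens with a Lorentzian inner product, where one coordinate enters with a minus sign. A toy sketch (my own example, not from the conversation) in three dimensions:

```python
def lorentz_norm(v):
    # Inner product of v with itself in Lorentzian signature:
    # the last coordinate (the 'time' direction) carries a minus sign.
    return sum(x * x for x in v[:-1]) - v[-1] ** 2

# Unlike a Euclidean norm, this can be positive, negative, or zero:
assert lorentz_norm((3, 4, 0)) == 25   # spacelike vector
assert lorentz_norm((0, 0, 2)) == -4   # timelike: negative 'length squared'
assert lorentz_norm((1, 0, 1)) == 0    # lightlike (null) vector
```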
{
"end_time": 4324.821,
"index": 165,
"start_time": 4299.77,
"text": " Have you delved much into quaternions, actually in particular octonionic models of physics? I've looked very briefly at them for about five minutes and kind of gave up. Lots of people seem to get excited about them and they never really seem to lead anywhere."
},
{
"end_time": 4338.814,
"index": 166,
"start_time": 4325.299,
"text": " So they don't lead anywhere meaning like what do you mean that they don't produce a prediction or what? Well, as far as I know, no one has ever really managed to do anything very convincing relating octonians to physics. I mean,"
},
{
"end_time": 4369.104,
"index": 167,
"start_time": 4339.497,
"text": " Every five years someone comes up with a new theory of everything involving Octonians and they all seem to just fizzle out. Basically what I'm getting at is that I want to know what is meant by that it doesn't amount to anything. So it doesn't amount to anything in that it doesn't produce a specific prediction or it's just the field gets excited initially and then they leave it alone because they feel for whatever reasons, maybe sociologically, I don't know. So what is meant exactly that it's not interesting?"
},
{
"end_time": 4393.848,
"index": 168,
"start_time": 4370.367,
"text": " Well, as far as I know, none of them have actually worked yet. I mean, no one has actually managed to produce, as you say, no one has managed to produce a physical prediction that has been tested using an E8 model, sorry, using a model using the octurnians. There's also loop quantum gravity and string theory, which also don't have"
},
{
"end_time": 4421.903,
"index": 169,
"start_time": 4394.138,
"text": " predictions, but then what's different about them may be that they produce interesting mathematics and interesting coincidences and so forth, whereas with octonians there's not that. Well, yeah, I mean, octonians is a perfectly good piece of mathematics that does, you can do some things with octonians like construct the E8 and the algebra with it. It's just, my impression is it never really seems to lead to anything"
},
{
"end_time": 4444.241,
"index": 170,
"start_time": 4422.312,
"text": " really new and unexpected. String theory, by the way, does produce tested predictions. The only problem is these predictions aren't physical predictions, they're mathematical predictions. So it will produce, you know, it will produce predictions about things like numbers of curves on varieties and so on, and you can go off and test these"
},
{
"end_time": 4469.002,
"index": 171,
"start_time": 4445.452,
"text": " There's the ADS-CFT correspondence. Now, is that a similar ill-defined relationship like moonshine conjecture is not precisely defined, it's just saying there exists a relationship or is it extremely specific ADS-CFT correspondence?"
},
{
"end_time": 4480.964,
"index": 172,
"start_time": 4470.213,
"text": " I have no idea. This correspondence is something that is on my list of things I really ought to find out about but haven't got round to yet, so I'm going to plead ignorance on that one."
},
{
"end_time": 4500.845,
"index": 173,
"start_time": 4481.561,
"text": " Well, my next question was going to be about the role vertex operator algebra may have to play in the ADS CFD correspondence. So I will cross that off my list. Again, I will pass on that due to complete ignorance. I think the first time we spoke, I asked you, how is it that you spend your day? I'm going to re ask you that same question."
},
{
"end_time": 4524.155,
"index": 174,
"start_time": 4501.254,
"text": " Generally speaking, how is it that you spend your day? You wake up and then you do so-and-so. Is it the same each day and you go to bed at the same time? It's very interesting. It's the same every day. Every morning I realize what was wrong with what I did yesterday and every afternoon I try and I come up with a new solution to the problem. So you work pretty much each day? Yeah."
},
{
"end_time": 4554.718,
"index": 175,
"start_time": 4525.145,
"text": " Most of the time just finding out what's wrong with the previous ideas I had. You just keep on coming up with ideas and 99% of them don't work and every now and then you make a little bit of progress. You wake up with the realization that your ideas from the previous day had holes in them or you then in the morning working on them you realize that? All too often I realize what's wrong with them when I wake up. Somehow while you're asleep your mind seems to"
},
{
"end_time": 4574.428,
"index": 176,
"start_time": 4555.316,
"text": " be subconsciously picking up the errors you've made. And generally the errors that you make are of what kind? I don't think that, I mean... So here's an example, like a calculation error, you multiply 12 times 13 incorrectly."
},
{
"end_time": 4601.118,
"index": 177,
"start_time": 4575.674,
"text": " It wouldn't be calculation errors, it's sort of you optimistically guess what's going on and think, ah yes, if everything worked out like this then I would be able to solve my problem like that and you get very excited about it because everything seems to be fitting together and then when you go and check the details you find out that your optimistic assumptions do not actually hold and you have to try something else."
},
{
"end_time": 4626.613,
"index": 178,
"start_time": 4601.374,
"text": " There's this mathematician, and I'm blanking on the name, but he had a book about how mathematicians think. And it's a famous book, or How Mathematicians Solve Problems. It's from the 1900s, mid 1900s. Possibly Hadamard. Yes, that's right. Have you read that book? No, I think I was thinking I ought to have a look at it once. But again, it's one of the many things I never got around to."
},
{
"end_time": 4649.309,
"index": 179,
"start_time": 4628.609,
"text": " Can you give us an example of an error that you made that you feel like, oh, yeah, I shouldn't have made that error? A mathematical error. Yes. Yeah, yeah. Not talking about investment advice. That too. Mathematical error."
},
{
"end_time": 4680.23,
"index": 180,
"start_time": 4650.828,
"text": " I seem to make mathematical errors all the time. I mean, I sort of noticed whenever I give a lecture, I always make two or three errors during the lecture and so on. But I actually found some mathematical errors were actually quite useful. I mean, I remember there's one theorem I proved that I only proved it because I made an error that"
},
{
"end_time": 4706.135,
"index": 181,
"start_time": 4681.169,
"text": " I made an error in proving some results and was then able to use this result to classify lattices in some dimension and then went back and discovered that my proof of this result was just completely wrong. I thought I'd proved two things were equal and all I'd done was prove that one of them was less than or equal to the other. If I hadn't made this error, I would have never gone on to"
},
{
"end_time": 4726.749,
"index": 182,
"start_time": 4706.664,
"text": " use this result to classify lattices and once I'd classified the lattices I could go back and actually prove the result I'd been assuming. So somehow making an error made the result easy enough for me to complete it and once I'd completed it I could then go back and fix the error."
},
{
"end_time": 4755.981,
"index": 183,
"start_time": 4727.039,
"text": " You incorrectly prove something that turned out to be true, and then you use that true statement to prove something else fantastic, but then later realized, OK, yeah, yeah, yeah. And then, yeah. And once once I proved everything, I was then able to go back and fix the error. But without making the initial error, it would probably just been too difficult for me to do. So maybe making simplifying errors can actually be useful sometimes. Do you have any other examples of errors that ended up being salutary, producing something positive?"
},
{
"end_time": 4780.896,
"index": 184,
"start_time": 4756.527,
"text": " I can't think of any off the top of my head. I mean most errors are just sort of stupid and annoying like you make a sign error somewhere or something like that. I remember when we spoke last you said you were particularly embarrassed about how often you make errors in lectures because well you just don't want to make it and no one likes to make an error especially not publicly"
},
{
"end_time": 4808.183,
"index": 185,
"start_time": 4781.613,
"text": " And I'm curious, is that the case? Like, are you embarrassed? Because there's also the case like you have a Fields Medal, you're allowed to make certain errors. We understand you know, we know that you understand the material. So like, why would that embarrass you? It just does. YouTube videos are the worst. Whenever I make a YouTube video, I always get these comments pointing out these really dumb, stupid errors I made got so bad, I had to stop reading comments."
},
{
"end_time": 4836.51,
"index": 186,
"start_time": 4808.985,
"text": " Ah, okay. Yeah, here's something. I'm curious about a time, fairly recently, let's say in the past few months, where you've forgotten something basic, and then you're like, oh, I should have known that. Like, for instance, what is Ark Kos? Oh, what the heck is that? Oh yeah, that's what it is. Like, something basic. In an effort to humanize this god-like field medalist that's in front of the audience right now, what's an error that you've made that is fairly basic? It happens"
},
{
"end_time": 4865.196,
"index": 187,
"start_time": 4837.022,
"text": " All the time when I'm lecturing, I've noticed this before that in lectures someone can ask me some question that I think about and I haven't got a clue what the answer is and then at the end of the lecture it suddenly becomes obvious. I think it's something to do with that the higher parts of your brain just close down under stress and you sort of make stupid errors and are unable to fix them if there's any sort of"
},
{
"end_time": 4887.312,
"index": 188,
"start_time": 4865.776,
"text": " I have a question from a friend of mine, a fellow mathematician, and his name is Julian Prito. He says, you said that you're not good at time management, but surely there's modesty in that, as you've attained an unbelievable level of success that most other professional mathematicians don't achieve. So what do you attribute this success to?"
},
{
"end_time": 4916.442,
"index": 189,
"start_time": 4887.722,
"text": " In particular, how do you block out time for research without being distracted by admin work like responding to emails or even life tasks like grocery shopping? Asking my advice about time management is kind of like asking my advice about dating. I think you can probably get better advice from other people. As I said, I'm just not good at time management."
},
{
"end_time": 4947.551,
"index": 190,
"start_time": 4918.063,
"text": " How to deal with administrative tasks? Well, one theory I've heard from somebody else is the best way to deal with administrative tasks is to be so bad at them that nobody ever asks you to do them. I think the idea is that if someone asks you to do the washing up, you should be careful to break a couple of dishes every time and they'll soon stop asking you to wash up."
},
{
"end_time": 4975.52,
"index": 191,
"start_time": 4948.302,
"text": " So then what do you attribute your success to? I don't know, probably luck as much as anything. I was in the right place at the right time. I mean, I just happened to be working on the monster at a time and the right time. And by incredible luck, there was something interesting to find there. And my feeling is, you know, that there are"
},
{
"end_time": 5004.241,
"index": 192,
"start_time": 4976.169,
"text": " 100 similar problems I could have been working on and 99 of them there wouldn't have been anything interesting to discover and I just happened to be lucky enough to be working in the one area where there was this interesting thing waiting to be discovered. Do you think that the pure mathematics community needs more funding from public or private sources? And if so, like what can professors graduate students do to contribute to the financial health of their discipline?"
},
{
"end_time": 5029.889,
"index": 193,
"start_time": 5006.783,
"text": " I must admit I actually don't really know very much about funding of mathematics. My vague impression is there's enough funding for people who actually want to do math research to earn a reasonably comfortable living."
},
{
"end_time": 5057.278,
"index": 194,
"start_time": 5031.34,
"text": " Can you reflect back on your own mathematical life? What are the major milestones, let's say, from childhood till now? Were you always interested in math? And then what was, let's say, some significant event? And then what was, well, besides the Fields Medal, like much prior to that,"
},
{
"end_time": 5090.486,
"index": 195,
"start_time": 5061.732,
"text": " Well, when I was a teenager, between about 10 and 15, I think I was more into chess than mathematics. I had this sort of fantasy I was going to be world chess champion or something like that. And then sometime when I was 15 or 16, it suddenly dawned on me that I was nowhere near as good at chess as I'd been dreaming about and lost interest in it. Do you remember your rating, by the way?"
},
{
"end_time": 5121.254,
"index": 196,
"start_time": 5091.254,
"text": " I'm not sure I ever really had a rating. So when you were a teenager, you wanted to be a chess master, and then the realization came upon you that, okay, maybe this is not what I want to do, or what I should do, or what I'm cut out for, compared to other people in this domain, and then what happened? Well, I mean, I'd also been interested in mathematics and kind of ended up doing that instead."
},
{
"end_time": 5145.64,
"index": 197,
"start_time": 5122.278,
"text": " Partly by default, I just wasn't much good at anything else. I mean, I've never been much good at things like languages or sports or anything like that. What advice would you give your former self, let's say at the undergrad level, knowing what you know now, what would you change?"
},
{
"end_time": 5181.834,
"index": 198,
"start_time": 5154.684,
"text": " Sorry, nothing very obvious springs to mind. I'm not very good at giving people advice. The only thing I can think of is I remember there was something in the Hitchhiker's Guide to the Galaxy where there was this old woman asked for advice and she said, well, here are some books where I've written down every decision I've ever made in my life. All you have to do is whenever you want to make a decision, look through these books and do the opposite of whatever I did."
},
{
"end_time": 5204.036,
"index": 199,
"start_time": 5182.329,
"text": " What's an example of a professional blunder you've made that you're able to talk about?"
},
{
"end_time": 5239.838,
"index": 200,
"start_time": 5215.998,
"text": " I don't know. I don't seem to be very good at thinking of mistakes I have made. I think there must be some sort of psychological block about this. Certainly one mistake I've made several times is simply not writing things off and publishing them properly."
},
{
"end_time": 5256.34,
"index": 201,
"start_time": 5242.056,
"text": " I mean, I'm very lazy about writing up and publishing as you know, you were asking whether I've written a book and most mathematicians by my level have often written several books and I'm just too lazy to do this."
},
{
"end_time": 5285.128,
"index": 202,
"start_time": 5257.995,
"text": " Is pure laziness or is it that you feel like if you published it would get critiqued and you would rather not that? Is there something else in addition to laziness? It's partly laziness and partly perfectionism. Whenever I start writing a book, I always notice how bad it is and throw it away and try and start again. Are there results that are out there that you said, oh man, I came up with that two years ago, I should have published that? We don't have to be specific about the results, but has that happened multiple times?"
},
{
"end_time": 5314.002,
"index": 203,
"start_time": 5285.862,
"text": " Yeah, yeah, I mean, there have been plenty of times people have published results that I sort of was thinking maybe I should have done that a few years ago, but it was too lazy. Is another reason that you don't publish because you feel like the result is not significant enough? Well, there's a bit of that as well. I mean, there are certainly a lot of papers where you kind of wonder why the person bothered publishing it because it's"
},
{
"end_time": 5342.875,
"index": 204,
"start_time": 5314.838,
"text": " doesn't really do anything. There are large numbers of papers that are little more than homework exercises for grad students. Do you not only wonder why did you publish this, but why did that get accepted? Like, why was that even allowed to be published? Not only why did the person think it was publishable, but why did it end up getting published? Yeah, yeah, yeah, sure. I mean, there are lots. I mean, if you go to a math library, you'll find there are"
},
{
"end_time": 5370.811,
"index": 205,
"start_time": 5343.677,
"text": " Dozens of bookshelves full of journals, full of papers. Most of these papers have probably never been read by anybody other than the referee and the author, probably not even by the referee. Before I get to a last question by Julian Prito, I want to know, you mentioned you can't talk about what you're working on right now. You prefer to keep it private. Could you give us a tease? Like what area of mathematics is it in? Is it in algebraic topology? Is it in so-and-so?"
},
{
"end_time": 5398.899,
"index": 206,
"start_time": 5371.271,
"text": " No, if I even mention the area, this will confirm I've gone insane. So the area itself gives it away. Well, OK, that's a tease itself. I'll figure it out. This question comes from Julian Pridow. In your last interview with Borchards, Kurt, you asked if the monster group was connected to physics and Borchards said no. This is true that there's no evidence for it. However, can you ask Richard Borchards if he's aware to make linkages like Witton's and what he thinks of them?"
},
{
"end_time": 5425.947,
"index": 207,
"start_time": 5399.735,
"text": " We will never know until we have experiments, but I wonder what the feeling is. So hold on, I can give a bit more of an explanation. So he says, there is at present no evidence of the monster group in physics, but does your gut feeling tell you that there's one? That the monster group does have a special place in the universe that will one day be discovered. In particular, what do you think about Witten's proposal relating the number of quantum states of a black hole of a minimum size in 3D gravity? So this was a paper by Witten in 2007."
},
{
"end_time": 5448.148,
"index": 208,
"start_time": 5426.493,
"text": " If you'd like me to go over this 2007 three dimensional gravity reconsidered paper by Edward Witten in the same style as the crash course on physics, then let me know in the comments section as I'm debating whether or not to cover this paper or the vertex algebra paper and the monster by Richard Borchards. Let me know what you prefer. Yeah, well, I mean, that's an interesting paper. I just."
},
{
"end_time": 5473.2,
"index": 209,
"start_time": 5448.78,
"text": " Whether it's physics or not depends very much on what you mean by physics. I'd say it's a paper sort of inspired by physics, but it's not physics in the sense that it's something you can go and do experiments in our actual world about. Whether the monster eventually turns up in the theory of"
},
{
"end_time": 5500.879,
"index": 210,
"start_time": 5474.138,
"text": " in a grand theory of life, the universe and everything. Well, I mean, all I can say is I hope it will, but I mean, at the moment, there's no strong evidence for it. And the other problem is it's so big. I mean, it's only fits into one nine six eight eight four dimensional space, which makes it really hard to tie up with three or four dimensional space time."
},
{
"end_time": 5529.701,
"index": 211,
"start_time": 5501.732,
"text": " It could be a symmetry that is not a space-time symmetry, but some other type of symmetry. Well, yeah, maybe, you know, there's some, you can fantasize about maybe four-dimensional space-time has to have some vector bundle of some huge dimension lying over it, which has the monster tied up in it somehow. But yeah, I mean, it's possible. This is all completely speculative."
},
{
"end_time": 5554.821,
"index": 212,
"start_time": 5530.674,
"text": " I see your gut is leaning toward no, not not even leaning toward I don't know leaning toward no. Well, leaning towards I mean, it's I don't know. And it's predicting what's going to happen in math or physics in a few hundred years time is I'm not even going to try. Sure. And now the last question I have sent as a chat on"
},
{
"end_time": 5582.995,
"index": 213,
"start_time": 5555.23,
"text": " The next question is, what do you think about Dustin Clausen's and Peter Schall's attempt at inventing condensed mathematics, that is, replacing native topological spaces with stone spaces, which are homeomorphic to compact projective limits of discrete topological spaces, in order to reduce analytic slash algebraic topological problems to pure algebra?"
},
{
"end_time": 5612.739,
"index": 214,
"start_time": 5584.189,
"text": " Um, that's easy. I simply don't know enough about it to comment on it. I mean, it looks interesting idea, but I'm just not qualified to say anything about it. Professor, thank you so much for spending an hour and a half with me. I appreciate that. Again, it's always fun to speak with you. Oh, yeah, great talking to you too. So interesting challenge. It's just fun to connect. And it's an honor. Okay. Thanks. It's always fun to speak with you. Oh, yeah, great talking to you too."
},
{
"end_time": 5634.462,
"index": 215,
"start_time": 5613.524,
"text": " Even if I have to pass on most of your questions. Well, it's just fun to connect and it's an honor. If you eliminate all the times I was just dithering and failing to answer questions. Sure. It's probably not very interesting to watch me looking puzzled for five minutes and then saying I don't know."
},
{
"end_time": 5664.804,
"index": 216,
"start_time": 5635.009,
"text": " Yeah, well, I feel embarrassed when I am flummoxed at coming up with the question. And I also feel bad when I change course so drastically, like I'll ask you a math question, I'll ask you a personal question, then I ask you about water. And then I feel a bit bad that it's so disconnected. But it's good practice for me. Yeah, what do you mean? Well, having lots of random questions that I have to think of answers to off and on the spur of the moment is"
},
{
"end_time": 5680.333,
"index": 217,
"start_time": 5664.991,
"text": " You're more comfortable in the mathematical domain, you don't particularly like to introspect."
},
{
"end_time": 5707.432,
"index": 218,
"start_time": 5680.845,
"text": " find myself unable to think of anything. Oh, you mean closing down like you get a bit stressed, a bit too much pressure? Maybe something to do with that. You're asking things like were there what milestones were in my life and I just couldn't think of anything at all. It's just completely blanking out. I've noticed in the past I also have this problem that half the time I just sort of sit there"
},
{
"end_time": 5734.787,
"index": 219,
"start_time": 5708.353,
"text": " blankly saying nothing because I can't think of anything. What are the sorts of questions that you enjoy being asked? Well, technical mathematics questions I can usually answer, although the problem is I probably tend to get too technical. I mean, I start using words like isomorphism and variety and I'm not quite sure how most of your audience handles this."
},
{
"end_time": 5754.821,
"index": 220,
"start_time": 5735.64,
"text": " Well, luckily for us, we have a audience that's fairly technical, graduate level, and there are some professors and PhDs. Okay, grad students can probably cope with isomorphism and varieties. Yeah. Okay. Well, thank you, professor. Thank you. Oh, yeah. Well, thanks for the"
},
{
"end_time": 5778.473,
"index": 221,
"start_time": 5755.265,
"text": " The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked on that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc."
},
{
"end_time": 5805.418,
"index": 222,
"start_time": 5778.473,
"text": " It shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well. If you'd like to support more conversations like this, then do consider visiting theories of everything dot org. Again, it's support from the sponsors and you that allow me to work on toe full time. You get early access to ad free audio episodes there as well. Every dollar helps far more than you may think. Either way, your viewership is generosity enough. Thank you."
},
{
"end_time": 5844.906,
"index": 223,
"start_time": 5832.978,
"text": " Think Verizon, the best 5G network, is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store today and we'll give you a better deal. Now what to do with your unwanted bills? Ever seen an origami version of the Miami Bull?"
},
{
"end_time": 5863.029,
"index": 224,
"start_time": 5845.367,
"text": " Jokes aside, Verizon has the most ways to save on phones and plans where you can get a single line with everything you need. So bring in your bill to your local Miami Verizon store today and we'll give you a better deal."
}
]
}