Noam Chomsky: Buddhism, AI, ChatGPT, Mind-Body
August 1, 2023 • 1:28:01

Transcript: 191 sentences, 10,092 words
One question that I would like to see answered is whether there is any possibility for the human species to survive. Right now the answer is no.
It's an honor to introduce Professor Chomsky, one of the most influential figures in linguistics and the philosophy of language. By the way, I say introduce, but Chomsky has graced the Theories of Everything channel eight times before. This marks his ninth appearance. This channel may as well be called Theories of Gnome.
Some of the questions explored today are as follows. How does introspection influence our understanding of ourselves and our use of language? The answer may seem self-evident, though introspection has its own limitations. How far can speech or language take us in our quest for self-knowledge? What's the role of language in perception, and can we even fully trust introspective insights?
What are the implications that language models like GPT have for our understanding of language acquisition and comprehension? One may think that language models producing human-like responses imply understanding. Is that so? Does this perceived understanding reside at the syntactic level or the semantic level? Do these models contribute to our knowledge of language, of human language, our language? Or do they fall short because of inherent limitations? What would those be?
Can you ground moral philosophy? What does that mean to even ground it? And how does that connect with symbol grounding? By the way, there's something called the symbol grounding problem. This is talked about in the Terence Deacon episode. I consider the symbol grounding problem to be the hard problem of meaning.
My name is Kurt Jaimungal, and this is a channel called Theories of Everything, where we explore theories of everything, primarily from a physics perspective, but as well as attempting to understand the role consciousness has in nature. There are timestamps in the description for you to be able to revisit sections, as well as to contextualize by seeing effectively a table of contents.
Every TOE video has timestamps that are manually and meticulously written, as I always appreciate it, and learn more, when creators take the time to do so. At approximately the 20 minute mark, there will be a brief sponsor message, as sponsors help support this podcast. The TOE podcast is also supported by patrons.
Thank you to all the patrons, every single one of you. When you donate, you allow me to work on this full time, to put all my effort into TOE, into Theories of Everything, into bringing some of the most brilliant, the brightest minds that exist to the forefront of thousands, even hundreds of thousands of people for free. Thank you. And if you'd like to contribute, you can visit patreon.com/CurtJaimungal (C U R T J A I M U N G A L).
You may notice that I'm speaking to Chomsky fairly slowly and enunciating deliberately. That's because Chomsky indicated to me that he's hard of hearing, and so a printout of the questions beforehand, as well as deliberate speech, is something he'd prefer. Hence the question numbers as well.
If AI took over the decision-making power of the world's governments and energy systems, would it effectively become a benevolent dictator for the human species? And is this something we should want? Well, we have to ask ourselves what we mean by AI. That can mean anything. If you mean
some kind of AI that could be invented in a fantasy, say a science fiction story, of course it can happen, but in a science fiction story you can make anything happen that you like. If we mean anything that is remotely on the horizon, the question doesn't even arise.
I mean, there's nothing in artificial intelligence that is even in the range of such questions. Systems that exist are very narrow. They basically involve reproduction of what they see in massive amounts of scanning of data, all sorts of errors, serious errors.
In fact, I'm sure as you know, there was a recent petition initiated by Max Tegmark, physicist at MIT, signed by a couple of thousand of the most active proponents and advocates of these systems, a wide range, calling for a moratorium on their development because of their
harmful effects. That's because of serious effects. I mean, there are also comical effects; some of the systems are very error prone. So a friend of mine, a colleague, just out of curiosity, asked a question about himself. He got back
a disquisition, most of it biographical, more or less accurate. But it also had him married to a linguist he'd never met; they don't particularly like each other, and they have two children. Somebody, not me, looked up something about me in computational science and sent it to me. It's, again, more or less accurate, except it had
me inventing one of the main programming languages, which is called Chomsky Normal Form. There is something called Chomsky Normal Form, but it has a very remote relationship to the theory of programming languages, and that's about it. But if you try to get into serious areas, this kind of error is not a joke.
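For readers unfamiliar with the term: Chomsky Normal Form is a restriction on the productions of a context-free grammar, not a programming language. As a minimal sketch (my illustration, not from the interview; `is_cnf` is a hypothetical helper, not a library function), a grammar is in CNF when every production rewrites a nonterminal to either two nonterminals or a single terminal:

```python
# Chomsky Normal Form (CNF): every production of a context-free grammar
# is either A -> B C (two nonterminals) or A -> a (a single terminal).
# (The special start-symbol epsilon rule is omitted for simplicity.)

def is_cnf(grammar, nonterminals):
    """Check whether every production of `grammar` is in CNF.

    `grammar` maps a nonterminal to a list of right-hand sides,
    each a list of symbols. Hypothetical helper, for illustration.
    """
    for head, bodies in grammar.items():
        for body in bodies:
            if len(body) == 2 and all(s in nonterminals for s in body):
                continue  # A -> B C
            if len(body) == 1 and body[0] not in nonterminals:
                continue  # A -> a
            return False
    return True

# A toy grammar for the string "a b": S -> A B, A -> "a", B -> "b"
grammar = {"S": [["A", "B"]], "A": [["a"]], "B": [["b"]]}
print(is_cnf(grammar, {"S", "A", "B"}))  # True

# S -> A B C has three symbols on the right, so it is not in CNF.
bad = {"S": [["A", "B", "C"]], "A": [["a"]], "B": [["b"]], "C": [["c"]]}
print(is_cnf(bad, {"S", "A", "B", "C"}))  # False
```

CNF matters in parsing theory (for example, the CYK algorithm requires it), which is indeed remote from the design of programming languages.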
Okay, we will move on. How powerful can the use of introspection as a tool for self-knowledge be? There are many claims of gurus and yogic types accessing universal awareness through meditation. How justified is this belief? Well, I can't comment on that belief. I don't know anything about it and have never seen any evidence for it. But we can
look at the case of introspection in areas which are areas of the sciences. So we can ask, how far does it get us? Not very far. Take the case of language, in my own field of study. You can show pretty conclusively that our introspection into the nature of what we're doing
doesn't even come close to the mental operations that are taking place in using and producing language, in doing what we're doing now. Introspection carries us only as far as what's called inner speech, actually fragments of inner speech, if you pay attention. But what's called inner speech is not language. You can show that it's quite remote from it. It's
part of the externalization of language in a particular sensory mode or medium, which is quite distinct from the internal mental processes that are taking place in the use of language for production and perception, and of course from the nature of language itself. So we know that introspection is giving us a very superficial, partial
glimpse of some of the external forms of whatever's going on in our mind. And we know this is the same in many other areas, and not just introspection, even perception. So direct visual perception gives us a very misleading interpretation of what we're in fact seeing. Take, say, the moon illusion: it gives you a misleading impression of the moon's size at the horizon.
In what ways do you think Eastern philosophical traditions such as Buddhism or Taoism could contribute to our understanding of language and the mind? They can contribute. I'm
all in favor of it, but I'm unaware of any examples. Do you happen to agree with the Buddhists who suggest that suffering comes from desire? Not in my experience or in what I've read about. If somebody's being tortured, they're suffering, but it's not coming from desire.
Professor, if you could have one question answered, what question would you want answered most, and why, of all the questions you could choose from? Well, the one question that I would like to see answered
is whether there is any possibility within our current system of institutional structures for the human species to survive. Right now the likely answer is no, but maybe there's an answer that could show some way in which it's possible.
Professor, I was watching an interview with you from the 60s where someone was asking whether we learn language by learning the rules of the language and you pointed out that most people learn language unconsciously without instruction. You then added, no one knows all the rules of language and I'm curious if this is still true. If it is, what progress has been made and where is the largest gap in our knowledge of the rules?
Well, there's an enormous amount that's been learned since the 1960s. If we had time, I could review it, but we have a much closer understanding of the basic properties. Notice that the question arises on two levels. When you talk about the rules of language, do you mean the rules of specific languages, or the deeper question of the
principles of Universal Grammar, the fundamental nature of the faculty of language? On individual languages, we know vastly more. There's been a huge explosion of research on typologically varied languages of a kind that had never taken place in history. So, massive information about many languages, some of which had never been looked at before, and there had never been questions asked at this kind of depth.
So that's enormously expanded. On the more fundamental question of the nature of what's called Universal Grammar, meaning the theory of the innate faculty of language, there's been quite considerable progress. You couldn't have guessed in the 1960s what we're talking about today. Are there gaps? Enormous gaps.
But that's because it's part of science. Take any science you like. There's enormous gaps. Take physics, most advanced science. Can't find 90% of what constitutes the universe. It's sort of a gap, if you like.
Okay, let's move on. This recursive property, merge, has been claimed to be a fundamental characteristic that distinguishes language from other cognitive faculties. Merge is an indispensable operation of a recursive system, which takes two syntactic objects A and B and forms a new object, the set {A, B}. How is merge detected in interviews with people, or more generally, through examining examples of the use of writing or speech?
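To make the definition in the question concrete, here is a minimal sketch (my illustration, not Chomsky's notation) of the binary set formation at the core of Merge, building a nested structure for "read the book":

```python
# Binary set formation: Merge(A, B) = {A, B}.
# frozenset is used so that merged objects are hashable and can
# themselves be merged. This shows only the set-formation core;
# Merge proper has further properties beyond binary set formation.

def merge(a, b):
    """Form the two-membered set {a, b} from two syntactic objects."""
    return frozenset({a, b})

# Build {read, {the, book}} for the phrase "read the book".
dp = merge("the", "book")   # {the, book}
vp = merge("read", dp)      # {read, {the, book}}

print(dp == frozenset({"the", "book"}))                        # True
print(vp == frozenset({"read", frozenset({"the", "book"})}))   # True
```

The recursion is the point: the output of one application of merge is a legitimate input to the next, which is what lets a finite operation generate unboundedly deep structures.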
First of all, I should say that the formula there, Merge(A, B) = {A, B}, is not actually Merge; that's binary set formation. Merge is a case of binary set formation which has many other properties; there's much more to go into. But the question is, how is Merge detected in the interviews and so on? Well, about the same way that
the principles, the laws of motion, are detected when you look at leaves blowing in the wind. You don't detect the laws of nature by looking at phenomena. That's why people do experiments. If you do interviews with people, you're not going to find out very much about how
genes provide information for proteins to be formed. That's not what science is about. If a science or inquiry has proceeded beyond the most primitive level, absolutely the most primitive, you're not going to see the principles in actual events and phenomena of the world.
I mean, that's why people do experiments, after all. You don't just look at the phenomena around you. You try to idealize, to eliminate the irrelevant aspects. Take, again, the moon illusion. There's no real explanation for it, but everyone rational assumes that the moon isn't larger at the horizon, and you can do experimentation which sharpens up what's actually happening.
But that's just true of everything. You're not going to find out anything of any significance just by looking at phenomena. Maybe you get some statistical regularities of things that are happening, but that's very far from an understanding of anything that's going on in any area, certainly here. It's kind of interesting that in areas like language, a question is asked that would never be asked in other fields.
Why is it asked about language or other aspects of human mental life? Well, there's an interesting background here. At one time, there were notions of metaphysical dualism, Cartesian dualism: two different substances, mind and body. Well, that's collapsed, but it's been replaced by something much more pernicious.
Metaphysical dualism was a scientific theory which had justification at the time. It turned out to be wrong, but most scientific theories do. What it's been replaced by is a kind of methodological dualism, which has no virtues. What it says is that we should look at human mental processes differently than the way we look at the rest of the physical world. That's very common,
and this is an example of it. Nobody would ever ask a question like this about, say, biology or chemistry, but you do ask it about language. I'm not making a criticism; this is common all through philosophy and other fields. Expecting to find things out about language by observing phenomena is something we would never do elsewhere in the sciences.
This is a second part to the same question. Do the syntactic objects that merge together in a merge operation generically have subset categories, or when they are specific syntactic objects, do they have varying properties and sub-distinctions?
They have many complex properties. When you say you're merging a subject and a predicate, look into the details. It's not the surface forms we're seeing but what underlies them: something like a noun phrase and a verb phrase, let's say. They each have complex internal properties,
developed by the structure-building operations of the language, which are all well beyond introspection. There is a strong argument, I think, that they are probably based on Merge, but it takes work to show that.
Now the last question on merge from Kelly: is the concept of Merge and the underlying process of Merge similar to or different from other concepts which may or may not be related or credible, such as concepts by Mark Turner within cognitive linguistics called conceptual blending, and his more detailed description called conceptual integration networks? Well, to the extent that
conceptual blending has been made at all clear, there's basically no connection. One more merge question. Before humans experienced the mutation that yielded the merge operation, did they possess a lexicon of conceptually rich items that functioned as complex calls, somewhat like those used by current monkeys and primates?
There's a famous article that I often urge people to read by Richard Lewontin, one of the most important modern evolutionary biologists, passed away a couple years ago. It's an article in the last volume of a multi-volume encyclopedia published by MIT,
called Invitation to Cognitive Science, with a lot of very valuable articles. Dick Lewontin was asked to write an article on the evolution of cognition, and that bears on this question of what was going on in the early stages. He gives a careful discussion.
His final words basically are tough luck. There's no way by any known method of the study of evolution to find out what was going on.
That is, what was going on a couple hundred thousand years ago.
I mean, in principle it's not an unanswerable question, but we have no way to answer it, at least no known way. So the answer to the question is: how can you find out? How can we find out what was going on? We can look at the complex calls used by monkeys and other primates. We can study those. They don't have even a remote relationship to anything in human language.
And in fact, there have been very elaborate efforts to try to see if you can train chimpanzees, closest to us, to develop anything remotely like language. It's totally impossible, as any biologist would expect. After 12 million years of evolutionary separation, why should there be any connection? So as far as we know, the
particular rules and entities that enter into human language seem to be a unique human possession, which really shouldn't surprise us very much. There isn't any other species that's doing what you and I are doing now. Theodosius Dobzhansky, one of the great evolutionary biologists, once said that each species is unique,
and humans are the uniquest of all. They have properties that are unknown elsewhere in the organic world. Language and cognition are the core ones.
Have you ever attempted to follow up with John Lilly's dolphin-human communication project? Dolphins have a definite and complex language system that's been recorded and analyzed but not fully translated to a complete understanding. Also, apparently sperm whales have the most intricate language. How true is this? How does one define this complexity? Study it the same way you study anything else. First of all, we should be
cautious about the use of the word language. What do we mean by that? Whatever they have, as the questioner points out correctly, it's a communication system. Every organism has means of communication, down to bacteria. Trees communicate. Communication is all over the place. But human language is not a communication system. It's used for communication, but its
fundamental design is actually dysfunctional for communication, as is easily shown. So calling it a language is already a questionable term. It's a communication system. How do you study it? The same way you study any other communication system. Trees, for example: study the way roots interconnect to transmit signals
to neighboring trees, protection mechanisms against parasites or something that's harming the tree, the way bacteria communicate. I look at the desert ants in my backyard traveling in a line, and I notice that they bump into each other along the way. I presume they're communicating, and
a person studying ants would study their communication system. That has been done extensively for bees, for example. And you would do the same with dolphins. A dolphin-human communication project is a little bit different. We can communicate with other animals. I have, sitting under my desk right now, two animals I can communicate with.
It's got nothing to do with language. I mean, I use language, but what's going on in their heads is something totally different. They're detecting some noises that mean something to them, like "run outside" or "play" or something. But all of these things can be studied in normal fashion, just without illusions. We shouldn't delude ourselves into thinking that
something like a communication system is going to tell us something about the nature of language. They're all very interesting. Many of them have complexity that we don't have; humans can't communicate as efficiently as honeybees do with regard to their particular concerns, like the distance and quality of a flower.
Thank you. There has been a growing trend within philosophy, especially in the analytic tradition, holding that the main task of philosophy nowadays is to clarify questions and nurture sciences which are in their infancy, where needed. Would you agree with this, and do you think there are other roles for philosophy to play? Well,
there are philosophers, very good ones, who take this position, like John Austin, in my view one of the most important modern philosophers. This was pretty much his view. But as he would have told you, it's nothing new. Take John Locke, for example. You look back to John Locke, and he said his role is to try to clear the underbrush,
So that inquiry can then proceed without confusions and misunderstanding. Well, that's basically the same thing. Philosophy traditionally has nurtured science in a very obvious and important way. Philosophy and science weren't distinguished until fairly recently. In fact, if you study at Oxford today,
you study natural philosophy and moral philosophy. Natural philosophy is what we call science. Until the mid-19th century, there was basically no distinction. The word science in its modern sense was introduced by William Whewell, I think, around 1850 or so. Up until then, you couldn't tell who was a philosopher and who was a scientist.
Berkeley could debate the accuracy of Newton's proofs, for example. But the sciences had advanced sufficiently by the late 19th century that you had to have special knowledge to become directly involved in them. It wasn't the case that just anybody could talk about them. There still were philosophers like, say, Bertrand Russell or,
in more modern times, people like Hilary Putnam, who were very sophisticated in the sciences, but with different interests and concerns. But I think philosophy has always nurtured new sciences and has a very significant role with regard to the emerging sciences. Take the cognitive sciences: that's pretty recent. We're not even in a Galilean stage, in my opinion.
And here, it's hard to distinguish philosophy and science, just as it was in the early scientific revolution or well into the 19th century. So clarifying concepts, clearing the underbrush, as Locke put it, very important aspect of philosophy. Of course, there are many other things, many other topics, like can we ground moral philosophy in some way? That's a different kind of topic.
What does it mean to ground moral philosophy, and is this related to the symbol grounding problem? Not really. That's a different question. Symbol grounding is an interesting question, but a different one. To ground moral philosophy means to find some basis from which we can,
maybe deduce is too strong, but at least find some rational basis for what we take to be ethical behavior, ethical judgments, and so on. That's become a pretty concrete question in the last 30 or 40 years. There's very interesting work that was initiated by John Mikhail, who
was a grad student in philosophy, did a thesis on this, then went on to write a book about it, which reconstructed a lot of John Rawls' work in terms of Rawls' initial proposals and efforts, which he later abandoned, to try to find something like a grammar of ethical judgment, and then
There was a lot of criticism of it. Mikhail goes through the criticism, I think undermines it quite effectively, and then proceeds with the project. He went on to open up experimental work along with Elizabeth Spelke, a very good experimental psychologist; they worked together on trying to find universal moral principles by looking at children in different cultures and so on.
That work was later expanded by others, Mark Hauser and others. Matthias Mahlmann, a philosopher, has recently done what I think is quite profound work on these issues. To the extent that it succeeds, you can hope to get a conception of
something like the faculty of language, the faculty of moral judgment, some innate system that has principles and a kind of a rational structure that allows one to draw from it conclusions about what our innate moral judgments are. Then comes the grounding problem
How do we know that's the right morality, that these are the right moral judgments? How do we know? Suppose we knew what the moral judgments inherent in our nature are. Well, then comes a question, and in fact you can ask, is it a question? Is there anything beyond this? Similar questions arise for epistemology, as in fact Matthias Mahlmann, who I mentioned,
has studied and discussed. But if you think about epistemology, it's also based on, I'm talking about what Quine calls naturalized epistemology, epistemology in David Hume's sense. Epistemology is a science that seeks to find what Hume called the secret springs and principles of our cognitive nature.
Well, I suppose you can study that. Let's take what looks like, to me at least, the most promising approach to it: Charles Sanders Peirce's conception of abduction, never worked out very carefully, still an open philosophical question.
His point, which was plausible, I think, is that there's something that allows us to, in a particular situation of having reached a certain level of understanding or comprehension of whatever we're studying, say the natural world, there's something in our minds that enables us
to put forth a limited set of potential hypotheses that might explain it. It's got to be a limited set, or else we'd never get anywhere. And he gives some good arguments that that's the way the history of science has proceeded. I suppose we can make some sense of that, find out what it is that determines this limited set of principles that carries us up to the next stage of understanding.
If we found those secret springs and origins of our intellectual nature, in Hume's terms, then would come the same grounding problem. How do we know these are the right ones? Maybe our intuitive mode of inquiry and investigation is not the one that leads to the truth. Same kind of question. And then you can ask, is it really a question?
What is the symbol grounding problem and do you see it as solved?
An infant learns words very rapidly at the peak moments of language acquisition, around two years old or so. An infant is picking up a word practically every waking hour, with virtually no evidence, maybe one presentation. Look at the nature of the words carefully. They have very intricate meanings.
It's not that you look at a tree and say tree, and that's the meaning. Nothing remotely like that. Very intricate, complex meanings. This was already known in classical Greece; we now know a lot more about it. Well, then the question comes: this symbol, tree, river, house, dog and so on, what does it have to do with the external world?
That's a serious question. A word doesn't just pick out elements in the world; you can show that that's false. It's much more complex. What could be a tree, what could be a house? The kind of question that troubled Locke and Hume still troubles people. What about the persistence of objects when we don't perceive them, let's say, or the
existence of objects that we've never seen? What kinds of things are they? What properties does a person have that make him the same person under radically different changes? The same for a river or a house or anything else. All of these are questions about symbol grounding, most of them not very seriously investigated, though they could be.
One of the reasons they're not investigated is because of illusions: the illusion that there's a simple associative relationship between a symbol and something in the external world. So, the child sees a tree, the mother says tree, okay, the kid knows tree. It doesn't work like that.
Can you expound on the below quotation? "Science is a bit like the joke about the drunk who is looking under a lamppost for a key that he's lost on the other side of the street, because that's where the light is. It has no other choice." This comes from Noam Chomsky's letter to the author, 14th of June, 1993. Well, that's what we're stuck with.
The story is about a drunk who's looking under a lamppost, and somebody comes up and asks him, what are you looking for? And he says, I'm looking for a key. And the person asks him, where did you lose it? And he says, well, I lost it on the other side of the street. And you ask, well, why are you looking here then? Because that's where the light is. There's no light over there. That's what we're stuck with. We're looking where the light is.
Is the mind-body problem misconceived? If so, how so?
It's basically Cartesian. Descartes famously postulated res cogitans alongside res extensa. So there's extended substance — matter, the physical, the material. Then there's the mental world, which is separate from that,
and you look for some connection between them. That was a perfectly reasonable scientific hypothesis, nothing mystical about it. Descartes, like all scientists of the period from Galileo through Newton and beyond, adopted what was called the mechanical philosophy — the idea that the world is a complex artifact, sort of like the
artifacts that skilled artisans were producing at the time, very extensively, all over Europe. Skilled artisans were producing mechanical clocks that could do all sorts of complicated things; the fountains in the gardens at Versailles — as you walked through them, all kinds of things were happening,
things that looked like people playing roles in plays, and so on. And then Galileo, his contemporaries, and others — Leibniz, Newton — concluded, well, that's what the world is: just a complicated mechanical system — gears, levers, things pushing and pulling each other, and so on.
Descartes just routinely assumed that — everyone did — and then his main scientific work was to try to show that, in fact, you could give a mechanical explanation for the phenomena of the world, including a good deal of human understanding and behavior, of sensation and perception. He thought he could give a mechanical
interpretation of that. But he observed that some things don't fall within that. In fact, one of his main examples in the Discourse on the Method was language. He said, if you look at what people are doing there, there's no mechanical interpretation for what you and I are now doing. It's maybe impelled by circumstances, but not compelled by them. You could choose differently, and you choose appropriately.
So somehow people are making appropriate, undetermined use of language in ways that we could call creative. There's a long history behind this; he wasn't the first to point it out. But then, like any scientist, he postulated a principle to account for it. That's the second substance, res cogitans. That's the classical mind-body problem.
Well, the props were knocked out from under it by Newton, who — he didn't say anything about the mind — showed that the mechanical philosophy is not true, showed the world is not a mechanical system. One of the main contents of Newton's Principia is to show that Descartes's physics just didn't work.
There is no account of the universe in mechanical terms; in fact, there isn't any. Newton postulated principles that lie outside the mechanical world. Newton himself regarded this as a total absurdity and didn't believe a word of it. He said nobody of any scientific understanding can believe this — but it seems to be true. And that's why he never wrote a book on
on science — what would have been called at the time "principles of philosophy." He never wrote that. He wrote Mathematical Principles. That's just an account of what happens, without any explanation. His famous comment, "I make no hypotheses," is because he couldn't: he said, I have no physical explanation, I can just say here's
the way it works — here are the mathematical principles. Well, that ended the mind-body problem in the classical sense. Until somebody tells us what matter is — and nobody has been able to, up to the present day — there's no mind-body problem in the classical sense. This was understood, incidentally, by John Locke immediately after the appearance of the Principia.
He pointed this out, putting it in a theological framework, the standard framework of the day. He said: just as Newton has shown that God assigned properties to matter that we cannot conceive of, similarly God may have super-added to matter principles of thought. We
don't know what matter is, but whatever it turns out to be — the tides, the planets, and so on — they behave according to principles we can't understand, but we can work with them; and maybe thought is the same. That was pursued very extensively through the 18th century
by leading figures. It reached its peak with Joseph Priestley, a chemist-philosopher at the end of the 18th century with quite extensive studies of how thought can be a property of organized matter. Then it was pretty much forgotten, rediscovered in the late 20th century without any knowledge of the history, and it's now considered part of philosophy and neuroscience.
These ideas come straight out of the collapse of the mind-body problem. Well, is there another mind-body problem? Commonly these days, the mind-body problem is formulated in totally different terms, unrelated to the classical one. It's usually formulated in terms of first-person versus third-person conceptions of the world. So the first-person conception is
what I sense — the moon is big on the horizon; that's first person. The third-person approach is what the scientist from the outside says about it. If you pursue that, you get different conclusions. Some people call that a mind-body problem. I think that's a very bad term for it, and just a mis-formulation of the way we should look at the problem. There's
interesting work on this, including a recent book by a very good Australian philosopher, Peter Slezak — I think it's called The Cartesian Illusion, or something like that.
But something like that is often called the mind-body problem, which it shouldn't be, in my opinion. There was the scientific question, which essentially was resolved by Newton. There's no body, there's no matter, there's no physical, so there's no mind-body problem.
Thank you, sir. Would it be fair to argue that one reason why ChatGPT and other large language models seem so human-like in their responses is that, as a result of their training, they have inferred — that is, made sense of — an underlying and immanent
semantic grammar to map the various sentences to their corresponding semantic underpinnings? If you'd like, you can watch this video by Sabine Hossenfelder, a physicist, arguing that ChatGPT does indeed understand language. "Does an artificially intelligent chatbot understand what it's chatting about? A year ago, I'd have answered this question with 'clearly not,' but I've now arrived at the conclusion that the AIs that we use today do understand what they're doing, if not very much of it."
"I'm not saying this just to be controversial. I actually believe it." I took a look at the first half of the video. I didn't follow it through. She's a very good physicist, but I think she's asking meaningless questions. What she's interested in — and she makes it very clear at the beginning — is understanding quantum physics. She's thinking of a famous comment by
Richard Feynman, a leading quantum physicist, who said: nobody understands quantum physics, but we know how to use it. And Hossenfelder's position is: if we know how to use it, we understand it. That's all that understanding is. Then she says, well, if ChatGPT uses language, then it understands it.
And then it goes on from there — she talks about Searle's Chinese room and so on. But that's basically the issue. Well, first of all, that wildly overestimates ChatGPT. It does nothing of the kind. But even if we accept it, it's just a terminological question about what we're going to call understanding. Not very interesting. So suppose you go to a museum.
Go to any museum and you'll see art students carefully copying paintings by masters — a painting by Van Gogh or Rembrandt or something. Well, what they're producing — will we call that creating a work of art? In a certain sense, yes; they're doing things I can't do.
There's a lot of creativity involved, but is that creating a work of art? You can call it that if you want, but it's not an interesting question. Same with understanding. If I copy a novel by Tolstoy, am I making an artistic contribution? In a certain sense, yes — I copied it, it's there.
Do I understand? Is it a work of art? Of course not. Well, that's chatbots. All the large language models are basically complex versions of one or another kind of plagiarism. If you want to call that understanding, that's your business. It's like asking, does a submarine swim? If you want to call that swimming, okay — but there's no substantive issue.
Have you seen the paper called "Modern Language Models Refute Chomsky's Approach to Language," the one by Steven Piantadosi? If so, what do you make of it? Well, unlike a lot of people who write about this, he does know about large language models, but the article makes absolutely no sense. It has a minor problem and a major problem.
The minor problem is that it's beyond absurdity to think that you can learn anything about a two-year-old acquiring language on almost no evidence by looking at a bunch of supercomputers, scanning 50 terabytes of data, looking for statistical regularities and stringing them together.
To think that from that you can learn anything about an infant is so beyond absurdity that it's not even worth talking about. That's the minor problem. The major problem is that, in principle, you can learn nothing. Make it 100 terabytes, use 20 supercomputers — that just brings out the in-principle impossibility even more clearly, for a very simple reason: these systems work just as well for impossible languages that children can't acquire as they do for possible languages that children do acquire. It's as if a physicist came along and said, I've got a great new theory — it accounts for a lot of things that happen,
a lot of things that can't possibly happen, and I can't make any distinction among them. We would just laugh. Any explanation of anything says: here's why things happen, here's why other things don't happen. If you can't make that distinction, you're doing nothing. And that's irremediable,
built into the nature of the systems. So the more sophisticated they become at dealing with actual language, the more it's demonstrated that they're telling us nothing, in principle, about language, about learning, or about cognition. So there's a minor problem with this paper and a major one.
Yes, I heard you describe this as if a physicist came along and said, I have a theory, and it's two words: anything goes. Well, that's basically the language models. You give it a system that's designed to violate the principles of language — it'll do just as well. Maybe better,
because it can use simple algorithms that aren't used by natural language. It's basically, as I said, as if some guy comes along with an improvement on the periodic table and says: I've got a theory that includes all the possible elements, even those that haven't been discovered, and all kinds of impossible ones — and I can't tell the difference.
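The objection above — a theory that fits the possible and the impossible equally well explains nothing — can be made concrete with a small sketch. This is a toy of the editor's invention, not the systems under discussion: the part-of-speech tagging scheme (`/V` marking verbs) and the one-verb constraint are made up purely for illustration.

```python
# Toy illustration (hypothetical): a "theory" with no constraints accepts
# everything, so it cannot distinguish possible sentences from impossible
# ones -- and therefore explains nothing.

def accepts_everything(sentence):
    """An unconstrained 'grammar': every string is well-formed."""
    return True

def toy_constrained_grammar(sentence):
    """A made-up constraint: a sentence must contain exactly one verb,
    where verbs are words tagged '/V' in this invented scheme."""
    return sum(1 for w in sentence if w.endswith("/V")) == 1

possible = "the/D dog/N barks/V".split()
impossible = "barks/V the/D barks/V".split()

# The unconstrained theory cannot tell the two apart:
print(accepts_everything(possible), accepts_everything(impossible))    # True True
# The constrained one rules the impossible case out:
print(toy_constrained_grammar(possible), toy_constrained_grammar(impossible))  # True False
```

The point is only the shape of the argument: explanatory power lives in what a theory excludes, and the first function excludes nothing.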
Are there any recent developments or a complete description of Chomsky's inclusiveness condition? Further, what is the standing of bare phrase structure theory? Are there serious attempts at its development? Is it still of interest for syntacticians?
Still of interest, but of course it's been much developed since then — that was 30 years ago. The inclusiveness condition by now you don't have to bother stating; it's just a consequence of deeper principles. If you look at any form of growth and development — whatever it is: arms and legs, language, anything — there are basically going to be three factors involved.
One of them is whatever the internal structure is, ultimately the innate structure. That's one factor. Some kind of external data which triggers the internal systems, partially shapes them. Third, laws of nature, crucial. In the case of language, the relevant laws of nature are principles of computational efficiency.
We can think of those as laws of nature. Well, by now, the last couple of decades, there's been extensive work showing how principles of computational efficiency determine a very large part of the outcome of the growth and acquisition process. And the inclusiveness condition just fell to the side as a consequence of these principles.
Bare phrase structure was an early attempt to separate out the many different factors involved in syntactic structure; it's now been extended quite considerably. So it's still there, but as the basis for much more sophisticated developments, which have much wider empirical reach as well.
Professor, can you explain what intentionality is, to the audience as well as myself, and what your views on Searle's Chinese Room experiment are? Intentionality — this is with a T, not an S, notice; intensionality with an S is sometimes mixed up with it, but it's totally different — has to do with aboutness. So what am I talking about when I say the sun is setting?
If I say I crossed the river, what am I talking about? That's an intentionality question, and a very complex one. It goes back to the pre-Socratics. There's a lot to say; it's like the grounding problem. The Chinese room experiment, I think, is just a bit of confusion. I think the answer to the problem had already been given by Wittgenstein — in a manner, not explicitly. He wasn't talking about understanding but about thinking; it's about the same. He said: people think; maybe dolls and spirits do. What that meant is that we use the word thinking to refer to what people do. Like other words, it has a kind of open texture, so it's not precisely determined what it covers.
So maybe we'll extend the use to things that are kind of like people. Dolls, that's what he meant by dolls and spirits. Well, we don't extend it to rooms. We don't talk about rooms thinking. That's like saying we don't say that submarines swim. There's no substantive issue. It's not the way the word is used. We don't use the word understand for rooms.
We use it for people and things sort of like people. I don't think there's any more involved than that; it's just a terminological point. If we looked into it further: if you developed the system, let's say a robot which looked like a human, and you were able to build into the robot
all of the rules and principles that are coded in our brains — we know some of them, not all — if you could do something like that, then you could ask the question: is the robot thinking? In fact, that question was asked in the 17th century. After Descartes, some of his followers — in particular Géraud de Cordemoy, a Cartesian philosopher — formulated what's now called the Turing test, but in a much more scientific way. He said: suppose there's some organism that looks like us and is able to respond to any question or statement that we formulate in the way that humans do,
no matter how hard we make the problem. Then, he said, we would have good reason to think that it has a mind like ours. Notice that he was talking scientifically, metaphysically: "has a mind" means there is a thing, the mind, and we want to find out whether it's there — like a litmus test for acidity. Remember, the idea was that there is an entity, the mind, separate from the body.
We want a test to find out when it's there, like a litmus test. And he formulated what's basically Turing's imitation game to say, here's a litmus test we could use for an organism that looks like us, incorporating Wittgenstein's insight. Well, that was scientifically reasonable.
Speaking of metaphysics, what are your thoughts on extending generative grammar into metaphysics, such as the CTMU, the Cognitive-Theoretic Model of the Universe by Christopher Langan? That's like asking: how can we extend
the theory of the genetic determination of proteins into a theoretical model of the universe? It's not the right kind of question. These are theories of particular organic entities: the theory of genetics, RNA,
formation of proteins, protein folding — those are studies of particular organic entities. Generative grammar is the study of another particular organic entity, something coded into our brains, which is unique to humans and apparently common to all humans. So it has a theoretical structure. And of course the theoretical structure
could be something like the theoretical structure for other questions, like I wouldn't call it a cognitive theoretical model of the universe, but a model of the universe as a theoretical structure. Well, all theoretical structures have something in common, like they have a deductive character. The consequences have to follow from the premises and so on. But I
Professor, what are your views on solipsism and the role of consciousness, observer, in creating reality? Is this similar to the problem of anything goes?
Our particular way of observing reality is a concrete phenomenon in the actual world. It's not anything goes. Take the moon illusion again. My conscious experience — I can't get over it, can't overcome it — is that the moon is big at the horizon. No matter what you know, the greatest astronomer in the world will still see that.
That's our conscious experience. Well, is it creating reality? No. Reality is not that the moon is bigger; we discover that in other ways. Therefore, nobody believes that the moon is bigger, even though that's all we see and there's no explanation for it. We still don't believe it, because there's so much counter-evidence. So you don't create reality that way.
Now, there's a sense in which our immediate experience does create reality. It's the kind of issue that's raised in a very interesting way by Nelson Goodman, an outstanding philosopher, in his work on what he calls starmaking. He begins by asking the question: take a look at a constellation, say Orion.
Is the constellation a thing in the world, or is it a construction of our mind? What he calls a version of the world. It's a construction of our mind. It's a version. Then he goes on to say, well, what about the stars that constitute Orion? Are they things in the world, or are they just a version? He concludes they're a version. We could talk about
What we talk about when we talk about stars, we can talk about in other ways — concentrations of energy in the general flux of energy in the universe, and so on. Then he goes on to say: well, is there anything that isn't just a version? Is there any reason to ask — what do we gain by postulating a reality behind the versions? Then there are interesting discussions. There's
a book on this, with his introduction and commentary by a number of other philosophers — Hilary Putnam, Israel Scheffler, and a couple of others, I think Carl Hempel. Those are interesting questions; I can't give a simple answer to them. But there is some kind of sense in which our immediate perceptions are the basis for
developing our notion of reality. Actually, Russell talked about this about a century ago, when he was saying that, if you think about it, our highest level of confidence is in our immediate sensory experience,
our immediate consciousness. That's our highest level of confidence, and most of the rest of what we do in inquiry is trying to find an explanation for it. Part of the explanation we find is that our consciousness misleads us — tells us things that aren't true. That's one of the things we discover when we try to find an explanation for it.
There's an increasing number of claims that the approach to universal grammar doesn't hold up to Bayesian modeling. Is universal grammar still the proper way to view human language usage? Universal grammar is the proper way by definition. Universal grammar is defined
as the correct theory, whatever it is, of the innate language faculty. So it can't be wrong, because it's true by definition. We don't know what it is; you can try to find it. But whatever the true theory of the innate language faculty is, that's universal grammar, by definition, in the modern usage of the term.
Bayesian modeling doesn't address this question. Bayesian modeling is concerned with how beliefs, theories are acquired. That's a different question. I don't think it's particularly helpful, but that's a different question altogether. So it's kind of asking, how do we acquire
the theory of universal grammar? I don't think that's the way it happens; I think it's normal evolutionary processes. In what ways do you think AGI should be implemented? And in what ways does its implementation scare you? Well, there's no implementation that can scare me, because there's no implementation. There's no theory. There's nothing there.
I mean, AGI is a goal of the early pioneers of artificial intelligence. People I knew, in fact, you go back to the 1950s, people like Herbert Simon, Al Newell, early Marvin Minsky, were concerned with trying to see if you could use what they called
artificial intelligence to understand the nature of intelligence, of learning, of thinking, and so on. It's basically part of cognitive science, asking: can we use the computational devices and theories that were coming into existence at the time as ways of approaching scientific questions? That would be AGI, but that's been pretty much abandoned. In fact, it's sometimes ridiculed as
good old-fashioned AI, or something. Now we do different things — basically engineering projects which don't ask these questions. Who are your linguistic successors that are on the right path toward developing your linguistic theories? Also, what is the generator that generates grammar? The term generative is just the normal
mathematical term. You have an axiom system for arithmetic, say; it generates an infinite number of proofs, or geometrical objects — well-formed proofs. That's generation. Nothing different in generative grammar. You can't ask what generates it. The system is one with a finite number of rules. It's a finitary
system with infinite output. That's generation in the technical sense. There are plenty of very fine younger linguists doing outstanding work on this; it would be unfair to single people out. If you look at articles of mine, I list many of them in the acknowledgements as collaborators doing independent work, and that's only a small sample.
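Generation in this technical sense — a finite rule system with unbounded output — can be sketched with a standard textbook toy grammar. The grammar below (S → a S b | a b, generating aⁿbⁿ) is illustrative only, not a grammar discussed in the conversation.

```python
# Two finitely stated rules, yet infinitely many well-formed strings:
#   S -> a S b   (recursive rule)
#   S -> a b     (terminating rule)

def derive(n):
    """Derive the n-th string a^n b^n by applying the recursive rule
    n-1 times, then the terminating rule."""
    if n == 1:
        return "ab"                      # S -> a b
    return "a" + derive(n - 1) + "b"     # S -> a S b

print([derive(n) for n in range(1, 4)])  # ['ab', 'aabb', 'aaabbb']
```

The finitary system is the two rules; the "infinite output" is the unbounded family of strings they derive.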
Is our language capacity identical or related to our arithmetic capacity? That's a very interesting question, which also has interesting historical background. The question, as far as the question itself is concerned, our current conception of universal grammar has the property
that if you take the simplest possible form of it — the basic structure-building operation, say Merge, in its absolutely simplest form — and you take the limiting case. Any generative procedure, any computational system, is going to have
rules and atomic elements — elements to which it applies. For language, these will be the lexical elements, the smallest meaning-bearing elements, kind of word-like, but not really words. So if you take the simplest computational procedure, cut it down to its limits, reduce the lexicon to one element — you get the basis for arithmetic. You get the successor function and addition,
essentially, and then you can easily tweak it to get knowledge of arithmetic. So in that sense it could be — we don't know, but it could be — that our knowledge of arithmetic is just an offshoot of the language faculty. It's formally possible, because our knowledge of arithmetic can be formulated as the ultimate simplest version of the language faculty.
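The formal point can be sketched in a few lines. This is a hedged toy, assuming one conventional set-theoretic encoding (zero as the empty set, successor as singleton formation); it is not Chomsky's formal system, only an illustration of how a minimal merge-like operation over a one-element "lexicon" yields the successor function and addition.

```python
# Assumed encoding (for illustration only): 0 = {}, succ(n) = {n}.
# A single set-forming operation applied to one object plays the role
# of a degenerate Merge; iterating it gives the numerals.

def succ(n):
    """Successor: 'merging' a single object yields its singleton {n}."""
    return frozenset({n})

def to_int(n):
    """Read a numeral off by counting nesting depth."""
    depth = 0
    while n:
        (n,) = n  # unwrap the unique member
        depth += 1
    return depth

def add(m, n):
    """Addition as iterated succession."""
    while n:
        (n,) = n
        m = succ(m)
    return m

zero = frozenset()
three = succ(succ(succ(zero)))
print(to_int(three))                   # 3
print(to_int(add(three, succ(zero))))  # 4
```

One operation and one atomic element suffice to get successor, and addition falls out as iterated succession — which is the sense in which the minimal computational system "comes close to yielding knowledge of arithmetic."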
Now, that goes back to a very interesting debate at the origins of the theory of evolution. Darwin and Wallace, co-founders of the theory of evolution, had a debate and discussion about what seemed to them — correctly — a serious paradox. They assumed,
correctly as it turns out. They didn't have the evidence, but they assumed that all humans have basically knowledge of arithmetic. And they asked, how could this possibly be? It had never been used in evolutionary history, so it couldn't have been selected. So how can it possibly be? Is there something other than natural selection? Darwin
held to the idea that somehow it could have been selected. Wallace argued there's some other factor in evolution beyond natural selection. We now know there are many other factors, but they didn't know that. Now, maybe there's an answer, maybe, to the Darwin-Wallace debate. Conceivably, arithmetic is just
either an offshoot of the language faculty or an instantiation of whatever rewiring of the brain yielded the language faculty. Maybe it also simultaneously yielded the minimal system, which comes pretty close to yielding knowledge of arithmetic. There are similar questions about the
basis for music and the basis for morality — John Mikhail's work, and interesting work by Jeffrey Watumull, Marc Hauser, and Ian Roberts discussing these questions.
Some would argue that the statement, you can't know anything for certain, demonstrates a nihilistic skepticism toward information and reason. The question arises whether knowledge is ultimately linked to trust and thus is practically equivalent to faith. It's been well understood since the 17th century, the collapse of Cartesian foundationalism. It was by then very clear
that in the empirical world, you can't reach certainty. It's impossible. Hume then expanded on this, and by now it's just common understanding. In the empirical science, you can search for the best theory you can find, but you can't show that it's true. That's the fact of life. Does that lead to
Nihilism? I don't see why. We can get better theories and worse theories. That's all we have to do. Does that undermine information and reason? No. Is it equivalent to faith? Not really, because we don't have faith in the best theory. We're open-minded. Maybe there's a better one coming along. But
We have reason to believe that this is the best theory of the ones that are available. That's not faith, that's reason — reason to say this is the best theory anybody's been able to come up with. It would be faith to say it must be the right theory, and we don't say that. So this is all within the bounds of reason; no place for nihilistic skepticism.
Thank you. Thank you so much, sir. It's always a pleasure, and I hope you enjoyed your time. Thank you for spending an hour and a half with me and the Theories of Everything channel. Thank you. Okay. It's your ninth time on this channel. Thank you.
The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so, as each subscribe and like helps YouTube push this content to more people. You should also know that there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, disagree respectfully about theories, and build our own TOEs as a community.
Links to both are in the description. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well.
Last but not least, you should know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in "Theories of Everything" and you'll find it. Often I gain from re-watching lectures and podcasts, and I read in the comments that TOE listeners also gain from replaying, so how about re-listening on those platforms?
iTunes, Spotify, Google Podcasts, whichever podcast catcher you use. If you'd like to support more conversations like this, then do consider visiting patreon.com slash Kurt Jaimungal and donating with whatever you like. Again, it's support from the sponsors and you that allow me to work on toe full time. You get early access to ad free audio episodes there as well. For instance, this episode was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough.
},
{
"end_time": 339.019,
"index": 15,
"start_time": 312.039,
"text": " If AI took over the decision-making power of the world's governments and energy systems, would it effectively become a benevolent dictator for the human species? And is this something we should want? Well, we have to ask ourselves what we mean by AI. That can mean anything, if you mean"
},
{
"end_time": 368.422,
"index": 16,
"start_time": 340.794,
"text": " some kind of AI that can be invented in a fantasy, say a science fiction story, of course it can happen, but in a science fiction story you can make anything happen that you like. If we mean anything that is remotely on the horizon, the question doesn't even arise."
},
{
"end_time": 396.971,
"index": 17,
"start_time": 369.309,
"text": " I mean, there's nothing in artificial intelligence that is even in the range of such questions. Systems that exist are very narrow. They basically involve reproduction of what they see in massive amounts of scanning of data, all sorts of errors, serious errors."
},
{
"end_time": 427.261,
"index": 18,
"start_time": 397.892,
"text": " In fact, I'm sure as you know, there was a recent petition initiated by Max Tegmark, physicist at MIT, signed by a couple of thousand of the most active proponents and advocates of these systems, wide range, calling for a moratorium on their development because of their"
},
{
"end_time": 455.213,
"index": 19,
"start_time": 427.654,
"text": " harmful effects. That's because of serious effects. I mean, there are also comical effects, like some of the systems are very error prone. So a friend of mine, a colleague, just for curiosity, looked up a question, asked a question about himself. He got back"
},
{
"end_time": 482.568,
"index": 20,
"start_time": 456.732,
"text": " disquisition, most of it biographical, more or less accurate. But it also had him married to a linguist he never met, don't particularly like each other, and they have two children. Somebody, not me, looked up something about me in computational science, sent it to me. It's, again, a lot more or less accurate than it had"
},
{
"end_time": 511.834,
"index": 21,
"start_time": 483.046,
"text": " me inventing one of the main programming languages, which is called Chomsky Normal Form. There is something called Chomsky Normal Form, but it has a very remote relationship to a theory of programming languages, but that's about it. But if you try to get into serious areas, this kind of error is not a joke."
},
{
"end_time": 541.254,
"index": 22,
"start_time": 513.285,
"text": " Okay, we will move on. How powerful can the use of introspection as a tool for self-knowledge be? There are many claims of gurus and yogic types accessing universal awareness through meditation. How justified is this belief? Well, I can't comment on that belief. I don't know anything about it and have never seen any evidence for it. But we can"
},
{
"end_time": 567.466,
"index": 23,
"start_time": 541.971,
"text": " look at the case of introspection in areas which are areas of the sciences. So we can ask, how far does it get us? Not very far. So take the case of language, but in my own field of study, you can show pretty conclusively that our introspection into the nature of what we're doing"
},
{
"end_time": 597.995,
"index": 24,
"start_time": 567.995,
"text": " doesn't even come close to the mental operations that are taking place and using producing language and doing what we're doing now. The introspection carries us only as far as what's called inner speech, actually fragments of inner speech, if you pay attention. But what's called inner speech is not language. You can show that it's quite remote from its"
},
{
"end_time": 627.722,
"index": 25,
"start_time": 598.473,
"text": " part of the externalization of language in a particular sensory mode or medium, which is quite distinct from the internal mental processes that are taking place and the use of language for production and perception, and of course the nature of language. So we know that introspection is giving us a very superficial, partial"
},
{
"end_time": 657.398,
"index": 26,
"start_time": 628.131,
"text": " glimpse of some of the external forms of whatever's going on in our mind. And we know this is the same in many other areas, but not just introspection, even perception. So direct visual perception gives us a very misleading interpretation of what we're in fact seeing. Take, say, the moon illusion. The moon illusion gives you a"
},
{
"end_time": 685.964,
"index": 27,
"start_time": 657.892,
"text": " In what ways do you think Eastern philosophical traditions such as Buddhism or Taoism could contribute to our understanding of language and the mind? They can contribute."
},
{
"end_time": 712.108,
"index": 28,
"start_time": 686.988,
"text": " all in favor of it, but I'm unaware of any examples. Do you happen to agree with the Buddhists that suggests suffering comes from desire? Not in my experience or what I've read about. Somebody's being tortured, they're suffering, but it's not coming from desire."
},
{
"end_time": 743.541,
"index": 29,
"start_time": 715.299,
"text": " Professor, if you could have one question answered, what question would you want answered most, and why, of all the questions you could choose from? Well, the one question that I would like to see answered"
},
{
"end_time": 770.896,
"index": 30,
"start_time": 744.445,
"text": " is whether there is any possibility within our current system of institutional structures for the human species to survive. Right now the answer, likely answer is no, but maybe there's an answer that could show some way in which it's possible."
},
{
"end_time": 798.592,
"index": 31,
"start_time": 771.852,
"text": " Professor, I was watching an interview with you from the 60s where someone was asking whether we learn language by learning the rules of the language and you pointed out that most people learn language unconsciously without instruction. You then added, no one knows all the rules of language and I'm curious if this is still true. If it is, what progress has been made and where is the largest gap in our knowledge of the rules?"
},
{
"end_time": 827.449,
"index": 32,
"start_time": 799.616,
"text": " Well, there's an enormous amount that's been learned since the 1960s. We had time, I could review it, but we have a much closer understanding of the basic properties. Notice that the question arises on two levels. When you talk about the rules of language, do you mean the rules of specific languages or the deeper question of the"
},
{
"end_time": 857.875,
"index": 33,
"start_time": 827.961,
"text": " Principles of Universal Grammar, the fundamental nature of the faculty of language. On individual languages, we know vastly more. There's been a huge explosion of research on typologically varied languages of a kind that had never taken place in history. So massive information about many languages of which had never been looked at, but there'd never been questions in this kind of depth."
},
{
"end_time": 887.295,
"index": 34,
"start_time": 858.422,
"text": " So that's enormously expanded on the more fundamental question of the nature of what's called universal grammar. That means the theory of the innate faculty of language. There there's been quite considerable progress. You couldn't have guessed in the 1960s what we're talking about today. Are there gaps? Enormous gaps."
},
{
"end_time": 909.377,
"index": 35,
"start_time": 887.773,
"text": " But that's because it's part of science. Take any science you like. There's enormous gaps. Take physics, most advanced science. Can't find 90% of what constitutes the universe. It's sort of a gap, if you like."
},
{
"end_time": 938.353,
"index": 36,
"start_time": 909.923,
"text": " Okay, let's move on. This recursive property, merge, has been claimed to be a fundamental characteristic that distinguishes language from other cognitive faculties. Merge is an indispensable operation of a recursive system, which takes two syntactic objects A and B and forms a new object, the set of A and B. How is merge detected in interviews with people, or more generally, through examining examples of the use of writing or speech?"
},
{
"end_time": 967.09,
"index": 37,
"start_time": 939.718,
"text": " First of all, I should say that the formula there, G equals set AB, that's not actually merge, that's binary set formation. Merge is a case of binary set formation, which has many other properties, much to go into it. But the question is, how is merge detected in the interviews and so on? Well, about the same way that"
},
{
"end_time": 997.039,
"index": 38,
"start_time": 967.91,
"text": " the principles that the laws of motion are detected when you look at leaves blowing in the wind. You don't detect the laws of nature by looking at phenomena. That's why people do experiments. If you look to do interviews with people, you're not going to find out very much about how"
},
{
"end_time": 1024.838,
"index": 39,
"start_time": 997.551,
"text": " genes provide information for proteins to be formed. That's not what science is about. If a science or inquiry has proceeded beyond the most primitive level, absolutely most primitive, you're not going to see the principles and actual events and phenomena of the world."
},
{
"end_time": 1054.411,
"index": 40,
"start_time": 1025.247,
"text": " I mean, that's why people do experiments after all. You don't just look at the phenomena around you. You try to idealize, eliminate the irrelevant aspects. It takes, again, the moon illusion. There's no real explanation for it, but everyone rational assumes that the moon isn't larger at the horizon, and you can do experimentation which sharpens up what's actually happening."
},
{
"end_time": 1083.951,
"index": 41,
"start_time": 1054.855,
"text": " But that's just true of everything. You're not going to find out anything of any significance just by looking at phenomena. Maybe you get some statistical regularities of things that are happening, but that's very far from an understanding of anything that's going on in any area, certainly here. It's kind of interesting that in areas like language, the question is asked, would never be asked in other fields."
},
{
"end_time": 1115.265,
"index": 42,
"start_time": 1087.705,
"text": " Why is it asked about language or other aspects of human mental life? Well, there's an interesting background here. There once was, at one time, there were notions of metaphysical dualism, Cartesian dualism, two different substances, mind and body. Well, that's collapsed, but it's been replaced by something much more pernicious."
},
{
"end_time": 1145.879,
"index": 43,
"start_time": 1116.8,
"text": " Metaphysical dualism was a scientific theory which had justification at the time. It turned out to be wrong, but most scientific theories do. What's been replaced by is a kind of methodological dualism which has no virtues. What it says is we should look at human mental processes differently than the way we look at the rest of the physical world. That's very common."
},
{
"end_time": 1175.52,
"index": 44,
"start_time": 1146.715,
"text": " And this is an example of it. And nobody would ever ask a question like this about, say, biology or chemistry. But you do ask it about language. It's not making a criticism. This is common all through philosophy and so on, other fields. Looking at expecting to find things about language by observing phenomena we would never"
},
{
"end_time": 1208.268,
"index": 45,
"start_time": 1179.48,
"text": " This is a second part to the same question. Do the syntactic objects that merge together in a merge operation generically have subset categories, or when they are specific syntactic objects, do they have varying properties and sub-distinctions?"
},
{
"end_time": 1235.896,
"index": 46,
"start_time": 1208.865,
"text": " They have many complex properties. When you say you're merging a subject and a predicate, look into the details. It's not what we're seeing, but what underlies them, but something like a noun phrase and a verb phrase, let's say. They each have complex internal properties."
},
{
"end_time": 1255.162,
"index": 47,
"start_time": 1236.613,
"text": " developed by the structure building operations of the language, which are all well beyond introspection. There is a strong argument, I think, that they are probably based on merge, but it takes work to show that."
},
{
"end_time": 1282.739,
"index": 48,
"start_time": 1255.828,
"text": " Now the last question on merge from Kelly is, is the concept of merge and the underlying process of merge similar or different than other concepts which may or may not be related or credible such as concepts by Mark Turner within cognitive linguistics called conceptual blending and his more detailed description called conceptual integration networks? Well, to the extent that"
},
{
"end_time": 1310.384,
"index": 49,
"start_time": 1283.49,
"text": " conceptual blending has been made at all clear. They basically no connection. One more merge question. Before humans experienced the mutation that yielded the merge operation, did they possess a lexicon of conceptually rich items that functioned as complex calls somewhat like those used by current monkeys and primates? Well, there's a"
},
{
"end_time": 1338.37,
"index": 50,
"start_time": 1311.425,
"text": " There's a famous article that I often urge people to read by Richard Lewontin, one of the most important modern evolutionary biologists, passed away a couple years ago. It's an article in the last volume of a multi-volume encyclopedia published by MIT,"
},
{
"end_time": 1364.77,
"index": 51,
"start_time": 1338.763,
"text": " called Invitation to Cognitive Science, a lot of very valuable articles. Dick Lawontin was asked to write an article on evolution of cognition, and that bears on this question, what was going on in the early stages, and he gives a careful discussion. He finally says,"
},
{
"end_time": 1379.002,
"index": 52,
"start_time": 1366.049,
"text": " His final words basically are tough luck. There's no way by any known method of the study of evolution to find out what was going on."
},
{
"end_time": 1396.323,
"index": 53,
"start_time": 1379.923,
"text": " Noam Chomsky's ideas on linguistics, philosophy, and economics are invaluable and he's always looking to the future. However, Chomsky is not the only one. Others have been working at the cutting edge for years and innovating on our behalf."
},
{
"end_time": 1415.111,
"index": 54,
"start_time": 1396.323,
"text": " Masterworks is a Picasso in the world of investments. It's a platform that lets you invest in real fine art without needing millions. I mean like museum art, Warhol, Banksy, even Monet. Masterworks is that Higgs boson in your investment portfolio giving it mass."
},
{
"end_time": 1440.862,
"index": 55,
"start_time": 1415.111,
"text": " We're talking awe-inspiring 740,000 plus users over 750 million invested in more than 225 SEC qualified offerings. They've sold 13 paintings so far with five of those sales happening just since we talked about them last in December. In fact, every painting to date has delivered a profit back to their investors."
},
{
"end_time": 1469.889,
"index": 56,
"start_time": 1440.862,
"text": " Now I hear you, Kurt, I'm no art aficionado. Is it truly that simple? Well, Masterworks breaks these paintings into shares through a well-architected process with the SEC. And if they're selling a painting you're invested in, you get a share of the profits. Feel the brushstrokes of fortune as three of their recent sales have painted the canvas with 10, 13, and 35% net returns. Now here's the thing. Masterworks has a waitlist."
},
{
"end_time": 1482.381,
"index": 57,
"start_time": 1469.889,
"text": " However, because you're a theories of everything listener, you get to quantum leap over the waitlist. Just click the link in the description. Alright, let's dive back into the work of another master, Noam Chomsky."
},
{
"end_time": 1505.503,
"index": 58,
"start_time": 1483.951,
"text": " I've been lucky enough to partner with MyHeritage. MyHeritage is a leading global family history and DNA service that makes exploring your family history easier, more streamlined than ever. Their test kits are simple to use, taking only two minutes. It helps you discover your origins and find new relatives. MyHeritage also covers more regions than any other test."
},
{
"end_time": 1526.442,
"index": 59,
"start_time": 1505.503,
"text": " Once you get your DNA results, you'll receive an ethnicity estimate, which supports over 42 ethnicities across 2114 regions. In fact, here's me unboxing it and doing the swab. Super, super simple. So I'm recording this right now, and I expect that the results say that I'm West Indian because I'm from Trinidad, and I think that's pretty much it. Let's see. Okay."
},
{
"end_time": 1548.882,
"index": 60,
"start_time": 1526.596,
"text": " Think Verizon, the best 5G network is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store today and we'll give you a better deal. Now what to do with your unwanted bills? Ever seen an origami version of the Miami Bull?"
},
{
"end_time": 1567.09,
"index": 61,
"start_time": 1549.343,
"text": " Jokes aside, Verizon has the most ways to save on phones and plans where you can get a single line with everything you need. So bring in your bill to your local Miami Verizon store today and we'll give you a better deal."
},
{
"end_time": 1603.183,
"index": 62,
"start_time": 1582.824,
"text": " My heritage has a promotion right now. Click the link in the description and use the coupon code"
},
{
"end_time": 1628.865,
"index": 63,
"start_time": 1603.183,
"text": " To find out what was going on a couple hundred thousand years ago."
},
{
"end_time": 1655.555,
"index": 64,
"start_time": 1629.462,
"text": " I mean, in principle, it's not an unanswerable question, but we have no way to answer it, at least no known way. So the answer to the question is, how can you find out? How can we find out what was going on? We can look at the complex calls used by monkeys and other primates. We can study those. They don't even have a remote relationship to anything in human language."
},
{
"end_time": 1685.094,
"index": 65,
"start_time": 1656.323,
"text": " And in fact, there have been very elaborate efforts to try to see if you can train chimpanzees closest to us to develop anything remotely like language and totally impossible as any biologist would expect 12 million years of evolutionary separation. Why should you have any connection? So as far as we know, the"
},
{
"end_time": 1714.531,
"index": 66,
"start_time": 1685.947,
"text": " particular rules and entities that enter into human language seem to be a unique human possession, which really shouldn't surprise us very much. There isn't any other species that's doing what you and I are now doing now. Theodore Dobzhansky, one of the great evolutionary biologists, once said that each species is unique."
},
{
"end_time": 1726.374,
"index": 67,
"start_time": 1714.923,
"text": " And humans are the uniqueness of all. They have properties that are unknown elsewhere in the organic world. Language and cognition are the core ones."
},
{
"end_time": 1755.93,
"index": 68,
"start_time": 1727.193,
"text": " Have you ever attempted to follow up with John Lilly's dolphin human communication project? Dolphins have a definite and complex language system that's been recorded and analyzed but not fully translated to a complete understanding. Also, apparently sperm whales have the most intricate language. How true is this? How does one define this complexity? Study it the same way as study anything else. First of all, we should be"
},
{
"end_time": 1786.067,
"index": 69,
"start_time": 1756.442,
"text": " cautious about the use of the word language. What do we mean by that? Whatever they have, as the questioner points out correctly, it's a communication project. Communication project. Every organism has means of communication bound to bacteria. Trees communicate. Communication is all over the place. But human language is not a communication system. It's used for communication. But"
},
{
"end_time": 1811.374,
"index": 70,
"start_time": 1786.92,
"text": " fundamental design is actually dysfunctional for communication is easily shown. So calling it a language is already a questionable term. It's a communication system. How do you study it? Same way you study any other communication system. Trees, for example, study the way roots interconnect to transmit signals"
},
{
"end_time": 1841.254,
"index": 71,
"start_time": 1814.855,
"text": " the neighboring trees protection mechanisms against parasites or something that's harming the tree, the way bacteria communicate. I look at the desert ants in my backyard traveling in a line and I notice that they bump into each other along the way. I presume they're communicating and an ant"
},
{
"end_time": 1871.715,
"index": 72,
"start_time": 1841.903,
"text": " A person studying ants would study their communication system. That has been done extensively for bees, for example. And you would do the same with some dolphins. We'll talk about a dolphin-human communication project. It's a little bit different. We can communicate with other animals. Like I have sitting under my desk right now two animals I can communicate with."
},
{
"end_time": 1900.401,
"index": 73,
"start_time": 1872.534,
"text": " It's got nothing to do with language. I mean, I use language, but what they're going on in their heads is something totally different. They're detecting some noises that mean something to them. But you can like run outside the play or something. But all of these things can be studied in normal fashion, just without illusions. We shouldn't delude ourselves into thinking that"
},
{
"end_time": 1926.442,
"index": 74,
"start_time": 1901.22,
"text": " Something like a communication system is going to tell us something about the nature of language. They're all very interesting. Many of them are more have complexity that we don't have, like humans can't communicate as efficiently as honeybees do with regard to their particular concerns, like the distance quality of a flower."
},
{
"end_time": 1955.333,
"index": 75,
"start_time": 1927.073,
"text": " Thank you. There has been a growing trend within philosophy, especially in the analytic tradition, that nowadays the main task of philosophy is to clarify questions and nurture sciences which are in their infancy while needed. Would you agree with this and do you think there are other roles for philosophy to play? Well, there is."
},
{
"end_time": 1985.026,
"index": 76,
"start_time": 1957.773,
"text": " There are philosophers, very good ones, who take this position, like John Austin, one of the, in my view, most important modern philosophers. This was pretty much his view. But as he would have told you, it's nothing new. This is John Locke, for example. You look back to John Locke, he said his role is to try to clear the underbrush."
},
{
"end_time": 2012.619,
"index": 77,
"start_time": 1985.674,
"text": " So that inquiry can then proceed without confusions and misunderstanding. Well, that's basically the same thing. Philosophy traditionally has nurtured science in a very obvious and important way. Philosophy and science weren't distinguished until fairly recently. In fact, if you study at Oxford today,"
},
{
"end_time": 2041.63,
"index": 78,
"start_time": 2013.114,
"text": " You study natural science, natural philosophy, and moral philosophy. Natural philosophy is what we call science. Till the mid-19th century, there was basically no distinction. The word science in its modern sense was introduced by William Ewell, I think, in around 1850 or so. Up until then, you couldn't tell who was a philosopher and who was a scientist."
},
{
"end_time": 2069.224,
"index": 79,
"start_time": 2042.09,
"text": " Barclay could debate the accuracy of Newton's proofs, for example. But the sciences advanced sufficiently by the late 19th century, so you had to have special knowledge to become directly involved in the sciences. It wasn't just anybody could talk about them. There still were philosophers like, say, Bertrand Russell or"
},
{
"end_time": 2100.128,
"index": 80,
"start_time": 2070.299,
"text": " More modern times, people like Hilary Putnam who were very sophisticated in the sciences, but it's different interests and concerns. But I think philosophy always has nurtured new sciences, has a very significant role with regard to the emerging sciences, like take the cognitive sciences. That's pretty recent. We're not even in a Galilean stage, in my opinion."
},
{
"end_time": 2130.316,
"index": 81,
"start_time": 2100.657,
"text": " And here, it's hard to distinguish philosophy and science, just as it was in the early scientific revolution or well into the 19th century. So clarifying concepts, clearing the underbrush, as Locke put it, very important aspect of philosophy. Of course, there are many other things, many other topics, like can we ground moral philosophy in some way? That's a different kind of topic."
},
{
"end_time": 2157.261,
"index": 82,
"start_time": 2130.708,
"text": " What does it mean to ground moral philosophy and is this related to the symbol grounding problem? Not really. That's a different question. Symbol grounding is an interesting question, but a different one. The ground moral philosophy means to find some basis from which we can"
},
{
"end_time": 2187.244,
"index": 83,
"start_time": 2160.623,
"text": " Maybe deduce is too strong, but at least find some rational basis for what we take to be ethical behavior, ethical judgments, and so on. That's become a pretty concrete question in the last 30 or 40 years. There's very interesting work that was initiated by John Michail."
},
{
"end_time": 2214.189,
"index": 84,
"start_time": 2187.978,
"text": " was a grad student in philosophy, did a thesis on this, then went on to write a book about it, which reconstructed a lot of John Rawls' work in terms of Rawls' initial proposals and efforts, which he later abandoned, to try to find something like a grammar of ethical judgment, and then"
},
{
"end_time": 2244.224,
"index": 85,
"start_time": 2214.445,
"text": " There was a lot of criticism of it. Michael goes through the criticism, I think undermines it quite effectively, and then proceeds with the project and went on to do, to open experimental work along with Elizabeth Spalke, a very good experimental psychologist, worked together on just trying to find universal moral principles by looking at children, different cultures and so on."
},
{
"end_time": 2274.428,
"index": 86,
"start_time": 2244.906,
"text": " That work was later expanded by others, Mark Hauser and others, and to the extent that it succeeds, you get...Mathias Maumann, a philosopher, has done recently a very, I think, quite profound work on these issues. To the extent that it succeeds, you can hope to get a conception of"
},
{
"end_time": 2301.22,
"index": 87,
"start_time": 2275.094,
"text": " something like the faculty of language, the faculty of moral judgment, some innate system that has principles and a kind of a rational structure that allows one to draw from it conclusions about what our innate moral judgments are. Then comes the grounding problem"
},
{
"end_time": 2331.22,
"index": 88,
"start_time": 2302.415,
"text": " How do we know that's the right moral? These are the right moral judgments. How do we know? Suppose we knew what are the moral judgments that are inherent in our nature? Well, then comes a question. In fact, you can ask, is it a question? Is there anything beyond this? In fact, similar questions arise for epistemology, as in fact, Matthias Malmann, who I mentioned,"
},
{
"end_time": 2360.23,
"index": 89,
"start_time": 2331.408,
"text": " studied this, discussed this point. But if you think about epistemology, it's also based on, I'm talking about what Klein calls natural epistemology, epistemology in David Hume's sense. Epistemology is a science that seeks to find what Hume called the secret origins and principles of some"
},
{
"end_time": 2389.36,
"index": 90,
"start_time": 2361.254,
"text": " Well, I suppose you can study that. Let's take what looks like, to me at least, the most promising approach to it. Charles Sanders Peirce's conceptions of abduction never worked out very carefully, an open philosophical question."
},
{
"end_time": 2413.422,
"index": 91,
"start_time": 2389.565,
"text": " His point, which was plausible, I think, is that there's something that allows us to, in a particular situation of having reached a certain level of understanding or comprehension of whatever we're studying, say the natural world, there's something in our minds that enables us"
},
{
"end_time": 2442.722,
"index": 92,
"start_time": 2413.985,
"text": " to put forth a limited set of potential hypotheses that might explain it. It's got to be a limited set or else we'd never get anywhere. And he gives some good arguments that that's the way the history of science has proceeded. I suppose he can make some sense of that, find out what it is that determines this limited set of principles that carries us up to the next stage of understanding."
},
{
"end_time": 2472.892,
"index": 93,
"start_time": 2443.746,
"text": " secret springs and origins of our intellectual nature in human's terms, then would come the same grounding problem. How do we know these are the right ones? Maybe our intuitive mode of inquiry and investigation is not the one that leads to the truth. Same kind of question. And then you can ask, is it really a question?"
},
{
"end_time": 2503.012,
"index": 94,
"start_time": 2473.37,
"text": " What is the symbol grounding problem and do you see it as solved?"
},
{
"end_time": 2531.357,
"index": 95,
"start_time": 2503.712,
"text": " An infant learns words very rapidly at the peak moments of language acquisition, like two years old or so. An infant is picking up a word practically every waking hour, virtually no evidence, maybe one presentation. Look at the nature of the words carefully. They're very intricate meanings."
},
{
"end_time": 2561.34,
"index": 96,
"start_time": 2532.21,
"text": " It's not you look at a tree and say tree, and that's the meaning. Nothing remotely like that. Very intricate, complex meanings. This was already known in classical Greece. You can now know a lot more about it. Well, then the question comes, how does this symbol, tree, river, house, dog and so on, what does it have to do with the external world?"
},
{
"end_time": 2591.357,
"index": 97,
"start_time": 2562.415,
"text": " That's a serious question. It doesn't just pick out elements in the world and show that that's false. That's much more complex, right? About what could be a tree, what could be a house, what persistence of the question that troubled Locke and Hume still troubles people. What about things that were our perception, persistence of objects when we don't perceive them, let's say?"
},
{
"end_time": 2618.183,
"index": 98,
"start_time": 2591.92,
"text": " existence of objects that we've never seen. What kind are they? What kind of properties does a person have that makes it the same person under radically different changes? Same for a river or a house or anything else. All of these are questions about simple grounding, most of them not very seriously investigated, though they could be."
},
{
"end_time": 2639.855,
"index": 99,
"start_time": 2618.814,
"text": " One of the reasons they're not investigated is because of illusions, the illusions that there's a simple associative relationship between a symbol and something in the external world. So, child sees a tree, mother says, tree, okay, kid knows tree, doesn't work with them."
},
{
"end_time": 2668.609,
"index": 100,
"start_time": 2640.742,
"text": " Can you expound on the below quotation? Science is a bit like the joke about the drunk who is looking under a lamppost for a key that he's lost on some other side of the street, because that's where the light is. It has no other choice. So this comes from Noam Chomsky's letter to the author 14th of June, 1993. Well, that's what we're stuck with. We look under the lamppost where there is some"
},
{
"end_time": 2699.326,
"index": 101,
"start_time": 2669.616,
"text": " The story is about a drunk who's looking under a lamppost, and somebody comes up and asks him, what are you looking for? And he says, I'm looking for a key. And the person asks him, where did you lose it? And he says, well, I lost it on the other side of the street. And you ask, well, why are you looking here then? Because that's where the light is. There's no light over there. That's what we're stuck with. We're looking where"
},
{
"end_time": 2732.295,
"index": 102,
"start_time": 2702.739,
"text": " Is the mind-body problem misconceived? If so, how so?"
},
{
"end_time": 2761.544,
"index": 103,
"start_time": 2733.575,
"text": " basically Cartesian. Descartes famously postulated res cogitans alongside of res estensa. So there's extended entity that's matter, physical, material. Then there's the mental world, which is separate from that."
},
{
"end_time": 2790.998,
"index": 104,
"start_time": 2762.056,
"text": " and look for some connection between them. That was a perfectly reasonable scientific hypothesis, nothing mystical about it. Descartes, like all scientists of the period from Galileo through Newton and beyond, adopted what was called the mechanical philosophy, the idea that the world is a complex artifact, sort of like the"
},
{
"end_time": 2821.391,
"index": 105,
"start_time": 2791.493,
"text": " artifacts that skilled artisans were producing at the time, quite very extensively all over Europe. Skilled artisans were producing mechanical clocks that could do all sorts of complicated things. The fountains, the gardens at Versailles, as you walked through them, all kinds of things were happening."
},
{
"end_time": 2848.439,
"index": 106,
"start_time": 2821.869,
"text": " things that looked like people playing a role in plays and so on. And then Galileo, his contemporaries and others, Leibniz, Newton concluded, well, that's what the world is, just a complicated system, but basically a mechanical ears, levers, things pushing and pulling each other and so on."
},
{
"end_time": 2878.439,
"index": 107,
"start_time": 2849.019,
"text": " Descartes just routinely assumed that, everyone did, and then Descartes tried to show that his main scientific work was to try to show that, in fact, you could give a mechanical explanation for phenomena of the world, including a good deal of human understanding and behavior of sensation and perception. He thought he could give a mechanical"
},
{
"end_time": 2908.046,
"index": 108,
"start_time": 2879.002,
"text": " interpretation of that. But he observed that some things don't fall within that. In fact, one of his main examples in the Discourse on the Method was language. He said, if you look at what people are doing there, there's no mechanical interpretation for what you and I are now doing. It's maybe impelled by circumstances, but not compelled by them. You could choose differently, and you choose appropriately."
},
{
"end_time": 2936.8,
"index": 109,
"start_time": 2908.439,
"text": " So somehow people are making appropriate, undetermined use of language in ways that we could call creative. It's a long history behind this. He wasn't the first to point it out. But then, like any scientist, he postulated a principle to account for this. That's the second principle for his cogatons. That's the classical nonbiotic problem."
},
{
"end_time": 2966.886,
"index": 110,
"start_time": 2937.756,
"text": " Well, it was, the props were not under it by Newton, who showed, didn't say anything about the mind, but he showed that the mechanical system is not true, showed the world is not a mechanical system. That's his, one of the main contents of Newton's Principia is to undermine the show that Descartes proofs just didn't work."
},
{
"end_time": 2994.411,
"index": 111,
"start_time": 2967.381,
"text": " It had no account of the universe in mechanical terms. In fact, there isn't any Newton-postulated principles that are outside the mechanical world. Newton himself regarded this as a total absurdity and didn't believe a word of it. He said, nobody of any scientific understanding can believe this, but it seems to be true. And that's why he never wrote a book on"
},
{
"end_time": 3022.995,
"index": 112,
"start_time": 2995.845,
"text": " on science, what would have been called at the time, principles of philosophy. He never wrote them. He wrote mathematical principles. That's just an account of what happens, but without any explanation. His famous comment, I make no hypotheses, is because he couldn't, he said, I have no physical explanation. I can just say, here's"
},
{
"end_time": 3052.193,
"index": 113,
"start_time": 3023.899,
"text": " the way it works. There's the mathematical principles. Well, that ended the mind-body problem in the classical sense, until somebody tells us what matter is. Nobody can tell that up to the present day, and there's no mind-body problem in the classical sense. This was understood, incidentally, by John Lotka immediately after the appearance of Principia."
},
{
"end_time": 3079.65,
"index": 114,
"start_time": 3052.961,
"text": " pointed out that he put it in a theological framework, standard framework of the day. He said, just as Newton has shown, God assigned properties to matter that we cannot conceive of. Similarly, God may have super-added to matter principles of thought."
},
{
"end_time": 3107.005,
"index": 115,
"start_time": 3084.974,
"text": " don't know what matter is, but whatever it turns out to be, the tides, the planets, so on, they follow it by principles we can't understand, but we can work with them, and maybe thought is the same. That was pursued very extensively through the 18th century."
},
{
"end_time": 3136.459,
"index": 116,
"start_time": 3107.346,
"text": " by leading figures. It reached its peak with Joseph Priestley, a chemist-philosopher at the end of the 18th century with quite extensive studies of how thought can be a property of organized matter. Then it was pretty much forgotten, rediscovered in the late 20th century without any knowledge of the history, and it's now considered part of philosophy and neuroscience."
},
{
"end_time": 3167.517,
"index": 117,
"start_time": 3138.387,
"text": " come straight out of the collapse of the mind-body problem. Well, is there another mind-body problem? Well, commonly these days, the mind-body problem is formulated in totally different terms, unrelated to the classical one. It's usually formulated in terms of first-person versus third-person conceptions of the world. So the first-person conception is"
},
{
"end_time": 3197.176,
"index": 118,
"start_time": 3168.285,
"text": " what I sense. The moon is big on the horizon. That's first person. The third person approaches what the scientist from the outside says about it. If you pursue that, you get different conclusions. Some people call that a mind-body problem. I think that's a very bad term for it. And I think that's just a mis-formulation of the way we should look at the problem. There's an interesting"
},
{
"end_time": 3223.609,
"index": 119,
"start_time": 3197.722,
"text": " interesting work on this, including a recent book by an Australian philosopher, a very good one, Peter Cizak. I think it's called the Cartesian Illusion or something like that. The Cartesian Amphithet or something like that. But"
},
{
"end_time": 3241.834,
"index": 120,
"start_time": 3224.411,
"text": " But something like that is often called the mind-body problem, which it shouldn't be, in my opinion. There was the scientific question, which essentially was resolved by Newton. There's no body, there's no matter, there's no physical, so there's no mind-body problem."
},
{
"end_time": 3258.882,
"index": 121,
"start_time": 3242.432,
"text": " Thank you, sir. Would it be fair to argue that one reason why chat GPT and other large language models seem so human-like in their responses is that as a result of their training, they have inferred that is made sense of an underlying and imminent"
},
{
"end_time": 3288.387,
"index": 122,
"start_time": 3258.882,
"text": " semantic grammar to map the various sentences to their corresponding semantic underpinnings. If you'd like, you can watch this video by Sabine Hassenfelder, a physicist, arguing that chat GPT does indeed understand language. Does an artificially intelligent chatbot understand what it's chatting about? A year ago, I'd have answered this question with clearly not, but I've now arrived at the conclusion that the AIs that we use today do understand what they're doing, if not very much of it."
},
{
"end_time": 3316.715,
"index": 123,
"start_time": 3288.712,
"text": " I'm not saying this just to be controversial. I actually believe it, I believe. I took a look at the first half of the video. I didn't follow it through, but she's a very good physicist, but I think she's asking meaningless questions. What she's interested in is, and she makes it very clear at the beginning, is understanding quantum physics. She's thinking of a famous comment by"
},
{
"end_time": 3339.94,
"index": 124,
"start_time": 3317.159,
"text": " Richard Feynman, which is leading quantum physics, in which he said, nobody understands quantum physics, but we know how to use it. And Hassenfelder's position is, if we know how to use it, we understand it. That's all that understanding is. Then she says, well, if Chet GPT uses language, then it understands it."
},
{
"end_time": 3370.282,
"index": 125,
"start_time": 3341.664,
"text": " And then it goes on from there, and she talks about Chinese, Charles's Chinese room and so on. But that's basically the issue. Well, first of all, that wildly overestimates Chot VPT. It does nothing like it. But even if we accept that, it's just a terminological question about what we're going to call understand. Not very interesting. So suppose you go to a museum."
},
{
"end_time": 3399.514,
"index": 126,
"start_time": 3371.715,
"text": " Go to any museum, you'll see art students carefully copying paintings by masters, painting by Van Gogh or Rembrandt or something. Well, what they're producing, will we call that creating a work of art? I mean, in a certain sense, yes, they're doing things I can't do."
},
{
"end_time": 3429.957,
"index": 127,
"start_time": 3400.094,
"text": " There's a lot of creativity involved, but is that creating a work of art? You can call it that if you want, but it's not an interesting question. Same about understanding. If I copy a novel by Tolstoy, am I creating an artistic, am I making an artistic contribution? In a certain sense, yes, copied it, it's there. I'm using"
},
{
"end_time": 3459.77,
"index": 128,
"start_time": 3430.35,
"text": " Do I understand? Is it a work of art? Of course not. Well, that's chat box. Basically, all the large language models are basically complex versions of one or another kind of plagiarism. You want to call that understanding your business. It's like asking, does a submarine swim? You want to call that swimming, okay, but there's no substantive issue."
},
{
"end_time": 3489.497,
"index": 129,
"start_time": 3460.845,
"text": " Have you seen the paper that's called Modern Language Models Refute Chomsky's Approach to Language, the one by Stephen Piantadosi? If so, what do you make of it? Well, I mean, unlike a lot of people who write about this, he does know about large language models, but the article makes absolutely no sense, has a minor problem and a major problem."
},
{
"end_time": 3517.398,
"index": 130,
"start_time": 3490.486,
"text": " The minor problem is that it's beyond absurdity to think that you can learn anything about a two-year-old acquiring language on almost no evidence by looking at a bunch of supercomputers, scanning 50 terabytes of data, looking for statistical regularities and stringing them together."
},
{
"end_time": 3545.776,
"index": 131,
"start_time": 3518.029,
"text": " To think that from that you can learn anything about an infant is so beyond absurdity that it's not even worth talking about. That's the minor problem. The major problem is that in principle, you can learn nothing in principle. Make it 100 terabytes, you know, use 20 supercomputers. Just bring out the in principle impossibility even worse."
},
{
"end_time": 3572.346,
"index": 132,
"start_time": 3546.664,
"text": " for more clearly, for a very simple reason. These systems work just as well for impossible languages that children can't acquire, as they do for possible languages that children do acquire. Kind of as if a physicist came along and said, I got a great new theory. It counts for a lot of things that happen,"
},
{
"end_time": 3600.094,
"index": 133,
"start_time": 3572.807,
"text": " a lot of things that can't possibly happen, and I can't make any distinction among them. We would just laugh. I mean, any explanation of anything says, here's why things happen, here's why other things don't happen. You can't make that distinction. You're doing nothing. And that's irremediable."
},
{
"end_time": 3628.558,
"index": 134,
"start_time": 3600.64,
"text": " built into the nature of the systems. So the more sophisticated they become at dealing with actual language, the more it's demonstrated that they're telling us nothing in principle about language, about learning, or about cognition. So there's a minor problem with this paper and a major one. The minor problem is the simple absurdity of thinking that"
},
{
"end_time": 3660.145,
"index": 135,
"start_time": 3631.442,
"text": " Yes, I heard you describe this as if a physicist came along and said, I have a theory and it's two words, anything goes. Well, that's basically the language models. You give it a system that designed in order to violate the principles of language. We'll do just as well. Maybe better."
},
{
"end_time": 3686.135,
"index": 136,
"start_time": 3660.623,
"text": " because it can use simple algorithms that aren't used by natural language. So it's basically, like I said, or like I say, suppose some guy comes along with an improvement on the periodic table, says, I got a theory that includes all the possible elements, even those that haven't been discovered, and all kinds of impossible ones, and I can't tell any difference."
},
{
"end_time": 3714.377,
"index": 137,
"start_time": 3687.295,
"text": " Are there any recent developments or complete description of Chomsky's inclusive condition? Further, what is the standing of the bare-phrase structure theory? Are there serious attempts made in its development? Is it still of interest for syntacticians?"
},
{
"end_time": 3745.282,
"index": 138,
"start_time": 3715.367,
"text": " of interest. But of course, it's been much developed since then. That was 30 years ago. The inclusiveness condition by now, you don't have to bother saying. It's just a consequence of the... If you look at any form of growth and development, whatever it is, arms and legs, language, anything, there's basically going to be three factors involved."
},
{
"end_time": 3776.374,
"index": 139,
"start_time": 3746.374,
"text": " One of them is whatever the internal structure is, ultimately the innate structure. That's one factor. Some kind of external data which triggers the internal systems, partially shapes them. Third, laws of nature, crucial. In the case of language, the relevant laws of nature are principles of computational efficiency."
},
{
"end_time": 3805.094,
"index": 140,
"start_time": 3776.903,
"text": " We can think of those as laws of nature. Well, by now, the last couple of decades, there's been extensive work showing how principles of computational efficiency determine a very large part of the outcome of the growth and acquisition process. And the inclusiveness condition just fell to the side as a consequence of these principles."
},
{
"end_time": 3831.186,
"index": 141,
"start_time": 3805.828,
"text": " The bare-phrase structure was an early attempt to separate out many different factors involved in syntactic structure that's now extended quite considerably. So it's still there, but it's the basis for much more sophisticated developments which have much wider empirical branches as well."
},
{
"end_time": 3859.684,
"index": 142,
"start_time": 3831.954,
"text": " Professor, can you explain what intentionality is to the audience, as well as myself, and what your views are on Searle's Chinese Room experiment are? Intentionality, this is with a T, not an S, notice. Intentionality with an S is sometimes mixed up with it. It's totally different. It has to do with aboutness. So what am I talking about when I say the sauna setting?"
},
{
"end_time": 3888.387,
"index": 143,
"start_time": 3861.067,
"text": " If I say I crossed the river, what am I talking about? That's an intentional and very complex question. It goes back to the pre-Socratics. We have a lot to say. It's like the grounding problem. The Chinese room experiment, I think, is just a bit of confusion. I think the answer to the problem had already been given by"
},
{
"end_time": 3923.575,
"index": 144,
"start_time": 3893.882,
"text": " manner, not explicit. He wasn't talking about understanding, but about thinking. It's about the same. He said, people think maybe dolls and spirits. What that meant is we use the word thinking to refer to what people do. Like other words, it has a kind of open texture, so it's not precisely determined what"
},
{
"end_time": 3952.875,
"index": 145,
"start_time": 3923.797,
"text": " So maybe we'll extend the use to things that are kind of like people. Dolls, that's what he meant by dolls and spirits. Well, we don't extend it to rooms. We don't talk about rooms thinking. That's like saying we don't say that submarines swim. There's no substantive issue. It's not the way the word is used. We don't use the word understand for rooms."
},
{
"end_time": 3981.834,
"index": 146,
"start_time": 3953.353,
"text": " We use it for people and things sort of like people. I don't think there's any more involved than that. That's just a technological point. If we look into it further, if you developed a system for the room, let's say a robot which looked like humans, and you were able to build into the robot,"
},
{
"end_time": 4011.305,
"index": 147,
"start_time": 3982.295,
"text": " all of the rules and principles that are coded in our brains. We know some of them, not all of them. If you could do something like that, then you could ask the question, is the robot thinking? In fact, that question was asked in the 17th century. The 17th century, after Descartes, some of his followers, in particular Jacques de Cordonois,"
},
{
"end_time": 4040.265,
"index": 148,
"start_time": 4011.715,
"text": " Partesian philosopher formulated what's now called the Turing test, but in a much more scientific way. He said, suppose there's some organism that looks like us and is able to respond to any question or statement that we formulate in the way that humans do."
},
{
"end_time": 4068.131,
"index": 149,
"start_time": 4040.606,
"text": " no matter how hard we make the problem. Then he said, we would have good reason to think that it has a mind like ours. Notice that he was talking scientifically, metaphysically. Has a mind means a thing. This is like a litmus test for acidity. I want to find that. Remember, the idea was there is an entity, the mind, separate from body."
},
{
"end_time": 4096.527,
"index": 150,
"start_time": 4068.882,
"text": " We want a test to find out when it's there, like a litmus test. And he formulated what's basically Turing's imitation game to say, here's a litmus test we could use for an organism that looks like us, incorporating Wittgenstein's insight. Well, that was scientifically reasonable."
},
{
"end_time": 4126.903,
"index": 151,
"start_time": 4096.937,
"text": " Speaking of metaphysics, what are your thoughts on extending generative grammar into metaphysics, such as the CTMU, which is the cognitive theoretic model of the universe by Christopher Langan? Like asking, how can we extend"
},
{
"end_time": 4155.469,
"index": 152,
"start_time": 4128.78,
"text": " The theory of genetic formulation of proteins, genetic determination of proteins into a theoretical model of the universe. It's not the right kind of question. These are theories of particular and organic entities. The theory of genetics, RNA,"
},
{
"end_time": 4186.084,
"index": 153,
"start_time": 4156.544,
"text": " Formation of proteins, protein folding. Those are studies of particular organic entities. Generative grammar is the study of another particular organic entity, something coded into our brains, which is unique to humans and apparently common to humans. So, I mean, it has a theoretical structure. And of course, the theoretical structure"
},
{
"end_time": 4216.271,
"index": 154,
"start_time": 4186.92,
"text": " could be something like the theoretical structure for other questions, like I wouldn't call it a cognitive theoretical model of the universe, but a model of the universe as a theoretical structure. Well, all theoretical structures have something in common, like they have a deductive character. The consequences have to follow from the premises and so on. But I"
},
{
"end_time": 4247.21,
"index": 155,
"start_time": 4217.466,
"text": " Professor, what are your views on solipsism and the role of consciousness, observer, in creating reality? Is this similar to the problem of anything goes?"
},
{
"end_time": 4276.459,
"index": 156,
"start_time": 4248.729,
"text": " Our particular way of observing reality is a concrete phenomenon in the actual world. It's not anything good. Take the moon illusion again. My conscious experience, can't get over it, can't overcome it, is that the moon is big at the horizon. With all you might know, the greatest astronomer in the world will still see that."
},
{
"end_time": 4307.108,
"index": 157,
"start_time": 4277.329,
"text": " That's our conscious experience. Well, is it creating reality? No. Reality is not that the mind is that the moon is bigger. Discover that in other ways. Therefore, nobody believes that the moon is bigger, even though that's all that we see and there's no explanation for it. Still don't believe it because there's so much counter evidence. So you don't create reality that way."
},
{
"end_time": 4336.152,
"index": 158,
"start_time": 4307.875,
"text": " Now there's a sense in which our immediate experience creates reality. It's the kind of issue that's raised in a very interesting way by Nelson Goodman, an outstanding philosopher, in his work on what he calls star-making. So he begins by asking the question, take a look at the constellation, say Orion."
},
{
"end_time": 4365.623,
"index": 159,
"start_time": 4337.21,
"text": " Is the constellation a thing in the world, or is it a construction of our mind? What he calls a version of the world. It's a construction of our mind. It's a version. Then he goes on to say, well, what about the stars that constitute Orion? Are they things in the world, or are they just a version? He concludes they're a version. We could talk about"
},
{
"end_time": 4393.268,
"index": 160,
"start_time": 4365.93,
"text": " What we call about talk about stars, we can talk about in other ways, like concentrations of energy and the general flux of energy in the universe and so on. Then it goes on to say, well, is there anything that isn't just a version? Is there any reason to ask what do we gain by postulating a reality behind the versions? Then goes interesting discussions. There's"
},
{
"end_time": 4422.415,
"index": 161,
"start_time": 4394.241,
"text": " A book on this was his introduction commentary by a number of other philosophers, Hilary Putnam, Israel Schaeffler, and a couple of others, I think Carl Hampel. Those are interesting questions. I can't give a simple answer to them. But there is some kind of sense in which our immediate perceptions are the basis for"
},
{
"end_time": 4445.742,
"index": 162,
"start_time": 4423.131,
"text": " developing their notion of reality. Actually, Russell talked about this over a century ago, about a century ago, when he was saying that, if you think about it, he says, our highest level of confidence is in our immediate sensory"
},
{
"end_time": 4472.142,
"index": 163,
"start_time": 4446.237,
"text": " our immediate consciousness. That's our highest level of confidence. And then most of the rest of what we do in inquiry is trying to find an explanation for it. Part of the explanation we find is that our consciousness misleads us, tells us things that aren't true. That's one of the things we discover when we try to find an explanation for"
},
{
"end_time": 4502.585,
"index": 164,
"start_time": 4473.148,
"text": " There's an increasing number of claims that the approach to universal grammar doesn't hold up to Bayesian modeling. Is universal grammar still the proper way to view human language usage? Universal grammar is the proper way by definition. Universal grammar is defined"
},
{
"end_time": 4530.862,
"index": 165,
"start_time": 4503.251,
"text": " as the correct theory, whatever it is, of the innate language faculty. So it can't be wrong because it's true by definition. We don't know what it is. You can try to find it, but whatever it is, whatever the true theory is of the innate language faculty, that's universal grammar by definition and the modern usage of the term."
},
{
"end_time": 4559.531,
"index": 166,
"start_time": 4531.664,
"text": " Bayesian modeling doesn't address this question. Bayesian modeling is concerned with how beliefs, theories are acquired. That's a different question. I don't think it's particularly helpful, but that's a different question altogether. So it's kind of asking, how do we acquire"
},
{
"end_time": 4586.749,
"index": 167,
"start_time": 4560.145,
"text": " The theory of universal grammar. I don't think that's the way it happens. I think it's normal evolutionary processes. In what ways do you think AGI should be implemented? And in what ways does its implementation scare you? Well, there's no implementation that can scare me because there's no implement. There's no theory. There's nothing there."
},
{
"end_time": 4615.145,
"index": 168,
"start_time": 4587.807,
"text": " I mean, AGI is a goal of the early pioneers of artificial intelligence. People I knew, in fact, you go back to the 1950s, people like Herbert Simon, Al Newell, early Marvin Minsky, were concerned with trying to see if you could use what they called"
},
{
"end_time": 4647.398,
"index": 169,
"start_time": 4618.916,
"text": " understand the nature of intelligence, of learning, of thinking, and so on. It's basically part of cognitive science, asking, can we use the computational devices and theories that were coming into existence at the time as ways of approaching scientific questions? That would be AGI, but that's been pretty much abandoned. In fact, it's sometimes ridiculed as"
},
{
"end_time": 4676.118,
"index": 170,
"start_time": 4648.37,
"text": " Good old fashioned AI or something. Now we do different things, basically engineering projects which don't ask these questions. Who are your linguistic successors that are on the right path toward developing your linguistic theories? Also, what is the generator that generates grammar? Generative grammar is just the term generative is just the normal"
},
{
"end_time": 4703.626,
"index": 171,
"start_time": 4676.766,
"text": " mathematical terms. You have an axiom system for arithmetic, say. It generates an infinite number of proofs, geometrical objects, which are well-formed proofs. That's generation. Nothing different in generative grammar. You can't ask what generates it. The system is one that has a finite number. It's a finitary"
},
{
"end_time": 4732.807,
"index": 172,
"start_time": 4705.845,
"text": " and infinite output. That's generation and technical sense. There's plenty of very fine younger linguists doing outstanding work on this. It could be unfair. If you look at articles of mine, I list many of them in the Acknowledgements as collaborators and doing independent work. That's only a small sample."
},
{
"end_time": 4761.647,
"index": 173,
"start_time": 4733.37,
"text": " Is our language capacity identical or related to our arithmetic capacity? That's a very interesting question, which also has interesting historical background. The question, as far as the question itself is concerned, our current conception of universal grammar has the property"
},
{
"end_time": 4788.353,
"index": 174,
"start_time": 4762.244,
"text": " that if you take the simplest possible form of it, take the basic structure building operation, say merge, you take the simplest form of that, absolutely simplest, and you limit any generative procedure, any computational system is going to have"
},
{
"end_time": 4818.541,
"index": 175,
"start_time": 4788.677,
"text": " rules and atomic elements, elements to which it applies for language. These will be the lexical elements, smallest meaning-bearing elements, kind of word-like, but not really words. So if you take the simplest computational procedure, cut it down to its limits, reduce the lexicon to one element, you get the basis for arithmetic. You get the successor function and addition."
},
{
"end_time": 4848.848,
"index": 176,
"start_time": 4819.36,
"text": " essentially, then you can easily tweak it to get knowledge of arithmetic. So in that sense, it could be, we don't know, but it could be that our knowledge of arithmetic is just an offshoot of the language faculty. It's formally possible because our knowledge of arithmetic can be formulated as the ultimate simplest version"
},
{
"end_time": 4879.189,
"index": 177,
"start_time": 4849.77,
"text": " Now that goes back to a very interesting debate at the origins of the theory of evolution. Darwin and Wallace, co-founders of the theory of evolution, had a debate and discussion about what seemed to them, correctly, a serious paradox. They assumed,"
},
{
"end_time": 4908.985,
"index": 178,
"start_time": 4879.701,
"text": " correctly, as it turns out; they didn't have the evidence, but they assumed that all humans basically have knowledge of arithmetic. And they asked, how could this possibly be? It had never been used in evolutionary history, so it couldn't have been selected for. So how can it possibly be? Is there something other than natural selection? Darwin"
},
{
"end_time": 4938.148,
"index": 179,
"start_time": 4909.855,
"text": " held to the idea that somehow it could have been selected. Wallace argued there's some other factor in evolution beyond natural selection. We now know there are many other factors, but they didn't know that. Now, maybe there's an answer, maybe, to the Darwin-Wallace debate. Conceivably, arithmetic is just"
},
{
"end_time": 4967.568,
"index": 180,
"start_time": 4939.07,
"text": " either an offshoot of the language faculty or an instantiation of whatever rewiring of the brain yielded the language faculty. Maybe it also simultaneously yielded the minimal system, which comes pretty close to yielding knowledge of arithmetic. There are also similar questions about the"
},
{
"end_time": 4984.735,
"index": 181,
"start_time": 4968.319,
"text": " basis for music and the basis for morality: John Mikhail's work, and interesting work by Jeffrey Watumull, Marc Hauser, and Ian Roberts, discussing these questions."
},
{
"end_time": 5015.691,
"index": 182,
"start_time": 4985.794,
"text": " Some would argue that the statement 'you can't know anything for certain' demonstrates a nihilistic skepticism toward information and reason. The question arises whether knowledge is ultimately linked to trust and thus is practically equivalent to faith. It's been well understood since the 17th century, since the collapse of Cartesian foundationalism. It was by then very clear"
},
{
"end_time": 5043.114,
"index": 183,
"start_time": 5016.357,
"text": " that in the empirical world, you can't reach certainty. It's impossible. Hume then expanded on this, and by now it's just common understanding. In the empirical sciences, you can search for the best theory you can find, but you can't show that it's true. That's a fact of life. Does that lead to"
},
{
"end_time": 5068.336,
"index": 184,
"start_time": 5043.78,
"text": " nihilism? I don't see why. We can get better theories and worse theories. That's all we can do. Does that undermine information and reason? No. Is it equivalent to faith? Not really, because we don't have faith in the best theory. We're open-minded. Maybe there's a better one coming along. But"
},
{
"end_time": 5097.739,
"index": 185,
"start_time": 5069.189,
"text": " we have reason to believe that this is the best theory of the ones that are available. That's not faith. That's reason. It's reason to say this is the best theory of the ones anybody's been able to come up with. But it would be faith to say it must be the right theory. So this is all within the bounds of reason. No place for nihilistic skepticism."
},
{
"end_time": 5114.189,
"index": 186,
"start_time": 5098.166,
"text": " Thank you. Thank you so much, sir. It's always a pleasure, and I hope you enjoyed your time. Thank you for spending one and a half hours with me and the Theories of Everything channel. Thank you. Okay, it's your ninth time on this channel. Thank you."
},
{
"end_time": 5139.411,
"index": 187,
"start_time": 5115.026,
"text": " The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so, as each subscribe and like helps YouTube push this content to more people. You should also know that there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, disagree respectfully about theories, and build, as a community, our own TOEs."
},
{
"end_time": 5157.432,
"index": 188,
"start_time": 5139.411,
"text": " Links to both are in the description. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well."
},
{
"end_time": 5178.166,
"index": 189,
"start_time": 5157.432,
"text": " Last but not least, you should know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in Theories of Everything and you'll find it. Often I gain from re-watching lectures and podcasts, and I read in the comments that TOE listeners also gain from replaying, so how about instead re-listening on those platforms?"
},
{
"end_time": 5207.432,
"index": 190,
"start_time": 5178.166,
"text": " iTunes, Spotify, Google Podcasts, whichever podcast catcher you use. If you'd like to support more conversations like this, then do consider visiting patreon.com slash Curt Jaimungal and donating whatever you like. Again, it's support from the sponsors and you that allows me to work on TOE full-time. You get early access to ad-free audio episodes there as well; for instance, this episode was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough."
}
]
}