Terrence Deacon: Origins of Life, Consciousness, Entropy, and Sentience
December 22, 2022 • 3:08:31
Transcript
So now, so let's talk about the problem of life then. The origin of life has to be a transition in which the living system is using this process that's self-destructive to keep the self-destructive processes from being self-destructive. But it has to use them to do this. How is that even possible?
Okay, I just finished editing this podcast. This is one of the deepest, most technical talks. The first 20 minutes are difficult to follow, and you may wonder, where is it going? And how is it related to the topic at hand? What you'll notice is that by the end, every tendril from every argument gets tied together neatly and nicely. So look through the timestamps if you're interested in specific subjects, or watch all of it until the end, where it plays out like a beautiful orchestra or a movie where all the storylines start to intersect toward the end.
Today we talk about consciousness, why Darwinian evolution is incomplete, and the symbolic grounding problem, which is another aspect of the hard problem of consciousness, which I call the hard problem of meaning. Professor Terrence Deacon is a neuroanthropologist who taught at Harvard for eight years and is currently a professor at the University of California, Berkeley.
He has an extraordinarily philosophical mind. He studied under Chomsky, Noam Chomsky, even though he has major disagreements with him. And part of his aim is tackling the evolution of human cognition as well as the hard problem of consciousness, which in his view is not so hard. Terrence will be coming on again for a part two, so if you have questions, do leave them in the comment section, because I'll be pulling from there. Either I or my wife reads every single comment.
My name is Kurt Jaimungal, and I have a background in mathematical physics, mainly on the theoretical end with unified theories. And this channel is dedicated to the exploration of the variegated terrain of theories of everything, primarily from a physics perspective, but also exploring the role that consciousness has with respect to the fundamental laws of nature, provided that those laws are even knowable to us. And if they're not, because of some variant of Gödel's incompleteness theorem, then why? Why does that have any application to physics?
And there's Uncommon Goods. The former two, Rocket Money and Masterworks, you'll hear at approximately 20 minutes and then again at 40 minutes, for roughly 60 seconds each. And as for Brilliant, it's also 60 seconds, which is occurring presently. If you're familiar with TOE, you're familiar with Brilliant. But for those who don't know, Brilliant is a place where you go to learn math, science, and engineering through bite-sized interactive learning experiences. For example, and I keep saying this, I would like to do a podcast on information theory,
particularly with Chiara Marletto, who is David Deutsch's student and has a theory of everything that she puts forward called constructor theory, which is heavily contingent on information theory. So I took their course on random variable distributions and knowledge and uncertainty.
in order
It would be unnatural to define it in any other manner. Visit brilliant.org slash toe. That is T-O-E, to get 20% off the annual subscription. And I recommend that you don't stop before four lessons. I think you'll be greatly surprised at the ease with which you can now comprehend subjects you previously had a difficult time grokking. At some point, I'll also go through the courses and give a recommended order. Thank you, and enjoy this highly anticipated and
far overdue conversation with Terrence Deacon. Professor, I'm super, super, super excited to have you on. I feel like this is long overdue. The more that I've researched you, the more that I realized, man, this is a guest that I should have had a year ago, and I should have prepared for a year. And what I mean by that is I prepare for guests for weeks. For most people, I come up with a question every minute or so, or every 30 seconds, of watching a lecture of theirs online, and maybe one question per paragraph if I'm reading a paper, whereas for you it's fivefold. It's like questions occur so frequently that I'm scrawling, and I told you I couldn't collect them all into one place. So anyway, thank you so much for coming on. No problem. I love to have conversations. Open conversations like this are the best. So this channel is
dedicated to understanding fundamental reality, whatever that means. So why don't you give a brief overview of your insights, your research, with respect to that topic, with respect to consciousness, with respect to the large questions that loom over one's moments of existential dread. So take us through how your views have developed over time. Where have they been, and how did they change?
So I'll do it in a sort of autobiographical sense that in the 1970s as a student, I was very much interested in the burgeoning computer science world in systems theory as a general theory of how the world works, how the universe works.
and was a student of Gregory Bateson, one of my long-term influences over the years, who was
a very eclectic anthropologist and biologist whose father, William Bateson, was sort of the father of genetics at the beginning of the 20th century. And Gregory Bateson was very much interested in information and communication and had spent the 1940s and 50s associated with a number of people who were studying the then growing field of information theory, communication theory, and systems theory. And as a result, set me off
But by the end of the 1970s, I, by accident, mostly by accident, not being interested in philosophy at all, really, I was mostly interested in the sciences, came across the work, as I say, almost by accident, of the late 19th-century philosopher Charles Sanders Peirce. And he was writing also about what today we might call information and communication. But he had developed
an unusual way of thinking about it that basically had mostly fallen out of the field of science and communication and particularly out of linguistics by the 1970s. And it's a field that he called semiotics.
And it has since grown to be a much more widespread field with lots of interest, though mostly in the humanities. And that's always disturbed me, because I think it's really a fundamental approach to the sciences. And that was Peirce's interest. Back when he was writing, semiotic theory was basically like a theory of information, only bigger, in the following sense: he was interested in what
most theorists of mind have been interested in before and since, and that is this question about mental representation. You know, how are things represented? When something is represented, of course, it's not there. It's something standing in for it in some way. But it doesn't help us understand what that actual physical relationship is. And one of the challenges ever since in philosophy has been to try to make sense of that.
Peirce's way of approaching it was to say, no, there are actually three different ways in which this is done. He was recognizing that there are representations by likeness, iconic, he called it; representations by correlation or contiguity or part-whole relationships, causal relationships of some form, which he called indexical relationships; and another kind of relationship that words, for example, have to what they represent, which is
based on some kind of agreement of how to interpret sounds, but there's nothing about the words we produce in most cases that is linked to what they represent, right? We sometimes gloss this over by saying it's arbitrary. One thing that's happened in linguistics, and one of the things that got me interested in language and the origins of language, was: why are we the only species that does this? We're the only species that does this, meaning does language. Does language, or in fact, what I'll eventually say is, does what we would call symbolic communication. That is something distinct from iconic and indexical communication.
The question, for me at the time, who became interested eventually in the neurosciences, I spent most of my career as a neuroscientist, studying mostly the development of brains and how they've evolved. And my original question was, well, if language is so unusual, and it takes a very unusual kind of cognition to accomplish it, it must have evolved at some point. And therefore, there should be
in effect a reflection of this in the structure of the human brain and how it's different from non-human brains. So I got into the neurosciences to try to find it. In fact my PhD work was tracing connections in monkey brains to try to find out what's fundamentally different about human brains that makes language possible. And this is where in interaction I became interested in Noam Chomsky, took courses from him. We never quite agreed about this in part because of course he's coming from it
coming to language from a formal point of view, and I was coming to language from a biological point of view. Where we came into conflict was this: an accident of evolution, some kind of magical thing that happened to the brain, a miracle that we don't understand, where it happened all at once and language pops in suddenly, unfortunately doesn't explain very much.
If you want to say, well, language is, you know, there's some algorithm that makes it possible, and that's in the brain. As a biologist, you want to explain, okay, how is it organized? What parts of the brain are involved? What parts of the brain change between chimpanzees and ourselves to make it not only possible, but easy? We acquire languages easily. I mean, even though, you know,
multiple languages become more and more difficult as we get older. Nevertheless, we're pretty good at it. And no other species can even get close. Maybe not even start, although I think there's some evidence that they could start, but to get very far, almost not at all. So there had to be something about the brain. And we knew that brains got bigger. That's one thing, but bigger is a pretty generic feature. And I spent a lot of time studying
why human brains are in fact larger than other primate brains, for example, and why primate brains are on average larger for their body size than any other mammals. And we've now shown that, in fact, the size story is really a red herring in lots of interesting ways. And we can come back to talk about that. But as a result, I became very much interested in this. And my PhD work basically showed that there were no new parts.
The human brain, although bigger and with more neurons, has no new kinds of structures in it that do this. It was a real paradox. So basically I was looking at the connections between areas that we know in the human brain are critically involved in this, areas in the brain called Broca's area and Wernicke's area. And
I was looking at the connections between them, but doing so in monkey brains, using the techniques at the time that allowed us to trace this out. And what I found was that everything in the monkey brain predicted what we should expect in a human brain. It wasn't until a couple of decades later with connectivity being demonstrated in the human brain using these various connectome technologies that we've developed,
basically tractography it's called these days, in which we use fMRI-like techniques to map out roughly what the connections are. In fact, I could do it microscopically, of course, in other species. We could look at actual synapses and axons. We're just sort of simulating that even now. But nevertheless, what it shows is that the connections that I was able to demonstrate in monkey brains
between areas that had the same cellular structure that we find in human brains. They were all there, but monkeys can't do anything like what we can do. And so it really sort of drove me in other directions. And it drove me to study what's behind brain evolution, which is brain development.
How it is that the genes determine how segments of the nervous system partition into different kinds of cells, and how those cells send out their output branches, their axons, to connect to other cells in the brain.
And so by studying this, it shifted my attention towards a technique that would help me understand how that might be in different species. So all through the 1990s, I shifted my effort to studying how this happens in development by using fetal neural transplantation. And what I was doing at this time was to take
immature fetal cells from one species' brain, transplant them into a different species' brain, and I had markers so that I could see one kind of animal's cells in another kind of brain. So I could actually look at how individual neurons found their targets. And what I found is that, in fact, and we started out by transplanting pig cells, pig embryonic cells, into rat brains,
where we could actually see the pig cell in this environment of rat and see how the connections were made. It turns out that the pig cells found appropriate targets in the rat brain. Not only that, if the rat brain had been previously damaged, so it lost some function, transplanted pig cells could restore some of that function. We did this with a kind of Parkinson's model. All of this sort of began to make the question even more difficult in some respects.
But what I had found is that in the enlargement of the brain, it changes the relative way that connections get established. And think about it in terms of our own voting systems here. We have what we call gerrymandering going on, in which you select a group that are going to compete against another group, but you select it in such a way so that one always wins over the other. Well, it turns out that quantity, of course, matters.
in the development of connections within the brain. There's a significant overproduction of axons and connections in neurons, in fact, that then compete for connecting to other neurons. And that competition is driven by a kind of Hebbian-like process, a process in which the standard
saying that we use for students is that neurons that fire together wire together. That is, what happens is that there's a competition in which synchrony is actually an aid, and that's because it's almost like a mini Darwinian process. Gerald Edelman, decades ago, called this neural Darwinism. But basically, they get fed:
the receiving cell, if it fires in synchrony with the inputs, gives back a little bit of a growth factor that keeps those axons strong, and those that are not in synchrony just don't get it. So it's basically very much like natural selection. There's a kind of food competition going on here. But in any case, what happens is, as the brain enlarges and some parts enlarge differentially with respect to other parts, it biases that competition in interesting ways.
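A minimal sketch of the competition just described, under toy assumptions: one receiving cell, a handful of inputs, and a notional "growth factor" that rewards synapses firing in synchrony with the cell while the rest decay toward pruning. The function names, probabilities, and update sizes are hypothetical, not a model Deacon proposes.

```python
import random

def hebbian_competition(n_inputs=10, steps=200, seed=0):
    """Toy competition among inputs to one receiving cell.

    Inputs that tend to fire in the same step as the receiving cell
    get a small 'growth factor' reward; the rest slowly decay.
    Weights that fall to zero are treated as pruned connections.
    """
    rng = random.Random(seed)
    # Each input fires with its own probability; the cell fires when
    # enough weighted input arrives in a given step.
    fire_prob = [rng.uniform(0.2, 0.9) for _ in range(n_inputs)]
    weights = [1.0] * n_inputs

    for _ in range(steps):
        spikes = [rng.random() < p for p in fire_prob]
        drive = sum(w for w, s in zip(weights, spikes) if s)
        cell_fires = drive > 0.5 * sum(weights)  # crude threshold

        for i, spiked in enumerate(spikes):
            if weights[i] == 0.0:
                continue  # already pruned
            if spiked and cell_fires:
                weights[i] += 0.05  # synchrony: receives the growth factor
            else:
                weights[i] = max(0.0, weights[i] - 0.02)  # starved, decays

    return weights

if __name__ == "__main__":
    final = hebbian_competition()
    kept = sum(1 for w in final if w > 0)
    print(f"{kept} of {len(final)} inputs survived the competition")
    print([round(w, 2) for w in final])
```

Changing the relative sizes of the input population in a sketch like this is the kind of quantitative bias the gerrymandering analogy points at: more numerous or more synchronous inputs win more of the competition.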
And that biased competition actually restructures the connectivity, not in an absolute sense, but in a relative sense. Some things are more connected than others. Some things are less connected than others. And this, I think, has explained how language has become possible. I like to describe it that the demands of language, the demands of symbolic communication have actually
played a role in restructuring the brain. That is, generation upon generation, brains that did better at this were reproduced, and those that did worse at it were not. And that means that the very fact of using an initially very crude kind of symbolic communication over the course of about 2 million years has played a significant role in restructuring the human brain.
So that over time we become very good at it not because we have some special language device or some special algorithm that does it but because we have hundreds of little tweaks.
that in all different ways have made it easy, whether it's having to do with how our hearing works, how our attention is shifted, all these various features that are slight biases just make it really easy for us to do this. So actually the title of my mid-90s book was The Symbolic Species, referring to a species that is adapted to the symbolic world. Like you might say seals are an aquatic species, we're the symbolic species.
We've adapted to living in a world of symbols. And I think of it sort of like a beaver in a beaver dam. Beavers over the course of millions of years have been damming up rivers and streams to create an aquatic environment. Beavers have become aquatic mammals because of what they've done to their environment. We've become symbolic mammals, symbolic primates, because what we've done to our environment
we've adapted to that environment by changing our bodies and changing our behavioral capacities, like beavers have adapted to an aquatic environment over the course of millions of years: flat tails, webbed feet, the capacity to hold their breath in interesting ways, and of course to build dams. All of this falls under
the term niche construction, that is, we've constructed a niche, we do in every generation, we build this symbolic cultural niche that we have to adapt to. And that something like that has been going on, at least according to the work I've done, I would guess that has been going on for at least 2 million years, slowly, slowly modifying this. So this is the prelude to where I want to go next, because this is where it's going to come back to your question about, you know, what's fundamental to the world?
So one of the things that this said to me is that somehow communicating symbolically requires a very different kind of interpretive process than communicating by likenesses, icons, or indices. So when I scream with pain, that's an index. It's physically correlated, physiologically correlated.
With this state, and that means that every other human being who has this same connection actually immediately can interpret this because they interpret it the same way. We laugh and we sob, and because we all do it the same way and it's specifically connected with a particular emotional state, we understand it pretty well. It's an index, like a fever is an index, because of its physical correlation. On the other hand,
We also produce iconic communication. That is, you know, coloration. I like to think about moths that have eye spots on them. A moth is sort of camouflaged sitting on a tree until a bird flies by and maybe would eat it. But if it disturbs it, the moth opens up its wings and there's these eyes, big eyes looking back at a bird.
Of course that's going to play a role in affecting the bird's communication, the bird's interpretation of what's going on. It might be dangerous. Maybe I shouldn't eat this one. So what's happened is of course it's communicating iconically by likeness. Brains are really good at
figuring out what something is by virtue of what it's like, because they have lots of other experiences.
Most of us share in the problem of being so digitally active that we overconsume or overproduce content, which in turn leads to an overwhelming amount of monthly monetary subscription. This channel's theme is exploration, at least one of the main themes, and it's difficult to peregrinate when we're handling the more routine and monotonous items. This is exactly why we're partnering with today's sponsor, Rocket Money, formerly known as Truebill. Rocket Money automates the process of financial saving.
It tackles the various subscriptions that inevitably pile up for all of us, like Netflix, phone bills, Patreon subscriptions that aren't to TOE, etc. Rocket Money displays all of your subscriptions, tells you where you're losing money, and how to cancel them with a single click of a button. Most of us think that we spend approximately $80 a month on subscriptions, but it turns out that the average figure is in excess of $200.
Get rid of useless subscriptions with Rocket Money now. Go to rocketmoney.com slash everything. Seriously, it could save you hundreds per year. That's rocketmoney.com slash everything. Cancel your unnecessary subscriptions right now at rocketmoney.com slash everything.
If you want to avoid humdrum, basic, and commonly monotonous gifts this year, then Uncommon Goods is your secret weapon. Uncommon Goods is here to make your holiday shopping stress-free by scouring the globe for the most remarkable and unparalleled gifts for everyone. Whether you're shopping for Secret Santa or for gifts for your entire family,
Uncommon Goods knows precisely what they want. When you shop at Uncommon Goods, you're supporting artists and small independent businesses. These fine products are often made in small batches, so it's best to shop now before a certain item that you like happens to sell out. Uncommon Goods looks for products that are high in quality, unique, and often handmade or made in the U.S. They have some of the most meaningful and out-of-the-ordinary gifts anywhere. From art and jewelry to kitchen, home and bar, Uncommon Goods has something for everyone.
So, not the same lackluster gifts that you can find just anywhere. Additionally, with every purchase made at Uncommon Goods, they give back $1 to a non-profit partner of your choice. So far, they've donated more than $2.5 million. To get 15% off your next gift, go to uncommongoods.com slash everything. That's uncommongoods.com slash everything for 15% off. Don't miss out on this limited time offer, uncommongoods.com slash everything.
About the moth example, if it was the eyes of the moth that it revealed, so somehow it had eyes on its wings, would that then be indexical? Because it would be literal? So in a sense, now it becomes both. And this is what I'm going to talk about in a few minutes. And that is there's a hierarchy here. One depends on the other in interesting ways. If it wasn't the fact that birds already had a way to interpret eyes,
and recognizing that particularly large eyes looking straight ahead are very likely to be a predator, like an owl or a cat of some kind. It wouldn't have any effect at all. But now there's two things going on. Now, when the bird sees this, it now triggers maybe some behavior. Be wary of this. That's an indexical relationship.
But also when the moth opens its wings suddenly, it's also indexical, because now the bird knows that this is not just bark I've been looking at, but there's something else there. So if that's an index, then there are both indices and icons available here. But nervous systems, since their beginning, and I would say life since its beginning, have been about discerning iconic and indexical relationships. So, you know, think about
a microorganism like E. coli. You know, it has to identify things that are edible and non-edible. It iconically recognizes that sugar-like molecules, and molecules that are breakdown products of something organic, might in fact be a food source.
They have a kind of a likeness to each other. Their likeness is because these are things by virtue of whatever traits they carry, whatever molecular features they carry, are interpreted by the receptor sites on this microorganism as a food. Food has a likeness. There's a certain trait that makes them alike. But now,
It also, if it's swimming around and it's moving around, the gradient of glucose might increase. That becomes an index that there's a diffusion away from some source. So it's a good idea to swim up that gradient of higher concentration.
That becomes an index. So even something as simple as a bacterium has iconic and indexical means. This is the way information is interpreted, not just carried, but interpreted. You have to have a system that interprets it. Obviously, if sweet things are not what you're looking for, then you're not going to have that tendency.
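A toy illustration of that indexical reading of a gradient, not anything from Deacon's work: a one-dimensional "run and tumble" walker that only compares its current reading with the previous one, keeps going when the concentration is rising, and tumbles to a random direction otherwise. The concentration field and step counts below are invented.

```python
import random

def glucose(x):
    """Hypothetical concentration field: higher closer to a source at x = 0."""
    return 1.0 / (1.0 + abs(x))

def run_and_tumble(steps=500, seed=1):
    """1-D run-and-tumble walker that climbs the glucose gradient.

    The cell doesn't 'know' where the source is; it only compares the
    current reading with the last one (the index) and keeps running
    while things are improving.
    """
    rng = random.Random(seed)
    x = 50.0                          # start far from the source
    direction = rng.choice([-1, 1])
    last_reading = glucose(x)

    for _ in range(steps):
        x += direction
        reading = glucose(x)
        if reading < last_reading:    # gradient falling: tumble
            direction = rng.choice([-1, 1])
        last_reading = reading

    return x

if __name__ == "__main__":
    print("final distance from source:", abs(run_and_tumble()))
```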
We have a lot of these, of course. We're attracted to certain things and repelled by certain things, and they're icons and indices of various kinds. We can also produce conventional icons and indices. I mean, think about the stick figures on restroom doors, for example. They're both iconic in a crude sense, in part because of a convention that we have. I mean, girls don't wear dresses anymore, skirts anymore, and yet that's typically the figure in the West that we still have on restroom doors.
It's there conventionally, but also because it's supposed to be iconic. But the fact that it's affixed to a particular door is indexical. It says that this indicates that behind this door, there's supposed to be males versus females. So icons and indices are there everywhere. The question is, how do we develop a capacity to represent something if the sign vehicle we're using has no relationship
to what it represents. That's the language problem. And for years, and this begins, I would say, hundreds of years back in time, we've just sort of assumed that, well, this is just a convention. We just agree that it does. The problem is that conventions just don't happen by accident. The problem is, yes, a miracle could happen and give us language.
and allow us to have all the same ways of doing it. Or, like any other convention, any other social habit, it takes communication to set it up. So think about any other thing that we agree upon to do, a habit that we have socially. To establish that convention, we have to communicate about it.
Now, that means you can't use a convention to produce a convention unless you already have a first convention. How do you get the first one? How do you get language when there's no language, when there's no other conventional form of communication? And that is just watch young children. They recognize things that are like each other. They communicate to others by sharing objects of a certain kind. They point or reach
That is, pointing is an index. Objects that you can play with, or food that's always food, or drink that's always drink, each instance is iconic of each other. So right away, children are born like other species with lots of iconic and indexical capacity. And as the first year goes by, and they develop this unique way of indicating that only we humans have, which is pointing with our fingers and hands,
And looking in particular directions and watching somebody else's gaze. These are all indices. They're directly correlated with something that's there. So children build up this conventionality slowly in the first year of life. They have lots of information.
about communication. In order to build up a means of communicating that lacks that, in other words, to develop the conventionality, they also have to use iconic and indexical abilities. So one of the real challenges in understanding the brain then, is first of all to understand that whenever you build up conventional communication, I like to call this displaced reference. It's because nothing about the sign vehicle gives it away.
It's only the sharing of interpretive capacities that makes it work. To share it, of course, we have to go through all of these processes. We're really good at it. My work has begun to pursue this even at the molecular level. We were just talking about bacteria a minute ago, but it turns out that we see the same thing going on even at the molecular level. Think about the simple example of
how DNA produces proteins that have interesting three-dimensional structures. DNA has a sequence of nucleotides. Every three of them we call a codon. It's interesting that we use the metaphor of a code here. Think about Morse code. Morse code is an arbitrary correlation between dots and dashes and letters, two conventions,
and we've now linked them together with a conventional association, because dots and dashes aren't letters. So we have exactly the kind of thing that we're struggling with here: a conventional or symbolic form of communication. A codon matches to an anticodon in a messenger RNA molecule, so that in going from DNA to the messenger RNA, it's laid out in
iconic ways. That is, the messenger RNA now is a reflection of an exact, you know, reverse isomorphism of the DNA that it took that information from. Its nucleotide sequence corresponds exactly to the one that's produced by the DNA molecule. Then taken into a ribosome, that structure also made up of DNA, I mean of RNA and protein,
then takes in another kind of RNA molecule, a transfer RNA molecule, which has on one end of it three nucleotides, which will match again in what we call anticodon fashion. That is, the reverse three that now attach it iconically to the messenger RNA molecule. But on the other end of the transfer RNA... Why is it that it attaches iconically? Because it's a mirror image.
So we might say it covaries. It's a covariance relationship. And what we mean by it is there's a similarity. In this case, there's an inverse similarity. So in the DNA molecule, the A's, G's, C's, and T's match up with
their opposites. So G and C bind together, A and T bind together. In the messenger RNA, you replace one of them with another one called uracil, which is a slightly related nucleotide. And so now we've got three things that bind together. Let's see how we can do it by showing. So you've got the RNA molecule binding to the DNA molecule. It now carries a mirror image. In fact,
Its opposite side would be the DNA molecule, but now it then carries over and binds to, here I've got my RNA molecule coming across here, and it binds to another RNA molecule, the transfer RNA molecule.
So the three bases in the DNA molecule map to three bases in the messenger RNA molecule, which map to three bases in the transfer RNA molecule, which is now, since this is the reverse, the mirror image, now it binds again and you've got the original image back. So what you have is you've maintained the form, the form or what I would call the constraint, the sequence constraint has been maintained.
So we would say that by covariance relationship, information has been transferred. Okay, okay. So constraint is the same as information in this case or no? So what I'm going to show you is how they're the same, but it turns out there's a hierarchic relationship that we have to keep in mind in order to show this. This was lost in information theory originally in the 1940s and 50s.
and it needs to be brought back, and I'll explain why in a minute. But the idea is that now there's a covariance relationship. We've passed this covariance from one molecule to another molecule to a third molecule. That transfer RNA molecule has also got, at the other end of it, an amino acid, and each set of three nucleotides on one end is matched to a specific amino acid on the other end. That amino acid is now brought together:
these three attached to the next three to the next three, different amino acids corresponding to the different codons. And now, by bringing together different transfer RNA molecules, you're also bringing together different amino acids, so that now what's happening is there's a continuity relationship, or a correlation relationship.
That correlation relationship is now also correlating amino acids, which tend to get stuck together, and they produce a long string of amino acids that is coded for in the sequence in the DNA molecule. So now we have gone from a DNA molecule, by a covariance relationship, to the RNA molecule, which has a correlational relationship, and neighboring transfer RNA molecules now provide
amino acids with a correlational relationship. So by virtue of taking a similarity relationship, a covariance relationship, and then a correlation relationship, we've now allowed a DNA molecule, which has no relationship to proteins as molecules, to pass its information along.
Proteins are made up of totally different molecules, in a string that tends to fold up, because of their electrical and hydrophobic and hydrophilic potentials, into three-dimensional structures. So a linear structure has passed its information on, through these multiple steps, to a totally different kind of molecule, which can now interact with other proteins and other kinds of molecules to produce most of the work of the cell.
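A small sketch of the chain just described, using only a toy fragment of the genetic code (the dictionaries below are deliberately incomplete and purely illustrative): base-pairing from a DNA template to mRNA stands in for the grounded, covariance-like step, while the codon-to-amino-acid lookup stands in for the arbitrary, tRNA-mediated step.

```python
# DNA template -> mRNA by base pairing ("iconic" covariance), then
# mRNA codons -> amino acids via a tRNA-style lookup (the arbitrary,
# code-like step). Only a handful of codons are included here.

DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}   # base pairing
CODON_TO_AA = {                                          # tiny subset
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
}

def transcribe(dna_template: str) -> str:
    """Pair each template base with its mRNA complement (covariance)."""
    return "".join(DNA_TO_MRNA[b] for b in dna_template)

def translate(mrna: str) -> list:
    """Read codons in order; adjacency strings amino acids together
    (correlation), while the codon-to-amino-acid mapping itself is the
    arbitrary, 'displaced' step."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TO_AA.get(mrna[i:i + 3], "?")
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

if __name__ == "__main__":
    template = "TACAAACCGATT"        # hypothetical template strand
    mrna = transcribe(template)      # -> "AUGUUUGGCUAA"
    print(mrna, translate(mrna))     # -> ['Met', 'Phe', 'Gly']
```

The sequence constraint survives each hand-off, but by the last step the carrier (a string of amino acid names) shares nothing material with the nucleotides it came from, which is the "displaced reference" point being made here.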
What's happened is the reason we call this a codon relationship is there's a code-like relationship between them. That is, what we've done is we've figured out
in biology now, evolution has figured this out, so to speak: how to take a linear relationship in one kind of molecule and create a particular kind of three-dimensional molecule made up of totally different kinds of atoms, that is, made up of amino acids as opposed to nucleotides, whose function is determined by its three-dimensional structure, by how it's folded.
So, in effect, there's almost no relationship between them. This is why thinking of it as a code is not crazy. It's arbitrary. And yet the arbitrary has been produced by this multi-step process that links covariance with correlation in a way that allows this to happen.
I like to call covariance relationships and correlation relationships grounded. They're grounded because the vehicle that carries it is carrying some feature in common with what it represents, with the information that it's carrying. So if it's a likeness relation, if it's a covariance relationship, it's carrying some kind of formal feature forward into another kind of molecule. Whereas if it's just physically correlated, stuck together,
not by virtue of likeness, but by simply by virtue of being next to each other, being correlated, being attached. That also is grounded, physically grounded. One is formally grounded, and one is physically grounded. But by stacking one on top of the other, it now produces the kind of distance between where you started and where you end. So that now there's a what I call displaced reference,
The information in the DNA molecule is there in the protein, but in a totally different form. There's been a continuity of information transfer, but a complete loss of groundedness at the physical level. And yet there's an informational groundedness because you've transferred the same correlations. As a result, there can be selection that affects proteins and how they interact
And that'll select for particular sequences of genes that produce those useful proteins, even though they're totally separate kinds of molecules. It adds one further feature that I think is also important to think about in terms of the mathematics of all of this. That is, the protein now has a three-dimensional structure,
but DNA also has a three-dimensional structure. It twists, and its twist is slightly different depending on which nucleotides are in that sequence. Now, my watch is talking to me, just a minute. Let me set it down here. It's all right. We'll repeat that last part, the DNA. So the DNA sequence is of nucleotides, but it turns out that we think about the DNA molecule as a twist, as a helix.
It turns out that the helix is slightly differently twisted depending on the sequence. So it's not a uniform twist all the way up. So depending on the actual rung in the ladder or depending on a sequence of rungs? No, depending on the sequence of rungs. The sequence of rungs make it more tightly twisted or more loosely twisted depending on what the sequence is. And that means that there's a three-dimensional structure to the DNA molecule.
So there's information that's passed from the DNA to the protein.
And then there's also structural information in terms of the zeros and ones that we ordinarily think of. So those are the rungs. And then there's also structural information in the sense that certain sequences of those zeros and ones physically produce a different structure in the DNA. And it's not to say necessarily that the protein then copies that, but it somehow relates to it. No, the proteins don't copy. I want to be clear. What this says is that because the protein molecule is a totally different kind of molecule, but it has three dimensional shape.
What that means is that some proteins, we call them transcription factors, actually bind to DNA by fitting with a twist. And since that twist is determined by the sequence, then what's happened is that the proteins identify a region of the DNA molecule. And as a result, those proteins can play a role in upregulating or downregulating the expression
of genes. So now you have recursion possible. So what's happened by virtue of this displacement from one kind of information, that is sequence information, zero and one information, you might think about it, to three dimensional physical information, because they are both embodied physically, they're capable of interacting, but now on a totally different basis.
Now the protein can regulate the DNA that produces proteins. By virtue of this displacement of reference, it's possible to now have that displaced molecule regulate the kind of molecule that produced it. This is recursion. But notice that this is a kind of interesting recursion. It's very similar to Gödelian recursion.
What you need to have, even for a simple paradox like the liar's paradox, is that you can't stop interpreting, because it refers to itself. The liar's paradox, the simplest one, is of course: this statement is a lie, or this statement is false. Once you interpret it, you have to reapply it back to the original statement. You can't stop interpreting, because if it's a lie, then it must not be true.
But if it must not be true, then it's a lie, and if it's a lie, it must be true. The classic circularity here. This is the basis, in a much more complicated version, of Gödel's incompleteness proof for mathematics. That is, basically, you have to have some way that something can refer to itself. It has to have this kind of recursion.
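A tiny sketch of that non-terminating interpretation, purely illustrative: each pass re-applies "this statement is false" to the verdict from the previous pass, and the verdict never settles.

```python
def evaluate_liar(initial_guess: bool, passes: int = 6) -> list:
    """Naive re-interpretation of 'this statement is false'.

    Each pass takes the current verdict and applies the statement to
    itself: if we call it true, the statement says it is false, and
    vice versa. The verdict never stabilizes, which is the circularity
    being described.
    """
    verdicts = [initial_guess]
    for _ in range(passes):
        verdicts.append(not verdicts[-1])   # self-application flips it
    return verdicts

if __name__ == "__main__":
    print(evaluate_liar(True))   # [True, False, True, False, ...]
```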
The point here is that that recursion is possible with DNA and proteins because of this displacement. DNA and RNA have to be linked to each other by likeness relationships. But DNA and proteins have totally different reasons for having the physical structures they have. And as a result, now you can have DNA modified indirectly through proteins.
This hierarchy of communication in very simple genetics is what makes possible the complexity of bodies and cells. You've got to have this kind of recursion because now the DNA can play a role through communications through other interactions with the cell because the proteins are now carrying information also about their interaction with other proteins, maybe even with the world. That information can now modify how genes are produced and expressed. So
The complexity becomes possible. People in semiotics have talked about this as scaffolding, that is you're building up new levels of information that can now be a basis for building up yet higher order information. So in fact this relationship I've just talked about is what makes it possible to have
complicated multi-celled bodies that have repeated parts. Think about the multiple legs on insects, or the multiple ribs or the multiple vertebrae in our bodies. This is the result of exactly this kind of transcription factor effect. That is, proteins, produced by genes, that modify the expression of other genes. Well, it turns out that, by a series of other features,
genes have duplicated and varied, and you're producing this in a sort of theme and variation fashion. Once you have recursion, you can do sort of theme and variation effects, higher order effects. And this is what produces the sort of the what you might call the theme and variation repeated body parts that makes plants and animals possible. So in effect, what I became very interested in, and this is really by the end of the 1990s,
was not only that I'm finding that there's a lot of indirect causality that makes this possible in the brain and how it must have been possible neurologically, but it now tells me that there's a way that this is relevant to biology and maybe something as universal as mathematics itself, that this same hierarchical relationship, how you get recursion, how you make recursion possible,
has to do with this, what I call, I describe this sort of characteristically as one, two, three, repeat. One, the iconicity, that is the covariation. Two, correlation, physical correlation in some respects. And three, displacement. Once you've displaced, you can now go back and have that displaced reference modify one. One, two, three, repeat is giving you this capacity for recursion.
Now, one of the things that I realized at this point is that it also helped me understand why Noam Chomsky and I were having so much trouble agreeing about what he called universal grammar. When I wrote in the mid-1990s about the evolution of language, mostly from the point of view of what's changed in the brain, I was saying that, look, universal grammar as a genetically predisposed feature is
evolutionarily essentially impossible. It's impossible for the same reason that we don't have innate words. For something to be selected in the course of evolution, it has to first of all be consistently repeated again and again and again in the world. Its relationship to something that's relevant to life has to also be a strong correlation. It's got to have these same features repeated again and again and again. That's iconic.
Having a correlation with something relevant like survival value, food, predator defense, whatever. That's a physical correlation. That has to be there true. But then to develop the evolution of this, this selection has to be repeated again and again. And the substrate in the body that's now going to put those together, to hold them together, that is the adaptation.
has to be produced by this recursive process happening again and again in the course of evolution. Words, for example, the reason we don't have innate words is simple, and that is words change their meaning pretty rapidly. They don't persist generation upon generation for thousands of years. In just a thousand years, a language that has split can become totally non-inter-translatable.
And we've seen that, of course, in the history of Europe for the most part. The same thing is true in evolution, but on a much, much larger scale. The constancy must go on for hundreds of generations to really make a dent in the biology of the organism. So words, of course, don't have this feature. They change too fast. Their association with things in the world is too fleeting and flexible. And finally, they don't have much of a reproductive consequence.
In order for mutations to have any kind of effect at all, they have to be the same over and over and over again, and they have to have a reproductive consequence. Words don't have this. And so we have to learn words socially. We have to transmit them from person to person in social groups. In fact, you have to have a larger social group to remember all that information, to hold it collectively. Now think about grammar.
Grammar is even more abstract than words. Grammar has fewer constraints, in part because it's not directly correlated with things in the world. At least my word dog and my word drink have something physically correlated in the world. It's just not very stable and regular over the course of evolutionary time, so it can't evolve biologically.
As the Toe Project grows, we get plenty of sponsors coming. And I thought, you know, this one is a fascinating company. Our new sponsor is Masterworks. Masterworks is the only platform that allows you to invest in multi-million dollar works of art by Picasso, Banksy, and more. Masterworks is giving you access to invest in fine art,
which is usually only accessible to multi-millionaires or billionaires, the art that you see hanging in museums can now be partially owned by you. The inventive part is that you don't need to know the details of art or investing. Masterworks makes the whole process straightforward with a clean interface and exceptional customer service.
They're innovating as more traditional investments suffer. Last month, we verified a sale which had a 21.5% return. So for instance, if you had put $10,000 in, you would now have about $12,150. Welcome to our new sponsor, Masterworks; the link to them is in the description. Just so you know, there's in fact a waitlist to join their platform right now. However,
TOE listeners can skip the waitlist and get priority access to their new offerings by clicking the link in the description or going to masterworks.com and using the promo code TOE. That is T-O-E. That's masterworks.com, promo code T-O-E, or click the link in the description. Just to make this distinction between indexical, symbolic, and iconic more clear for people: when you said the word dog, that's symbolic. And when you went like this, "drink," that's symbolic but also indexical, because you showed something.
Exactly. And this is what's going to get us back, finally, to syntax, and why I think, after writing The Symbolic Species... I want to make it clear that syntax is a synonym for grammar in this case. Grammar and syntax just refer mostly to relative positions of things in a sequence. So we can talk about the syntax of symbols in a mathematical equation:
the syntax is how you have to move them around, who can be next to who, who can't be next to who, who can modify who, and so on. What constitutes a well-formed sentence? Yeah, exactly. Okay. So, grammar is the specific kind of constraint on that that language has. So, grammar can tell you there are certain kinds of words that are content words,
certain kinds of words that play the role of a verb or a noun or a pronoun. Because of their kind, they have to be connected with other words in a particular way. You can't just randomly throw words together and have them work. They have to have both a sequence that's maybe characteristically different in different languages. But most languages have words that play the role of a verb or a noun, for example.
Those are Chomsky's universals, and there are many of them. Chomsky talks about lots of syntactical operations that he says are going to be universal around the world. He developed this interestingly enough in the late 1950s, early 1960s, by virtue of his knowledge of something else that I thought was just brilliant about his realization back then, which was
comparing how you analyze grammar and syntax to a Turing machine. Not to a Turing machine as a machine, but the abstract nature of a Turing machine that is an algorithm, a specifically described sequence of operations. A Turing machine logic can be laid out as a replacement logic. String A can be replaced by String B.
That is a rewrite rule. That's what the Turing machine is about. It's about having specific rewrite rules. So you read a particular symbol and you rewrite another symbol somewhere else. You have a rule for that. Chomsky realized in his book Syntactic Structures, written in the early 60s, that you could describe syntax and grammar like a Turing machine with a series of rewrite rules.
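To make the rewrite-rule idea concrete, here is a toy phrase-structure grammar expressed as replacement rules and expanded by repeatedly rewriting symbols until only words remain. The rules and vocabulary are invented for illustration; this is not Chomsky's actual formalism, just the "string A can be replaced by string B" pattern described above.

```python
import random

# A tiny phrase-structure grammar written as rewrite (production) rules.
# Each key can be replaced by one of the listed expansions.
RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "N":   [["beaver"], ["dam"], ["symbol"]],
    "V":   [["builds"], ["interprets"]],
}

def rewrite(symbol: str, rng: random.Random) -> list:
    """Recursively replace a symbol until only terminal words remain."""
    if symbol not in RULES:
        return [symbol]                    # terminal: an actual word
    expansion = rng.choice(RULES[symbol])  # pick one rewrite rule
    words = []
    for s in expansion:
        words.extend(rewrite(s, rng))
    return words

if __name__ == "__main__":
    rng = random.Random(42)
    for _ in range(3):
        print(" ".join(rewrite("S", rng)))
```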
That was a brilliant recognition. Now, what's happening at this point in time in history, of course, is computing is finally being figured out. We're working out the abstract nature of computing. Chomsky is struggling with the abstract nature of language. And he says, wow, this is interesting. Here's both the value and the problem. The value is that a Turing machine can describe any
completely describable operation. The power of a Turing machine is that it's universal. We talk about this as a universal Turing machine. You're beginning to see the connection here. Now, if we think of grammar and syntax as being producible by a Turing machine, by a universal machine,
then that suggests that grammar and syntax itself might be universal and that there's a particular Turing machine in the brain of every human being that produces its grammar and syntax. And that once we figure out what words mean, what they refer to in the world, we can now just turn on our universal Turing machine
It's not universal anymore. It's a specific Turing machine that has specific rules for rewriting, and you can produce grammar and syntax. That becomes the original and, I think, brilliant discovery that Noam Chomsky makes in the early 1960s. But of course, it now begs another question: where does it come from? Where does this Turing machine come from? Well,
Maybe it's in the brain. And of course, we begin thinking about brains as computers. About this time, we begin to use this sort of computational metaphor to talk about thought, a very powerful tool for cognitive science for studying brains in a much more systematic way, and our behaviors and our cognition and so on. But now, if you think about brains as a machine, as a computing machine, then it suggests that, well, maybe an algorithm evolved,
So people began thinking, really, it took about 10, 15, 20 years for people beginning to move in a direction that Chomsky had already begun to assess, which is that if we think about cognition as computation, then we can think about language as run by a Turing machine in the brain, an algorithm, and that that algorithm could have evolved.
So that it's a very straightforward, good argument, logical argument about this. The only problem is the one I just mentioned before this, and that is what makes something evolvable? Grammar and syntax and even words are not evolvable biologically. They have to be evolvable socially. But that's a problem.
And it's a problem because if it's evolvable socially, you might think, well, there should be all kinds of different grammars, all kinds of languages that are totally unlike each other. Chomsky was clearly right that languages have a lot of things in common. They have a lot of syntactical constraints: you can't do certain things and you have to do certain other things. Everything, like a sentence, for example, has to have at least two parts.
And this in fact became an insight that helped me think this through. Why do sentences have to have more than one word? We do have a few things that communicate holophrastically. That means a single word, a single announcement, communicates something. The classic example is sitting in a crowded theater and yelling fire.
Suddenly, everybody knows that it indicates that someone thinks that there's fire here, immediately present. It's an index in the sense that it says that physically it's correlated with my producing this sound. But this is interesting. Think about the word loud, okay?
Loud brings to mind, by association: soft, sweet, garish, you know, a whole bunch of associations can come up with it. But loud doesn't refer to anything particular by itself. Noam Chomsky, of course, has struggled with this, as linguists have for generations. Words don't exactly refer to specific things, with the exception of proper nouns. The name Chicago refers to a specific city.
But city does not. Now let's go back to the word loud. Loud doesn't refer to anything particular. But how about this? Loud! Now it does, right? It was correlated with a loud sound.
Now that correlation says that loud doesn't just refer to some abstract relationship, it actually refers to something physical in the world. I could have said clapping my hands is loud. The phrase clapping my hands does the same thing as this, because it's correlated in the sentence with the word loud. Clapping my hands is an index, but the correlation of the word
spoken in quick succession with the sound of clapping my hands, also is indexical. It's a correlation. So what's happened is that in order for words to refer specifically to things, they need to be coupled with an index.
And that's, of course, what pointing does for acquiring language in the first place. It's an index that allows children to correlate a sound with something in the world. How would a blind person? Yeah, well, so a blind person, you can't point. Now you have to touch. But touching is, of course, a version of pointing. Pointing is a physical correlation just with some distance. Touching is physical correlation without distance. To really communicate, you have to do it that way.
But what it basically says is that the reason that all languages have something like a sentence, where there's multiple kinds of words that have different functions, is that you've got to have these two semiotic functions together. Otherwise, symbols, abstract things, can't refer. So this again, replays this relationship in a bunch of different ways. I've now described it in terms of sentences,
but I've also described it in terms of DNA and proteins, and in terms of mathematics. And is it referring to meaning, or reference, or what? No, this relationship is the one, two, three, repeat relationship: how you get displaced reference, how you can get something that refers to something that it's not, even though it has nothing physically in common with it, nothing formally in common with it. So words have this referential capability, but this is, of course, also the DNA story.
DNA doesn't have anything in common with the weather, with the kind of food you have to eat, radically displaced away from it. And yet the information can be shared between bodies and behaviors and DNA molecules. This incredible difference by virtue of this continuity that's established. This is what I say is a little bit like mathematics.
Here's how I want to characterize mathematics versus language. Mathematics, you're required not to have confusion. You've got to keep things separate, distinct. Distinction is the absolute most important thing. You can't run a mathematical operation where one becomes five, where some value that you fixed is unfixed.
Okay, so precision. Precision and precise. It has to be absolutely precise. You have no freedom to vary. That's a constraint that says all of my referential operations are constrained to maintain this one thing in common. In language, the constraints have to be weaker. They have to be weaker for a number of reasons. What we want is, of course, not to have confusion.
not to confound things, not to have ambiguity, but we can survive with some degree of ambiguity. We can't do that in mathematics. We have to get rid of that ambiguity. This is why Gödel's incompleteness proof was so problematic, so troublesome for people, because it said that you can't have a system that completely gets rid of ambiguity,
and have it closed and complete. Either it's capable of getting rid of ambiguity, but then it has to be open; or, if it does get rid of ambiguity completely, it's simple. So Gödel basically shows that you can't get rid of it completely. And I think semiotics basically tells us this: you can't get rid of it completely, but you can build on it. And you can make it more and more complex. So,
To get us back to the mathematical comparison to language, language has to also minimize ambiguity. But of course, ambiguity is only so troublesome as the problem you're having to face, the particular pragmatic details. In fact, here I am trying to communicate this to you and to an audience. You maybe don't have to get all the details.
You may be able to build up the details as we talk more and more and more, understand what the words refer to, how it works, in a sense. In this process, even two sentences later, you've forgotten the sentence that came before that. You've translated it into something totally different than words, into maybe images, to maybe sort of feelings in your body, whatever. We've extracted a gist. Yeah.
In mathematics, that means we can make it as hard to penetrate as we need, so long as our manipulations, the operations we use, verify discreteness, don't allow A to equal B if A and B are not the same. That means that mathematics is symbolic communication that is highly constrained
but we don't have to interpret it right away. It may take years or never to understand a particular equation, a particular level of mathematical complexity, because we can look at it, reflect on it, look at it, reflect on it, try things out, maybe transfer the equations into some physical-like relationship, analogy, like a graph, for example, to try to understand what it means and how it works.
In language, of course, we have to do it in real time. We don't have the luxury. So the linguistic capacity also has constraints, but the constraints are much weaker, much looser. They're simply limited to what I want to accomplish now, what needs to be transmitted at this moment. Oftentimes the details are not going to matter. So that's the case with language. The case of genetics is somewhere in between: it's not as precise as mathematics. There are lots of folding abnormalities that can happen.
In fact, one of the things that happens in going from DNA to messenger RNA in cells like yours and mine, eukaryotic cells, is that the RNA molecules are carrying a lot of non-protein-coding sequence, what we call introns: strings within the DNA molecule that simply aren't carrying information that will play any role in the eventual protein.
And so in eukaryotic cells we have a structure called a spliceosome, a complicated structure made up of five RNA molecules, which themselves have three-dimensional structure because of the way they stick together and fold, and literally dozens of different protein molecules that can bind, disconnect, and rebind. The spliceosome splices out the introns.
The RNA molecule is full of noise, segments that are not going to be useful. The spliceosome figures out which segments are not going to be useful, cuts them out, and re-splices the RNA molecule back together, so that the eventual RNA molecule that gets sent to the ribosome to generate a protein has been cleaned up, with all the useless stuff spliced out.
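A minimal sketch of that splicing step, assuming the intron positions are already known; the sequence and coordinates below are invented, and a real spliceosome recognizes splice sites chemically rather than by index:

```python
def splice(pre_mrna: str, introns: list[tuple[int, int]]) -> str:
    """Remove intron spans (start, end) from a pre-mRNA string.

    Toy model: introns are given as known coordinates; a real
    spliceosome has to recognize them from sequence and structure.
    """
    mature = []
    last = 0
    for start, end in sorted(introns):
        mature.append(pre_mrna[last:start])  # keep the exon before this intron
        last = end                           # skip the intron itself
    mature.append(pre_mrna[last:])           # keep the trailing exon
    return "".join(mature)


# Hypothetical example: two exons separated by one intron.
pre = "AUGGCU" + "GUAAGUUUUAG" + "GAAUAA"   # exon1 + intron + exon2
print(splice(pre, [(6, 17)]))               # -> "AUGGCUGAAUAA"
```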
To have a system with this kind of noise in it means there's a little bit of flux; things aren't exactly perfect. So in that respect, it's a little bit between mathematics and language. But the same constraints are necessary. You need the same three ways of representing things, one, two and three. Those are universals, and that means it's also going to be true for language and for mathematics.
Mathematics is going to have to respect those features as well. The equal sign in an equation is about covariance, about iconicity. This equation is the same as this equation so long as you do the right operations. That's what the equal sign is saying. Having operations next to the variables. So how is this all related to meaning and even sentience? Right, so it absolutely is.
Maybe we'll go to sentience first because that might help a little bit. That is, you might call it the hard problem. I don't think it's a hard problem. I think it's a counterintuitive problem. And I think that's the challenge. In the same way that what I just described is probably counterintuitive because we think about codes so naturally. Codes do it all. The problem is a code already assumes representation.
A code doesn't give you representation. The dots and dashes and the letters already represent something. The dots and dashes represent letters. The letters represent parts of a sentence, parts of a word. We've already assumed that we know the meaning for that so that the Morse code carries the meaning because the meaning was established outside of the code relationship. When you think about, and this is I think the problem that I find with linguistics in general,
because linguistics has accepted the code relationship between words and meanings. It has a number of problems. One of the problems is that in a code, there's a one-to-one matching between something and something else. That is, you know, three dots means something. Okay, it's the S. It matches specifically one-to-one.
But with a word, of course, we know that words don't match one to one, except for proper nouns, proper names. Words map not to a thing, but oftentimes to something we call a meaning or a sense of something. That's not like a code. And so one of the first problems is that we're treating language as though it's a code, although it's not a code. And a code by itself is devoid of meaning.
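A toy illustration of that point about codes, using a real fragment of Morse code: the mapping is one-to-one and fixed, and everything it "means" was established outside the table, in the letters and words we already understood. The helper functions are just a sketch:

```python
# A code is a one-to-one stand-in relation: each letter maps to exactly
# one dot-dash pattern, and the pattern carries no meaning of its own.
MORSE = {"S": "...", "O": "---", "C": "-.-.", "A": ".-", "T": "-"}

def encode(text: str) -> str:
    return " ".join(MORSE[ch] for ch in text.upper())

def decode(signal: str) -> str:
    reverse = {v: k for k, v in MORSE.items()}   # invertible because it's one-to-one
    return "".join(reverse[tok] for tok in signal.split())

print(encode("cat"))            # -.-. .- -
print(decode(encode("SOS")))    # SOS, but nothing in the table says what SOS means
```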
So the code relationship, the stand-in relationship, where something can stand in for something else, is itself not about meaning something. It's just about standing in for something else. The question is, how does it stand in for something else? It stands in for something else for somebody that interprets it. So that obviously Morse code by itself
If not interpreted, it's just sounds or dots or blips coming across some line. The same thing is with words, of course. If I don't understand the language, it's just sound. It doesn't have any reference. The question is, how does that reference get established? How does the meaning get established?
Going back to the beginning of the 20th century, the end of the 19th century, the philosopher Gottlob Frege distinguished two aspects of reference. He talked about it as sense and reference, or in German, Sinn und Bedeutung. Sense is the sense of the meaning of something; it's also been called the intension of something.
And the reference is the something in the world that it refers to. He called that the Bedeutung, or the reference, sometimes called the extension of something. Can you give an example? Yeah, I'll give an example. And this comes directly from Gottlob Frege, a brilliant idea. He said that, look, we have the names Hesperus and Phosphorus for the evening star and the morning star.
But in fact, they have two different meanings, morning star and evening star; the names talk about something, they actually describe something. Phosphorus and Hesperus are just names for what they describe. They're names for something in particular, but it turns out they're names for the planet Venus. The actual extension, the physical extension, is the same thing,
even though the words are different and they have different senses. The sense of morning star and the sense of evening star are different, but what they refer to in the world is one particular thing. So what he says is that sense and reference are different. And when we talk about meaning, we oftentimes confuse these. And we confuse it with something even more complicated: usefulness or significance or value.
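One way to picture Frege's distinction: two names carrying different senses can resolve to one and the same referent. This is just the Venus example restated in code, not a general account of meaning:

```python
# Sense: the mode of presentation attached to a name.
# Reference (extension): the object the name actually picks out.
venus = object()  # stands in for the planet itself

sense = {
    "Hesperus":   "the evening star",
    "Phosphorus": "the morning star",
}
referent = {
    "Hesperus":   venus,
    "Phosphorus": venus,
}

print(sense["Hesperus"] == sense["Phosphorus"])       # False: different senses
print(referent["Hesperus"] is referent["Phosphorus"]) # True: one and the same thing
```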
What's the meaning of what I've been saying? It's not just the words, it's not just a reference. It's not just something in the world. It's actually, is it useful? Is it helpful? Is it valuable? The meaning for something. So the word meaning is really ambiguous because it brings all these things together. So I actually don't like to use the word. I don't like to use the word because what's the meaning of the white line down the middle of the road?
Well, it doesn't have a meaning like a word. It doesn't have a definition. And it doesn't refer to anything in particular. It actually indicates where you should drive. It's an index, a conventionalized index. Now, you have to know what that index is; you have to know what the convention was. And for the most part, it takes language to figure that out. But
In many respects, it's not that meaning is telling you something, it's that we want to distinguish those things that we think have to do with meaning. And this is where this iconic indexical and symbolic feature help out, because symbols can have all of those features. So a word can have value,
usefulness. A neologism in science, for example, can be very useful because it picks out something that wasn't picked out before, something we need to understand and distinguish. The word itself can refer to some general type of things, a description of things. So the word symbol,
unfortunately is used both to refer to tokens like the letters that show up on a screen and a religious artifact. The word symbol is itself very sort of multifaceted and ambiguous, but in context, of course, we
specify what it means. But in fact, using the word symbol to talk about an alphanumeric character is actually sneaking something in. And let me give a sense of this. An alphanumeric character is something we've agreed on to be an arbitrary sign to code for a particular sound in a phonetic language.
Therefore, it's playing a code-like or symbolic-like role. It's displaced from what it refers to. It has no characteristics in common. On the other hand, if my computer screen starts to spew at random all kinds of alphanumeric characters, they're not symbols. They don't refer to anything symbolically. They're actually
They indicate that there's something going wrong in my computer. Which is interesting because in that case the noise is a form of a signal. That's exactly right. And what this tells us in information theory is that the distinction between noise and signal is not intrinsic. It's something that has to do with interpretation.
Is that always the case? Now, is there a non-trivial example? So for instance, if a computer is not working, then we would see the fact that it's not working as a form of noise. So you mentioned just some pixels on the screen, and then we say that... That noise is symbolic or is referential to the repairman in that case. Right. A particular kind of noise now can carry information, but the same thing is true. Think about, you know, in simple Shannonian information theory.
You have a signal that's got a certain level of noise. What he argued is that adding redundancy, up to some level, can help us discern what's signal and what's noise, because the signal will be redundantly repeated in each repetition, whereas the noise, produced by an independent source, will not be repeated. And so we can basically figure out where it comes from. But if the noise itself has structure,
then that can carry information about the source of that noise. If the noise is not purely random, it can be confused with a signal, but in fact that non-randomness is telling us something about where it comes from. And one of the things I like to say about information theory, and this is just
brought back to us with the people like Landauer and others who developed computational theories and quantum theories of information, is that anything that carries information is a physical thing. That is, it's either some energetic or some material object, or maybe some process. But that means that it has a certain degree of entropy.
a certain redundancy, and a certain lack of redundancy to its structure. And it's particularly the constraint, which is the redundancy, that usually carries the information. And that can be picked out. That's what Shannon has told us when he says that by increasing the redundancy of a noisy signal, we can discern what's signal and what's noise.
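A rough sketch of the Shannon point about redundancy: a biased source has lower entropy, and repeating each bit three times lets a receiver vote out independent noise. The noise level and message here are arbitrary:

```python
import math
import random

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit per symbol: maximally uncertain
print(entropy([0.9, 0.1]))   # ~0.47 bits: bias (redundancy) lowers the uncertainty

random.seed(0)
flip = lambda bit, p: bit ^ (random.random() < p)   # independent bit-flip noise

message = [random.randint(0, 1) for _ in range(1000)]
p_noise = 0.1

# No redundancy: each received bit is the sent bit, occasionally flipped.
plain = [flip(b, p_noise) for b in message]

# Triple repetition plus majority vote: the signal repeats, the noise doesn't.
voted = [sum(flip(b, p_noise) for _ in range(3)) >= 2 for b in message]

print(sum(r != b for r, b in zip(plain, message)) / len(message))  # ~0.10 error rate
print(sum(r != b for r, b in zip(voted, message)) / len(message))  # ~0.03 error rate
```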
Now, what we're doing is we're saying, I don't know what it's signal about, but it's redundant, so it's carrying more information because of its redundancy. It's actually eliminating some uncertainty, which is Shannon's ultimate measure, but notice that the Shannon theory of information is just about the medium itself. When we say that
alphanumeric characters are spewed on my screen at random, I call them symbols showing up on my screen, I'm using symbol in the Shannonian sense, talking about the medium itself. But here's the interesting thing. The medium itself has iconic and indexical relationships to the world that it's referring to. So one of the stories about where we get information: why does redundancy of a signal provide us information about something?
Because the redundancy, the constraint on that signal, is there because not all degrees of freedom were expressed; the signal wasn't fully random. A physical system that is not at its lowest energy, highest entropy state is in that condition because it was pushed away from its highest entropy state. Work had to be done. Something has prevented it from going to its
lowest energy, highest entropy state. But what that tells you is that if the entropy of a signal, both the Shannon entropy and the physical entropy, is reduced, it's because it's in relation to something else that is producing that constraint. Work has been done to either keep it from reaching its highest entropy or
Push it away from full entropy. So there's some redundancy in it. Every scientific tool uses this trick. In other words, this is how we get indexical information about the world, how an experiment which manipulates the world produces certain regularities or irregularities. And it's those that are carrying the information. They do so because communication is physical.
Because it's physical, all of this abstract stuff we've been talking about has another piece to it. That is, it's always grounded in the world. And to unground it, to use a kind of communication that's code-like, we need to go through these steps to unground it. Language is ungrounded in a sense. Right, okay, so this is what's known as the symbolic grounding problem, correct? The symbol grounding problem, right. Please, like, reiterate, what is the symbol grounding problem and also the definition of grounding?
The symbol grounding problem, also sometimes called the empty symbol problem, is the question of how a word, or a particular machine state in your artificial intelligence machine, is somehow linked to the world, is grounded to the world, how its reference has been fixed or established.
So there's now an informational link so that we know what this feature is, the symbol, this arbitrary sound or squiggle, how it's linked to something in the world. We know how we do it. We say, OK, Morse code, I'm going to call three dots S. I'm going to say that. But of course, if I want to explain how it starts,
How do I get reference in the first place? How do I get symbolic reference? How do I get two things that are unrelated to now become fixed in their relationship? So that whenever you see one, you think the other one. Can you give another example? Well, so... Like cat. Go ahead. You suggest something and we'll talk it through. How is the word cat a problem? Like we say cat, we mean cat.
Right. So the word cat, for most people in the world, has nothing to do with this small furry mammal. Right. It's kind of a little carnivore that's a pet and quietly judges you. Right. So how do you acquire it? If you're a child that doesn't have language yet, how do you acquire that word? Well, first of all, you have to have some familiarity with cats, or maybe just familiarity with four-legged animals.
And somebody shows you a cat, points to it, cat, and kids also have another bias that helps them with language. We tend to copy each other's sounds. So they say cat, they point. Now, a dog walks by a day later and they point and they say cat. Now, the question is, what do they see? They've referred to something, they've made an iconic guess.
The sound is the same, cat sounds the same, and there's some similarity between this first object and the second object. Somebody says, no, that's a dog. Okay, now the infant has a problem. What's the difference between cats and dogs? Whenever the word dog comes up, I see a four-legged creature. Whenever the word cat comes up, I see a four-legged creature. Now I have to say, okay, what distinguishes them? What's non-iconic between them?
And the words, because they're iconically used: cat, cat, cat, cat. Every time it's repeated, it suggests that I need to look for a correlated object that has also got similar features. Dog, dog, dog, dog: the sounds are iconic of each other each time the word is produced. You've got to figure out what's in common each time and what's different each time. What's in common is, of course, iconic. What stands out that indicates that this is not that? What are the different features? So what we're doing in the process of acquiring word meanings is using this indexical and iconic bridge to eventually develop the habit of thinking that only these objects, these four-legged, small, purring objects, are related to the word cat.
So what's happened is that you're building up this interpretive habit. And the interpretive habit is built up by the scaffolding of likeness and correlation. So where's the problem? Where's the problem in the symbol grounding problem? Sounds like this is working fine. Children acquire... This is working fine for kids, right? This is actually how you ground symbols. The question is,
If you start from given symbols and given things in the world, how do they get grounded? If you ignore these features, there is no way that they can be grounded. Now let's think about computing.
How is it that you ground, think about even more sophisticated computing like machine learning in which we use neural nets, for example, to identify cats on the internet. How do they do it? Well, what we do is, of course, we show them thousands and thousands of images of cats and we strengthen and weaken certain links in this system so that eventually it converges so that
What's happened is that each cat image has some things in common with the others, and each image has some things that are different from other kinds of images. And what's happening is that for each of those things in common, we strengthen certain synapses, connections, correlations, and we weaken others.
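A minimal sketch of that strengthen-and-weaken picture: a single logistic unit trained on invented feature vectors labeled cat or not-cat. Real systems use deep networks over raw pixels, but the error-driven weight update is the same in spirit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented 4-dimensional "image features"; label 1 = cat, 0 = not cat.
# Cat examples are just points clustered around one prototype vector.
cat_proto = np.array([1.0, 0.8, -0.5, 0.2])
X = np.vstack([cat_proto + 0.3 * rng.standard_normal((50, 4)),   # cat-ish examples
               0.3 * rng.standard_normal((50, 4))])              # everything else
y = np.array([1] * 50 + [0] * 50)

w = np.zeros(4)   # the "synaptic" weights
b = 0.0
lr = 0.5

for epoch in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "cat"
    grad_w = X.T @ (p - y) / len(y)          # error-driven update:
    grad_b = np.mean(p - y)                  # strengthen the weights that reduce error,
    w -= lr * grad_w                         # weaken the ones that increase it
    b -= lr * grad_b

print(np.mean((p > 0.5) == y))   # training accuracy: the correlation has been captured
# Note: the label "cat" is still just an index we chose; the interpretation is ours.
```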
So again, even there, we're using this iconic and indexical logic to zero in, so that the final state of the machine pumps out the word cat when we show it a cat picture. Okay. Now, does it understand that that's what the word means? This is the problem of sentience, of course. Is the machine sentient
of the relationship between this typographical sequence and these pictures. We're beginning to discover now that, in fact, they do it very different than we do it, first of all, which is one of the reasons why we need thousands and thousands of examples to make it work. But the other thing we're realizing is that, in effect, this is just an arbitrary correlation. The correlation
is with us. We've decided. We've done the job of pruning the network. We already know the interpretation. The interpretation there was there ahead of time. We just wanted to build it up. We have to build it up in the same way, but
The interpretation is outside of this process. The machine doesn't know that these three letters have to do with an animal out there, because as soon as we show it animals, unless it sees the animal and sees it in a perspective that's sort of like what the machine sees, it's going to get it wrong.
And there's all kinds of ways, of course, we've figured out to strengthen this: using moving objects, moving objects in front of backgrounds, changing the foreground and background a lot, all these ways that we strengthen that process. But again, the machine has just got the physical correlation, nothing else. The question is, what is that something else? And that's of course what we're after when we talk about minds and brains. The key to this story that I've led up to really followed from all my work on the nervous system and language. When I began to see that it had some relevance to molecular biology and therefore evolution, I realized that it also probably had a relationship to how the physical world has produced this stuff. The one, two, three repeat story turns out to be much more general than I ever thought.
And so by the middle of the 2000s, by 2010 and 2011 and 2012, when I produced this book Incomplete Nature, I had begun to think that this was a real problem that was much more general, relevant to physics, chemistry, biology, as well as brains and evolution. And the question was, how is it that creatures like us
that have ends in mind, that are teleological, that have purposes, that can have meanings. If you think about meanings, there can be good meanings and bad meanings, right meanings and wrong meanings, incorrect solutions and correct solutions. There's no correct chemistry and incorrect chemistry. There's no correctness or incorrectness or good or bad physics. There's good or bad representations of physics, or good and bad predictions about chemistry.
But in actual chemistry and actual physics, there's no normative feature. There's no right, wrong, good, bad. But that's something that every organism has. There's good things or bad things even for viruses, but not for water. So one of the questions, and it's a deep philosophical question to some extent, is how is it that the material world, the physical world, has produced creatures
like us, like bacteria, that are normative, that actually divide the world into good or bad, useful or unuseful. Notice that that's the other aspect of meaning. Meaning is about value. And one of the deepest problems in philosophy is where does value come from? What I was realizing is that there's an analogous problem here. An analogous problem that's not
semiotic in all the ways I talked about, but the constraints that produce semiotics have something to do with this problem as well, with the problem of how a totally new kind of property or relationship being about something can come into the world. Although we can talk about distributions in the physical world in terms of information,
how much information is contained on your hard disk, how much information is expressed on a piece of paper, or how much information is destroyed as a black hole sucks in material from the outside world. That's just a measure of differences. It's not information that's about anything. We make it about things. It has to be interpreted to be about something.
We're gonna have to wait because my dog's gonna bark for a few minutes here. You can hear the dog in the background. Yeah, but it's fine. It's minute. Okay. Yeah, because Zoom has denoising. Good. That's wonderful. I love it. So the situation here is about this. Aboutness is a phrase that's sort of bandied around by physicists, excuse me, by philosophers, to talk about something that's different from what the physicists want to talk about with information. Information can be about something, but aboutness is a relationship of something present to something that's not present, something that's absent. And this is one of the things I realized: what was going on in this move from iconic and indexical to symbolic, to displaced reference, is that now there's no connection. We now have an absent relationship.
There are no features in common between the symbol and what it represents. To be clear, when you say that it's absent, so again, the word cat, we can talk about cat without there being a cat. OK, that's right. That's right. And it still refers to them. And in fact, I can say, you know, I'm going to gift you with a cat to adopt next week, and it will have some consequences. There will be a real cat.
Is this the same as being hypothetical or is hypothetical just a subset of what you're referring to? Just one variant. Every aboutness relationship we think about requires interpretation. There are physical features in the world that can afford it sometimes, make it easier. So the fact that a photograph of a cat or the shadow of a cat
has features in common, is an affordance that if I'm sensitive to it, I can interpret it as being about it, referring to it, pointing to it, being an index of it. Iconic and indexical relationships are something physically affords that interpretive process, makes it easier for us to interpret it. Because I already have information about that correlation.
by past experience. I have information about that covariance from past experience. One of my favorite examples is windsocks. Windsocks are an index of the strength and direction of the wind. So they're an indicator,
a material feature. I mean, there are lots of pictures I could show as well. Let's go through it, because a windsock is typically set up at airports and heliports because you want to know the direction and the strength of the wind. A windsock is basically a conical piece of material, like a flag in some respects, but it's got an open end at the point of the cone so that wind blows through it,
and it's on a rotatable structure so that the wind will blow it and the straighter it is, the harder the wind has been blowing. If it's not blowing so hard, the windsock is sort of partially extended and of course it's extended in a particular direction. So it's a great device for showing the direction and the strength of the wind. Why? Even if you've never seen a windsock before,
and you're looking at it only online, you would actually be able to interpret what it's about. Why? Because you've had the experience of looking at clothes blowing in the wind, of looking at leaves blowing in the wind, looking at your hat blowing off, all of these things. So when you look out at this windsock, even if you don't feel or have any experience of the wind,
It brings to mind a lot of like experiences, iconic similar experiences. But each of those experiences in your memory is associated also with another experience, the experience of having wind blowing. So when I'm out watching clothes blowing in the wind or watching leaves blowing in the wind, I'm experiencing the wind. So what I see now, looking at this windsock, brings to mind both similar sorts of things
in which a material thing that should be hanging straight down is instead flopping around, hanging out, and that brings to mind these other experiences, which are correlated with, not the same as, but correlated with, the experience that I've had of wind.
Now each of those correlations, so each of those images remembering the blowing weeds, the blowing clothes, my hat blowing off, all of these features have that in common as well. So there's another likeness. So each of their correlations with wind is a likeness as well, a likeness between wind experiences. So it allows us to look immediately at this windsock and say, okay,
This is telling me that the wind is blowing strongly from this direction because I have all those other experiences but notice that they were put together instantly probably within a fraction of a second as you looked at this for the first time by virtue of the fact that it brought up iconic and indexical relations that built upon each other
to allow you to see this as a conventional model of something. The iconic features, these are just in your memory, so what this tells us is this is how perception is building up this complicated causal relationship that we see out there. So to see this hierarchy, this constructive hierarchy, began to bother me and it bothered me in the following sense.
In the study of life, one of the things that's been really interesting in the last half century is beginning to realize that life, as Erwin Schrödinger suggested, is
a process that's far from equilibrium, one that maintains itself far from equilibrium.
Can you define that term, far from equilibrium? So equilibrium is where a system will fall to, so to speak, degrade to, if left alone. That is, the system will fall to its lowest energy state unless it's prevented from falling into a lower energy state. In terms of entropy, which is how we oftentimes think about this, a system that's well mixed is at its highest entropy state. It can't be more mixed. Once you pour your cream into your coffee and you stir it well enough, it's well mixed and it's almost impossible to go backwards, to unmix it. That's its highest entropy state. So a room temperature mug would be at equilibrium? A room temperature mug, you might say that the temperature of the liquid is now well mixed with its environment. So it's at equilibrium; that is, there's no shift, there's no asymmetry between them. The same thing is true of the coffee in your mug that you stirred, or the ink dropped into water: it's stirred and becomes equally distributed, no part significantly different from any other part. Whereas when the coffee is first hot, or when I first pour in the cream, there's a gradient of difference between the two parts.
Okay, okay. So the hot cup is far from equilibrium with respect to the colder room.
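A rough numerical sketch of that mixing picture: a drop of cream concentrated at one end of a one-dimensional cup spreads by simple diffusion, and the distribution's entropy climbs toward its maximum, the fully mixed, equilibrium state. Grid size and diffusion rate are arbitrary:

```python
import numpy as np

n_cells = 50
c = np.zeros(n_cells)
c[:5] = 1.0            # all the "cream" starts in one corner of the cup
D = 0.2                # diffusion rate per step (arbitrary units)

def mixing_entropy(conc):
    p = conc / conc.sum()               # treat the concentration as a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

for step in range(5001):
    if step % 1000 == 0:
        print(step, round(mixing_entropy(c), 3))
    flux = D * np.diff(c)   # exchange between neighbouring cells, closed ends
    c[:-1] += flux          # each cell trades material only with its neighbours,
    c[1:]  -= flux          # so total "cream" is conserved while it spreads out

print("fully mixed maximum:", round(np.log(n_cells), 3))
```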
So why is the terminology called far from equilibrium rather than just saying not in equilibrium? Think, for example, about my refrigerator. My refrigerator does work to keep its internal state far from equilibrium with respect to the environment. We usually talk about far from equilibrium as things having really high gradients of difference.
So being away from equilibrium: most of thermodynamics was developed in what we call near-equilibrium conditions, conditions where things are a little bit off from equilibrium but will tend to fall towards equilibrium. I say fall towards equilibrium. Is near equilibrium versus far from equilibrium a precise distinction, or does it depend on the context? Near and far are of course not quantitative terms.
So when we say near equilibrium, we're dealing with a whole range of processes in which the gradient of difference is not huge. And there's a reason for that because when the gradient is large, some very unusual things can happen. And this has produced a whole series of studies that have really been going on for more than a century now about unusual features that happen far from equilibrium.
So an example is shock waves. Normally, differences in pressure even out pretty quickly: winds and differences in pressure in different parts of the globe, for example, or differences in pressure when you're inflating a balloon and then letting the air out. These things dissipate pretty quickly and easily. But you can push things so far
that the normal processes don't dissipate that pressure. So breaking the sound barrier is a classic example. Pressure can be moved from one part of the air to another part of the air to equilibrate the pressure pretty quickly and easily. It can do so without much of a problem up to the speed of sound. The sounds that I'm producing now are the result of pressure waves that can be transmitted across
space in an atmosphere, because of the way the pressure waves produce other pressure waves and pass this along. Those sound waves are the result of the fact that this is how fast pressure differentials move through the atmosphere. However, the atoms in the air, the molecules in the air, can only bump into each other so hard, so fast, to pass this along. At speeds above the speed of sound, as we say, that breaks down. You now can't have the pressure dissipate across this boundary, because the pressure is so high that the air can't disperse it fast enough.
The result is when a jet breaks the sound barrier, it creates a break in the atmosphere, so to speak, a boundary in which there's high pressure air on one side, low pressure air on the other side, and it can't be relieved. It's only as the pressure decreases, the pressure differential decreases some distance away from the jet
that suddenly it can release, and it releases in a huge bang, a sonic boom. This is because now, finally, we've come back to the level at which that pressure can be released, and it's a huge differential released rapidly: a loud sound. So what's happened here is that, far from equilibrium, we've pushed it so far that it can't re-equilibrate right away. And so,
beginning at the turn of the century and particularly after the mid-century, last century, studying systems that are very far from equilibrium began to produce all kinds of interesting results.
So we noticed, for example, that the wings of jets, when they're far from equilibrium, produce what amount to whirlpools, vortices. And the vortices are also playing a role in dispersing this energy differential in different ways. And the vortices can actually mess up your flight. One of the real problems with breaking the sound barrier initially was all the vortices that were produced as you approached this
critical point; they really destabilized everything. And so there was a lot of shuddering, you know, in the first flights beyond the sound barrier; they were afraid it was going to rip apart the jet.
It turns out that it's not enough to do that, and we found how to go beyond that. But subsequently, what we discovered is that systems that we now maintain far from equilibrium, we don't allow to equilibrate, can produce all kinds of interesting effects. And one of the things that they do is they produce orderly effects. Now, why do I say orderly? One of the best examples of this that everybody knows is a whirlpool.
We just talked about the vortices at the end of the wings of jets, a whirlpool in a stream. What's happening is that you start the water flowing down the stream, and it comes and runs into, say, a barrier of some kind. Going around the barrier, it produces all kinds of irregularity, lots of unstable behaviors.
The flow is no longer laminar. It's now broken up and you get chaotic flows. But as the water continues, the chaotic flows begin to cancel out and whirlpools form. The whirlpool forms because all the chaotic interactions are, in a sense, contrary to each other. Because they're not all in the same direction, they begin to cancel each other out.
so that over time all the non-symmetric interactions begin to cancel out, and those that are symmetric are the only ones that are left, and that produces the laminar flow that becomes a whirlpool, which actually allows water to move past this barrier more efficiently.
And one of the things we know about whirlpools, for example, think about bathtubs being drained and producing a whirlpool. The whirlpool actually, because it aligns the movement of water molecules so that they're not in each other's way, so to speak, that they're minimally out of each other's way, it more effectively, more efficiently empties the bathtub. The bathtub is, of course, now a gradient of difference, the gravity that's pulling the water down. That gradient is being dissipated
But it spontaneously, because it's such a high gradient, it spontaneously forms into a whirlpool because it more rapidly depletes that gradient.
The whirlpool, by virtue of allowing the mutually contradicting movements to be slowly eliminated, now empties the bathtub faster. So that if you were sitting in the bathtub and constantly messing up the whirlpool, and you compared that to a bathtub in which you didn't do that, messing up the whirlpool would make it drain more slowly.
The whirlpool, as a system far from equilibrium like this, actually moves towards equilibrium by regularizing the flow. Those are called dissipative structures or dissipative systems. Is there a difference between those two? What I'm saying is you're dissipating the gradient in this case, right? But if you have a strong gradient, and particularly if that gradient is maintained for a long period of time, like a
full bathtub or a stream that keeps running, then what happens is you're constantly dissipating that gradient, but the gradient doesn't get used up. Under those circumstances, the system tends to fall into regularity, because the regularity reduces what you might call the dissipation length: how far a molecule has to travel before it leaves.
If it travels and it gets bumped back and it travels and gets bumped back by virtue of chaotic interactions, it's going to take longer to leave. And that means the trajectory of its departure, say, from the bathtub is longer. It's going to take longer, take more time. So what happens in systems that are maintained apart from equilibrium is they generally develop into regularities.
Now here's the issue. This is why way back in the 1940s the quantum physicist Erwin Schrödinger said this has got to be relevant to life because life is maintained far from equilibrium, because life is taking in fuel to keep pumping it far from equilibrium.
Yeah, he talked about this as negentropy. If entropy is the increase of this mixing process, then maybe organisms have to eat negentropy; food is like negentropy, pushing against it. I think it's a bad term and most people have abandoned it. But the idea, I think, is right. And that is, in order to stay far from equilibrium, because a system that is far from equilibrium is dumping energy as fast as it can.
In order to stay far from equilibrium, you have to keep pumping up that gradient. So the stream has to keep running. Or maybe you have to heat something constantly to keep it far from equilibrium. Or like in your refrigerator, you have to keep a motor running that dumps heat constantly because the refrigerator is spontaneously trying to fall towards equilibrium. But you've got to do work to constantly push it away from equilibrium.
To stay far from equilibrium, you have to do work. So that, of course, is relevant for life. What happened since then is that a lot of systems thinking about life fell into, I think, this error: thinking that we're just like a whirlpool, just a very complicated whirlpool. We get energy in and we pump it out, and therefore our regularity is like that. It turns out that that's a really useful insight.
Because it says, you know, to be what we are, we have to do this. But it's incorrect. It doesn't quite go far enough. To generate all the regularities that we have requires work. To generate all the gradients, different molecular systems in our body, all different cell types, all the different molecular interactions requires constant work.
And of course, you might say in that respect, it's like an extremely complicated whirlpool. But in fact, that's not quite the case. Because the story with a whirlpool or all far from equilibrium physical systems is that they're in the process of destroying themselves. They're in the process of getting rid of the gradient as fast as possible. They spontaneously regularize
in the process of dumping that gradient as fast as possible. Life has to be just the opposite of that. Life has to generate these gradients and keep it from undermining itself. Self organizing systems like this are self destructive by nature. Living systems are constantly involved in trying to keep that from happening.
So here's the paradox. A living system has to use self-organizing like processes, dissipative processes, to generate the order it has, and yet it has to use those dissipative processes to keep those dissipative processes from destroying themselves. That's a really interesting paradox. We're self-organizing,
but we're using our self-organization to keep the self-organization from falling apart. How could that possibly be? Now, when we do it, are you referring to a single person or the entire environment? No, so I'm actually talking about a single organism. I could be talking about a bacterium. Okay. Just as well. A bacterium has to keep itself far from equilibrium by taking in energy, using that energy to do work,
to maintain itself far from equilibrium in order to produce constraints, in order to produce gradients within itself, constraints on where particular molecules are located and how they can move and so on. The bacterium has to do this. The bacterium has to use the second law of thermodynamics to maintain things far from equilibrium. It has to do work to do something that produces it in the opposite direction.
What I meant was in the case of a bacteria or even a person is we dissipate something, so we're throwing something away, but what we're throwing away is detrimental to us. So what I mean is, okay, so I thought that what you're saying is it needs to get into a state where what we throw away is then somehow useful for us.
Oh, no, I didn't mean that. I'm sorry if I led you in that direction. It's my misunderstanding. No, that's all right. Words, remember? Right, right, right. So the issue is that, of course, we take in energy and then we produce waste, heats, if nothing else, but obviously waste products as well, things that we can't fully utilize or break down. And we do that, we're pumping energy through the system in order to maintain the constraints. In this respect, we're sort of like a refrigerator.
I think it's a useful model because we've got this motor that's constantly keeping the system far from equilibrium by what? By doing a lot of work and generating a lot of heat. That's ironic because this device that's supposed to keep something cool in the process of keeping a certain region cool, it has to produce more heat. In fact, it produces more entropy than would be given up, than would be produced by just allowing the refrigerator to go to equilibrium.
As we turn off the motor, the refrigerator now slowly goes to equilibrium. Entropy is increasing. That amount of entropy is a fixed amount of entropy. But by running the motor, we're now producing not only the reversal of that, keeping that entropy from increasing, but we're generating heat in the process. We're also producing more entropy.
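A back-of-the-envelope version of that refrigerator bookkeeping, with the heat leak Q, the motor's work W, and the two temperatures treated as given:

```latex
% Per cycle, heat Q leaks from the room (T_h) into the cold interior (T_c).
% The motor pumps Q back out using work W, dumping Q + W into the room.
% Total entropy produced in this steady state:
\Delta S_{\text{run}}
\;=\; \underbrace{\frac{Q}{T_c}-\frac{Q}{T_h}}_{\text{leak in}}
\;+\; \underbrace{\frac{Q+W}{T_h}-\frac{Q}{T_c}}_{\text{pump out}}
\;=\; \frac{W}{T_h} \;>\; 0,
\qquad
W \;\ge\; Q\!\left(\frac{T_h}{T_c}-1\right)\ \text{(Carnot minimum)}.
% Letting the interior simply warm up produces only a fixed, one-time entropy
% increase; keeping it cold produces W/T_h again and again, as long as the motor runs.
```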
So in fact, that process of keeping something far from equilibrium actually increases the total entropy of the system faster than just allowing it to go to equilibrium. So we have to produce waste. You know, generating our societies, we dump a whole lot of waste into the world, a lot of heat into the world.
It's because it's an inefficient process, as we recognize. But now, let me go back to the paradox, because this will help us move to the next step. The paradox is about what it takes to do what we do. By we, I mean all of us living creatures, including the bacterium: to stay far from equilibrium. In fact, to stay so far from equilibrium that we can duplicate ourselves, we can make more of this stuff and actually reproduce.
You're doing a really good job of staying far from equilibrium. You're pushing yourself way far from equilibrium. But the question is, all processes that go far from equilibrium are in the process of destroying themselves. How could it be that organisms don't do that? So Schrödinger, as part of the story, recognizes this problem. Organisms are far from equilibrium. His suspicion is that
the thing that we now call DNA, the genetics, is somehow providing information, and somehow the information is keeping us from falling to equilibrium. Now, this is the interesting connection, because it gives us a sense that information theory, which is itself described in terms of a kind of entropy, and physical entropy are interestingly linked together.
Every information system is a physical system. Every change of the signal that carries information is actually a consequence of work done on that signal. So information is not just disembodied. Information is always linked, always grounded. But it's grounded in interesting ways, like the one, two, three we described.
That grounding comes in different ways of being grounded: you can be grounded in form and you can be grounded in physicality. So now, let's talk about the problem of life, then. The origin of life has to be a transition in which the living system is using this process that's self-destructive to keep the self-destructive processes from being self-destructive.
But it has to use them to do this. How is that even possible? This is one of the things that I struggled with and it turns out that it also produces a one, two, three step. The one step being the second law of thermodynamics. That is that entropy tends to increase. But if I have a system in which I'm constantly pushing entropy against itself, possibly doing work on something, I can keep it far from equilibrium.
And that can produce something that the increase of entropy wouldn't produce. That is now some regularity, some new constraints. The question is, how can I keep that from being self-destructive? It turns out the answer is curious, but simple. It has to be simple for two reasons. Number one, if life happens sort of spontaneously by accident,
nobody made it happen. It probably had to happen pretty simply; it couldn't have been a lot of complicated stuff happening. But the second thing is that this transition had to be a transition away from what every other part of the universe is doing, that is, tending towards an increase of entropy, and if it's far from equilibrium, tending towards it even faster. Living things have to prevent that, hold that off
for a period of time. In terms of life on Earth, we've held it off for 3.8 billion years at least. That's incredible when you think about it. How does it work? What I realized is what has to happen is that self-organizing processes themselves produce regularities. Regularities are gradients or differences or constraints. But there are different kinds of constraints.
One kind of constraint might be the constraint on diffusion in a liquid or diffusion of a liquid in a liquid or molecules in liquid or dissolving. Diffusion, the edges of the cup constrain the diffusion of heat a little bit. The edges of the cup certainly constrain the diffusion of molecules out of the hot water into the atmosphere. That's a kind of constraint
In this case, I had to produce it by producing a cup, for example. But what I realized is that living processes, if they can be composed of more than one kind of self organizing, far from equilibrium process, might be able to produce constraints on each other. Reciprocal constraints. And the way to think about this is in terms of what we would call boundary conditions.
That is, a constraint is a boundary condition like the edge of a cup is a boundary condition. Okay, so how does that work? Constraints can produce boundary conditions or constraints can be boundary conditions and self-organized far from equilibrium processes can produce constraints and therefore boundary conditions. Are there chemical boundary conditions that can be reciprocal of each other? And the ones that I hit upon turn out to be characteristic of all of life.
They have to do with catalytic processes and diffusion processes. Catalytic processes are when one molecule decreases the threshold at which another molecule can either interact with a third molecule or break down and fall apart into two. We call these catalysts or enzymes. They simply make reactions possible. It's also possible, and people have begun to focus on this as maybe part of the story, that you could have reciprocal catalytic reactions, where catalyst A breaks down some molecules to produce catalyst B, which breaks down some other molecules, and one of its products is catalyst A. This is a reciprocal catalytic process. Does that explode and become exponential, or are there conditions where that's not the case? Yes.
You know, like a nuclear reaction, it's a runaway process. If catalyst A produces catalyst B, which produces catalyst A, now you've got two catalyst A's, which produce two catalyst B's, which produce four catalyst A's. So something has to be used up. But then that sounds to me like it's dissipating itself again. It is totally dissipated, totally self-destructive.
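A cartoon of that runaway and collapse, not any particular chemistry: catalyst A converts substrate into B, catalyst B converts substrate into A, so both grow explosively until the substrate is gone. Rate constants and starting amounts are made up:

```python
# Toy reciprocal catalysis: A catalyses S -> B, B catalyses S -> A.
# dA/dt = k*B*S, dB/dt = k*A*S, dS/dt = -k*(A+B)*S   (no diffusion term here)
k, dt = 1.0, 0.001
A, B, S = 0.01, 0.01, 10.0

for step in range(2001):
    dA = k * B * S * dt
    dB = k * A * S * dt
    A, B, S = A + dA, B + dB, S - dA - dB
    if step % 400 == 0:
        print(f"t={step*dt:4.1f}  A={A:7.3f}  B={B:7.3f}  S={S:7.3f}")

# The catalysts grow exponentially, then the substrate crashes and the
# reaction stops; in a real solution A and B would now just diffuse apart.
```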
So when we set these things in motion, they use up the raw materials quickly, and then A and B diffuse away from each other and can't affect each other. The process stops, and it stops faster than if it were just a single catalyst, because it's a runaway process. Now, there's another chemical process that's important to this. But first of all, we've noticed, when we look at the biochemistry of
cells of all types, bacteria as well as us, that catalytic cycles are very, very common. We use them because, to build the body, we want to have runaway-like processes; we want to produce more stuff constantly, and you've got to run that with energy. But of course, these are dissipative processes, and self-destructive if we let them go on their own. What's the major limitation
on a reciprocal catalytic process. It's that A and B have to be near each other. They have to be in the same vicinity. When they use up enough of their raw materials,
their diffusion will take them away from each other, and A can't produce any more B's and B can't produce any more A's. Diffusion is where it ends. That is, the entropy increases: you produce more and more A's and B's, more and more rapidly, and decrease the substrate, until finally there's not enough substrate left and the process grinds to a halt. It's reached its highest entropy position, but it's also produced lots more A's and B's. Then comes diffusion.
One of the other things that happens spontaneously in the molecular world is tessellation and crystallization. I'm sorry, I just want to be clear: does the word diffusion here mean a physical distance opening up between the two, or the using up of the raw materials, or not? The using up of the raw materials slows the production of A's and B's, and then the A's and B's will spontaneously move away from each other. They mix.
The second law of thermodynamics says that if you're not producing more of them here, they'll just tend to mix. If I'm not continually putting ink into this one place in the water, it won't stay bluer than the rest. Once I stop, the ink particles will diffuse away. Well, the same thing is true with these molecules: they'll spontaneously diffuse away. So as soon as the catalytic reaction slows down below a certain point,
Now the local concentration of A's and B's will dissipate. So it's a dissipative structure driven by, first of all, a far from equilibrium relationship between the substrates and how it's going to end up. But as soon as that is finished, the whole system dissipates and goes back to its highest entropy state, lowest energy state.
The other molecular process that I focused on was crystallization and what we call tessellation. Molecules that have a regular structure oftentimes, as they lose energy, can link up with each other and form crystals. So, for example, a solution that has a lot of sugar molecules in it, for example. Heat it up and you stir and you dissolve the sugar.
Once it cools down and the molecules slow down in their interaction, sugar molecules tend to stick to each other and they orient with respect to their structure. What you get is crystals. Crystals form because the sugar molecules are in a lower energy state when they're all oriented with respect to each other into a lattice-like structure. This is also the way that cell membranes in life and virus
shells form: they form by virtue of molecules that just tend to stick together. In viruses, it's typically protein molecules with a three-dimensional structure that tend to stick to each other, and they form shells, typically, or sheets. It turns out that the formation of sheets and containers by this process of what we call self-assembly, when a system basically crystallizes, is another case of the same thing. Crystallization is a process that spontaneously forms regular structures by giving up energy. So think of your solution of sugar water cooling down. As it cools down, eventually the rate at which sugar molecules attach to the crystal and the rate at which they dissolve off of the crystal become equal; it runs to equilibrium.
The crystal will stop growing at that state. You won't get crystals any larger. So crystallization is also a process that grows to a certain level and is then effectively frozen at that level, because the molecules are now stuck to each other. That's a low energy state, but it can't grow any further. So as crystallization proceeds, it depletes its environment of its raw materials.
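The grow-and-stall picture in one line, with hypothetical attachment and detachment rates:

```latex
% N = number of molecules in the crystal, C = concentration still dissolved,
% k_on and k_off are hypothetical attachment and detachment rates.
\frac{dN}{dt} \;=\; k_{\mathrm{on}}\,C \;-\; k_{\mathrm{off}},
\qquad
\frac{dN}{dt} = 0 \;\Longleftrightarrow\; C = C_{\mathrm{eq}} = \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}}.
% Growth depletes C; once C falls to C_eq, attachment and dissolution balance
% and the crystal stops growing, unless something keeps resupplying C.
```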
But now think about these two processes. They turn out to produce each other's boundary conditions in the following sense. As a crystalline structure grows, in this case, I'm thinking of a polyhedron made up of a bunch of things that sort of grow together into a shell. This is how viruses form, for example. At some point, it will grow until it's used up all of its raw materials.
But what if you keep adding raw materials? Then you can get it to grow all the way until it stops, maybe closes on itself. Meanwhile, the catalytic process is producing more A's and B's, or A's, B's, C's and D's, and a bunch of side products; it's breaking down some molecule into side products, and in the process it liberates a little bit of energy,
but it's also producing a side product that might be what we call a capsid-like molecule, a molecule that might be one of those molecules that sticks together and forms a sheet or a shell. Now we have a situation in which in a local region, if a reciprocal catalytic process produces enough molecules that can form a shell,
It will keep producing those every time the shell begins to grow and decrease the concentration of those molecules. New ones are produced into that space. But now if the system grows into a container, a shell, it will tend to contain the very catalysts that made it possible. Because the catalysts are going to be in the same space. The shell is going to grow fastest where there's the most catalytic reaction.
And that means the system is going to capture all the features that are needed to grow itself. But it's also going to stop the catalyst from diffusing away.
The whole process eventually winds down when they run out of substrates. So now you've got an inert structure,
sort of like a virus. It's got maybe a protein shell and a bunch of protein catalysts inside of it. But now if it gets broken again and spews out its contents, the process will start over again. And now the catalytic process, which is a self-organizing process that would run down if it wasn't stopped from diffusing,
and the shell growing process, which would stop growing if it wasn't continually getting new pieces to grow larger and larger and larger. Each of them is producing the boundary conditions of the other. Now you've got two self-organizing processes that both support each other and keep each other from running to the end.
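A toy version of that reciprocal coupling, in the spirit of the description rather than any published model: the catalysts also produce shell monomers, the assembling shell suppresses the diffusive loss of the catalysts, and the catalysis keeps resupplying the monomers the shell needs. Every number here is invented:

```python
# Toy "reciprocal boundary conditions":
#   - reciprocal catalysis turns substrate S into catalysts A, B and shell monomers M
#   - monomers self-assemble into a shell (shell = fraction of a closed container)
#   - catalysts leak away by diffusion, but a more complete shell leaks less
k, asm, leak, dt = 0.5, 0.5, 0.3, 0.001
A, B, S, M, shell = 0.5, 0.5, 10.0, 0.0, 0.0

for step in range(4001):
    rate = k * A * B * S                      # catalysis only runs while A and B are together
    grow = asm * M * (1.0 - shell) * dt       # shell growth consumes monomers
    loss = leak * (1.0 - shell) * dt          # containment suppresses diffusive loss
    A += rate * dt - loss * A
    B += rate * dt - loss * B
    M += rate * dt - grow
    S -= 3 * rate * dt
    shell = min(1.0, shell + grow)
    if step % 800 == 0:
        print(f"t={step*dt:4.1f}  A={A:5.2f}  B={B:5.2f}  S={S:5.2f}  shell={shell:4.2f}")

# As the shell closes, the leak term goes to zero: the catalysts end up trapped
# together with their own boundary condition, inert until the capsule is broken
# open in fresh substrate, at which point the whole cycle can restart.
```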
So what we see now is that thermodynamic processes pushed far from equilibrium produce self-organizing processes, because it's a thermodynamic process that's pushing something far from equilibrium and then that system is trying to give itself up. It's a relationship between two second law processes, two entropy-increasing processes: one is driving something far from equilibrium, the other is just trying to go back towards equilibrium. What I've just described is that you could also now have two
reciprocal self-organizing processes that stabilize each other and promote each other's existence. That's what life has to have. Life has to have two self-organizing processes linked together so that they each produce the boundary conditions that allow the other to happen, but keep each other from going all the way to equilibrium.
So life has to be this interesting coupling. I've described this in terms of a thermodynamic story, which says that we've got to expand thermodynamics, so that we can talk about far-from-equilibrium thermodynamics: processes that transiently produce regularities in the world, regularities that can be useful.
But because there's different kinds of regularities, different kinds of constraints, it's possible for those processes to become coupled in such a way that they support each other. That they do one thing that is not true in thermodynamics. So in our standard thermodynamic theory, we have sort of the first law and the second law. The first law is that energy and matter can't be created or destroyed.
If we throw in nuclear physics, that becomes a little more complicated, so that mass-energy is the quantity that can't be created or destroyed. The second law says yes, but entropy will increase, and order, regularity, redundancy, and gradients will decrease. What we have in life is something interesting.
Because order can be created or destroyed. Constraints can be created or destroyed. Regularities can be created or destroyed. But physical stuff can't be created or destroyed; it can only be transformed. When I say that the life of a bacterium or of a person can be created or destroyed, I'm not saying that its material can be created or destroyed. It's its organization that's being created or destroyed.
You and I are not the same physical thing that we were five years ago. The material stuff is transient; it's passed through. We're like a whirlpool in that sense, but one that keeps itself going, and it keeps going by virtue of this coupling. But now what you see is that by virtue of coupling these kinds of far-from-equilibrium systems in this way, this is a system that protects its organization,
that reconstructs its organization, that repairs its organization if damaged. Now, what we're saying is that the existence of that organization is being maintained. Whereas matter and energy don't have to do anything to stay in existence, organization has to constantly do work to stay in existence.
So what this says is that thermodynamics can be talked about in three ways, and I've
coined three terms; they probably won't survive history, but they help me think about this. The first one I call homeodynamics. The second law of thermodynamics is about making things more homogeneous over time; when we talk about equilibrium, things are getting more homogeneous. So I want to call this homeodynamics. But when things are far from equilibrium and producing regularities,
We want to say that it's producing forms. The Greek term for form, of course, is morphe. So I want to call these morphodynamic processes. They're processes that produce dynamical forms. A whirlpool is, for example, a dynamical form. Its material is passing through it all the time, but it maintains its form. It's a morphodynamic process.
The two processes I just described, the two molecular processes, are also morphodynamic processes. One produces a structure. One produces a higher and higher concentration of catalysts in a local region. They produce constraints. They produce gradients. But when they're coupled to each other, they also protect each other from self-elimination.
and they protect their cooperative relationship from self-elimination. This is what life has to do. I call this teleodynamics because living systems now don't just fall towards disorder or go towards highest entropy, they go towards a target. Their target is self-maintenance in this case. That target has a specific form
It's not just general. It's a specific organization that's being maintained and passed on. So I use teleodynamics to talk about teleology or end directedness. Things that have a specific end also can go wrong. There can be right or wrong. There can be good chemistry or bad chemistry. Self organizing processes don't have this.
Processes of simple entropy increase, simple chemistry doesn't have this. But if you have a process that maintains the existence of its own organization, there now can be things that go right or wrong. There are things of value for it and things that are dangerous. So this is, in a sense, the introduction of value into the world. So does sentience come about at the introduction of values or is that separate? Yes.
So another way to say it is that now we also have information about something. Because the structure of this system, the structure of these devices, I call these little units autogenic viruses. It's like a virus, it has a virus shell, but it's not parasitic. It generates itself. So it's a structure that's got
a protein shell like a virus does, and inside of it it's got some enzymes. Some viruses even have catalysts within them, though mostly they have RNA and DNA. And that's because they're parasitic: they use the genetic systems of the cells that are their hosts. But the argument I'm making is that there could be something that's maybe a little less than life as we know it, that has no RNA and DNA. But it can reproduce itself,
It can repair itself if damaged, and therefore it will be able to evolve. It's an autogenic virus because it's a virus that is not parasitic. But these are systems that now will have some environments that are useful for it, some environments that are not useful. There are some chemicals in the environment that will be detrimental to the process.
So it has value, but it also in effect is carrying information about itself. So it's got in its structure, even after that structure is damaged and broken open, in the co-localization of all these constraints, constraints embodied in chemistry now, information about what's necessary
to recapitulate that same organization, so that even if this system gets broken open, and catalysis begins again, and shell formation begins to form around the catalysts during that process, that information is carried forward, even though it was not information like a DNA molecule or something like that; it was distributed in this distributed group of constraints.
So therefore, those constraints collectively are about themselves, about their unity, about their coherence, about their integration, how they need each other. It's carrying that information forward so that it can be embodied in new chemistry. So the thing gets broken open and dispersed. New chemistry begins to take place. New catalysis happens. New shell formation happens.
Over three or four cycles like this, probably all of the chemistry is replaced. Just like in reproduction in organisms. But if all of the chemistry is replaced, then the only thing that's got any continuity are the constraints. The constraints are of course these redundant relationships, these correlation relationships and likeness relationships.
between these two processes. Notice the similarity to everything I've just been saying before this. Why did you say that there's a redundant relationship? Because each of these processes has to produce multiple times. So to produce the shell, you have to produce more and more of these molecules. And also, now thinking about the relationship, for example, between RNA and DNA,
They're mirror images of each other to some extent. They're complements to each other. They're like hand and glove to each other. But the two processes that I've described are each the boundary conditions for each other. They're environments for each other. In that respect, there's a likeness, in the same way that DNA and RNA are in a likeness relationship to each other.
But then they also have to be correlated, physically linked in space. So they have both features. They're linked in space because they share a common molecule, the molecule that's produced by the catalysis and that becomes a shell molecule.
So you've got the likeness relationship, the complementary relationship, between the constraints that they each produce. One constrains diffusion; the other constrains the decrease in diffusion of the molecules that produce the shell. So they produce complementary and therefore iconic relationships to each other. And yet they have to be physically bound to each other by virtue of sharing this common molecule
that produces a structure in which the constraints, which are in a sense formal features, now can reproduce themselves in different material. Displacement. So what's happened to generate life is the same relationship in which you need iconic relationships linked to
these indexical-like relationships or covariance relationships linked to correlation relationships in such a way that they produce this capacity for displacement. Now you can have information that gets passed on, transmitted to another kind of molecule, and maintained. My argument is that
that DNA and RNA are late developments that begin to remember this kind of stuff after life has begun, produce a better kind of memory and a better kind of construction process, but that the same logic is involved all the way down to the origins of life and all the way up to the way brains work. There are constraints in the same sense that mathematics has constraints, but they're
constraints of semiosis, constraints of aboutness. You can't have aboutness without certain things being constrained in certain ways. This is a quite general phenomenon. It's general enough that we can now begin to say, maybe this will lead us to new ways to explore the origins of life problem. And maybe it will lead us to new ways to talk about the consciousness problem.
And that's where you want to go, I know, when you ask me about sentience. Because what I'm now saying is that even this simple autogenic virus is responsive to its environment in a differential way. Some things it will take in and other things it will not take in.
Even in the simplest case, if the environment is full of random molecules, but just enough of the molecules to make catalysis and self-assembly possible, the system will close and capture lots of diverse molecules. But only those that, when it's broken open, will produce more catalysts and more shell molecules will increase in numbers.
And so over time, the irrelevant molecules will tend to be excluded. The system is spontaneously, even in a mindless sort of way, organizing itself with respect to its environment, taking in things which are useful and expelling things which are not useful.
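Here is a toy Python sketch of that winnowing. Again it is my own illustration, not a model from the conversation, and the molecule names and proportions are invented. On each break-and-reform cycle, only the catalysts and shell precursors inside a capsule seed the next enclosure, so molecules that contribute nothing are not propagated and their fraction tends to fall:

```python
import random

# Toy sketch (my own illustration; molecule names and proportions invented).
# Each cycle the capsule is broken open; only the catalysts and shell
# precursors it contained seed the next round of catalysis and enclosure,
# while some fresh junk from the environment gets trapped as the shell closes.

def reform(contents, env_junk_rate=0.3, capacity=50):
    useful = [m for m in contents if m != "junk"]
    # Useful molecules beget more of their own kind via catalysis ...
    seeded = useful + [random.choice(["catalyst", "shell_part"]) for _ in useful]
    # ... while the closing shell also traps a random amount of junk.
    trapped_junk = ["junk"] * int(capacity * env_junk_rate * random.random())
    pool = seeded + trapped_junk
    random.shuffle(pool)
    return pool[:capacity]

capsule = [random.choice(["catalyst", "shell_part", "junk"]) for _ in range(50)]
for cycle in range(6):
    print(f"cycle {cycle}: junk fraction {capsule.count('junk') / len(capsule):.2f}")
    capsule = reform(capsule)
```

Nothing in the rule knows what is useful; the exclusion falls out of the fact that only the useful components participate in re-forming the capsule.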
So what's the difference between sentience and consciousness? Ah, a couple of steps. So first of all, what I would say is I call this vegetative sentience, with an analogy to plants. Plants are responsive to their environment.
The roots of plants follow certain nutrients and certain water concentrations in the soil. That's their sentience: they change their growth pattern with respect to that. With respect to sunlight, plants move out of the way of shadows and into the sunlight to maximize the amount of solar radiation they're getting.
Plants respond to their environment. We don't think about it because they don't move a lot. But if we speed up the camera and show it in fast motion, we can see how sentient they actually are, how responsive they are to their environment. And some parasitic plants are even more insidious in their responsivity to their environment. Think about creeping vines that crawl up trees and begin to sap
the nutrients out of trees and so on. They're not just sensitive to light, but they're sensitive to chemistry, to the shape of the tree and so on. The particular species... So they move, it's just slower. They move and they're slower. But here's the issue. Their relationship to what they represent, their aboutness, is directly linked to their chemistry. So it's the chemistry of the tree
as the branches are moving towards the sunlight that's causing some cells to expand and other cells to shrink. It causes that branch to sort of bend in one direction or another. That's a direct chemical reaction, but notice that it doesn't have an independent relationship to what's causing that relationship. It's a direct relationship to sunlight or to water or to cold or whatever.
The bacterium also has a direct chemical relationship to its environment. What would an independent relationship look like? You and I. Is it just symbolic, or is there something that's not? No, not just symbolic. Every animal with a brain. What a brain does is it doesn't just immediately react to the environment. It reacts to structures in the environment, to forms in the environment.
and then links those forms to useful processes in the body. So if you're a bacterium, for example, one of the things you want is some sensor that says: this is good to eat, that's bad to eat, go towards the good-to-eat stuff. Okay. But if you have a brain with sensors, you can now say, okay,
I don't just need to know whether this molecule fits with these other molecules that I need, but I need to know things of that shape have a lot of that molecule in them. So brains are doing this displacement move from the physiology. So brains are now representing what the physiology needs, but in a totally different way.
It's not in terms of the chemistry. It's in terms of some other feature that is correlated with the chemistry, maybe. But now we need a totally different system to do this analysis. It can't be done by just the chemistry. It can't be done by just the temperature gradient or the heat gradient or the light gradient. What you want to do is you want a system that can now take indirect information from the world.
and therefore use it to get at the direct stuff that you need. So you need a whole system that in effect is sort of like what we've just been talking about, but now at the level of whole organisms. So what you need is to have some displaced way of getting information that's useful, that's grounded down back to your physiological needs, the chemistry of your body.
So how does that happen? Well, you got to have the iconic and indexical jump that we just talked about, the one, two, three step. But notice there's something else that's interesting here. In one sense, even the autogenic virus has a kind of self. It represents itself and it represents itself again and again in different materials. Why do you think that is? What we say about every living thing is they have a self.
It self-reproduces, it self-repairs, it self-modifies. The term self we use all the time when we talk about living things. And when things die, they don't have a self anymore; the chemistry is all still there, but we don't use that term anymore. Self is about these dynamics, and yet self is this fuzzy term. I think the problem with consciousness is that we're trying to solve the problem of self from this very high level of brains.
Often times from the high level of human brains, when the problem of self goes all the way down to the origins of life. The problem of self, self was first created in this very simple form. And if self was first created in that form, until we understand how self comes into the world in the first place, we're certainly not going to understand how self works in a complex brain.
There's too much involved. It's displaced away from what I call vegetative sentience. It's now a separate kind of sentience. I like to describe this as subjective sentience. I think that you and I have an additional one, symbolic sentience, that makes us aware of things like infinity and galaxies and truth and so on.
that my dog and my cat are uninterested in, have no way of even representing to themselves. We're sentient of that: we feel it, we recognize it, we can interpret it, we can talk about it, we can interact with it. Creatures with brains that don't do symbols can't do that. So there's at least three levels of sentience we want to talk about here. And the first question is, how did the first level of sentience develop?
That's what I was just on. Okay, that's an interesting question. We're going to get there. So my first question was, well, how far does that go? You said at least three. So could you even imagine a fourth? And if you did, would that not just be incorporated into the three, because you're a human and you imagine that someone else could imagine it? Like, how far does that go? What happens? What's happening now in the world with you and I talking to each other? First of all, symbols are not just in my head.
It's necessarily a distributed form of communication. The representation that we're using now to communicate requires not just one. If it's just me in the world, I don't need symbols. And in fact, if it's just me in the world, I'll never develop symbols and I'll never think about these other things. These will never be relevant. But symbols are not just in a human brain.
Symbols are something that's distributed around the world. It's a distributed form of cognition. What that means is that I'm not just part of me and my local group, but there's a little bit of Aristotle in me. Not just here and now. And the fact that I can now communicate to people in Australia at a moment's notice.
has changed that in a radical way. In one sense, symbolic communication has created the platform for a much higher level form of communication. We're beginning to feel the experience of what it's like to have this higher order distributed communication that we call the web distort our politics, our beliefs, our desires. The dynamics of this larger symbolic network
are really becoming problematic because there's certain iconic and indexical relationships taking place. Think about Facebook and how it amplifies certain kinds of likes. Not a surprise that they use that term. I like this. It resonates with me. Why? Because of my particular emotional state. I'll pass that on to somebody else. Now they have that icon as well.
So one of the things that happens is that human emotions tend to have their own runaway process socially, and this kind of mob technology that we're beginning to create has its own dynamics. Its own dynamics, because it's not in me, it's not in you, and it's not in anybody's control anymore. One of the reasons that this is a problem is that we don't understand how it does this.
We're just beginning to understand how it has these troublesome consequences, why it's destroying a democracy or two. And that's because the logic of how they set up the algorithm does something in terms of symbols and even pictures, even now icons and indices, that also has a higher order iconic and indexical and symbolic like structure to it. It's got its own one, two, three repeat.
and the reciprocity, the recursion of it. That is, it changes the physicality that made it possible. Self-undermining in many respects. And it may well be that the runaway process will be so self-undermining that it'll destroy the stability of the world community. We don't know whether it will or not. So actually understanding this logic may not just be relevant to understanding the origins of life,
may not just be relevant to understanding what a mind is, may not just be relevant to understanding what consciousness is. I'll come back to that when we come back to the sentience story. But also, is there a kind of higher order sentience already at work that we're a part of, that we're unaware of, that we can't experience, but is having an effect on us and our experience and our very physicality? That's, I think, the interesting and worrying question.
behind all of this. To get at that, let's go back to this question of sentience. So the question of sentience is one that gets us in trouble when we look at computing and we think, can AI be sentient? Can it know something? What I like to tell people about computing is that actually your automobile engine is a potential computer. How do you make it into a computer?
You look at all of its various states and you assign some interesting symbolic value, a code, to each of its states. And so each of its states has a code value. When you run the machine, when you run your car engine from state A to state B to state C to state D, you're running it from symbol A to symbol B to symbol C to symbol D.
What we're doing with computing is we're finding an isomorphism between a mechanical operation and a symbolic operation, a manipulation of tokens, a manipulation of signs. The car engine doesn't know that it's just done an addition any more than the calculator knows that it's done addition. I've done the addition, but I've recognized that there's a regularity,
a syntax to it, a set of constraints, and those constraints can be mirrored by a physical process that's similarly constrained. The actual aboutness, the interpretation, the awareness is not in running the algorithm itself.
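Here is a tiny Python sketch of that point, purely illustrative and my own (the crank-position labels are invented): the machine below just steps through physical states, and the addition exists only under the code we choose to assign to those states.

```python
# Toy sketch (my own, purely illustrative): a "machine" that only steps
# through labeled physical states. The arithmetic lives entirely in the
# code we assign to those states, not in the mechanism itself.

physical_states = ["A", "B", "C", "D", "E"]            # e.g. crank positions (hypothetical)
code = {s: i for i, s in enumerate(physical_states)}   # our assigned meanings: 0..4

def run_machine(start_state, n_steps):
    """Advance the mechanism n_steps; it knows nothing about numbers."""
    i = physical_states.index(start_state)
    return physical_states[(i + n_steps) % len(physical_states)]

end = run_machine("B", 2)
print(code["B"], "+", 2, "=", code[end])   # prints: 1 + 2 = 3
# Relabel the states and the very same physical run "computes" something
# else; the aboutness is in the interpretation, not in the machine.
```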
The algorithm is just a variant of a machine. This is why we call computers virtual machines, effectively. But now look at the difference, even in the simple example I just described, this autogenic virus. Its information, the forms that it's passing, the constraints it's passing from one step to another, are embodied in the physicality of the changes.
It's embodied because it's a process embodied in chemistry. And because it's about itself, because it's carrying information about how to repair itself if damaged or how to reproduce itself in different material, it's also about its physicality.
That is, the information and the physicality are not independent. The form can be passed on. The form can go from generation to generation in this process with all the material changed. But the material has to be kept in that form. But you couldn't have passed on those constraints if it wasn't always embodied in some form, some material. The information had to be physical
But in the case of this system, in the case of all life, it's physical embodiment of information about its own physicality. That's an added twist. Now, remember when I said that displacement allows recursion? This is an interesting kind of recursion. This is a recursion in which a formal relationship
has a recursive effect on its physical embodiment and therefore its persistence or its very existence. If it goes, if it dies, if it fails to reproduce, if it fails to repair itself, it goes out of existence. So this is basically a living system or even a system as simple as this autogenic virus.
is information about its own existence. Its information is about existence. When we talk about information in philosophy, we use the phrase epistemology. That's about what we know, what things are about, what things are known. When we talk about what exists in the world, we use the term ontology. So they're tied together? Epistemology and ontology?
Yeah, epistemology and ontology are sort of the two major divisions in metaphysics, talking about how things are known or what knowing means and what is. Well, here we're talking about what knowing is about. The aboutness is about its own existence. It's the epistemology of its own ontology in philosophical terms. That's what life is about.
So now let's jump up. Now, I want to say I've skipped a lot of levels because we're going to go up to brains and minds real quickly. But let's go back to the classic problem, the mind-body problem that Descartes introduces. He divides the world into the res cogitans, the thinking stuff, and the res extensa, the extended physical stuff in the world, the physical world and the mental world, the mind world.
and we might say the consciousness world. And the phrase that everybody takes from Descartes, cogito ergo sum, I think therefore I am. That's great for you and I. Descartes assumed that thinking and physicality were separate, and that the only way they could interact, if they could, was by some special organ like the pineal body,
which of course can't be true, for a whole variety of reasons. But basically that's because cognition, thought, is not physical stuff. The aboutness, the very concepts that I'm trying to get across here, are not physical stuff, they're this abstract thing. So for Descartes, the abstractness of thought, the abstractness of ideas, was something really fundamentally different than material stuff, energy,
Well, here's the issue. He should have said, not I think, therefore I am, but I feel, therefore I'm real. Let's go back now to the simple pre-organism I described. It's physically damaged. Its physicality has been disturbed. Now, the informational-like processes, the constraints that are there,
begin to repair that specifically. In a sense, it's the physicality of it that activated the informational side of it to repair the physicality of it. Think about you and I. Big jump here. Why am I sensitive to certain things and not other things? Because they affect my materiality.
They affect my physicality. They may be risky or helpful. The things that catch our attention are the things that either hurt, surprise us, or feel good, things that are really relevant to how my body works, how I feel. I think that we've misunderstood cognition. Cognition is tweaking feeling,
and feeling is about our own physicality. It's this entanglement of the informational part of our lives with the physicality part of our life, the inseparability of the two that Descartes thought were separable and couldn't come together again. That means that feeling is a necessary part of representing the world.
and why representing the world is always evolving towards doing it right, doing it better, getting truth, getting reality. If you think that language and thought are just arbitrary, a lot of postmodernism sort of went this way, using language as a kind of model. Maybe all of our physical theories, our theories of physics are just sort of weird thoughts. It doesn't have anything to do with reality.
No, in fact just the opposite. Because of the necessary entanglement of semiosis, that is, producing aboutness, it's always normative, it's always value-laden, and it always feels like something. What we're doing feels like something. It's not just that there's information here; when we're surprised or confused or upset or worried,
it affects our thinking.
So let me ask you this. Just like off air, you referenced that universal grammar should be capital-U Universal Grammar, because it's right at the fundament when it comes to physics and chemistry. On one end, the postmodernists, and perhaps the poststructuralists, though I haven't studied them, would say that it's subjective, especially that morality is subjective.
Would you say that not only is it objective in a sense, it's capital-O objective? Yes, it's capital-O objective in the sense of the relation to a self. If there wasn't a self where something could be damaged or lost, if there wasn't some value there, then there could be no value to things in the world. It's objective
and subjective in that sense. But by subjective, we want to expand to even the simplest kind of self that I've described. It's not subjective in the way you and I are subjective, but there's a subject there. There's something that's subject to the whims of the world that will benefit or be harmed by what's going on in the world. So in that respect, it's not there in chemistry and physics.
Chemical and physical processes that are not alive don't have this feature. But in certain organizations of chemical physical processes, it can emerge. There was a time before selves, but now there are selves. There was a time before normative features in the cosmos, but now there are normative features, all with respect to selves. So in one sense,
The ontology epistemology story has fallen prey to the Cartesian story, this dualism, that we're never going to get them together. My point is, no, they actually are entangled. They can't be pulled apart. However, you can have ontology without epistemology.
You can have chemistry and physics. You can have a big bang. You can have an early universe without any selves. But as soon as there are selves, there's aboutness. There's reference, there's meaning, and there's value. And selves can become more complex. We happen to be an example of a very complex form of self in which there are selves within selves to some extent. Each of my cells is a self and has some self. My
Physiology, irrespective of my consciousness, has a self. In fact, in a coma with serious brain damage, if enough of my nervous system is working, my physiology is still going. In fact, after I pass away and die, my fingernails will keep growing, my hair will keep growing, because the cells are still alive. But the larger system is no longer there to support them, and eventually they will stop.
Brains and minds are selves within a self in that respect. Part of my environment is my body. And my body has an environment outside of my body. So it's this nesting of selves. But now, think about you and I. Go back to that story about us being part of a global symbolic system linked together by language and computing. Are we like cells
within that system? Would we even know that system? Would we know that it's influencing us, that it's modifying us the way our nervous system modifies our cells by changing the flow of hormones and stuff like that? I think understanding these principles, which currently are just anathema to material science for some reason,
The current science either wants to say, no, it's always already there. It's always just panpsychism. Every molecule has a little psychological self-something to it, and therefore it's always already there. That's a non-explanation. To say that it doesn't exist and it's all an illusion is also a non-explanation. Because who's having that illusion? And the illusion is normative. It's a kind of value.
Illusions are not real. But you've got to have something that can assess reality and non-reality. So the consciousness-as-illusion story is also a reductio ad absurdum. Saying that consciousness is already there in every atom, every electron, every quantum event just simply postulates it into existence. It doesn't help us explain why a whole lot of that stuff together in a stream
and a small amount of it together in my head produce very, very different kinds of phenomena. To just say that everything has that doesn't do it. What it basically says is, no, even if that's true, the whole explanation of the difference is in how they're differently organized. That's where all the explanation is going to be. It's not in the claim that it's always already there. Yes, yes. So we have to have an explanation.
And what I've been trying to do here is to say that you can go from basic thermodynamics and build up to life. And as soon as you do that, you have the semiotic capacity, you have this informational aboutness capacity. And with it, you have purposes, you have aims, you have ends, you have self, you have aboutness, you have value.
And yet, all of those things can become much, much more complex as you build higher order kinds of self. So to wrap this up in, let's say five minutes, because it's like a three hour behemoth podcast, and I could keep going for five more hours, and I think you can as well. But let's wrap it up in approximately five minutes with you tying together some of the threads that we've referenced. But
Talk about absence, so your notion of absence, which I don't know if it's related to the notions of nothingness of the East, but if it is, then you can talk about that. It's not. Okay, great. Okay. So talk about absence and its relationship to purpose and the other themes. Well, a purpose, of course, in our common-sense understanding of it, is that there's something I want to happen.
And I have a representation of it, typically, a mental image of what I want to happen. And I use that to organize my behavior to make it come into existence. That state of things is currently absent. So before we got on this podcast, we had an idea of how we wanted it to happen and what was necessary to get there. That was currently not in existence. But a representation of that which was not in existence
was the target of this representation, was used to constrain my behaviors, your behaviors, getting our computers together, looking at our watches, doing all the things that were necessary to make it come into existence. So in that simple sense, my purpose is something present, the representation in my mind, with respect to something absent that was represented, but is not present. So something present
linked by my actions to something that's currently absent, which will then become present. So that's a simple way to talk about just sort of the common sense notion of purpose. It's asymmetric in time. In fact, usually with respect to you and I, we have to do work to make it happen. Because it's oftentimes that the world won't go that way. The spontaneous way the world changes,
is not towards that kind of thing I want to happen. So usually I have to do work. This is true, of course, even for simple microorganisms. They have a representation in a much, much simpler sense, of course. Take again a bacterium searching for food: it has a representation captured in the form of its receptors that pick up certain kinds of molecules,
and then it has a way to assess gradients of these molecules, and it uses that to basically accomplish something, which is to increase the concentration of those molecules and therefore to increase the food it needs to maintain itself far from equilibrium. So in that sense, there is a much simpler sense of representation and purpose and absence. That is, there's an absence of enough food
in this case, and an action based upon a representation, though just a chemical representation in this case, that allows that thing that's absent to come about: going from being in an environment with low sugar to being in an environment with higher sugar concentration.
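As a toy illustration of the bacterium example (my own sketch with invented numbers, not a model Deacon gives), a one-dimensional run-and-tumble rule is enough to close the gap between the locally sensed state and the absent, higher-concentration state:

```python
import random

# Toy 1-D chemotaxis sketch (illustrative only; all numbers invented).
# Sugar concentration peaks at x = 15. The "bacterium" senses only the local
# reading: it keeps running in its current direction while the reading
# improves and tumbles to a random direction when it gets worse.

def sugar(x):
    return max(0.0, 10.0 - abs(x - 15))

x, direction, last_reading = 0, 1, sugar(0)
for _ in range(40):
    x += direction                      # "run"
    reading = sugar(x)
    if reading < last_reading:          # worse than before: "tumble"
        direction = random.choice([-1, 1])
    last_reading = reading

print(f"ended near x = {x}, local sugar = {sugar(x):.1f}")  # hovers near the peak
```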
So in that respect, purposive-like activity, or what we might call teleological activity, end-directed activity, is characteristic of this relationship of something present, some vehicle, some sign vehicle, some energetic or material something, even if it's just a process, like an active cognitive neurological process that's continually recycling to keep this image in mind while I'm engaging in some activity. That's a physical something that's representing something that it's not.
And this is that problem I began this discussion with, this displacement issue. How is something present linked to something that's not present, therefore it's absent? Not nothingness, but something absent not now, not here. Now, is this related to potentiality versus actuality? Like that is to say that the absence is... In that sense, yes. These are potential things, but potential things are not the only things that are absent. There's lots of impossible things that are absent as well.
Yes. And we can think about those things, because we've got this displaced way of representing things. But so here's the issue. What we want to talk about is how, in order for something to be alive, to have a self, there has to be self and non-self. There has to be benefit and harm, good-for and bad-for, truth and falsity.
Those things have to be there to be alive. And that's because what's being maintained, what's being brought into existence is the process of creating existence itself. So what's being maintained in existence? These forms, these constraints, these signs, this information is being maintained in existence. It always has to be embodied physically.
and therefore only certain kinds of physical processes can embody it and then use it to interpret other physical processes that are absent. So in this respect, everything about what I think: my thoughts are present in my head, but what they're about is not in my head, and it's not the neural activity going on, and it's not something necessarily even in the world. It's absent in a fairly abstract sense.
But it's because I've built up, because evolution and my own development has built up an interpretive system that can use these steps to allow fully disconnected things, unrelated things to represent each other. That it's possible for something absent, something even impossible to be represented.
And in fact, it's possible to now say, oh, I would like to create an autogenic virus in my laboratory. These don't exist as far as I know in science today, but I know all the parts. I know how it should work. And I have a sense of what kind of chemistry will make it happen, because I understand how viruses work and I understand how catalysis works. This is something impossible. It doesn't exist, maybe.
I actually, to be honest with you, I suspect that autogenic viruses exist even on the earth. We just haven't found them yet. But assuming that they don't, assuming that they got wiped out, went extinct or whatever, I could have an image of what it would be like. I've just described this. They may not exist on the earth. But because of this description, this thing which doesn't exist can be brought into existence
for the first time. And it will have a kind of aboutness that I don't have, that no other organism on earth has, because it's a distinct kind of system with a distinct kind of aboutness, distinct kind of relationship to its world. These things will emerge for the first time in the world. In the same sense that I think that selves emerged at one point in the evolution of the cosmos, probably not just here, but all over the place.
Um, but they emerged at one time. It was possible, but it wasn't prefigured. I do think it was very, very likely to occur eventually as the universe cooled and we got planets and we got low energy. So I think autogenic-virus-like structures in particular are probably going to be pretty widespread in the cosmos, for the simple reason that they're chemically quite simple
and they don't have all the sensitivity that living organisms do. Professor, I just wanted to thank you so much for being so generous with your time. I got to 10% of the questions, and there are some other themes I want to touch on next time if you're willing to come on for another round. Happy to do that. Great. One is nurture versus nature and why that's a false dichotomy.
Darwinian evolution. I said this in the introduction, so people were probably looking forward to this, but why Darwinian evolution isn't the whole story. You referenced this in one of your talks, and so I wanted to ask you about that, but we can do that next time. I talked about the other half as inverse Darwinism. Okay, well, we got to get to inverse Darwinism next time, and perhaps some of the things- It's exactly the topic of my new book. Great, great. Okay, so, Professor, again, thank you so much.
My pleasure. It's been a great conversation. I enjoyed it a lot.
The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked on that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, et cetera, it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well.
If you'd like to support more conversations like this, then do consider visiting theoriesofeverything.org. Again, it's support from the sponsors and you that allow me to work on Toe full-time. You get early access to ad-free audio episodes there as well. Every dollar helps far more than you may think. Either way, your viewership is generosity enough. Thank you.
"start_time": 1207.022,
"text": " We also produce iconic communication. That is, you know, coloration. I like to think about moths that have eye spots on them. A moth is sort of camouflaged sitting on a tree until a bird flies by and maybe would eat it. But if it disturbs it, the moth opens up its wings and there's these eyes, big eyes looking back at a bird."
},
{
"end_time": 1250.401,
"index": 51,
"start_time": 1233.046,
"text": " Of course that's going to play a role in affecting the bird's communication, the bird's interpretation of what's going on. It might be dangerous. Maybe I shouldn't eat this one. So what's happened is of course it's communicating iconically by likeness. Brains are really good at"
},
{
"end_time": 1272.551,
"index": 52,
"start_time": 1250.896,
"text": " Figuring out what something is by virtue of what it's like because they have lots of other experiences. With TD Early Pay, you get your paycheck up to two business days early, which means you can grab last second movie tickets in 5D Premium Ultra with popcorn, extra large popcorn,"
},
{
"end_time": 1312.892,
"index": 54,
"start_time": 1287.21,
"text": " Most of us share in the problem of being so digitally active that we overconsume or overproduce content, which in turn leads to an overwhelming amount of monthly monetary subscription. This channel's theme is exploration, at least one of the main themes, and it's difficult to peregrinate when we're handling the more routine and monotonous items. This is exactly why we're partnering with today's sponsor, Rocket Money, formerly known as Truebill. Rocket Money automates the process of financial saving."
},
{
"end_time": 1334.991,
"index": 55,
"start_time": 1312.892,
"text": " And tackles the various subscriptions that inevitably pile up for all of us, like Netflix, phone bills, Patreon subscriptions that aren't a toll, etc. Rocket Money displays all of your subscriptions, tells you where you're losing money, and how to cancel them with a single click of a button. Most of us think that we spend approximately $80 a month on subscriptions, but it turns out that the average figure is in excess of $200."
},
{
"end_time": 1351.971,
"index": 56,
"start_time": 1334.991,
"text": " Get rid of useless subscriptions with Rocket Money now. Go to rocketmoney.com slash everything. Seriously, it could save you hundreds per year. That's rocketmoney.com slash everything. Cancel your unnecessary subscriptions right now at rocketmoney.com slash everything."
},
{
"end_time": 1369.94,
"index": 57,
"start_time": 1352.278,
"text": " If you want to avoid humdrum, basic, and commonly monotonous gifts this year, then Uncommon Good is your secret weapon. Uncommon Goods is here to make your holiday shopping stress-free by scouring the globe for the most remarkable and unparalleled gifts for everyone. Whether you're shopping for Secret Santa or for gifts for your entire family,"
},
{
"end_time": 1396.493,
"index": 58,
"start_time": 1369.94,
"text": " Uncommon Goods knows precisely what they want. When you shop at Uncommon Goods, you're supporting artists and small independent businesses. These fine products are often made in small batches, so it's best to shop now before a certain item that you like happens to sell out. Uncommon Goods looks for products that are high in quality, unique, and often handmade or made in the U.S. They have some of the most meaningful and out-of-the-ordinary gifts anywhere. From art and jewelry to kitchen, home and bar, Uncommon Goods has something for everyone."
},
{
"end_time": 1421.067,
"index": 59,
"start_time": 1396.493,
"text": " So, not the same lackluster gifts that you can find just anywhere. Additionally, with every purchase made at Uncommon Goods, they give back $1 to a non-profit partner of your choice. So far, they've donated more than $2.5 million. To get 15% off your next gift, go to uncommongoods.com slash everything. That's uncommongoods.com slash everything for 15% off. Don't miss out on this limited time offer, uncommongoods.com slash everything."
},
{
"end_time": 1445.435,
"index": 60,
"start_time": 1421.527,
"text": " About the moth example, if it was the eyes of the moth that it revealed, so somehow it had eyes on its wings, would that then be indexical? Because it would be literal? So in a sense, now it becomes both. And this is what I'm going to talk about in a few minutes. And that is there's a hierarchy here. One depends on the other in interesting ways. If it wasn't the fact that birds already had a way to interpret eyes,"
},
{
"end_time": 1473.507,
"index": 61,
"start_time": 1446.135,
"text": " and recognizing that particularly large eyes looking straight ahead are very likely to be a predator, like an owl or a cat of some kind. It wouldn't have any effect at all. But now there's two things going on. Now, when the bird sees this, it now triggers maybe some behavior. Be wary of this. That's an indexical relationship."
},
{
"end_time": 1502.91,
"index": 62,
"start_time": 1473.899,
"text": " But also when the moth opens its wings suddenly, it's also indexical because now the bird knows that this is not just bark I've been looking at, but there's something else there. So if that's an index, so there's both indices and icons available here. But nervous systems, since their beginning, and I would say life since its beginning, has been about discerning iconic and indexical relationships. So, you know, think about"
},
{
"end_time": 1521.869,
"index": 63,
"start_time": 1503.387,
"text": " microorganism like E. coli, you know, it has to identify things that are edible and non edible. It iconically recognizes that that sugar like molecules and molecules that are breakdown products of something organic might in fact be a food source."
},
{
"end_time": 1543.08,
"index": 64,
"start_time": 1522.227,
"text": " They have a kind of a likeness to each other. Their likeness is because these are things by virtue of whatever traits they carry, whatever molecular features they carry, are interpreted by the receptor sites on this microorganism as a food. Food has a likeness. There's a certain trait that makes them alike. But now,"
},
{
"end_time": 1564.633,
"index": 65,
"start_time": 1543.797,
"text": " It also, if it's swimming around and it's moving around, the gradient of glucose might increase. That becomes an index that there's a diffusion away from some source. So it's a good idea to swim up that gradient of higher concentration."
},
{
"end_time": 1587.329,
"index": 66,
"start_time": 1564.94,
"text": " That becomes an index. So even as simple as a bacterium has iconic and indexical means. This is the way information is interpreted, not just carried, but interpreted. You have to have a system that interprets it. Obviously, if sweet things are not what you're looking for, then you're not going to have that tendency."
},
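A toy sketch of the indexical reading of a gradient just attributed to E. coli. Real chemotaxis is a biased random walk (tumble less often when the concentration is rising); the one-dimensional world, concentration function, and tumble probabilities here are invented for illustration.

```python
import random

random.seed(1)

def concentration(x):
    """Glucose concentration falling off with distance from a source at x = 0 (made up)."""
    return 1.0 / (1.0 + abs(x))

def run_and_tumble(steps=500, start=50.0):
    x = start
    direction = random.choice([-1.0, 1.0])
    last_c = concentration(x)
    for _ in range(steps):
        x += direction
        c = concentration(x)
        # the rising or falling gradient is the "index": if things are getting
        # better, keep running; if worse, tumble and pick a new random direction
        tumble_prob = 0.1 if c > last_c else 0.7
        if random.random() < tumble_prob:
            direction = random.choice([-1.0, 1.0])
        last_c = c
    return x

final_positions = [run_and_tumble() for _ in range(20)]
print("mean distance from source after foraging:",
      round(sum(abs(p) for p in final_positions) / len(final_positions), 1))
print("starting distance was 50.0")
```

The cell never "knows" where the source is; it only interprets the local change in concentration, which is enough to drift up the gradient on average.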
{
"end_time": 1617.176,
"index": 67,
"start_time": 1588.575,
"text": " We have a lot of these, of course. We're attracted to certain things and repelled by certain things, and they're icons and indices of various kinds. We can also produce conventional icons and indices. I mean, think about the stick figures on restroom doors, for example. They're both iconic in a crude sense, in part because of a convention that we have. I mean, girls don't wear dresses anymore, skirts anymore, and yet that's typically the figure in the West that we still have on restroom doors."
},
{
"end_time": 1647.381,
"index": 68,
"start_time": 1618.285,
"text": " It's there conventionally, but also because it's supposed to be iconic. But the fact that it's affixed to a particular door is indexical. It says that this indicates that behind this door, there's supposed to be males versus females. So icons and indices are there everywhere. The question is, how do we develop a capacity to represent something if the sign vehicle we're using has no relationship"
},
{
"end_time": 1675.077,
"index": 69,
"start_time": 1647.824,
"text": " to what it represents. That's the language problem. And for years, and this begins, I would say, hundreds of years back in time, we've just sort of assumed that, well, this is just a convention. We just agree that it does. The problem is that conventions just don't happen by accident. The problem is, yes, a miracle could happen and give us language."
},
{
"end_time": 1697.585,
"index": 70,
"start_time": 1675.657,
"text": " and allow us to have all the same ways of doing it. Or, like any other convention, any other social habit, it takes communication to set it up. So think about any other thing that we agree upon to do, a habit that we have socially. To establish that convention, we have to communicate about it."
},
{
"end_time": 1726.937,
"index": 71,
"start_time": 1698.541,
"text": " Now, that means you can't use a convention to produce a convention unless you already have a first convention. How do you get the first one? How do you get language when there's no language, when there's no other conventional form of communication? And that is just watch young children. They recognize things that are like each other. They communicate to others by sharing objects of a certain kind. They point or reach"
},
{
"end_time": 1757.039,
"index": 72,
"start_time": 1727.398,
"text": " That is, pointing is an index. Objects that you can play with, or food that's always food, or drink that's always drink, each instance is iconic of each other. So right away, children are born like other species with lots of iconic and indexical capacity. And as the first year goes by, and they develop this unique way of indicating that only we humans have, which is pointing with our fingers and hands,"
},
{
"end_time": 1774.343,
"index": 73,
"start_time": 1757.91,
"text": " And looking in particular directions and watching somebody else's gaze. These are all indices. They're directly correlated with something that's there. So children build up this conventionality slowly in the first year of life. They have lots of information."
},
{
"end_time": 1803.166,
"index": 74,
"start_time": 1774.855,
"text": " about communication. In order to build up a means of communicating that lacks that, in other words, to develop the conventionality, they also have to use iconic and indexical abilities. So one of the real challenges in understanding the brain then, is first of all to understand that whenever you build up conventional communication, I like to call this displaced reference. It's because nothing about the sign vehicle gives it away."
},
{
"end_time": 1829.906,
"index": 75,
"start_time": 1803.643,
"text": " It's only the sharing of interpretive capacities that makes it work. To share it, of course, we have to go through all of these processes. We're really good at it. My work has begun to pursue this even at the molecular level. We were just talking about bacteria a minute ago, but it turns out that we see the same thing going on even at the molecular level. Think about the simple example of"
},
{
"end_time": 1858.387,
"index": 76,
"start_time": 1831.305,
"text": " how DNA produces proteins that have interesting three-dimensional structures. DNA has a sequence of nucleotides. Every three of them we call a codon. It's interesting we use the metaphor of a code here. Think about Morse code. Morse code is an arbitrary correlation between dots and dashes and letters to conventions."
},
{
"end_time": 1887.551,
"index": 77,
"start_time": 1859.002,
"text": " and we've now linked them together with a conventional association because dots and dashes aren't letters. So we have exactly the kind of thing that we're struggling with here, a conventional or symbolic form of communication. A codon matches to an anticodon in a messenger RNA molecule so that in going from DNA to the messenger RNA it's laid out in"
},
{
"end_time": 1916.357,
"index": 78,
"start_time": 1888.029,
"text": " iconic ways. That is, the messenger RNA now is a reflection of an exact, you know, reverse isomorphism of the DNA that it took that information from. Its nucleotide sequence corresponds exactly to the one that's produced by the DNA molecule. Then taken into a ribosome, that structure also made up of DNA, I mean of RNA and protein,"
},
{
"end_time": 1945.674,
"index": 79,
"start_time": 1916.766,
"text": " then takes in another kind of RNA molecule, transfer RNA molecule, which has on one end of it three nucleotides, which will match again in what we call anti-codon fashion. That is the reverse three that now attaches it iconically to the messenger RNA molecule. But on the other end of the transfer RNA. Why is it iconically attaches? Because it's a mirror image."
},
{
"end_time": 1963.37,
"index": 80,
"start_time": 1946.186,
"text": " So we might say it covariates. It's a covariance relationship. And what we mean by it, there's a similarity. In this case, there's an inverse similarity. So in the DNA molecule, the A's, G's, C's and T's match up with"
},
{
"end_time": 1992.637,
"index": 81,
"start_time": 1964.172,
"text": " They're opposite. So G and C bind together, A and T bind together. In the messenger RNA, you replace one of them with another one called uracil, which is a slightly related nucleotide. And then, so now we've got three things that bind together. Let's see how we can do it by showing. So you've got RNA molecule binding to the DNA molecule. It's now carries a mirror image. In fact,"
},
{
"end_time": 2008.66,
"index": 82,
"start_time": 1993.848,
"text": " Its opposite side would be the DNA molecule, but now it then carries over and binds to, here I've got my RNA molecule coming across here, and it binds to another RNA molecule, the transfer RNA molecule."
},
{
"end_time": 2037.483,
"index": 83,
"start_time": 2008.951,
"text": " So the three bases in the DNA molecule map to three bases in the messenger RNA molecule, which map to three bases in the transfer RNA molecule, which is now, since this is the reverse, the mirror image, now it binds again and you've got the original image back. So what you have is you've maintained the form, the form or what I would call the constraint, the sequence constraint has been maintained."
},
{
"end_time": 2060.299,
"index": 84,
"start_time": 2038.046,
"text": " So we would say that by covariance relationship, information has been transferred. Okay, okay. So constraint is the same as information in this case or no? So what I'm going to show you is how they're the same, but it turns out there's a hierarchic relationship that we have to keep in mind in order to show this. This was lost in information theory originally in the 1940s and 50s."
},
{
"end_time": 2089.053,
"index": 85,
"start_time": 2060.657,
"text": " and it needs to be brought back and I'll explain why in a minute. But the idea is that now there's a covariance relationship. We've passed this covariance from one molecule to another molecule to a third molecule. That transfer RNA molecule has also got at the other end of it an amino acid and each three codons on one end is matched to a specific amino acid on the other end. That amino acid is now brought together"
},
{
"end_time": 2115.64,
"index": 86,
"start_time": 2089.582,
"text": " these three attached to the next three to the next three different amino acids correspond to the different codon and now by bringing together different transfer RNA molecules you're also bringing together different amino acids so that now what's happening there's a continuity relationship or a correlation relationship"
},
{
"end_time": 2145.094,
"index": 87,
"start_time": 2116.425,
"text": " that correlation relationship is now also correlating amino acids which tend to get stuck together and they produce a long string of amino acids that is coded for in the sequence in the DNA molecule. So now we have gone from a DNA molecule by a covariance relationship and to the RNA molecule which has a correlational relationship and neighboring transfer RNA molecules now provide"
},
{
"end_time": 2163.626,
"index": 88,
"start_time": 2145.367,
"text": " amino acids with a correlational relationship. So by virtue of taking similarity relationship, covariance relationship, and then a correlation relationship, we've now allowed DNA molecule which has no relationship to proteins as molecules."
},
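A small Python sketch of the chain just walked through: DNA to mRNA is the grounded, mirror-image (covariance) step, while codon to amino acid is the "displaced" step, held in place only by the tRNA adaptors, which are stood in for here by a lookup table. Only a handful of real codons from the standard genetic code are included, and the template sequence is made up.

```python
# Step 1: grounded covariance -- each base is replaced by its complement,
# so the mRNA is a mirror image of the DNA template strand.
DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

# Step 2: displaced reference -- nothing about a codon resembles the amino
# acid it stands for; the pairing is held together only by the tRNA
# "adaptors" (here reduced to a table). A few entries from the real code:
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "UAA": "STOP"}

def transcribe(template_strand):
    """Covariance step: produce the complementary mRNA from the template strand."""
    return "".join(DNA_TO_MRNA[base] for base in template_strand)

def translate(mrna):
    """Displaced step: read codons in threes and look up the amino acid each one codes for."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

template = "TACAAACCGTTTATT"          # made-up template strand
mrna = transcribe(template)            # -> "AUGUUUGGCAAAUAA"
print("mRNA   :", mrna)
print("protein:", translate(mrna))     # -> ['Met', 'Phe', 'Gly', 'Lys']
```

The first mapping is invertible and "looks like" its source; the second is arbitrary, which is exactly why it only works inside a system (the ribosome and its tRNAs) that supplies the interpretation.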
{
"end_time": 2190.299,
"index": 89,
"start_time": 2163.985,
"text": " Proteins are made up of totally different molecules in a string that tend to fold up because of their electrical and hydrophobic and hydrophilic potentials into three dimensional structures that a linear structure has passed its information on through these multiple steps to a totally different kind of molecule, which can now interact with other proteins and other kinds of molecules to produce most of the work of the cell."
},
{
"end_time": 2200.026,
"index": 90,
"start_time": 2191.118,
"text": " What's happened is the reason we call this a codon relationship is there's a code-like relationship between them. That is, what we've done is we've figured out"
},
{
"end_time": 2226.237,
"index": 91,
"start_time": 2200.316,
"text": " in biology now, evolution has figured this out so to speak, how to take a linear relationship in one kind of molecule and create a particular kind of three-dimensional molecule made up of totally different kinds of atoms that is made up of amino acids as opposed to nucleotides and its function is determined by its three-dimensional structure by how it's folded."
},
{
"end_time": 2247.073,
"index": 92,
"start_time": 2227.295,
"text": " So, in effect, there's almost no relationship between them. This is why thinking of it as a code is not crazy. It's arbitrary. And yet the arbitrary has been produced by this multi-step process that links covariance with correlation in a way that allows this to happen."
},
{
"end_time": 2275.896,
"index": 93,
"start_time": 2248.268,
"text": " I like to call covariance relationships and correlation relationships grounded. They're grounded because the vehicle that carries it is carrying some feature in common with what it represents, with the information that it's carrying. So if it's a likeness relation, if it's a covariance relationship, it's carrying some kind of formal feature forward into another kind of molecule. Whereas if it's just physically correlated, struck together,"
},
{
"end_time": 2301.118,
"index": 94,
"start_time": 2276.698,
"text": " not by virtue of likeness, but by simply by virtue of being next to each other, being correlated, being attached. That also is grounded, physically grounded. One is formally grounded, and one is physically grounded. But by stacking one on top of the other, it now produces the kind of distance between where you started and where you end. So that now there's a what I call displaced reference,"
},
{
"end_time": 2330.145,
"index": 95,
"start_time": 2301.578,
"text": " The information in the DNA molecule is there in the protein, but in a totally different form. There's been a continuity of information transfer, but a complete loss of groundedness at the physical level. And yet there's an informational groundedness because you've transferred the same correlations. As a result, there can be selection that affects proteins and how they interact"
},
{
"end_time": 2349.701,
"index": 96,
"start_time": 2330.486,
"text": " And that'll select for particular sequences of genes that produce those useful proteins, even though they're totally separate kinds of molecules. It adds one further feature that I think is also important to think about it in terms of the mathematics of all of this. That is, the protein now has a three dimensional structure."
},
{
"end_time": 2376.8,
"index": 97,
"start_time": 2350.247,
"text": " but DNA also has a three-dimensional structure. It's twist, and its twist is slightly different depending on which nucleotides are in that sequence. Now, my watch is talking to me just a minute. Let me set it down here. It's all right. We'll repeat that last part, the DNA. So the DNA sequence is of nucleotides, but it turns out that we think about the DNA molecules as a twist, as a helix."
},
{
"end_time": 2403.217,
"index": 98,
"start_time": 2377.09,
"text": " It turns out that the helix is slightly differently twisted depending on the sequence. So it's not a uniform twist all the way up. So depending on the actual rung in the ladder or depending on a sequence of rungs? No, depending on the sequence of rungs. The sequence of rungs make it more tightly twisted or more loosely twisted depending on what the sequence is. And that means that there's a three-dimensional structure to the DNA molecule."
},
{
"end_time": 2419.002,
"index": 99,
"start_time": 2403.746,
"text": " So there's information that's passed from the DNA to the protein."
},
{
"end_time": 2449.002,
"index": 100,
"start_time": 2419.343,
"text": " And then there's also structural information in terms of the zeros and ones that we ordinarily think of. So those are the rungs. And then there's also structural information in the sense that certain sequences of those zeros and ones physically produce a different structure in the DNA. And it's not to say necessarily that the protein then copies that, but it somehow relates to it. No, the proteins don't copy. I want to be clear. What this says is that because the protein molecule is a totally different kind of molecule, but it has three dimensional shape."
},
{
"end_time": 2479.787,
"index": 101,
"start_time": 2450.35,
"text": " What that means is that some proteins, we call them transcription factors, actually bind to DNA by fitting with a twist. And since that twist is determined by the sequence, then what's happened is that the proteins identify a region of the DNA molecule. And as a result, those proteins can play a role in upregulating or downregulating the expression"
},
{
"end_time": 2507.022,
"index": 102,
"start_time": 2480.384,
"text": " of genes. So now you have recursion possible. So what's happened by virtue of this displacement from one kind of information, that is sequence information, zero and one information, you might think about it, to three dimensional physical information, because they are both embodied physically, they're capable of interacting, but now on a totally different basis."
},
{
"end_time": 2534.514,
"index": 103,
"start_time": 2508.08,
"text": " Now the protein can regulate the DNA that produces proteins. By virtue of this displacement of reference, it's possible to now have that displaced molecule regulate the kind of molecule that produced it. This is recursion. But notice that this is a kind of interesting recursion. It's very similar to the Gurdelian recursion."
},
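A deliberately cartoonish sketch of the recursion being pointed at: a gene makes a protein, and that protein, acting as a transcription factor, feeds back to damp its own gene's expression. The numbers and the linear repression rule are invented; the only point is the loop, a displaced product regulating the very source that produced it.

```python
def simulate_autoregulation(steps=30, production=1.0, decay=0.2, repression=0.15):
    """Negative autoregulation: expression is the baseline production rate,
    reduced in proportion to how much of the gene's own protein is around."""
    protein = 0.0
    history = []
    for _ in range(steps):
        # the transcription factor (the gene's own product) binds back on the DNA
        expression = max(production - repression * protein, 0.0)
        protein = protein + expression - decay * protein   # make some, lose some
        history.append(round(protein, 3))
    return history

levels = simulate_autoregulation()
print("protein level over time:", levels[:5], "...", levels[-3:])
# the loop settles to a steady state instead of growing without bound:
# the displaced product (protein) now constrains its own source (gene expression)
```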
{
"end_time": 2561.715,
"index": 104,
"start_time": 2535.418,
"text": " What you need to have, even a simple paradox like a liar's paradox, is that you can't stop interpreting because it refers to itself. The liar's paradox, the simplest one is, of course, this statement is a lie or this statement is false. Once you interpret it, you have to reapply it back to the statement originally. You can't stop interpreting because if it's a lie, then it must not be true."
},
{
"end_time": 2583.541,
"index": 105,
"start_time": 2562.193,
"text": " But if it must not be true, then it's a lie, then it's a lie, so it must be true. The classic circularity here. This is the basis in a much more complicated version of Gödel's incompleteness proof for mathematics. That is, basically, you have to have some way that something can refer to itself. It has to have this kind of recursion."
},
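A tiny sketch of why "you can't stop interpreting": re-applying the liar sentence's verdict to itself just flips it forever. The round cap is arbitrary; the point is that the interpretation never settles on a fixed point.

```python
def interpret_liar(initial_guess=True, max_rounds=8):
    """The sentence says of itself: 'this statement is false'.
    Each round we take our current verdict and re-apply the sentence to it."""
    verdict = initial_guess
    for round_number in range(1, max_rounds + 1):
        # if the statement is true, then what it says holds, so it is false;
        # if it is false, then what it says fails, so it is true
        verdict = not verdict
        print(f"round {round_number}: the statement is {'true' if verdict else 'false'}")
    print("...and so on: no fixed point, the interpretation never terminates")

interpret_liar()
```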
{
"end_time": 2611.954,
"index": 106,
"start_time": 2583.968,
"text": " The point here is that that recursion is possible with DNA and proteins because of this displacement. DNA and RNA have to be linked to each other by likeness relationships. But DNA and proteins have totally different reasons for having the physical structures they have. But as a result, now you can have modified DNA indirectly through proteins."
},
{
"end_time": 2641.988,
"index": 107,
"start_time": 2612.773,
"text": " This hierarchy of communication in very simple genetics is what makes possible the complexity of bodies and cells. You've got to have this kind of recursion because now the DNA can play a role through communications through other interactions with the cell because the proteins are now carrying information also about their interaction with other proteins, maybe even with the world. That information can now modify how genes are produced and expressed. So"
},
{
"end_time": 2664.531,
"index": 108,
"start_time": 2642.363,
"text": " The complexity becomes possible. People in semiotics have talked about this as scaffolding, that is you're building up new levels of information that can now be a basis for building up yet higher order information. So in fact this relationship I've just talked about is what makes it possible to have"
},
{
"end_time": 2690.572,
"index": 109,
"start_time": 2665.64,
"text": " complicated multi-celled bodies that have repeated parts. Think about the multiple legs on insects or the multiple ribs or the multiple vertebrae in our bodies. This is the result of exactly this kind of transcription factor effect. That is proteins that produce genes that modify other genes. Well, it turns out that by a series of other features,"
},
{
"end_time": 2718.933,
"index": 110,
"start_time": 2691.067,
"text": " genes have duplicated and varied, and you're producing this in a sort of theme and variation fashion. Once you have recursion, you can do sort of theme and variation effects, higher order effects. And this is what produces the sort of the what you might call the theme and variation repeated body parts that makes plants and animals possible. So in effect, what I became very interested in, and this is really by the end of the 1990s,"
},
{
"end_time": 2745.623,
"index": 111,
"start_time": 2719.735,
"text": " was not only that I'm finding that there's a lot of indirect causality that makes this possible in the brain and how it must have been possible neurologically, but it now tells me that there's a way that this is relevant to biology and maybe something as universal as mathematics itself, that this same hierarchical relationship, how you get recursion, how you make recursion possible,"
},
{
"end_time": 2775.401,
"index": 112,
"start_time": 2746.169,
"text": " has to do with this, what I call, I describe this sort of characteristically as one, two, three, repeat. One, the iconicity, that is the covariation. Two, correlation, physical correlation in some respects. And three, displacement. Once you've displaced, you can now go back and have that displaced reference modify one. One, two, three, repeat is giving you this capacity for recursion."
},
{
"end_time": 2805.299,
"index": 113,
"start_time": 2777.022,
"text": " Now, one of the things that I realized at this point is that it also helped me understand why Noam Chomsky and I were having so much trouble agreeing about what he called universal grammar. When I wrote in the mid 1990s about the evolution of language, mostly from the point of view of what's changed in the brain, I was saying that, look, universal grammar has a genetically predisposed feature is"
},
{
"end_time": 2835.35,
"index": 114,
"start_time": 2805.896,
"text": " evolutionarily essentially impossible. It's impossible for the same reason that we don't have innate words. For something to be selected in the course of evolution, it has to first of all be consistently repeated again and again and again in the world. Its relationship to something that's relevant to life has to also be a strong correlation. It's got to have these same features repeated again and again and again. That's iconic."
},
{
"end_time": 2860.828,
"index": 115,
"start_time": 2836.084,
"text": " Having a correlation with something relevant like survival value, food, predator defense, whatever. That's a physical correlation. That has to be there true. But then to develop the evolution of this, this selection has to be repeated again and again. And the substrate in the body that's now going to put those together, to hold them together, that is the adaptation."
},
{
"end_time": 2890.265,
"index": 116,
"start_time": 2861.305,
"text": " has to be produced by this recursive process happening again and again in the course of evolution. Words, for example, the reason we don't have innate words is simple, and that is words change their meaning pretty rapidly. They don't persist generation upon generation for thousands of years. In just a thousand years, a language that has split can become totally non-inter-translatable."
},
{
"end_time": 2919.36,
"index": 117,
"start_time": 2891.049,
"text": " And we've seen that, of course, in the history of Europe for the most part. But the same thing is true in evolution, but much, much larger. There's the constancy must go on for hundreds of generations to really make a dent in the biology of the organism. So words, of course, don't have this feature. They change too fast. Their association with things in the world is too fleeting and flexible. And finally, they don't have much of a reproductive consequence."
},
{
"end_time": 2946.135,
"index": 118,
"start_time": 2920.196,
"text": " In order for mutations to have any kind of effect at all, they have to be the same over and over and over again, and they have to have a reproductive consequence. Words don't have this. And so we have to learn words socially. We have to transmit them from person to person in social groups. In fact, you have to have a larger social group to remember all that information, to hold it collectively. Now think about grammar."
},
{
"end_time": 2971.834,
"index": 119,
"start_time": 2947.892,
"text": " Grammar is even more abstract than words. Grammar has fewer constraints, in part because it's not directly correlated with things in the world. At least my word dog and my word drink have something physically correlated in the world. It's just not very stable and regular over the course of evolutionary time, so it can't evolve biologically."
},
{
"end_time": 2989.428,
"index": 120,
"start_time": 2972.176,
"text": " As the Toe Project grows, we get plenty of sponsors coming. And I thought, you know, this one is a fascinating company. Our new sponsor is Masterworks. Masterworks is the only platform that allows you to invest in multi-million dollar works of art by Picasso, Banksy, and more. Masterworks is giving you access to invest in fine art,"
},
{
"end_time": 3005.282,
"index": 121,
"start_time": 2989.428,
"text": " which is usually only accessible to multi-millionaires or billionaires, the art that you see hanging in museums can now be partially owned by you. The inventive part is that you don't need to know the details of art or investing. Masterworks makes the whole process straightforward with a clean interface and exceptional customer service."
},
{
"end_time": 3022.466,
"index": 122,
"start_time": 3005.282,
"text": " They're innovating as more traditional investments suffer. Last month, we verified a sell which had a 21.5% return. So for instance, if you were to put $10,000 in you would now have 12,000. Welcome to our new sponsor Masterworks and the link to them is in the description. Just so you know, there's in fact a waitlist to join their platform right now. However,"
},
{
"end_time": 3051.92,
"index": 123,
"start_time": 3022.466,
"text": " Toe listeners can skip the waitlist and get priority access to their new offerings by clicking the link in the description or going to masterworks.com and using the promo code Toe. That is T-O-E. That's masterworks.com promo code T-O-E or click the link in the description. Just to make this distinction between indexical, symbolic, and iconic more clear for people, when you said the word dog, that's symbolic. And when you went like this drink, that's symbolic, but also indexical because you showed something."
},
{
"end_time": 3078.729,
"index": 124,
"start_time": 3052.619,
"text": " Exactly. And I'm going to, this is what's going to get us back to finally back to syntax. Why I think after reading it, writing the symbolic. I want to make it clear syntax is a synonym for grammar in this case. Grammar and syntax just refers mostly to relative positions of things in a sequence. So we can talk about the syntax of, of symbols in a mathematical equation."
},
{
"end_time": 3102.927,
"index": 125,
"start_time": 3079.582,
"text": " the syntaxes, how you have to move them around, who can be next to who, who can't be next to who, who can modify who, and so on. What constitutes a well-formed sentence? Yeah, exactly. Okay. So, grammar is the specific kind of constraints on that, that language has. So, grammar can tell you there's certain kinds of words that are content words."
},
{
"end_time": 3131.869,
"index": 126,
"start_time": 3103.49,
"text": " certain kinds of words that play the role of a verb or a noun or a pronoun. Because of their kind, they have to be connected with other words in a particular way. You can't just randomly throw words together and have them work. They have to have both a sequence that's maybe characteristically different in different languages. But most languages have words that play the role of a verb or a noun, for example."
},
{
"end_time": 3159.394,
"index": 127,
"start_time": 3132.944,
"text": " Those are Chomsky's universals, and there are many of them. Chomsky talks about lots of syntactical operations that he says are going to be universal around the world. He developed this interestingly enough in the late 1950s, early 1960s, by virtue of his knowledge of something else that I thought was just brilliant about his realization back then, which was"
},
{
"end_time": 3186.613,
"index": 128,
"start_time": 3159.94,
"text": " comparing how you analyze grammar and syntax to a Turing machine. Not to a Turing machine as a machine, but the abstract nature of a Turing machine that is an algorithm, a specifically described sequence of operations. A Turing machine logic can be laid out as a replacement logic. String A can be replaced by String B."
},
{
"end_time": 3213.626,
"index": 129,
"start_time": 3188.029,
"text": " That is a rewrite rule. That's what the Turing machine is about. It's about having specific rewrite rules. So you read a particular symbol and you rewrite another symbol somewhere else. You have a rule for that. Chomsky realized in his book Syntactic Structures, written in the early 60s, that you could describe syntax and grammar like a Turing machine with a series of rewrite rules."
},
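A minimal sketch of the rewrite-rule idea being described: a handful of phrase-structure rules of the form "replace symbol A with string B," applied repeatedly until only words remain. The toy grammar and vocabulary are invented for illustration and are not Chomsky's actual formalism.

```python
import random

random.seed(4)

# rewrite rules: a non-terminal symbol on the left can be replaced
# by one of the symbol strings on the right
RULES = {
    "S":   [["NP", "VP"]],                # a sentence has at least two parts
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "N":   [["dog"], ["beaver"], ["moth"]],
    "V":   [["sees"], ["builds"]],
}

def generate(symbols=("S",)):
    """Keep applying rewrite rules until no symbol has a rule left, i.e. all words."""
    symbols = list(symbols)
    while any(s in RULES for s in symbols):
        i = next(i for i, s in enumerate(symbols) if s in RULES)
        replacement = random.choice(RULES[symbols[i]])
        symbols[i:i + 1] = replacement      # rewrite: replace symbol A with string B
    return " ".join(symbols)

for _ in range(3):
    print(generate())
```

Each run derives a different well-formed string, which is the sense in which a fixed, finite set of rewrite rules can "produce" grammar and syntax.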
{
"end_time": 3243.899,
"index": 130,
"start_time": 3214.377,
"text": " That was a brilliant recognition. Now, what's happening at this point in time in history, of course, is computing is finally being figured out. We're working out the abstract nature of computing. Chomsky is struggling with the abstract nature of language. And he says, wow, this is interesting. Here's both the value and the problem. The value is that a Turing machine can describe any"
},
{
"end_time": 3269.838,
"index": 131,
"start_time": 3245.077,
"text": " completely describable operation. The power of a Turing machine is its universal. We talk about this as a universal Turing machine. You're beginning to see the connection here. Now, if we think of grammar and syntax as being producible by a Turing machine, by a universal machine,"
},
{
"end_time": 3293.712,
"index": 132,
"start_time": 3271.305,
"text": " then that suggests that grammar and syntax itself might be universal and that there's a particular Turing machine in the brain of every human being that produces its grammar and syntax. And that once we figure out what words mean, what they refer to in the world, we can now just turn on our universal Turing machine"
},
{
"end_time": 3318.285,
"index": 133,
"start_time": 3294.121,
"text": " It's not universal anymore. It's a specific Turing machine that has specific rules for rewrite, and you can produce grammar and syntax. That becomes the original and I think brilliant discovery that Noam Chomsky makes in the early 1960s. But of course, it now that it begs another question, where does it come from? Where does this Turing machine come from? Well,"
},
{
"end_time": 3348.012,
"index": 134,
"start_time": 3318.814,
"text": " Maybe it's in the brain. And of course, we begin thinking about brains as computers. About this time, we begin to use this sort of computational metaphor to talk about thought, a very powerful tool for cognitive science for studying brains in a much more systematic way, and our behaviors and our cognition and so on. But now, if you think about brains as a machine, as a computing machine, then it suggests that, well, maybe an algorithm evolved,"
},
{
"end_time": 3373.148,
"index": 135,
"start_time": 3348.865,
"text": " So people began thinking, really, it took about 10, 15, 20 years for people beginning to move in a direction that Chomsky had already begun to assess, which is that if we think about cognition as computation, then we can think about language as run by a Turing machine in the brain, an algorithm, and that that algorithm could have evolved."
},
{
"end_time": 3397.432,
"index": 136,
"start_time": 3373.985,
"text": " So that it's a very straightforward, good argument, logical argument about this. The only problem is the one I just mentioned before this, and that is what makes something evolvable? Grammar and syntax and even words are not evolvable biologically. They have to be evolvable socially. But that's a problem."
},
{
"end_time": 3423.507,
"index": 137,
"start_time": 3397.892,
"text": " And it's a problem because if it's justifiable socially, you might think, well, there should be all kinds of different grammars, all kinds of languages that are totally unlike each other. Chomsky was clearly right that languages have a lot of things in common. They have a lot of syntactical constraints that you can't do certain things and you have to do certain other things. Everything, like a sentence, for example, has to have at least two parts."
},
{
"end_time": 3449.241,
"index": 138,
"start_time": 3424.258,
"text": " And this in fact became an insight that helped me think this through. Why do sentences have to have more than one word? We do have a few things that communicate, what we call holophrastic. That means a single word, a single announcement is to communicate something. The classic example is sitting in a crowded theater and yelling fire."
},
{
"end_time": 3476.869,
"index": 139,
"start_time": 3449.94,
"text": " Suddenly, everybody knows that it indicates that someone thinks that there's fire here, immediately present. It's an index in the sense that it says that physically it's correlated with my producing this sound. But this is interesting. Think about the word loud, okay?"
},
{
"end_time": 3505.811,
"index": 140,
"start_time": 3477.654,
"text": " Loud brings to mind, by association, soft, sweet, garish, you know, a whole bunch of associations can come up with it. But Loud doesn't refer to anything particular by itself. Noam Chomsky, of course, has struggled with this as linguists have for generations. Words don't exactly refer to specific things, with the exception of proper nouns. The name Chicago refers to a specific city."
},
{
"end_time": 3526.067,
"index": 141,
"start_time": 3506.254,
"text": " But city does not. Now let's go back to the word loud. Loud doesn't refer to anything particular. But how about this? Loud! Now it does, right? It was correlated with a loud sound."
},
{
"end_time": 3556.903,
"index": 142,
"start_time": 3528.456,
"text": " Now that correlation says that loud doesn't just refer to some abstract relationship, it actually refers to something physical in the world. I could have said clapping my hands is loud. The phrase clapping my hands does the same thing as this, because it's correlated in the sentence with the word loud. Clapping my hands is an index, but the correlation of the word"
},
{
"end_time": 3576.715,
"index": 143,
"start_time": 3557.193,
"text": " spoken in quick succession with the sound of clapping my hands, also is indexical. It's a correlation. So what's happened is that in order for words to refer specifically to things, they need to be coupled with an index."
},
{
"end_time": 3606.425,
"index": 144,
"start_time": 3577.688,
"text": " And that's, of course, what pointing does for acquiring language in the first place. It's an index that allows children to correlate a sound with something in the world. How would a blind person? Yeah, well, so a blind person, you can't point. Now you have to touch. But touching is, of course, a version of pointing. Pointing is a physical correlation just with some distance. Touching is physical correlation without distance. To really communicate, you have to do it that way."
},
{
"end_time": 3634.718,
"index": 145,
"start_time": 3607.176,
"text": " But what it basically says is that the reason that all languages have something like a sentence, where there's multiple kinds of words that have different functions, is that you've got to have these two semiotic functions together. Otherwise, symbols, abstract things, can't refer. So this again, replays this relationship in a bunch of different ways. I've now described it in terms of sentences,"
},
{
"end_time": 3664.582,
"index": 146,
"start_time": 3635.162,
"text": " But describe it in terms of DNA and proteins and describe it in terms of mathematics. And it is referring to meaning or reference or what? No, this relationship is one, two, three, repeat relationship. How you get displaced reference, how you can get something that refers to something that it's not, even though it has nothing physically in common with it, nothing formally in common with it. So words have its referential capability, but this is, of course, also the DNA story."
},
{
"end_time": 3694.121,
"index": 147,
"start_time": 3665.265,
"text": " DNA doesn't have anything in common with the weather, with the kind of food you have to eat, radically displaced away from it. And yet the information can be shared between bodies and behaviors and DNA molecules. This incredible difference by virtue of this continuity that's established. This is what I say is a little bit like mathematics."
},
{
"end_time": 3723.899,
"index": 148,
"start_time": 3695.606,
"text": " Here's how I want to characterize mathematics versus language. Mathematics, you're required not to have confusion. You've got to keep things separate, distinct. Distinction is the absolute most important thing. You can't run a mathematical operation where one becomes five, where some value that you fixed is unfixed."
},
{
"end_time": 3753.78,
"index": 149,
"start_time": 3724.514,
"text": " Okay, so precision. Precision and precise. It has to be absolutely precise. You have no freedom to vary. That's a constraint that says all of my referential operations are constrained to maintain this one thing in common. In language, the constraints have to be weaker. They have to be weaker for a number of reasons. What we want is, of course, not to have confusion."
},
{
"end_time": 3776.374,
"index": 150,
"start_time": 3754.735,
"text": " not to confound things, not to have ambiguity, but we can survive with some degree of ambiguity. We can't do that in mathematics. We have to get rid of that ambiguity. This is why Gödel's incompleteness proof was so problematic, so troublesome for people, because it said that you can't have a system that completely gets rid of ambiguity,"
},
{
"end_time": 3806.169,
"index": 151,
"start_time": 3779.002,
"text": " and have it closed and complete. It has to be either, if it's capable of getting rid of ambiguity, it has to be open. If it gets rid of ambiguity, it's simple. So, so, so Gödel basically shows that you can't get rid of it completely. And I think semiotics basically tells us this, you can't get rid of it completely, but you can build on it. And you can make it more and more complex. So,"
},
{
"end_time": 3833.319,
"index": 152,
"start_time": 3806.903,
"text": " To get us back to the mathematical comparison to language, language has to also minimize ambiguity. But of course, ambiguity is only so troublesome as the problem you're having to face, the particular pragmatic details. In fact, here I am trying to communicate this to you and to an audience. You maybe don't have to get all the details."
},
{
"end_time": 3863.097,
"index": 153,
"start_time": 3835.367,
"text": " You may be able to build up the details as we talk more and more and more, understand what the words refer to, how it works, in a sense. In this process, even two sentences later, you've forgotten the sentence that came before that. You've translated it into something totally different than words, into maybe images, to maybe sort of feelings in your body, whatever. We've extracted a gist. Yeah."
},
{
"end_time": 3892.278,
"index": 154,
"start_time": 3865.111,
"text": " In mathematics, that means we can make it as hard to penetrate as we need, so long as our manipulations, the operations we use, verify discreteness, don't allow A to equal B if A and B are not the same. That means that mathematics is symbolic communication that is highly constrained"
},
{
"end_time": 3921.237,
"index": 155,
"start_time": 3892.944,
"text": " but we don't have to interpret it right away. It may take years or never to understand a particular equation, a particular level of mathematical complexity, because we can look at it, reflect on it, look at it, reflect on it, try things out, maybe transfer the equations into some physical-like relationship, analogy, like a graph, for example, to try to understand what it means and how it works."
},
{
"end_time": 3950.23,
"index": 156,
"start_time": 3922.568,
"text": " In language, of course, we have to do it in real time. We don't have the luxury. So the linguistic capacity also has a constraint, but the constraints are much weaker, much looser. And they're just simply limited to what I want to accomplish now, what needs to be transmitted at this moment. Oftentimes, the details are not going to matter. So this is the case of language in the case of genetics."
},
{
"end_time": 3958.234,
"index": 157,
"start_time": 3951.049,
"text": " It's somewhere in between. It's not as precise as mathematics. There's lots of folding abnormalities that can happen."
},
{
"end_time": 3983.985,
"index": 158,
"start_time": 3958.626,
"text": " In fact, one of the things that happens with translating from DNA to messenger RNA in cells like yours and mine, eukaryotic cells, the RNA molecules are carrying a lot of non-protein information, what we call introns, strings within the DNA molecule that are just simply not carrying information that will play any role in the eventual protein."
},
{
"end_time": 4011.186,
"index": 159,
"start_time": 3983.985,
"text": " And so in eukaryotic cells we have a structure called a spliceosome, a complicated structure made up of five RNA molecules that themselves have three-dimensional structure because of the way they stick together and fold and literally dozens of different protein molecules that can bind and disconnect and rebind. The spliceosome splices out the introns"
},
{
"end_time": 4040.879,
"index": 160,
"start_time": 4011.664,
"text": " The RNA molecule is full of noise, segments that are not going to be useful. The spliceosome figures out which ones are not going to be useful, cuts them out, and re-splices the RNA molecule back together. So that the eventual RNA molecule that gets sent to the ribosome to turn into a protein, to generate a protein, has now been cleaned up, spliced out all the useless stuff. To have a system that has this kind of noise in it means that"
},
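A string-level sketch of the splicing step just described: given a pre-mRNA and the intron spans to cut, return the exons joined back together. Real spliceosomes locate introns through sequence signals and an elaborate RNA-protein machinery; here the intron coordinates are simply handed in, and the sequence is made up.

```python
def splice(pre_mrna, introns):
    """Remove the given (start, end) intron spans and rejoin the exons in order."""
    exons = []
    cursor = 0
    for start, end in sorted(introns):
        exons.append(pre_mrna[cursor:start])   # keep the exon before this intron
        cursor = end                           # skip over the intron ("noise")
    exons.append(pre_mrna[cursor:])            # keep whatever follows the last intron
    return "".join(exons)

# made-up pre-mRNA: uppercase runs stand for exons, lowercase runs for introns
pre_mrna = "AUGGCU" + "guaaguacag" + "CCAUUC" + "gucuuaacag" + "GGCUAA"
introns = [(6, 16), (22, 32)]                  # (start, end) indices of the lowercase runs

print(splice(pre_mrna, introns))               # -> AUGGCUCCAUUCGGCUAA
```

The cleaned-up transcript is what would then be handed to the ribosome; all the editing work lies in deciding which stretches count as noise in the first place.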
{
"end_time": 4070.23,
"index": 161,
"start_time": 4041.288,
"text": " In fact, there's a little bit of flux in this system. Things aren't exactly perfect. And so in that respect, you know, it's a little bit between mathematics and language in that respect. But the same constraints are necessary. You need the same three ways of representing things. One, two and three. Those are universals. And that means that it's going to be also true for language and for mathematics."
},
{
"end_time": 4100.196,
"index": 162,
"start_time": 4070.657,
"text": " Mathematics is going to have to respect those features as well. The equal sign in an equation is about covariance, about iconicity. This equation is the same as this equation so long as you do the right operations. That's what the equal sign is saying. Having operations next to the variables. So how is this all related to meaning and even sentience? Right, so it absolutely is."
},
{
"end_time": 4125.811,
"index": 163,
"start_time": 4101.408,
"text": " Maybe we'll go to sentience first because that might help a little bit. That is, you might call it the hard problem. I don't think it's a hard problem. I think it's a counterintuitive problem. And I think that's the challenge. In the same way that what I just described is probably counterintuitive because we think about codes so naturally. Codes do it all. The problem is a code already assumes representation."
},
{
"end_time": 4154.462,
"index": 164,
"start_time": 4126.698,
"text": " A code doesn't give you representation. The dots and dashes and the letters already represent something. The dots and dashes represent letters. The letters represent parts of a sentence, parts of a word. We've already assumed that we know the meaning for that so that the Morse code carries the meaning because the meaning was established outside of the code relationship. When you think about, and this is I think the problem that I find with linguistics in general,"
},
{
"end_time": 4181.22,
"index": 165,
"start_time": 4154.991,
"text": " because linguistics has accepted the code relationship between words and meanings. It has a number of problems. One of the problems is that in a code, there's a one-to-one matching between something and something else. That is, you know, three dots means something. Okay, it's the S. It matches specifically one-to-one."
},
{
"end_time": 4209.172,
"index": 166,
"start_time": 4182.21,
"text": " But in a word, of course, we know that words don't match one to one except for proper nouns, proper names. Words map not to a thing, but oftentimes to something we often call a meaning or a sense of something. That's not like a code. And so one of the first problems is we're beginning to treat language as though it's a code, although it's not a code. So code is devoid of meaning."
},
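A small illustration of the contrast being drawn: Morse code is an invertible, one-to-one mapping that can be decoded mechanically, whereas a word like "dog" maps to a diffuse sense rather than to any single thing. The Morse entries are real; the "sense" entries are obviously just stand-ins.

```python
# A code: one-to-one, invertible, with its meaning supplied entirely from outside
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}
DECODE = {v: k for k, v in MORSE.items()}            # inverting the mapping is trivial

message = ["...", "---", "..."]
print("decoded:", "".join(DECODE[m] for m in message))   # -> SOS

# A word: no single referent to invert back to, only a cloud of associations
SENSE = {
    "dog": ["domestic canine", "my neighbour's terrier", "loyalty", "'dog-tired'"],
    "loud": ["high volume", "garish colour", "an obnoxious person"],
}
for word, associations in SENSE.items():
    print(f"{word!r} evokes: {associations}  (no unique inverse)")
```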
{
"end_time": 4233.268,
"index": 167,
"start_time": 4210.35,
"text": " So the code relationship, the stand-in relationship, where something can stand in for something else, is itself not about meaning something. It's just about standing in for something else. The question is, how does it stand in for something else? It stands in for something else for somebody that interprets it. So that obviously Morse code by itself"
},
{
"end_time": 4252.756,
"index": 168,
"start_time": 4233.882,
"text": " If not interpreted, it's just sounds or dots or blips coming across some line. The same thing is with words, of course. If I don't understand the language, it's just sound. It doesn't have any reference. The question is, how does that reference get established? How does the meaning get established?"
},
{
"end_time": 4277.142,
"index": 169,
"start_time": 4253.148,
"text": " Going back to the beginning of the 20th century, end of the 19th century, the philosopher Gottlob Frege distinguished two aspects of reference. He talked about it as sense and reference, or in German, Sinn und Bedeutung. Sense is the sense of the meaning of something. And it's also been called the intention of something."
},
{
"end_time": 4305.043,
"index": 170,
"start_time": 4278.097,
"text": " And the reference is something in the world that refers to. He called that the bidoitum or the reference, sometimes called the extension of something. Can you give an example? Yeah, I'll give an example. And this comes directly from Gottlob Frege, a brilliant idea. He said that, look, we have the names Hesperus and Phosphorus for the morning star and the evening star."
},
{
"end_time": 4330.947,
"index": 171,
"start_time": 4306.476,
"text": " But in fact, they have two different meanings, morning star and evening star, and the names talk about something, they actually describe something. Although phosphorus and Hesperus are just names for what they describe. They're names for something in particular, but it turns out they're names for the planet Venus. The actual extension, the physical extension is the same thing."
},
{
"end_time": 4360.196,
"index": 172,
"start_time": 4331.715,
"text": " even though the words are different, and they refer to different senses. The sense of Morning Star and the sense of Evening Star have a different sense. But what they refer to in the world is a thing, a particular thing, one. So what he says is that sense and reference are different. And when we talk about meaning, we oftentimes confuse these. And we confuse it with something even more complicated, usefulness or significance or value."
},
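A minimal sketch of Frege's point in Python, with illustrative names only (the dictionary below is not from the conversation): two distinct senses mapping onto a single shared extension.

```python
# Illustrative only: Frege's sense/reference distinction in miniature.
# Two different senses (intensions) can pick out one and the same referent.
sense_to_referent = {
    "the Morning Star": "Venus",  # the sense attached to the name Phosphorus
    "the Evening Star": "Venus",  # the sense attached to the name Hesperus
}

print(len(set(sense_to_referent)))           # 2 distinct senses
print(len(set(sense_to_referent.values())))  # 1 extension: the planet Venus
```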
{
"end_time": 4388.148,
"index": 173,
"start_time": 4360.93,
"text": " What's the meaning of what I've been saying? It's not just the words, it's not just a reference. It's not just something in the world. It's actually, is it useful? Is it helpful? Is it valuable? The meaning for something. So the word meaning is really ambiguous because it brings all these things together. So I actually don't like to use the word. I don't like to use the word because what's the meaning of the white line down the middle of the road?"
},
{
"end_time": 4416.271,
"index": 174,
"start_time": 4389.582,
"text": " Well, it doesn't have a meaning like a word. It doesn't have a definition. And it doesn't refer to anything particular. It actually indicates where you should drive. It's an index. It's a conventionalized index. Now, you have to know what that index is, what you have to know what the convention was. And for the most part, it takes language to figure that out. But"
},
{
"end_time": 4441.544,
"index": 175,
"start_time": 4417.875,
"text": " In many respects, it's not that meaning is telling you something, it's that we want to distinguish those things that we think have to do with meaning. And this is where this iconic indexical and symbolic feature help out, because symbols can have all of those features. So a word can have value,"
},
{
"end_time": 4464.343,
"index": 176,
"start_time": 4442.056,
"text": " usefulness, a particular new neologism in science that can be very useful because it picks out something that wasn't picked out before that we need to understand and distinguish. The word itself can refer to some general type of things, a description of things. So the word symbol,"
},
{
"end_time": 4486.237,
"index": 177,
"start_time": 4465.23,
"text": " unfortunately is used both to refer to tokens like the letters that show up on a screen and a religious artifact. The word symbol is itself very sort of multifaceted and ambiguous, but in context, of course, we"
},
{
"end_time": 4512.363,
"index": 178,
"start_time": 4486.783,
"text": " specify what it means. But in fact, using the word symbol to talk about an alphanumeric character is actually sneaking something in. And let me give a sense of this. An alphanumeric character is something we've agreed on to be an arbitrary sign to code for a particular sound in a phonetic language."
},
{
"end_time": 4541.169,
"index": 179,
"start_time": 4514.036,
"text": " Therefore, it's playing a code-like or symbolic-like role. It's displaced from what it refers to. It has no characteristics in common. On the other hand, if my computer screen starts to spew at random all kinds of alphanumeric characters, they're not symbols. They don't refer to anything symbolically. They're actually"
},
{
"end_time": 4567.551,
"index": 180,
"start_time": 4542.108,
"text": " They indicate that there's something going wrong in my computer. Which is interesting because in that case the noise is a form of a signal. That's exactly right. And what this tells us in information theory is that the distinction between noise and signal is not intrinsic. It's something that has to do with interpretation."
},
{
"end_time": 4594.718,
"index": 181,
"start_time": 4567.892,
"text": " Is that always the case? Now, is there a non-trivial example? So for instance, if a computer is not working, then we would see the fact that it's not working as a form of noise. So you mentioned just some pixels on the screen, and then we say that... That noise is symbolic or is referential to the repairman in that case. Right. A particular kind of noise now can carry information, but the same thing is true. Think about, you know, in simple Shannonian information theory."
},
{
"end_time": 4621.647,
"index": 182,
"start_time": 4595.06,
"text": " You have a signal that's got a certain level of noise. What he argued is that adding redundancy to some level can help us discern what signal what's noise because the signal will be redundantly repeated in each repetition, whereas the noise produced by an independent source will not be repeated. And so we can basically figure out where it comes from. But if the noise itself has structure,"
},
{
"end_time": 4645.23,
"index": 183,
"start_time": 4622.193,
"text": " then that can carry information about the source of that noise. If the noise is not purely random, it can be confused with a signal, but in fact that non-randomness is telling us something about where it comes from. And one of the things I like to say about information theory, and this is just"
},
{
"end_time": 4670.913,
"index": 184,
"start_time": 4645.896,
"text": " brought back to us with the people like Landauer and others who developed computational theories and quantum theories of information, is that anything that carries information is a physical thing. That is, it's either some energetic or some material object, or maybe some process. But that means that it has a certain degree of entropy."
},
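Two standard formulas sit behind this passage; they are textbook forms rather than anything quoted from the conversation: Shannon's entropy of a source with symbol probabilities $p_i$, and Landauer's bound tying information processing to physical entropy.

```latex
H \;=\; -\sum_i p_i \log_2 p_i \quad \text{(bits per symbol)},
\qquad
E_{\text{erase one bit}} \;\ge\; k_B T \ln 2 .
```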
{
"end_time": 4698.729,
"index": 185,
"start_time": 4672.722,
"text": " as a certain redundancy and a certain lack of redundancy to its structure. And it's particularly the constraint that is the redundancy that usually carries the information. And that can be picked out. That's what Shannon has told us when he says that by increasing the redundancy of a noisy signal, we can discern what's signal and what's noise."
},
{
"end_time": 4724.104,
"index": 186,
"start_time": 4699.224,
"text": " Now, what we're doing is we're saying, I don't know what it's signal about, but it's redundant, so it's carrying more information because of its redundancy. It's actually eliminating some uncertainty, which is Shannon's ultimate measure, but notice that the Shannon theory of information is just about the medium itself. When we say that"
},
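A small sketch of the redundancy idea in Python, assuming a simple repetition scheme of my own choosing (Shannon's actual coding theorems are far more general): repeating each bit and taking a majority vote lets the correlated signal survive noise that is injected independently on each copy.

```python
import random

def transmit(bits, repeats=5, flip_prob=0.2):
    """Send each bit `repeats` times over a channel that flips bits at random."""
    return [[b ^ (random.random() < flip_prob) for _ in range(repeats)] for b in bits]

def decode(received):
    """Majority vote across the redundant copies of each bit."""
    return [int(sum(copies) > len(copies) / 2) for copies in received]

message = [1, 0, 1, 1, 0, 0, 1]
print(message)
print(decode(transmit(message)))  # usually identical: redundancy reveals the signal
```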
{
"end_time": 4754.684,
"index": 187,
"start_time": 4724.804,
"text": " Alphanumeric characters spewed on my screen at random. I call them symbols showing up on my screen. I'm using symbol in the Shannonian sense, talking about the medium itself. But here's the interesting thing. The medium itself has iconic and indexical relationships to the world that it's referring to. So one of the stories about where do we get information? Why does redundancy of a signal provide us information about something?"
},
{
"end_time": 4784.701,
"index": 188,
"start_time": 4755.179,
"text": " because the redundancy, the constraint on that signal is there because not all degrees of freedom, not fully randomness was in that signal. A physical system that is not at its lowest energy state, that is at highest entropy state, is there because it was pushed away from its highest entropy state. Work had to be done. Something has prevented it from going to its"
},
{
"end_time": 4810.998,
"index": 189,
"start_time": 4785.043,
"text": " lowest energy highest entropy state. But what that tells you is that if the entropy of a signal, both the Shannon entropy and the physical entropy are reduced, it's because it's in relation to something else that is producing that constraint. Work has been done to either prevent it from its highest entropy or"
},
{
"end_time": 4840.043,
"index": 190,
"start_time": 4811.271,
"text": " Push it away from full entropy. So there's some redundancy in it. Every scientific tool uses this trick. In other words, this is how we get indexical information about the world, how an experiment which manipulates the world produces certain regularities or irregularities. And it's those that are carrying the information. They do so because communication is physical."
},
{
"end_time": 4870.725,
"index": 191,
"start_time": 4840.947,
"text": " Because it's physical, all of this abstract stuff we've been talking about has another piece to it. That is, it's always grounded in the world. And to unground it, to use a kind of communication that's code-like, we need to go through these steps to unground it. Language is ungrounded in a sense. Right, okay, so this is what's known as the symbolic grounding problem, correct? The symbol grounding problem, right. Please, like, reiterate, what is the symbol grounding problem and also the definition of grounding?"
},
{
"end_time": 4893.336,
"index": 192,
"start_time": 4871.101,
"text": " The symbol grounding problem, also sometimes called the empty symbol problem, is that how is it that a word or a particular machine state in your artificial intelligence machine is somehow linked to the world, is grounded to the world, that its reference has been fixed or established?"
},
{
"end_time": 4920.367,
"index": 193,
"start_time": 4894.804,
"text": " So there's now an informational link so that we know what this feature is, the symbol, this arbitrary sound or squiggle, how it's linked to something in the world. We know how we do it. We say, OK, Morse code, I'm going to call three dots S. I'm going to say that. But of course, if I want to explain how it starts,"
},
{
"end_time": 4947.858,
"index": 194,
"start_time": 4920.776,
"text": " How do I get reference in the first place? How do I get symbolic reference? How do I get two things that are unrelated to now become fixed in their relationship? So that whenever you see one, you think the other one. Can you give another example? Well, so... Like cat. Go ahead. You suggest something and we'll talk it through. How is the word cat a problem? Like we say cat, we mean cat."
},
{
"end_time": 4977.227,
"index": 195,
"start_time": 4948.012,
"text": " Right. So the word cat for most people in the world has nothing to do with this, this mammal, this small furry mammal. Right. It's kind of a little carnivore that's a pet quietly judges you. Right. Right. So the how do you acquire it? If you're a child, this donut doesn't have language yet. How do you acquire that word? Well, you have to first of all, you have to have some familiarity with cats or maybe just familiarity with four legged animals."
},
{
"end_time": 5006.8,
"index": 196,
"start_time": 4978.183,
"text": " And somebody shows you a cat, points to it, cat, and kids also have another bias that helps them with language. We tend to copy each other's sounds. So they say cat, they point. Now, a dog walks by a day later and they point and they say cat. Now, the question is, what do they see? They've referred to something, they've made an iconic guess."
},
{
"end_time": 5036.749,
"index": 197,
"start_time": 5007.142,
"text": " The sound is the same, cat sounds the same, and there's some similarity between this first object and the second object. Somebody says, no, that's a dog. Okay, now the infant has a problem. What's the difference between cats and dogs? You've got whenever the word dog comes up, I see a four-legged creature. Whenever the word cat comes up, I see a four-legged creature. Now I have to say, okay, what distinguishes them? What's non-iconic between them?"
},
{
"end_time": 5066.288,
"index": 198,
"start_time": 5037.056,
"text": " and the the words because they're iconically used cat cat cat cat every time it's repeated it suggests that i need to look for a correlated object that has also got similar features dog dog dog dog dog it's iconic of each itself the sounds are iconic each time it's produced and it's there's you've got to figure out what's in common each time and what's different in each time what's in common is of course iconic what's"
},
{
"end_time": 5092.346,
"index": 199,
"start_time": 5066.732,
"text": " What things stand out that indicates that this is not that? What are the different features? So what we're doing in the process of acquiring word meanings is using this indexical iconic bridge to eventually say that now I can develop this habit of thinking that only these objects, these four-legged small purring objects are related to the word cat."
},
{
"end_time": 5117.773,
"index": 200,
"start_time": 5092.722,
"text": " So what's happened is that you're building up this interpretive habit. And the interpretive habit is built up by the scaffolding of likeness and correlation. So where's the problem? Where's the problem in the symbol grounding problem? Sounds like this is working fine. Children acquire... This is working fine for kids, right? This is actually how you ground symbols. The question is,"
},
{
"end_time": 5131.254,
"index": 201,
"start_time": 5118.148,
"text": " If you start from given symbols and given things in the world, how do they get grounded? If you ignore these features, there is no way that they can be grounded. Now let's think about computing."
},
{
"end_time": 5161.869,
"index": 202,
"start_time": 5132.927,
"text": " How is it that you ground, think about even more sophisticated computing like machine learning in which we use neural nets, for example, to identify cats on the internet. How do they do it? Well, what we do is, of course, we show them thousands and thousands of images of cats and we strengthen and weaken certain links in this system so that eventually it converges so that"
},
{
"end_time": 5183.848,
"index": 203,
"start_time": 5163.166,
"text": " What's happened is that each cat has some things in common. Each image has some things in common. Each image has some things that are different from other kinds of images. And what's happening is that each of those in common, we strengthen certain synapses, connections, correlations, and we weaken others."
},
{
"end_time": 5211.34,
"index": 204,
"start_time": 5184.838,
"text": " So again, even there, we're using this iconic and indexical logic to zero in to produce so that the final state of the machine, it pumps out the word cat when we show it a cat picture. Okay. Now, does it understand that's what the cat means? That's what the word means. This is the problem of sentience, of course. Is the machine sentient"
},
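To make the point concrete, here is a deliberately tiny sketch (the features, labels, and perceptron are hypothetical illustrations, not how any real vision system is built): the learner only adjusts weights until its outputs match labels we supplied, so the link between the token "cat" and actual cats lives in us, outside the machine.

```python
def train(examples, labels, epochs=50, lr=0.1):
    """Toy perceptron: nudges weights until outputs agree with the given labels."""
    w, b = [0.0] * len(examples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hand-made "features": [purrs, barks, climbs_curtains] -- correlations we chose.
cats = [[1, 0, 1], [1, 0, 0]]
dogs = [[0, 1, 0], [0, 1, 1]]
w, b = train(cats + dogs, labels=[1, 1, 0, 0])  # 1 means "cat" only by our convention
```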
{
"end_time": 5238.848,
"index": 205,
"start_time": 5211.92,
"text": " of the relationship between this typographical sequence and these pictures. We're beginning to discover now that, in fact, they do it very different than we do it, first of all, which is one of the reasons why we need thousands and thousands of examples to make it work. But the other thing we're realizing is that, in effect, this is just an arbitrary correlation. The correlation"
},
{
"end_time": 5256.647,
"index": 206,
"start_time": 5239.275,
"text": " is with us. We've decided. We've done the job of pruning the network. We already know the interpretation. The interpretation there was there ahead of time. We just wanted to build it up. We have to build it up in the same way, but"
},
{
"end_time": 5274.275,
"index": 207,
"start_time": 5256.988,
"text": " The interpretation is outside of this process. The machine doesn't know that these three letters have to do with an animal out there, because as soon as we show it animals, unless it sees the animal and sees it in a perspective that's sort of like what the machine sees, it's going to get it wrong."
},
{
"end_time": 5304.394,
"index": 208,
"start_time": 5274.565,
"text": " And there's all kinds of ways, of course, we figured out how to strengthen this using moving objects, moving objects in front of backgrounds, that kind of thing is changing the foreground background a lot, you know, all these ways that we strengthen that process. But again, the machine is just got the physical correlation. Nothing else. The question is, what is that something else? And that's of course, we're after we talk about minds and brains. The key to this story that I've"
},
{
"end_time": 5333.831,
"index": 209,
"start_time": 5304.94,
"text": " led up to that really followed all my work in the nervous system and language. When I began to see that it had some relevance to molecular biology and therefore evolution, I realized that it also probably had a relationship to how the physical world has produced this stuff. The one, two, three repeat story turns out to be much more general than I ever thought."
},
{
"end_time": 5362.602,
"index": 210,
"start_time": 5334.241,
"text": " And so by the middle of the 2000s, by 2010 and 2011 and 2012, when I produced this book Incomplete Nature, I had begun to think that this was a real problem that was much more general, relevant to physics, chemistry, biology, as well as brains and evolution. And the question was, how is it that creatures like us"
},
{
"end_time": 5392.961,
"index": 211,
"start_time": 5363.404,
"text": " that have ends in mind, that are teleological, that have purposes, that can have meanings. If you think about meanings, there can be good meanings and bad meanings, right meanings and wrong meanings, incorrect solutions and correct solutions. There's no correct chemistry and incorrect chemistry. There's no correctness or incorrectness or good or bad physics. There's good or bad representations of physics, or good and bad predictions about chemistry."
},
{
"end_time": 5422.176,
"index": 212,
"start_time": 5393.217,
"text": " But actual chemistry and actual physics, there's no normative feature. There's no right, wrong, good, bad. But that's something that every organism has. There's good things or bad things even for viruses, but not for water. So one of the questions in, it's a deep philosophical question to some extent, but how is it that the material world, the physical world has produced creatures"
},
{
"end_time": 5451.852,
"index": 213,
"start_time": 5422.944,
"text": " like us, like bacteria, that are normative, that actually divide the world into good or bad, useful or unuseful. Notice that that's the other aspect of meaning. Meaning is about value. And one of the deepest problems in philosophy is where does value come from? What I was realizing is that there's an analogous problem here. An analogous problem that's not"
},
{
"end_time": 5478.643,
"index": 214,
"start_time": 5453.831,
"text": " semiotic in all the ways I talked about, but the constraints that produce semiotics have something to do with this problem as well, with the problem of how a totally new kind of property or relationship being about something can come into the world. Although we can talk about distributions in the physical world in terms of information,"
},
{
"end_time": 5506.391,
"index": 215,
"start_time": 5479.377,
"text": " how much information is contained on your hard disk, how much information is expressed on a piece of paper, or how much information is destroyed as a black hole sucks in material from the outside world. That's just a measure of differences. It's not information that's about anything. We make it about things. It has to be interpreted to be about something."
},
{
"end_time": 5535.708,
"index": 216,
"start_time": 5507.346,
"text": " We're gonna have to wait because my dog's gonna bark for a few minutes here. You can hear the dog in the background. Yeah, but it's fine. It's minute. Okay. Yeah, because zoom has denoising. Good. That's wonderful. I love it. So the situation here is about this. This is a phrase that's sort of banded around by physicists. Excuse me by"
},
{
"end_time": 5564.411,
"index": 217,
"start_time": 5536.22,
"text": " by philosophers to talk about something that's different than what the physicists want to talk about information. Information can be about something, but aboutness is a relationship of something present to something that's not present, something that's absent. And this is one of the things I realized that what was going on in this move from iconic and indexical to symbolic to displaced reference is that now there's no connection. We now have an absent relationship."
},
{
"end_time": 5594.275,
"index": 218,
"start_time": 5565.009,
"text": " There's no features that's in common with the symbol and what it represents. To be clear, when you say that it's absent, so again, the word cat, we can talk about cat without there being a cat. OK, that's right. That's right. And it still refers to them. And in fact, I can say, you know, I'm going to gift you with a cat to adopt next week and it will have some consequences. There will be a real cat."
},
{
"end_time": 5620.043,
"index": 219,
"start_time": 5595.469,
"text": " Is this the same as being hypothetical or is hypothetical just a subset of what you're referring to? Just one variant. Every aboutness relationship we think about requires interpretation. There are physical features in the world that can afford it sometimes, make it easier. So the fact that a photograph of a cat or the shadow of a cat"
},
{
"end_time": 5646.988,
"index": 220,
"start_time": 5620.503,
"text": " has features in common, is an affordance that if I'm sensitive to it, I can interpret it as being about it, referring to it, pointing to it, being an index of it. Iconic and indexical relationships are something physically affords that interpretive process, makes it easier for us to interpret it. Because I already have information about that correlation."
},
{
"end_time": 5664.974,
"index": 221,
"start_time": 5648.08,
"text": " by past experience. I have information about that covariance from past experience. So that when I see one of my favorite examples is windsocks. Windsocks are an index of the strength and direction of the wind. So they're an indicator."
},
{
"end_time": 5692.756,
"index": 222,
"start_time": 5665.572,
"text": " material feature. I mean, lots of pictures I could show as well. We'd like to go through that because what it is, it's set up typically in airports and heliports because you want to know the direction and the strength of the wind. And a windsock is basically a conical material object, like a flag in some respects, but it's got an open end on the point end of the conical part so that wind blows through it."
},
{
"end_time": 5721.664,
"index": 223,
"start_time": 5693.097,
"text": " and it's on a rotatable structure so that the wind will blow it and the straighter it is, the harder the wind has been blowing. If it's not blowing so hard, the windsock is sort of partially extended and of course it's extended in a particular direction. So it's a great device for showing the direction and the strength of the wind. Why? Even if you've never seen a windsock before,"
},
{
"end_time": 5748.78,
"index": 224,
"start_time": 5722.159,
"text": " And you're looking at it only online. You would actually be able to interpret what it's about. Why? Because you've had the experience of looking at clothes blowing in the wind, of looking at leaves blowing at the wind, looking at your hat blowing off, all of these things. And you have these things. So when you look out at this windsock, even if you don't feel or have any experience of the wind,"
},
{
"end_time": 5778.951,
"index": 225,
"start_time": 5749.701,
"text": " It brings to mind a lot of like experiences, iconic similar experiences. But each of those experiences in your memory is associated also with another experience, the experience of having wind blowing. So when I'm out watching clothes blowing in the wind or watching leaves blowing in the wind, I'm experiencing the wind. So what I see now, looking at this windsock, brings to mind both similar sorts of things"
},
{
"end_time": 5796.271,
"index": 226,
"start_time": 5779.565,
"text": " which a material which should be hanging straight down is flopping around hanging out that brings to mind these other experiences, which now are correlated with not the same of as but correlated with the experience that I've had of wind."
},
{
"end_time": 5827.346,
"index": 227,
"start_time": 5797.978,
"text": " Now each of those correlations, so each of those images remembering the blowing weeds, the blowing clothes, my hat blowing off, all of these features have that in common as well. So there's another likeness. So each of their correlations with wind is a likeness as well, a likeness between wind experiences. So it allows us to look immediately at this windsock and say, okay,"
},
{
"end_time": 5849.991,
"index": 228,
"start_time": 5827.773,
"text": " This is telling me that the wind is blowing strongly from this direction because I have all those other experiences but notice that they were put together instantly probably within a fraction of a second as you looked at this for the first time by virtue of the fact that it brought up iconic and indexical relations that built upon each other"
},
{
"end_time": 5878.029,
"index": 229,
"start_time": 5850.35,
"text": " to allow you to see this as a conventional model of something. The iconic features, these are just in your memory, so what this tells us is this is how perception is building up this complicated causal relationship that we see out there. So to see this hierarchy, this constructive hierarchy, began to bother me and it bothered me in the following sense."
},
{
"end_time": 5894.241,
"index": 230,
"start_time": 5878.302,
"text": " the study of life. One of the things that's been really interesting in the last half century is beginning to realize that life, as Erwin Schrödinger suggested, is"
},
{
"end_time": 5924.224,
"index": 231,
"start_time": 5894.684,
"text": " A process that's far from equilibrium, maintains itself far from equilibrium. I'm William Gouge, a Vuri Collaborator and professional ultrarunner from the UK. I love to tackle endurance runs around the world, including a 55 day, 3064 mile run across the US. So I know a thing or two about performance wear. When it comes to relaxing, I look for something ultra versatile and comfy. The Ponto Performance Jogger from Vuri is perfect for all of those things."
},
{
"end_time": 5981.186,
"index": 233,
"start_time": 5953.217,
"text": " Can you define that term far from equilibrium? So at equilibrium, it's where a system will fall to, so to speak, degrade to, if left alone. That is, what happens is the system will fall to its lowest energy state until it's prevented from falling into a lower energy state. In terms of entropy, where we oftentimes think about this, a system that's well mixed"
},
{
"end_time": 6011.391,
"index": 234,
"start_time": 5982.312,
"text": " is at its lowest state. It can't be more mixed. Once you pour your cream into your coffee and you stir it well enough, it's well mixed and it's almost impossible to go backwards, to unmix it. That's at its highest energy state. So a room temperature mug would be at equilibrium? So a room temperature mug is now, you might say that its temperature, the liquid is now, the temperature in the liquid is now well mixed with its environment."
},
{
"end_time": 6043.712,
"index": 235,
"start_time": 6013.968,
"text": " So it's at equilibrium, that is, there's no shift, there's no asymmetry between them. The same thing is true of the coffee in your mug that you stirred. What happens, or the ink dropped into water, the stirred and becomes equally distributed, no part is significantly different than any other part. Whereas when the coffee is first hot, or when I first pour in the cream, now there's a separate gradient of difference between the two parts."
},
{
"end_time": 6067.961,
"index": 236,
"start_time": 6044.036,
"text": " Okay, okay. So the hot cup is far from equilibrium with respect to colder room."
},
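In textbook form (standard results, not quoted from the conversation), both examples are entropy-increasing relaxations toward equilibrium: heat $Q$ flowing from the hot cup at $T_{\text{hot}}$ to the room at $T_{\text{room}}$, and cream with mole fractions $x_i$ mixing into coffee.

```latex
\Delta S_{\text{heat flow}} \;=\; Q\left(\frac{1}{T_{\text{room}}} - \frac{1}{T_{\text{hot}}}\right) > 0,
\qquad
\Delta S_{\text{mix}} \;=\; -nR \sum_i x_i \ln x_i > 0 .
```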
{
"end_time": 6091.357,
"index": 237,
"start_time": 6068.08,
"text": " So why is the terminology called far from equilibrium rather than just saying not in equilibrium? If, for example, think about my refrigerator. My refrigerator does work to keep its internal relationship far from equilibrium with respect to the environment. We usually talk about far from equilibrium as things being having really high gradients of difference."
},
{
"end_time": 6122.449,
"index": 238,
"start_time": 6092.517,
"text": " So being away from equilibrium, most of thermodynamics was developed in what we call near equilibrium conditions. Conditions where things are a little bit off from equilibrium, but will tend to fall towards equilibrium. I say fall towards equilibrium. Is that a precise term to be near equilibrium versus far from equilibrium or does it depend on the context? Of course it's near and far are of course not quantitative terms."
},
{
"end_time": 6151.032,
"index": 239,
"start_time": 6123.575,
"text": " So when we say near equilibrium, we're dealing with a whole range of processes in which the gradient of difference is not huge. And there's a reason for that because when the gradient is large, some very unusual things can happen. And this has produced a whole series of studies that have really been going on for more than a century now about unusual features that happen far from equilibrium."
},
{
"end_time": 6179.343,
"index": 240,
"start_time": 6151.834,
"text": " So an example are shockwaves. Shockwaves in which normally differences in pressure even out pretty quickly. Winds and difference in pressure in different parts of the globe, for example. Or differences in pressure when you're inflating a balloon and then letting it sort of out. These things dissipate pretty quickly and easily. But you can push things so far"
},
{
"end_time": 6210.333,
"index": 241,
"start_time": 6180.452,
"text": " that the normal processes don't dissipate that pressure. So breaking the sound barrier is a classic example. Pressure can be moved from one part of the air to another part of the air to equilibrate the pressure pretty quickly and easily. It can do so without much of a problem up to the speed of sound. The sounds that I'm producing now are the result of pressure waves that can be transmitted across"
},
{
"end_time": 6235.06,
"index": 242,
"start_time": 6211.237,
"text": " space in an atmosphere because of the way the pressure waves produce other pressure waves and pass this along. Those sound waves are the result of the fact that this is how fast pressure wave, pressure differentials move through the atmosphere. So, however, you can, because"
},
{
"end_time": 6261.22,
"index": 243,
"start_time": 6236.442,
"text": " The atoms in the air, the molecules in the air, can only bump into each other so hard, so fast, to move this. There are certain speeds above the speed of sound, we say, that now breaks this. So you now can't actually have the pressure dissipate across this boundary, where this pressure is so high that the air can't disperse it fast enough."
},
{
"end_time": 6284.548,
"index": 244,
"start_time": 6261.732,
"text": " The result is when a jet breaks the sound barrier, it creates a break in the atmosphere, so to speak, a boundary in which there's high pressure air on one side, low pressure air on the other side, and it can't be relieved. It's only as the pressure decreases, the pressure differential decreases some distance away from the jet"
},
{
"end_time": 6311.681,
"index": 245,
"start_time": 6285.111,
"text": " that suddenly it can release and it releases in a huge bang, a sonic boom and this is because now finally we've come back to the level at which that pressure can be released and it's a huge differential and it's released rapidly, a loud sound. So what's happened here is that far from equilibrium we've pushed it so far that it can't re-equilibrate right away and so"
},
{
"end_time": 6322.739,
"index": 246,
"start_time": 6312.039,
"text": " beginning at the turn of the century and particularly after the mid-century, last century, studying systems that are very far from equilibrium began to produce all kinds of interesting results."
},
{
"end_time": 6351.834,
"index": 247,
"start_time": 6323.131,
"text": " So we notice, for example, one of the things that on the wings of jets, as they're far from equilibrium, also they produce what amount to whirlpools, vortices. And the vortices are also playing a role in dispersing this energy differential in different ways. And the vortices can actually mess up your flight. And one of the real problems with breaking the sound barrier initially was all the vortices that were produced as you approached this"
},
{
"end_time": 6364.343,
"index": 248,
"start_time": 6353.097,
"text": " critical point, really destabilized everything. And so there's a lot of shattering, you know, in the first flights beyond the sound barrier, they were afraid it was going to rip apart the jet."
},
{
"end_time": 6395.333,
"index": 249,
"start_time": 6365.367,
"text": " It turns out that it's not enough to do that, and we found how to go beyond that. But subsequently, what we discovered is that systems that we now maintain far from equilibrium, we don't allow to equilibrate, can produce all kinds of interesting effects. And one of the things that they do is they produce orderly effects. Now, why do I say orderly? One of the best examples of this that everybody knows is a whirlpool."
},
{
"end_time": 6420.145,
"index": 250,
"start_time": 6395.538,
"text": " We just talked about the vortices at the end of the wings of jets, a whirlpool in a stream. What's happening is that you start the water flowing down the stream, and it comes and runs into, say, a barrier of some kind. Going around the barrier, it produces all kinds of irregularity, lots of unstable behaviors."
},
{
"end_time": 6446.8,
"index": 251,
"start_time": 6420.589,
"text": " The flow is no longer laminar. It's now broken up and you get chaotic flows. But as the water continues, the chaotic flows begin to cancel out and whirlpools form. The whirlpool forms because all the chaotic interactions are, in a sense, contrary to each other. Because they're not all in the same direction, they begin to cancel each other out."
},
{
"end_time": 6467.466,
"index": 252,
"start_time": 6447.5,
"text": " so that over time all the non-symmetric interactions begin to cancel out and those that are non-symmetric are the only ones that are left and that produces the laminar flow that becomes a whirlpool that actually allows water to move through this barrier more efficiently."
},
{
"end_time": 6497.568,
"index": 253,
"start_time": 6468.063,
"text": " And one of the things we know about whirlpools, for example, think about bathtubs being drained and producing a whirlpool. The whirlpool actually, because it aligns the movement of water molecules so that they're not in each other's way, so to speak, that they're minimally out of each other's way, it more effectively, more efficiently empties the bathtub. The bathtub is, of course, now a gradient of difference, the gravity that's pulling the water down. That gradient is being dissipated"
},
{
"end_time": 6507.619,
"index": 254,
"start_time": 6497.875,
"text": " But it spontaneously, because it's such a high gradient, it spontaneously forms into a whirlpool because it more rapidly depletes that gradient."
},
{
"end_time": 6534.77,
"index": 255,
"start_time": 6507.995,
"text": " the whirlpool by virtue of allowing spontaneously non-interacting, non-contradicting movements to be slowly eliminated. Now empties the bathtub faster, so that if you were sitting in the bathtub and then just messing up the whirlpool constantly, and now comparing that to a bathtub in which you didn't do that, messing up the whirlpool would allow it to drain more slowly."
},
{
"end_time": 6560.759,
"index": 256,
"start_time": 6535.401,
"text": " The whirlpool actually, as system is far from equilibrium like this, moves towards equilibrium by regularizing the flow. Those are called dissipative structures or dissipative systems. Is there a difference between those two? What I'm saying is you're dissipating the gradient in this case, right? But if you have a strong gradient, sometimes, and particularly if that gradient is maintained for a long period of time, like a"
},
{
"end_time": 6587.039,
"index": 257,
"start_time": 6561.203,
"text": " full bathtub or a stream that keeps running. Then what happens is you're constantly dissipating that gradient, but the gradient doesn't get dissipated. Under those circumstances, what happens is it tends to fall into irregularity because the regularity reduces what you might call the dissipation length, how far a molecule has to travel before it leaves."
},
{
"end_time": 6612.637,
"index": 258,
"start_time": 6587.398,
"text": " If it travels and it gets bumped back and it travels and gets bumped back by virtue of chaotic interactions, it's going to take longer to leave. And that means the trajectory of its departure, say, from the bathtub is longer. It's going to take longer, take more time. So what happens in systems that are maintained apart from equilibrium is they generally develop into regularities."
},
{
"end_time": 6635.794,
"index": 259,
"start_time": 6614.377,
"text": " Now here's the issue. This is why way back in the 1940s the quantum physicist Erwin Schrödinger said this has got to be relevant to life because life is maintained far from equilibrium, because life is taking in fuel to keep pumping it far from equilibrium."
},
{
"end_time": 6661.852,
"index": 260,
"start_time": 6636.8,
"text": " Yeah, he talked about this as, you know, as neg entropy. You know, if entropy is the increase of this process is maybe they have to eat neg entropy, that food is like neg entropy, pushing it. I think it's a bad term and most people have abandoned it. But the idea I think is right. And that is in order to stay far from equilibrium, because a system that is far from equilibrium is dumping energy as fast as it can."
},
{
"end_time": 6690.572,
"index": 261,
"start_time": 6663.063,
"text": " In order to stay far from equilibrium, you have to keep pumping up that gradient. So the stream has to keep running. Or maybe you have to heat something constantly to keep it far from equilibrium. Or like in your refrigerator, you have to keep a motor running that dumps heat constantly because the refrigerator is spontaneously trying to fall towards equilibrium. But you've got to do work to constantly push it away from equilibrium."
},
{
"end_time": 6718.404,
"index": 262,
"start_time": 6691.527,
"text": " To stay far from equilibrium, you have to do work. So that, of course, is relevant for life. What happened since then, and a lot of systems thinking about life, fell into, I think, this error, thinking that we're just like a whirlpool. We're just like a very complicated whirlpool. We get energy in and we bump it out, and therefore our regularity is like that. It turns out that that's a really useful insight."
},
{
"end_time": 6744.428,
"index": 263,
"start_time": 6719.172,
"text": " Because it says, you know, to be what we are, we have to do this. But it's incorrect. It doesn't quite go far enough. To generate all the regularities that we have requires work. To generate all the gradients, different molecular systems in our body, all different cell types, all the different molecular interactions requires constant work."
},
{
"end_time": 6772.773,
"index": 264,
"start_time": 6744.65,
"text": " And of course, you might say in that respect, it's like an extremely complicated whirlpool. But in fact, that's not quite the case. Because the story with a whirlpool or all far from equilibrium physical systems is that they're in the process of destroying themselves. They're in the process of getting rid of the gradient as fast as possible. They spontaneously regularize"
},
{
"end_time": 6802.108,
"index": 265,
"start_time": 6773.234,
"text": " in the process of dumping that gradient as fast as possible. Life has to be just the opposite of that. Life has to generate these gradients and keep it from undermining itself. Self organizing systems like this are self destructive by nature. Living systems are constantly involved in trying to keep that from happening."
},
{
"end_time": 6827.671,
"index": 266,
"start_time": 6803.183,
"text": " So here's the paradox. A living system has to use self-organizing like processes, dissipative processes, to generate the order it has, and yet it has to use those dissipative processes to keep those dissipative processes from destroying themselves. That's a really interesting paradox. We're self-organizing,"
},
{
"end_time": 6855.759,
"index": 267,
"start_time": 6828.234,
"text": " but we're using our self-organization to keep the self-organization from falling apart. How could that possibly be? Now, when we do it, are you referring to a single person or the entire environment? No, so I'm actually talking about a single organism. I could be talking about a bacterium. Okay. Just as well. A bacterium has to keep itself far from equilibrium by taking in energy, using that energy to do work,"
},
{
"end_time": 6884.462,
"index": 268,
"start_time": 6856.664,
"text": " to maintain itself far from equilibrium in order to produce constraints, in order to produce gradients within itself, constraints on where particular molecules are located and how they can move and so on. The bacterium has to do this. The bacterium has to use the second law of thermodynamics to maintain things far from equilibrium. It has to do work to do something that produces it in the opposite direction."
},
{
"end_time": 6909.377,
"index": 269,
"start_time": 6885.145,
"text": " What I meant was in the case of a bacteria or even a person is we dissipate something, so we're throwing something away, but what we're throwing away is detrimental to us. So what I mean is, okay, so I thought that what you're saying is it needs to get into a state where what we throw away is then somehow useful for us."
},
{
"end_time": 6938.746,
"index": 270,
"start_time": 6909.548,
"text": " Oh, no, I didn't mean that. I'm sorry if I led you in that direction. It's my misunderstanding. No, that's all right. Words, remember? Right, right, right. So the issue is that, of course, we take in energy and then we produce waste, heats, if nothing else, but obviously waste products as well, things that we can't fully utilize or break down. And we do that, we're pumping energy through the system in order to maintain the constraints. In this respect, we're sort of like a refrigerator."
},
{
"end_time": 6968.626,
"index": 271,
"start_time": 6939.48,
"text": " I think it's a useful model because we've got this motor that's constantly keeping the system far from equilibrium by what? By doing a lot of work and generating a lot of heat. That's ironic because this device that's supposed to keep something cool in the process of keeping a certain region cool, it has to produce more heat. In fact, it produces more entropy than would be given up, than would be produced by just allowing the refrigerator to go to equilibrium."
},
{
"end_time": 6998.49,
"index": 272,
"start_time": 6969.514,
"text": " As we turn off the motor, the refrigerator now slowly goes to equilibrium. Entropy is increasing. That amount of entropy is a fixed amount of entropy. But by running the motor, we're now producing not only the reversal of that, keeping that entropy from increasing, but we're generating heat in the process. We're also producing more entropy."
},
{
"end_time": 7020.555,
"index": 273,
"start_time": 6999.582,
"text": " So in fact, that process of keeping something far from equilibrium actually increases the total entropy of the system faster than just allowing it to go to equilibrium. So we have to produce waste. You know, generating our societies, we dump a whole lot of waste into the world, a lot of heat into the world."
},
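The entropy bookkeeping behind this claim, in standard form (a summary in my words, not a quotation): pumping heat $Q_c$ out of the cold interior at temperature $T_c$ takes work $W$ and rejects $Q_c + W$ into the room at $T_h$, so

```latex
\Delta S_{\text{total}} \;=\; \frac{Q_c + W}{T_h} \;-\; \frac{Q_c}{T_c} \;\ge\; 0 ,
```

with equality only for an ideal Carnot machine; any real motor makes the total strictly positive, which is the sense in which holding something far from equilibrium generates entropy faster than simply letting it equilibrate would.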
{
"end_time": 7049.309,
"index": 274,
"start_time": 7021.783,
"text": " It's because it's an inefficient process as we recognize. But now, let me go back to the paradox, because this will help us move to the next step. The paradox is to do what we do. By we, I mean all of us living creatures, including the bacterium, to stay far from the equilibrium. In fact, to stay so far from equilibrium that we can duplicate ourselves. We can make more of this stuff and actually reproduce."
},
{
"end_time": 7074.565,
"index": 275,
"start_time": 7049.804,
"text": " You're doing a really good job of staying far from equilibrium. You're pushing yourself way far from equilibrium. But the question is, all processes that go far from equilibrium are in the processes of destroying themselves. How could it be that organisms don't do that? So Schrodinger, as part of the story, he recognizes this problem. Organisms are far from equilibrium. His suspicion is that"
},
{
"end_time": 7103.353,
"index": 276,
"start_time": 7075.674,
"text": " The thing that we now call DNA, that the genetics is somehow providing information and somehow the information is keeping us from going far from equilibrium. Now, this is the interesting connection because now it's giving us a sense that although information theory, which is itself described in terms of a kind of entropy story, and physical entropy are also interestingly linked together."
},
{
"end_time": 7133.609,
"index": 277,
"start_time": 7104.991,
"text": " Every information system is a physical system. Every change of the information signal that carries information is actually a consequence of work done on that signal. So that there's now information is not just disembodied. Information always is linked, is always grounded. But it's grounded in interesting ways, like we described one, two, three."
},
{
"end_time": 7164.633,
"index": 278,
"start_time": 7134.718,
"text": " That grounding is also different ways of being grounded. You can be grounded in form and you can be grounded in physicality. So now, so let's talk about the problem of life then. The origins of life has to be a transition in which you use, which the living system is using this process that's self-destructive, to keep the self-destructive processes from being self-destructive."
},
{
"end_time": 7194.36,
"index": 279,
"start_time": 7165.691,
"text": " But it has to use them to do this. How is that even possible? This is one of the things that I struggled with and it turns out that it also produces a one, two, three step. The one step being the second law of thermodynamics. That is that entropy tends to increase. But if I have a system in which I'm constantly pushing entropy against itself, possibly doing work on something, I can keep it far from equilibrium."
},
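One standard way to write the bookkeeping behind "pushing entropy against itself" is Prigogine's entropy balance for an open system (a textbook form, not Deacon's own notation): the entropy change splits into internal production, which the second law keeps non-negative, and exchange with the surroundings, which outside driving can hold negative.

```latex
\frac{dS}{dt} \;=\; \frac{d_i S}{dt} + \frac{d_e S}{dt},
\qquad \frac{d_i S}{dt} \;\ge\; 0 ,
```

so a system stays far from equilibrium only while the exchange term is kept sufficiently negative by work done on it, and the entropy of system plus surroundings still increases overall.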
{
"end_time": 7223.114,
"index": 280,
"start_time": 7195.077,
"text": " And that can produce something that the increase of entropy wouldn't produce. That is now some regularity, some new constraints. The question is, how can I keep that from being self-destructive? It turns out the answer is curious, but simple. It has to be simple for two reasons. Number one, if life happens sort of spontaneously by accident,"
},
{
"end_time": 7251.954,
"index": 281,
"start_time": 7224.36,
"text": " Nobody didn't make it happen. It probably had to happen pretty simply. It couldn't be really a lot of complicated stuff happening. But the second thing is that this transition had to be a transition away from what every other part of the universe is doing. That is, tending towards increase of entropy. And if it's far from equilibrium, tending towards even faster. Living things have to prevent that, hold that off."
},
{
"end_time": 7281.544,
"index": 282,
"start_time": 7252.5,
"text": " for a period of time. In terms of life on Earth, we've held it off for 3.8 billion years at least. That's incredible when you think about it. How does it work? What I realized is what has to happen is that self-organizing processes themselves produce regularities. Regularities are gradients or differences or constraints. But there are different kinds of constraints."
},
{
"end_time": 7309.889,
"index": 283,
"start_time": 7282.346,
"text": " One kind of constraint might be the constraint on diffusion in a liquid or diffusion of a liquid in a liquid or molecules in liquid or dissolving. Diffusion, the edges of the cup constrain the diffusion of heat a little bit. The edges of the cup certainly constrain the diffusion of molecules out of the hot water into the atmosphere. That's a kind of constraint"
},
{
"end_time": 7337.449,
"index": 284,
"start_time": 7310.725,
"text": " In this case, I had to produce it by producing a cup, for example. But what I realized is that living processes, if they can be composed of more than one kind of self organizing, far from equilibrium process, might be able to produce constraints on each other. Reciprocal constraints. And the way to think about this is in terms of what we would call boundary conditions."
},
{
"end_time": 7368.541,
"index": 285,
"start_time": 7339.002,
"text": " That is, a constraint is a boundary condition like the edge of a cup is a boundary condition. Okay, so how does that work? Constraints can produce boundary conditions or constraints can be boundary conditions and self-organized far from equilibrium processes can produce constraints and therefore boundary conditions. Are there chemical boundary conditions that can be reciprocal of each other? And the ones that I hit upon turn out to be characteristic of all of life."
},
{
"end_time": 7394.053,
"index": 286,
"start_time": 7370.316,
"text": " They have to do with catalytic processes and diffusion processes. So catalytic processes when one molecule decreases the threshold in which another molecule can either interact with a third molecule or can break down and fall apart into two. We call these catalysts or enzymes. They simply make it possible. It's"
},
{
"end_time": 7422.824,
"index": 287,
"start_time": 7394.462,
"text": " Also possible, and people have begun to focus on this as maybe part of the story, that you could have reciprocal catalytic reactions, or catalyst A breaks down some molecules to produce catalyst B, which breaks down some other molecules, and one of its products is catalyst A. This is a reciprocal catalytic process. Does that explode and become exponential, or are there conditions where that's not? Yes. It will literally be a"
},
{
"end_time": 7442.978,
"index": 288,
"start_time": 7423.2,
"text": " You know, like a nuclear reaction, it's a runaway process. If catalyst A produces catalyst B, which produces catalyst A, now you've got two catalyst A's, which produce two catalyst B's, which produce four catalyst A's. So something has to be used up. But then that sounds to me like it's dissipating itself again. It is totally dissipated, totally self-destructive."
},
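A toy numerical sketch of that runaway-and-stall behaviour in Python (the rate law and constants are invented purely for illustration, not a model from the conversation): A and B grow roughly exponentially while substrate remains, then the whole reaction stalls.

```python
# Toy model, illustrative only: reciprocal catalysis where A and B each
# catalyse production of the other from a shared substrate S. Growth is
# runaway until S is exhausted, after which the process halts.
def simulate(steps=60, dt=0.001, k=1.0):
    A, B, S = 1.0, 1.0, 1000.0
    for _ in range(steps):
        dA = k * B * S * dt          # B catalyses production of A from S
        dB = k * A * S * dt          # A catalyses production of B from S
        used = min(dA + dB, S)       # cannot consume more substrate than exists
        frac = used / (dA + dB) if (dA + dB) > 0 else 0.0
        A += dA * frac
        B += dB * frac
        S -= used
    return A, B, S

print(simulate())  # A and B large, S near zero: the runaway process has stalled
```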
{
"end_time": 7472.108,
"index": 289,
"start_time": 7443.746,
"text": " So when we set these things in motion, they use up the raw materials quickly and the system now A and B diffuse away from each other, they can't affect each other. The process stops and it stops faster than if it was just a single catalyst because it's a runaway process. Now there's another chemical process that's important to this that we find. So first of all we've noticed now when we look at the biochemistry of"
},
{
"end_time": 7499.616,
"index": 290,
"start_time": 7472.483,
"text": " cells of all cells of all types, bacteria as well as us, catalytic cycles are very, very common. We use that because to build the body, we want to have runaway like processes, we want to produce more stuff constantly, and you got to run that with energy. But of course, these are dissipated processes and self destructive, if we let them go alone. What's the major limitation"
},
{
"end_time": 7514.206,
"index": 291,
"start_time": 7500.316,
"text": " on a reciprocal catalytic process. It's that A and B have to be near each other. They have to be in the same vicinity. When they use up enough of their raw materials,"
},
{
"end_time": 7544.07,
"index": 292,
"start_time": 7515.179,
"text": " their diffusion will take them away from each other and A can't produce any more B's and B can't produce any more A's. Diffusion is where it ends. That is the increase in entropy is and you increase more and more A's and B's rapidly and rapidly and decrease the substrate until finally there's not enough substrate left and now the process rides to a halt. It's reached its highest energy position but it's also produced lots more of A's and B's. The process is diffusion."
},
{
"end_time": 7571.834,
"index": 293,
"start_time": 7545.316,
"text": " One of the other things that happens spontaneously in the molecular world is tessellation and crystallization. I'm sorry, I just want to be clear. So the word diffusion here means a physical distance between the two or the using up of the raw materials or not. The using up of the raw materials slows the productions of A's and B's. A's and B's will spontaneously move away from each other. They mix"
},
{
"end_time": 7597.756,
"index": 294,
"start_time": 7572.21,
"text": " The second law of thermodynamics says that if you're not producing more of them here, they'll just tend to mix by not continually putting ink into this place in the water. It won't stay bluer than the rest. Once I stop, the ink particles will diffuse away. Well, the same thing is true with these molecules. They'll spontaneously diffuse away. So as soon as catalytic reaction slows down below a certain point,"
},
{
"end_time": 7623.012,
"index": 295,
"start_time": 7598.302,
"text": " Now the local concentration of A's and B's will dissipate. So it's a dissipative structure driven by, first of all, a far from equilibrium relationship between the substrates and how it's going to end up. But as soon as that is finished, the whole system dissipates and goes back to its highest entropy state, lowest energy state."
},
{
"end_time": 7652.671,
"index": 296,
"start_time": 7625.128,
"text": " The other molecular process that I focused on was crystallization and what we call tessellation. Molecules that have a regular structure oftentimes, as they lose energy, can link up with each other and form crystals. So, for example, a solution that has a lot of sugar molecules in it, for example. Heat it up and you stir and you dissolve the sugar."
},
{
"end_time": 7682.312,
"index": 297,
"start_time": 7653.439,
"text": " Once it cools down and the molecules slow down in their interaction, sugar molecules tend to stick to each other and they orient with respect to their structure. What you get is crystals. Crystals form because the sugar molecules are in a lower energy state when they're all oriented with respect to each other into a lattice-like structure. This is also the way that cell membranes in life and virus"
},
{
"end_time": 7710.162,
"index": 298,
"start_time": 7683.012,
"text": " shells form they form by virtue of molecules that just tend to stick together okay and viruses it's it's typically protein molecules that have a three-dimensional structure that they tend to stick to each other and they form shells typically or sheets okay it turns out that the formation of sheets and containers by this process of what we call self assembly when a system basically"
},
{
"end_time": 7736.852,
"index": 299,
"start_time": 7710.572,
"text": " crystallizes crystallization is another process that spontaneously forms regular structures by giving up energy and as the so think of your your solution of sugar water cooling down as it cools down eventually the rate at which sugar molecules attached to the crystal and then get dissolved off of the crystal becomes equal runs to equilibrium"
},
{
"end_time": 7764.531,
"index": 300,
"start_time": 7737.466,
"text": " The crystal molecule will stop growing at that state. You won't get crystals any larger. So what's happening is that crystallization is also a process that grows to a certain level. It's effectively frozen in this level because they're now stuck to each other. That's a low energy state, but it can't grow any further. So as crystallization grows, it depletes its environment of its raw materials."
},
{
"end_time": 7791.084,
"index": 301,
"start_time": 7765.179,
"text": " But now think about these two processes. They turn out to produce each other's boundary conditions in the following sense. As a crystalline structure grows, in this case, I'm thinking of a polyhedron made up of a bunch of things that sort of grow together into a shell. This is how viruses form, for example. At some point, it will grow until it's used up all of its raw materials."
},
{
"end_time": 7817.159,
"index": 302,
"start_time": 7792.363,
"text": " But what if you keep adding raw materials? Then you can get it to grow all the way until it stops, maybe closes itself. The catalytic process is producing more A's and B's, or A's, B's and C's and D's, and a bunch of side products is breaking down some molecule into side products. In the process it liberates a little bit of energy."
},
{
"end_time": 7840.316,
"index": 303,
"start_time": 7818.063,
"text": " but it's also producing a side product that might be what we call a capsid-like molecule, a molecule that might be one of those molecules that sticks together and forms a sheet or a shell. Now we have a situation in which in a local region, if a reciprocal catalytic process produces enough molecules that can form a shell,"
},
{
"end_time": 7870.742,
"index": 304,
"start_time": 7840.998,
"text": " It will keep producing those every time the shell begins to grow and decrease the concentration of those molecules. New ones are produced into that space. But now if the system grows into a container, a shell, it will tend to contain the very catalysts that made it possible. Because the catalysts are going to be in the same space. The shell is going to grow fastest where there's the most catalytic reaction."
},
{
"end_time": 7887.824,
"index": 305,
"start_time": 7871.903,
"text": " And that means the system is going to capture all the features that are needed to grow itself. But it's also going to stop the catalyst from diffusing away."
},
{
"end_time": 7948.319,
"index": 308,
"start_time": 7920.538,
"text": " When they run out of substrates. So now you've got an inert structure"
},
{
"end_time": 7973.473,
"index": 309,
"start_time": 7949.309,
"text": " sort of like a virus. It's got maybe a protein shell and a bunch of protein catalysts inside of it. But now if it gets broken again and spews out its contents, the process will start over again. And now the catalytic process, which is a self-organizing process that would run down if it wasn't stopped from diffusing,"
},
{
"end_time": 7994.94,
"index": 310,
"start_time": 7974.514,
"text": " and the shell growing process, which would stop growing if it wasn't continually getting new pieces to grow larger and larger and larger. Each of them is producing the boundary conditions of the other. Now you've got two self-organizing processes that both support each other and keep each other from running to the end."
},
{
"end_time": 8017.09,
"index": 311,
"start_time": 7998.422,
"text": " So what we see now is that thermodynamic processes pushed far from equilibrium produce self-organizing processes because it's a thermodynamic process that's pushing something far from equilibrium and then that system is trying to give itself up. It's a relationship between two second"
},
{
"end_time": 8040.009,
"index": 312,
"start_time": 8018.183,
"text": " second law processes, two increase of entropy processes. One is you're driving something far from equilibrium, the other is just trying to go back towards equilibrium. It's a relationship between two second second law processes, two entropy increasing processes. What I've just described is that you could also now have two"
},
{
"end_time": 8068.78,
"index": 313,
"start_time": 8040.503,
"text": " reciprocal self-organizing processes that stabilize each other and promote each other's existence. That's what life has to have. Life has to be able to figure out how is the two self-organizing processes can be linked together so that they each produce the boundary conditions that allows each other to happen but keeps the each other from going towards full equilibrium."
},
{
"end_time": 8094.138,
"index": 314,
"start_time": 8070.452,
"text": " So life has to be this interesting coupling. I've described this in terms of a thermodynamic story which says that we've got to expand thermodynamics. We've got to expand thermodynamics so that we can talk about far from the equilibrium thermodynamics that produce things that produce regularities transiently in order to produce regularities in the world that can be useful."
},
{
"end_time": 8119.821,
"index": 315,
"start_time": 8094.462,
"text": " But because there's different kinds of regularities, different kinds of constraints, it's possible for those processes to become coupled in such a way that they support each other. That they do one thing that is not true in thermodynamics. So in our standard thermodynamic theory, we have sort of the first law and the second law. The first law is that energy and matter can't be created or destroyed."
},
{
"end_time": 8144.48,
"index": 316,
"start_time": 8120.486,
"text": " Depending on if we throw in nuclear physics, that becomes a little more complicated so that mass energy is a quantity that can't be created or destroyed. The second law says yes, but entropy will increase and order will, you know, regularity and redundancy and gradients will be decreased. What we have in life is something interesting."
},
{
"end_time": 8173.507,
"index": 317,
"start_time": 8145.367,
"text": " Because order can be created or destroyed. Constraints can be created or destroyed. Regularities can be created or destroyed. But physical stuff can't be created or destroyed. It can just be transformed. When I say that a life of a bacterium or of a person can be created or destroyed, it's not because I'm saying that its material can be created or destroyed. Its organization is what's there."
},
{
"end_time": 8202.824,
"index": 318,
"start_time": 8174.019,
"text": " You and I are not the same physical thing that we were five years ago. The material stuff is transit passed through. We're like a whirlpool in that sense. But we're one that keeps ourselves going. It keeps by virtue of this coupling. But now what you see is that by virtue of coupling these kinds of far from equilibrium systems in this way, this is a system that protects its organization."
},
{
"end_time": 8231.442,
"index": 319,
"start_time": 8203.507,
"text": " that reconstructs its organization, that repairs its organization if damaged. Now, what we're saying now is the existence of that organization is being maintained. Where matter and energy have this kind of, it doesn't have to do anything to stay in existence. Organization has to constantly do work to stay in existence."
},
{
"end_time": 8248.2,
"index": 320,
"start_time": 8232.261,
"text": " So what this says is that thermodynamics can be talked about in three ways, and I've"
},
{
"end_time": 8274.514,
"index": 321,
"start_time": 8248.507,
"text": " I've coined three terms, probably won't survive history, but it helps me think about them. The first one is I call it homeodynamics. The second law of thermodynamics is about making things more homogeneous over time. When we talk about equilibrium, things are getting more homogeneous. So I want to call this homeodynamics. But when things are far from equilibrium and producing regularities,"
},
{
"end_time": 8297.432,
"index": 322,
"start_time": 8275.06,
"text": " We want to say that it's producing forms. The Greek term for form, of course, is morphe. So I want to call these morphodynamic processes. They're processes that produce dynamical forms. A whirlpool is, for example, a dynamical form. Its material is passing through it all the time, but it maintains its form. It's a morphodynamic process."
},
{
"end_time": 8322.91,
"index": 323,
"start_time": 8297.858,
"text": " The two processes I just described, the two molecular processes, are also morphodynamic processes. One produces a structure. One produces a higher and higher concentration of catalysts in a local region. They produce constraints. They produce gradients. But when they're coupled to each other, they also protect each other from self-elimination."
},
{
"end_time": 8350.503,
"index": 324,
"start_time": 8323.746,
"text": " and they protect their cooperative relationship from self-elimination. This is what life has to do. I call this teleodynamics because living systems now don't just fall towards disorder or go towards highest entropy, they go towards a target. Their target is self-maintenance in this case. That target has a specific form"
},
{
"end_time": 8373.507,
"index": 325,
"start_time": 8350.981,
"text": " It's not just general. It's a specific organization that's being maintained and passed on. So I use teleodynamics to talk about teleology or end directedness. Things that have a specific end also can go wrong. There can be right or wrong. There can be good chemistry or bad chemistry. Self organizing processes don't have this."
},
{
"end_time": 8402.21,
"index": 326,
"start_time": 8374.633,
"text": " Processes of simple entropy increase, simple chemistry doesn't have this. But if you have a process that maintains the existence of its own organization, there now can be things that go right or wrong. There are things of value for it and things that are dangerous. So this is, in a sense, the introduction of value into the world. So does sentience come about at the introduction of values or is that separate? Yes."
},
{
"end_time": 8429.804,
"index": 327,
"start_time": 8402.961,
"text": " So another way to say it is that now we also have information about something. Because the structure of this system, the structure of these devices, I call these little units autogenic viruses. It's like a virus, it has a virus shell, but it's not parasitic. It generates itself. So it's a structure that's got"
},
{
"end_time": 8459.548,
"index": 328,
"start_time": 8430.282,
"text": " a protein shell like a virus does, and inside of it it's got some enzymes. Some viruses have even catalysts within them, though mostly they have RNA and DNA. And that's because they're parasitic, they use the genetic systems of cells that are their hosts. But the argument I'm making is that there could be something that's maybe a little less than life as we know it, that has no RNA and DNA. But it can reproduce itself,"
},
{
"end_time": 8486.869,
"index": 329,
"start_time": 8460.265,
"text": " It can repair itself if damaged, and therefore it will be able to evolve. It's an autogenic virus because it's a virus that is not parasitic. But these are systems that now will have some environments that are useful for it, some environments that are not useful. There are some chemicals in the environment that will be detrimental to the process."
},
{
"end_time": 8515.265,
"index": 330,
"start_time": 8487.227,
"text": " So it has value, but it also in effect is carrying information about itself. So it's got in its structure, even after that structure is damaged and broken open, in the co-localization of all these constraints, constraints embodied in chemistry now, information about what's necessary"
},
{
"end_time": 8541.169,
"index": 331,
"start_time": 8515.725,
"text": " to recapitulate that same organization so that even if this system gets broken open and catalysis begins again and shell formation begins to form around the catalysts during that process even though that information was not information like a DNA molecule or something like that it was distributed in this distributed group of constraints"
},
{
"end_time": 8572.654,
"index": 332,
"start_time": 8542.739,
"text": " So therefore, those constraints collectively are about themselves, about their unity, about their coherence, about their integration, how they need each other. It's carrying that information forward so that it can be embodied in new chemistry. So the thing gets broken open and dispersed. New chemistry begins to take place. New catalysis happens. New shell formation happens."
},
{
"end_time": 8599.991,
"index": 333,
"start_time": 8573.183,
"text": " Over three or four cycles like this, probably all of the chemistry is replaced. Just like in reproduction in organisms. But if all of the chemistry is replaced, then the only thing that's got any continuity are the constraints. The constraints are of course these redundant relationships, these correlation relationships and likeness relationships."
},
{
"end_time": 8627.21,
"index": 334,
"start_time": 8601.084,
"text": " between these two processes. Notice the similarity to everything I've just been saying before this. Why did you say that there's a redundant relationship? Because each of these processes has to produce multiple times. So to produce the shell, you have to produce more and more of these molecules. And also, now thinking about the relationship, for example, between RNA and DNA,"
},
{
"end_time": 8655.708,
"index": 335,
"start_time": 8627.892,
"text": " There are mirror images of each other to some extent. They're complements to each other. They're like hand and glove to each other. But the two processes that I've described are each the boundary conditions for each other. They're environments for each other. In that respect, there's a likeness in the same way that DNA and RNA are in likeness relationship to each other."
},
{
"end_time": 8674.462,
"index": 336,
"start_time": 8656.271,
"text": " But they're also then they have to be correlated physically linked in space. So they have both features. They're linked in space because they share a common molecule. The molecule that's produced by the catalysis that becomes a shell molecule."
},
{
"end_time": 8704.104,
"index": 337,
"start_time": 8675.589,
"text": " So you've got the likeness relationship, the complementary relationship between the constraints that they each produce. One constraints diffusion, the other constraints the decrease in diffusion of the molecules that produce the shell. As they produce complementary and therefore iconic relationships to each other. And yet they have to be physically bound to each other by virtue of sharing this common molecule."
},
{
"end_time": 8734.582,
"index": 338,
"start_time": 8704.787,
"text": " that produces a structure in which the constraints, which are in a sense formal features, now can reproduce themselves in different material. Displacement. So what's happened to generate life is the same relationship in which you need iconic relationships linked to"
},
{
"end_time": 8757.858,
"index": 339,
"start_time": 8735.043,
"text": " these indexical-like relationships or covariance relationships linked to correlation relationships in such a way that they produce this capacity for displacement. Now you can have information that gets passed on transmitted to another kind of molecule and maintained. My argument is that"
},
{
"end_time": 8787.244,
"index": 340,
"start_time": 8758.729,
"text": " that DNA and RNA are late developments that begin to remember this kind of stuff after life has begun, produce a better kind of memory and a better kind of construction process, but that the same logic is involved all the way down to the origins of life and all the way up to the way brains work. There are constraints in the same sense that mathematics has constraints, but there"
},
{
"end_time": 8814.855,
"index": 341,
"start_time": 8787.807,
"text": " constraints of semiosis, constraints of aboutness. You can't have aboutness without certain things being constrained in certain ways. This is a quite general phenomenon. It's general enough that we can now begin to say, maybe this will lead us to new ways to explore the origins of life problem. And maybe it will lead us to new ways to talk about the consciousness problem."
},
{
"end_time": 8835.981,
"index": 342,
"start_time": 8816.425,
"text": " And that's where you want to go, I know, when you ask me about sentience. Because what I'm now saying is that even this simple autogenic virus is responsive to its environment in a differential way. Some things it will take in and other things it will not take in."
},
{
"end_time": 8864.667,
"index": 343,
"start_time": 8837.295,
"text": " Even in the simplest case, if the environment is full of random molecules, but just enough of the molecules to make catalysis and self-assembly possible, the system will close and capture lots of diverse molecules. But only those that when it's broken open will produce more catalysts and more shell molecules will increase in numbers."
},
{
"end_time": 8888.609,
"index": 344,
"start_time": 8865.367,
"text": " And so over time, the irrelevant molecules will tend to be excluded. The system is spontaneously even in a mindless sort of way, organizing itself with its environment, taking in things which are useful and expelling things which are not useful. So what's the difference between sentience and consciousness? Ah, a couple of steps."
},
{
"end_time": 8902.671,
"index": 345,
"start_time": 8889.36,
"text": " So first of all, what I would say is I call this vegetative sentience with the analogy to plants. Plants are responsive to their environment."
},
{
"end_time": 8923.609,
"index": 346,
"start_time": 8903.865,
"text": " The roots of plants follow certain nutrients and certain water concentrations in the soil. Their sentience, they change their growth pattern with respect to that. With respect to sunlight, plants move out of the way of shadows and into the sunlight to maximize the amount of solar radiation they're getting."
},
{
"end_time": 8949.684,
"index": 347,
"start_time": 8924.087,
"text": " Plants respond to their environment. We don't think about it because they don't move a lot. But if we speed up the camera, we showed in fast motion, we can see how sentient they actually are, how they're responsive to their environment. And some parasitic plants are even more insidious in their responsivity to their environment. I think about creeping vines that crawl up trees and begin to sap"
},
{
"end_time": 8977.295,
"index": 348,
"start_time": 8949.991,
"text": " the nutrients out of trees and so on. They're not just sensitive to light, but they're sensitive to chemistry, to the shape of the tree and so on. The particular species... So they move, it's just slower. They move and they're slower. But here's the issue. Their relationship to what they represent, their aboutness, is directly linked to their chemistry. So it's the chemistry of the tree"
},
{
"end_time": 9004.343,
"index": 349,
"start_time": 8977.841,
"text": " as the branches are moving towards the sunlight that's causing some cells to expand and other cells to shrink. It causes that branch to sort of bend in one direction or another. That's a direct chemical reaction, but notice that it doesn't have an independent relationship to what's causing that relationship. It's a direct relationship to sunlight or to water or to cold or whatever."
},
{
"end_time": 9033.609,
"index": 350,
"start_time": 9005.503,
"text": " The bacterium also has a direct chemical relationship to its environment. What would an independent relationship look like? You and I. It's just symbolic? Is there something that's not okay? No, not just symbolic. Every animal with a brain. What a brain does is it now doesn't just immediately react to the environment. It reacts to structures in the environment, to forms in the environment."
},
{
"end_time": 9062.261,
"index": 351,
"start_time": 9034.053,
"text": " and then links those forms to useful processes in the body. So one of the things you want to find out if you're a bacterium, for example, I want to have some sensor that says this is good to eat, that's bad to eat, go towards the good to eat stuff. Okay. But if you have a brain or with sensors, you can now say, okay,"
},
{
"end_time": 9090.538,
"index": 352,
"start_time": 9062.619,
"text": " I don't just need to know whether this molecule fits with these other molecules that I need, but I need to know things of that shape have a lot of that molecule in them. So brains are doing this displacement move from the physiology. So brains are now representing what the physiology needs, but in a totally different way."
},
{
"end_time": 9119.548,
"index": 353,
"start_time": 9090.947,
"text": " It's not in terms of the chemistry. It's in terms of some other feature that is correlated with the chemistry, maybe. But now we need a totally different system to do this analysis. It can't be done by just the chemistry. It can't be done by just the temperature gradient or the heat gradient or the light gradient. What you want to do is you want a system that can now take indirect information from the world."
},
{
"end_time": 9145.486,
"index": 354,
"start_time": 9120.845,
"text": " and therefore use it to get at the direct stuff that you need. So you need a whole system that in effect is sort of like what we've just been talking about, but now at the level of whole organisms. So what you need is to have some displaced way of getting information that's useful, that's grounded down back to your physiological needs, the chemistry of your body."
},
{
"end_time": 9175.333,
"index": 355,
"start_time": 9146.476,
"text": " So how does that happen? Well, you got to have the iconic and indexical jump that we just talked about, the one, two, three step. But notice there's something else that's interesting here. In one sense, even the autogenic virus has a kind of self. It represents itself and it represents itself again and again in different materials. Why do you think that is? What we say about every living thing is they have a self."
},
{
"end_time": 9205.572,
"index": 356,
"start_time": 9175.879,
"text": " It self-reproduces, it self-repares, it self-modifies. The term self we use all the time when we talk about living things. And when things die, they don't have a self anymore. You know, all their chemistry don't use that term anymore. Self is about this dynamics, and yet self is this fuzzy term. I think the problem with consciousness is that we're trying to solve the problem of self from this very high level of brains."
},
{
"end_time": 9235.742,
"index": 357,
"start_time": 9206.084,
"text": " Often times from the high level of human brains, when the problem of self goes all the way down to the origins of life. The problem of self, self was first created in this very simple form. And if self was first created in that form, until we understand how self comes into the world in the first place, we're certainly not going to understand how self works in a complex brain."
},
{
"end_time": 9263.012,
"index": 358,
"start_time": 9237.602,
"text": " There's too much involved. It's displaced away from what I call vegetative sentience. It's now a separate kind of sentience. I like to describe this as subjective sentience. I think that you and I have an additional one, symbolic sentience, that makes us aware of things like infinity and galaxies and truth and so on."
},
{
"end_time": 9290.23,
"index": 359,
"start_time": 9263.422,
"text": " that my dog and my cat are uninterested in, have no way of even representing to themselves. We're sentients of that, we feel it, we recognize it, we can interpret it, we can talk about it, we can interact with it. Creatures with brains that don't do symbols can't do that. So there's at least three levels of sentience we want to talk about here. And the first question is, so how did the first level of sentience develop?"
},
{
"end_time": 9318.387,
"index": 360,
"start_time": 9290.657,
"text": " That's what I was just on. Okay, that's an interesting question. We're going to get there. So my first question was, well, how far does that go? So there was you said at least three. So then could you even imagine a four? And if you did, would that not just be incorporated in the three because you're a human and you imagine that someone else could imagine it? Like, how far does that go? What happens? What's happening now in the world with you and I talking to each other? First of all, symbols are not just in my head."
},
{
"end_time": 9348.114,
"index": 361,
"start_time": 9319.889,
"text": " It's necessarily a distributed form of communication. The representation that we're using now to communicate requires not just one. If it's just me in the world, I don't need symbols. And in fact, if it's just me in the world, I'll never develop symbols and I'll never think about these other things. These will never be relevant. But symbols are not just in a human brain."
},
{
"end_time": 9379.036,
"index": 362,
"start_time": 9349.053,
"text": " Symbols are something that's distributed around the world. It's a distributed form of cognition. What that means is that I'm not just part of me and my local group, but there's a little bit of Aristotle in me. Not just here and now. And the fact that I can now communicate to people in Australia at a moment's notice."
},
{
"end_time": 9409.753,
"index": 363,
"start_time": 9379.787,
"text": " has changed that in a radical way. In one sense, symbolic communication has created the platform for a much higher level form of communication. We're beginning to feel the experience of what it's like to have this higher order distributed communication that we call the web distort our politics, our beliefs, our desires. The dynamics of this larger symbolic network"
},
{
"end_time": 9437.466,
"index": 364,
"start_time": 9410.555,
"text": " are really becoming problematic because there's certain iconic and indexical relationships taking place. Think about Facebook and how it amplifies certain kinds of likes. Not a surprise that they use that term. I like this. It resonates with me. Why? Because of my particular emotional state. I'll pass that on to somebody else. Now they have that icon as well."
},
{
"end_time": 9465.794,
"index": 365,
"start_time": 9438.729,
"text": " So one of the things that happens is that human emotions that tend to have their own runaway process socially, this kind of mob technology that we're beginning to create, has its own dynamics. Its own dynamics because it's not in me, it's not in you, and it's not in anybody's control anymore. One of the reasons that this is a problem is we don't understand how it does this."
},
{
"end_time": 9496.613,
"index": 366,
"start_time": 9466.988,
"text": " We're just beginning to understand how it has these troublesome consequences, why it's destroying a democracy or two. And that's because the logic of how they set up the algorithm does something in terms of symbols and even pictures, even now icons and indices, that also has a higher order iconic and indexical and symbolic like structure to it. It's got its own one, two, three repeat."
},
{
"end_time": 9525.52,
"index": 367,
"start_time": 9497.381,
"text": " and the reciprocity, the recursion of it. That is, it changes the physicality that made it possible. Self undermining in many respects. And it may well be that the runaway process will be so self undermining that it'll destroy stability of the of the world community. We don't know whether it will or not. So actually understanding this logic may not just be relevant to understanding the origins of life."
},
{
"end_time": 9555.06,
"index": 368,
"start_time": 9525.811,
"text": " may not just be relevant to understanding what a mind is, may not just be relevant to understanding what consciousness is. I'll come back to that when we come back to the sentience story. But also, is there a kind of higher order sentience already at work that we're a part of, that we're unaware of, that we can't experience, but is having an effect on us and our experience and our very physicality? That's, I think, the interesting and worrying question."
},
{
"end_time": 9585.503,
"index": 369,
"start_time": 9555.52,
"text": " behind all of this. To get at that, let's go back to this question of sentience. So the question of sentience is one that gets us in trouble when we look at computing and we think, can AI be sentient? Can it know something? What I like to tell people about computing is that actually your automobile engine is a potential computer. How do you make it into a computer?"
},
{
"end_time": 9609.309,
"index": 370,
"start_time": 9585.947,
"text": " You look at all of its various states and you assign some interesting symbolic value, a code, to one of its states. And so each of its states has a code value. When you run the machine, you run your car engine from state A to state B to state C to state D, you're running it from symbol A"
},
{
"end_time": 9639.735,
"index": 371,
"start_time": 9610.247,
"text": " What we're doing with computing is we're finding an isomorphism between a mechanical operation and a symbolic operation, a manipulation of tokens, a manipulation of signs. The car engine doesn't know that it's just an addition any more than the calculator knows that it's done addition. I've done the addition, but I've recognized that there's a regularity"
},
{
"end_time": 9662.568,
"index": 372,
"start_time": 9640.06,
"text": " a syntax to it, a set of constraints, and those constraints can be mirrored by a physical process that's so constrained, similarly constrained. The actual aboutness, the interpretation, the awareness is not in running the algorithm,"
},
{
"end_time": 9691.834,
"index": 373,
"start_time": 9663.865,
"text": " The algorithm is just a variant of a machine. This is why we call computers virtual machines, effectively. But now the difference, let's look at them even in the simple example I just described, this autogenic virus. It's information, the forms that it's passing, the constraints it's passing from one step to another, embodied in the physicality of the changes."
},
{
"end_time": 9717.995,
"index": 374,
"start_time": 9693.302,
"text": " It's embodied because it's embodied in chemistry, a process that's embodied in chemistry. Because it's about itself, because it's carrying information about how to repair itself if damaged, or how to reproduce itself in different material. It's also about its physicality."
},
{
"end_time": 9749.189,
"index": 375,
"start_time": 9720.111,
"text": " That is, the information and the physicality are not independent. The form can be passed on. The form can go from generation to generation in this process with all the material changed. But the material has to be kept in that form. But you couldn't have passed on those constraints if it wasn't always embodied in some form, some material. The information had to be physical"
},
{
"end_time": 9778.422,
"index": 376,
"start_time": 9751.118,
"text": " But in the case of this system, in the case of all life, it's physical embodiment of information about its own physicality. That's an added twist. Now, remember when I said that displacement allows recursion? This is an interesting kind of recursion. This is a recursion in which a formal relationship"
},
{
"end_time": 9804.991,
"index": 377,
"start_time": 9779.718,
"text": " has a recursive effect on its physical embodiment and therefore its persistence or its very existence. If it goes, if it dies, if it fails to reproduce, if it fails to repair itself, it goes out of existence. So this is basically a living system or even a system as simple as this autogenic virus."
},
{
"end_time": 9830.674,
"index": 378,
"start_time": 9806.476,
"text": " is information about its own existence. It's information is about existence. When we talk about information in philosophy, we use the phrase epistemology. That's about what we know, what things are about, what things are known. We talk about what exists in the world. We use the term ontology. So they're tied together? Epistemology and ontology?"
},
{
"end_time": 9859.275,
"index": 379,
"start_time": 9831.323,
"text": " Yeah, epistemology and ontology are sort of the two major divisions in metaphysics, talking about how things are known or what knowing means and what is. Well, here we're talking about what knowing is about. The aboutness is about its own existence. It's the epistemology of its own ontology in philosophical terms. That's what life is about."
},
{
"end_time": 9889.087,
"index": 380,
"start_time": 9860.35,
"text": " So now let's jump up. Now, I want to say I've skipped a lot of levels because we're going to go up to brains and minds real quickly. But let's go back to the classic problem, the mind-body problem that Descartes introduces. He divides the world into the res cogitons, the thinking stuff, and the res extensa, the extended physical stuff in the world, the physical world and the mental world, the mind world."
},
{
"end_time": 9913.797,
"index": 381,
"start_time": 9889.684,
"text": " and we might say the consciousness world. And the phrase that everybody takes from Descartes, kajito ergo sum, I think therefore I am. That's great for you and I. Descartes assumed that thinking and physicality were separate, and that the only way they could interact, if they could, was by some special organ like the pineal body."
},
{
"end_time": 9941.937,
"index": 382,
"start_time": 9914.087,
"text": " which of course can't be true, for a whole variety of reasons. But basically that's because cognition, thought, is not physical stuff. The aboutness, the very concepts that I'm trying to get across here, are not physical stuff, they're this abstract thing. So for Descartes, the abstractness of thought, the abstractness of ideas, was something really fundamentally different than material stuff, energy,"
},
{
"end_time": 9972.295,
"index": 383,
"start_time": 9944.377,
"text": " Well, here's the issue. He should have said, not I think, therefore I am, but I feel, therefore I'm real. Let's go back now to the simple pre-organism I described. It's physically damaged. Its physicality has been disturbed. Now, the informational-like processes, the constraints that are there,"
},
{
"end_time": 10000.026,
"index": 384,
"start_time": 9972.841,
"text": " begin to repair that specifically. In a sense, it's the physicality of it that activated the informational side of it to prepare the physicality of it. Think about you and I. Big jump here. Why am I sensitive to certain things and not other things? Because they affect my materiality."
},
{
"end_time": 10031.186,
"index": 385,
"start_time": 10001.596,
"text": " They affect my physicality. There may be risky or helpful. The things that catch our attention are the things that either hurt, surprise us, feel good, things that are really relevant to how my body works, how I feel. I think that we've misunderstood cognition. Cognition is tweaking feeling."
},
{
"end_time": 10058.131,
"index": 386,
"start_time": 10032.619,
"text": " and feeling is about our own physicality. It's this entanglement of the informational part of our lives with the physicality part of our life, the inseparability of the two that Descartes thought were separable and couldn't come together again. That means that feeling is a necessary part of representing the world."
},
{
"end_time": 10088.916,
"index": 387,
"start_time": 10060.776,
"text": " and why representing the world is always evolving towards doing it right, doing it better, getting truth, getting reality. If you think that language and thought are just arbitrary, a lot of postmodernism sort of went this way, using language as a kind of model. Maybe all of our physical theories, our theories of physics are just sort of weird thoughts. It doesn't have anything to do with reality."
},
{
"end_time": 10117.824,
"index": 388,
"start_time": 10089.258,
"text": " No, in fact just the opposite. Because of the necessary entanglement of semiosis, that is producing aboutness, that it's always normative, it's always value-laden, and it always feels like something. What we're doing feels like something. It's not just that there's information here, but where we're surprised or confused or upset or worried"
},
{
"end_time": 10148.234,
"index": 389,
"start_time": 10119.292,
"text": " It affects our thinking. Starting a business can seem like a daunting task, unless you have a partner like Shopify. They have the tools you need to start and grow your business. From designing a website to marketing to selling and beyond, Shopify can help with everything you need. There's a reason millions of companies like Mattel, Heinz, and Allbirds continue to trust and use them. With Shopify on your side, turn your big business idea into... Sign up for your $1 per month trial at Shopify.com slash special offer."
},
{
"end_time": 10179.701,
"index": 390,
"start_time": 10150.52,
"text": " So let me ask you this, just like off air, you reference that universal grammar should be capital U, universal grammar, because it's right at the fundament when it comes to physics and chemistry is on one end the post modernist and perhaps post structuralist, though I haven't studied them would say that it's subjective, especially morality is subjective."
},
{
"end_time": 10209.753,
"index": 391,
"start_time": 10180.333,
"text": " Would you say that not only is it objective in a sense, it's capital O objective? Yes. It's O objective in the sense of the relation to a self. If there wasn't a self where something could be damaged or lost, if there wasn't some value there, then there could be no value to things in the world. It's objective"
},
{
"end_time": 10240.282,
"index": 392,
"start_time": 10210.776,
"text": " and subjective in that sense. But by subjective, we want to expand to even the simplest kind of self that I've described. It's not subjective in the way you and I are subjective, but there's a subject there. There's something that's subject to the whims of the world that will benefit or be harmed by what's going on in the world. So in that respect, it's not there in chemistry and physics."
},
{
"end_time": 10271.169,
"index": 393,
"start_time": 10241.647,
"text": " Chemical and physical processes that are not alive don't have this feature. But in certain organizations of chemical physical processes, it can emerge. There was a time before selves, but now there are selves. There was a time before normative features in the cosmos, but now there are normative features, all with respect to selves. So in one sense,"
},
{
"end_time": 10295.043,
"index": 394,
"start_time": 10271.51,
"text": " The ontology epistemology story has fallen prey to the Cartesian story, this dualism, that we're never going to get them together. My point is, no, they actually are entangled. They can't be pulled apart. However, you can have ontology without epistemology."
},
{
"end_time": 10324.155,
"index": 395,
"start_time": 10296.067,
"text": " You can have chemistry and physics. You can have a big bang. You can have an early universe without any selves. But as soon as there are selves, there's aboutness. There's reference, there's meaning, and there's value. And selves can become more complex. We happen to be an example of a very complex form of self in which there are selves within selves to some extent. Each of my cells is a self and has some self. My"
},
{
"end_time": 10352.483,
"index": 396,
"start_time": 10325.213,
"text": " Physiology, irrespective of my consciousness, has a self. In fact, in a coma with serious brain damage, if enough of my nervous system is working, my physiology is still going. In fact, after I pass away and die, my fingers, nails will keep growing, my hair will keep growing, because the cells are still alive. But the larger system is no longer there to support them, and eventually it will stop."
},
{
"end_time": 10384.172,
"index": 397,
"start_time": 10354.497,
"text": " Brains and minds are cells within itself in that respect. Part of my environment is my body. And my body has an environment outside of my body. So it's this nesting of cells. But now think about you and I go back to that story about us being part of a global symbolic system linked together by language and computing. Are we like cells"
},
{
"end_time": 10414.241,
"index": 398,
"start_time": 10384.565,
"text": " within that system. Would we even know that system? Would we know that it's influencing us, that it's modifying us the way our nervous system modifies ourselves by changing the flow of hormones and stuff like that? I think understanding these principles, which currently are just anathema to material science for some reason,"
},
{
"end_time": 10444.684,
"index": 399,
"start_time": 10415.725,
"text": " The current science either wants to say, no, it's always already there. It's always just panpsychism. Every molecule has a little psychological self-something to it, and therefore it's always already there. That's a non-explanation. To say that it doesn't exist and it's all an illusion is also a non-explanation. Because who's having that illusion? And the illusion is normative. It's a kind of value."
},
{
"end_time": 10473.063,
"index": 400,
"start_time": 10445.111,
"text": " Illusions are not real. But you gotta have something that can assess reality and non-reality. So that the consciousness as illusion story is also a reductio ad absurdum. The consciousness is already there in every atom, every electron, every quantum event, just simply postulates it into existence. Doesn't help us explain why a whole lot of that stuff together in a stream"
},
{
"end_time": 10499.053,
"index": 401,
"start_time": 10474.036,
"text": " And a small amount of it together in my head produced very, very different kinds of phenomena. To just say that, that everything has that, doesn't it? What it basically says is no, well, even if that's true, the whole explanation of the difference is in how they're differently organized. That's where all the explanation is going to be. It's not in that it's already always, always there. Yes, yes. So we have to have an explanation."
},
{
"end_time": 10526.135,
"index": 402,
"start_time": 10499.599,
"text": " And what I've been trying to do here is to say that you can go from basic thermodynamics, build to life. And as soon as you do that, you have the semiotic capacity, you have this informational aboutness capacity. And with it, you have purposes, you have aims, you have ends, you have self, you have aboutness, you have value."
},
{
"end_time": 10553.558,
"index": 403,
"start_time": 10527.585,
"text": " And yet, all of those things can become much, much more complex as you build higher order kinds of self. So to wrap this up in, let's say five minutes, because it's like a three hour behemoth podcast, and I could keep going for five more hours, and I think you can as well. But let's wrap it up in approximately five minutes with you tying together some of the threads that we've referenced. But"
},
{
"end_time": 10580.247,
"index": 404,
"start_time": 10553.814,
"text": " Talk about absence, so your notion of absence, which I don't know if it's related to the notions of nothingness of the East, but if it is, then you can talk about that. It's not. Okay, great. Okay. So talk about absence and its relationship to purpose and the other themes. Well, a purpose, of course, in our common sense, understanding of it is, is that there's something I want to happen."
},
{
"end_time": 10610.674,
"index": 405,
"start_time": 10581.254,
"text": " And I have a representation of it, typically, a mental image of what I want to happen. And I use that to organize my behavior to make it come into existence. That state of things is currently absent. So before we got on this podcast, we had an idea of how we wanted it to happen and what was necessary to get there. That was currently not in existence. But a representation of that which was not in existence"
},
{
"end_time": 10642.244,
"index": 406,
"start_time": 10612.619,
"text": " was the target of this representation, was used to constrain my behaviors, your behaviors, getting our computers together, looking at our watches, doing all the things that were necessary to make it come into existence. So in that simple sense, my purpose is something present, the representation in my mind, with respect to something absent that was represented, but is not present. So something present"
},
{
"end_time": 10671.886,
"index": 407,
"start_time": 10643.217,
"text": " linked by my actions to something that's currently absent, which will then become present. So that's a simple way to talk about just sort of the common sense notion of purpose. It's asymmetric in time. In fact, usually with respect to you and I, we have to do work to make it happen. Because it's oftentimes that the world won't go that way. The spontaneous way the world changes,"
},
{
"end_time": 10701.22,
"index": 408,
"start_time": 10672.875,
"text": " is not towards that kind of thing I want to happen. So usually I have to do work. This is true, of course, even for simple microorganisms. They have a representation in a much, much simpler sense, of course. Let's say, take again, a bacterium searching for food as a representation captured in the form of its receptors that picks up certain kinds of molecules"
},
{
"end_time": 10731.34,
"index": 409,
"start_time": 10701.988,
"text": " and then it has a way to assess gradients of these molecules and it uses that to basically accomplish something which is to increase the concentration of those molecules and therefore to increase the food it needs to maintain itself far from equilibrium. So in that sense there is a much simpler sense of representation and purpose and absence. That is there's an absence of enough food"
},
{
"end_time": 10759.957,
"index": 410,
"start_time": 10731.852,
"text": " this case, and an action based upon a representation, though just a chemical representation in this case, that allows that thing that's absent being in an environment with low sugar to being in an environment with higher sugar concentration. So in that respect, purposive-like activity or what we might call teleological activity, end-directed activity,"
},
{
"end_time": 10785.896,
"index": 411,
"start_time": 10760.469,
"text": " is characteristic of this relationship of something present, some vehicle, some sign vehicle, some energetic or material something, even if it's just a process like a cognitive active neurological process that's continually recycling to keep this image in mind while I'm engaging in some activity. That's a physical something that's representing something that it's not."
},
{
"end_time": 10816.152,
"index": 412,
"start_time": 10786.817,
"text": " And this is that problem I began this discussion with, this displacement issue. How is something present linked to something that's not present, therefore it's absent? Not nothingness, but something absent not now, not here. Now, is this related to potentiality versus actuality? Like that is to say that the absence is... In that sense, yes. These are potential things, but potential things are not the only things that are absent. There's lots of impossible things that are absent as well."
},
{
"end_time": 10842.756,
"index": 413,
"start_time": 10816.544,
"text": " Yes. And we can think about those things, because we've got this displaced way of representing things. But so here's the issue. What we want to talk about is how in order for something to be alive, to have a self, there has to be self and non self. There has to be benefit and harm. Good for bad for truth, falsity."
},
{
"end_time": 10872.5,
"index": 414,
"start_time": 10843.882,
"text": " Those things have to be there to be alive. And that's because what's being maintained, what's being brought into existence is the process of creating existence itself. So what's being maintained in existence? These forms, these constraints, these signs, this information is being maintained in existence. It always has to be embodied physically."
},
{
"end_time": 10902.722,
"index": 415,
"start_time": 10873.882,
"text": " and therefore only certain kinds of physical processes can embody it and then use it to interpret other physical processes that are absent. So in this respect everything about what I think, my thoughts are present in my head but what they're about is not in my head and it's not the neural activity going on and it's not something necessarily even in the world. It's absent in a fairly abstract sense."
},
{
"end_time": 10931.51,
"index": 416,
"start_time": 10903.848,
"text": " But it's because I've built up, because evolution and my own development has built up an interpretive system that can use these steps to allow fully disconnected things, unrelated things to represent each other. That it's possible for something absent, something even impossible to be represented."
},
{
"end_time": 10963.029,
"index": 417,
"start_time": 10933.046,
"text": " And in fact, it's possible to now say, oh, this is I would like to create an autogenic virus in my laboratory. These don't exist as far as I know in science today, but I know all the parts. I know how it should work. And I have a sense of what kind of chemistry will make it happen because I understand how viruses work and I understand how catalysis works. This is something impossible. It doesn't exist, maybe."
},
{
"end_time": 10993.575,
"index": 418,
"start_time": 10963.695,
"text": " I actually, to be honest with you, I suspect that the autogenic viruses exist even on the earth. We just haven't found them yet. But assuming that they don't, assuming that they got wiped out when extinct or whatever, I could have an image of what it would be like. I've just described this. They may not exist on the earth. But because of this description, this thing which doesn't exist can be brought into existence."
},
{
"end_time": 11022.466,
"index": 419,
"start_time": 10994.377,
"text": " for the first time. And it will have a kind of aboutness that I don't have, that no other organism on earth has, because it's a distinct kind of system with a distinct kind of aboutness, distinct kind of relationship to its world. These things will emerge for the first time in the world. In the same sense that I think that selves emerged at one point in the evolution of the cosmos, probably not just here, but all over the place."
},
{
"end_time": 11048.387,
"index": 420,
"start_time": 11023.37,
"text": " Um, but they merged at one time. It was possible, but it wasn't prefigured. I do think it was very, very likely to occur eventually as the universe cooled and we got planets and we got low energy. So I think particularly autogenic virus like structures, I think are probably going to be pretty widespread in the cosmos for the simple reason that they're chemically quite simple."
},
{
"end_time": 11075.691,
"index": 421,
"start_time": 11049.838,
"text": " and they don't have all the sensitivity that living organisms do. Professor, I just wanted to thank you so much for spending such being so generous with your time. I got to 10% of the questions and some other themes I want to touch on next time if you're willing to come on for another round. Happy to do that. Great is nurture versus nature and why that's a false dichotomy."
},
{
"end_time": 11103.473,
"index": 422,
"start_time": 11076.135,
"text": " Darwinian evolution. I said this in the introduction, so people were probably looking forward to this, but why Darwinian evolution isn't the whole story. You referenced this in one of your talks, and so I wanted to ask you about that, but we can do that next time. I talked about the other half as inverse Darwinism. Okay, well, we got to get to inverse Darwinism next time, and perhaps some of the things- It's exactly the topic of my new book. Great, great. Okay, so, Professor, again, thank you so much."
},
{
"end_time": 11107.705,
"index": 423,
"start_time": 11104.565,
"text": " My pleasure. It's been a great conversation. I enjoyed it a lot."
},
{
"end_time": 11135.879,
"index": 424,
"start_time": 11108.046,
"text": " The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked on that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, et cetera, it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well."
},
{
"end_time": 11155.879,
"index": 425,
"start_time": 11135.879,
"text": " If you'd like to support more conversations like this, then do consider visiting theoriesofeverything.org. Again, it's support from the sponsors and you that allow me to work on Toe full-time. You get early access to ad-free audio episodes there as well. Every dollar helps far more than you may think. Either way, your viewership is generosity enough. Thank you."
}
]
}