Joscha Bach Λ Ben Goertzel: Conscious AI, LLMs, AGI
October 17, 2023 • 2:04:04

Transcript
The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region.
I'm particularly liking their new Insider feature, which was just launched this month. It gives me front-row access to The Economist's internal editorial debates, where senior editors argue through the news with world leaders and policy makers, and twice-weekly long-format shows, basically an extremely high-quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines.
Think Verizon, the best 5G network, is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store today and we'll give you a better deal. Now what to do with your unwanted bills? Ever seen an origami version of the Miami Bull? Jokes aside, Verizon has the most ways to save on phones and plans where everyone
From the first breakthrough to incontrovertibly human-level AGI, to a superintelligence, is months to years. Will that be good or bad for humanity? To me, these are less clear than what I think is the probable timeline.
Joscha Bach is known for his insights into consciousness and cognitive architectures, and Ben Goertzel is a seminal figure in the world of artificial general intelligence, known for his work on OpenCog. Both are coming together here on Theories of Everything for a theolocution. A theolocution is an advancement of knowledge couched both in tenderness and regard, rather than the usual tendency of debates, which is characterized by trying to be correct even to the detriment of the other person, maybe even destructively, maybe even sardonically.
We have a foray in this episode into semantics and Peirce's sign theory. This also extends into what it truly takes to build a conscious AGI. An AGI is an artificial general intelligence, which mimics human-like intelligence. But then the question lingers: what about consciousness?
What differentiates mere computation from awareness? Man, this was a fascinating discussion, and there will definitely be a part two. Recall the system here on TOE, which is: if you have a question for any of the guests, whether here or on a different podcast, you leave a comment with the word query and a colon. This way, when I'm searching for the next part with the guest,
I can just press Ctrl+F and find it easily in the YouTube Studio backend. Then I'll cite your name either aloud, verbally, or in the description. To those of you who are new to this channel, my name is Curt Jaimungal, and this is Theories of Everything, where we explore usually physics- and mathematics-related theories of everything. How do you reconcile quantum mechanics with general relativity, for instance? That's the standard archetype of a TOE.
But also, more generally, where does consciousness come in? What role does it have to play in fundamental law? Is fundamental, quote-unquote, the correct philosophical framework to evaluate explanatory frameworks for the universe and ourselves? We've also spoken to Joscha three times before: once solo, and that episode's linked in the description; another time with John Vervaeke and Joscha Bach; and another time with Donald Hoffman and Joscha Bach. That was a legendary episode. Also, Ben Goertzel has given a talk on this program, which was filmed at MindFest.
Welcome. This is going to be so much fun. Many, many people are very much looking forward to this, including me, including yourselves. Welcome to the Theories of Everything podcast. I appreciate you all coming back on.
Thank you for having us. I always enjoy discussing with Ben. It's always been fun, and it's, I think, the first time we are on a podcast together. Yes, wonderful. So let's bring some of those off-air discussions to the forefront. How did you all meet?
We met first at the AGI conference in Memphis. Ben had organized it, and I went there because I wanted to work on AI in the traditional Minskian sense, and I worked on a cognitive architecture. My PI didn't really like it, so I paid my own way to this conference to publish it, and I found like-minded people there, and foremost among them was Ben.
I don't think I've changed my mind about anything major related to AGI in the last six months, but certainly seeing how well LLMs have worked over the last nine months or so has been quite interesting. I mean, it's not that they've worked
a hundred times better than I thought they would or something, but certainly just how far you can go with this sort of non-AGI system that munges together a huge amount of data from the web has been quite interesting to see, and it's revised my opinion on how much
of the global economy may be converted to AI, even before we get to AGI, right? So it's shifted my thinking on that a bit, but not so much on, fundamentally, how do you build an AGI? Because I think these systems are somewhat off to the side of that, although they may usefully serve as components of integrated AGI systems. And Joscha?
Some of the things that changed my mind are outside of the topic of AGI. I thought a lot about the way in which psychology was conceptualized in Greece, for instance, but I think that's maybe too far out here.
In terms of AI, I looked into some kind of new learning algorithms that fascinate me and that are more brain-like and move a little bit beyond the perceptron. I'm making slow and steady progress in this area. It doesn't feel like there is a big singular breakthrough that dramatically changed my thinking in the last six months, but I feel that there is an area where I begin to understand more and more things.
All right, let's get to some of the comparisons between you all, the contrasting ones. It's my understanding that Joscha, you have more of the mindset that everything is computation, or all is computation. And then, Ben, you believe there to be other categories; I believe you refer to them as archetypal categories, or I may have done that. And
I'm unsure if this is a fair assessment, but please elucidate me. I think that everything that we think happens in some kind of language, and perception also happens in some kind of language, and a language cannot refer to anything outside of itself. And in order to be semantically meaningful, a language cannot have contradictions.
It is possible to use a language where you haven't figured out how to resolve all the contradictions, as long as you have some hope that there is a way to do it. But if a language itself is contradictory, its terms don't mean anything. And the languages that work, that we can use to describe anything, any kind of reality and so on, turn out to be representations that we can describe.
via state transitions. And there are a number of ways in which we can conceptualize systems that are doing state transitions. For instance, we can think about whether they are deterministic or indeterministic, whether they are linear or branching. And this allows us to think of these representational languages as a taxonomy, but they all turn out to be constructive. That means, in modern parlance, computational.
The mainstream of mathematics was not constructive before Gödel. That means the language of mathematics allowed specifying things that cannot be implemented,
and computation is the part that can be implemented. I think for something to be existent, it needs to be implemented in some form, and that means we can describe it in some kind of constructive language. That basically has to do with epistemology, and the epistemology determines the metaphysics that I can have, because when I think about what reality is about, I need to do this in a language in which my words mean things. Otherwise, what am I talking about? What am I pointing at?
When I'm pointing at something, I'm pointing at a representation that is basically a mental state that my own mind represents and projects into some kind of conceptual space or some kind of perceptual space that we might share with others. And in all these cases, we have to think about representations. And then I can ask myself, how is this representation implemented in whatever substrate it is? And what does this signify about reality?
and what is reality and what is significance and all these terms turn out to be terms that, again, I need to describe in a language that is constructive, that is computational. And in this sense, I am a strong computationalist because I believe that if we try to use non-computational terms to describe reality, and it's not just because we haven't gotten around to formalizing them yet, but because we believe that we found something that is more than this, we are fundamentally confused and our words don't mean things.
I tend to start from a different perspective on all this philosophically. I think there's one
minor technical point I feel the need to quibble with in what Joscha has said, and then I'll try to outline my point of view from a more fundamental perspective. I mean, the point I want to quibble with is: it was stated that
if a logic or language contains contradictions, it's meaningless. Of course, that's not true. There's a whole discipline of paraconsistent logics, which have contradictions in them and yet are not meaningless. There are constructive paraconsistent logics. You can actually use, you know, Curry-Howard transformations or operational semantics transformations to map paraconsistent logical formalisms into
gradually typed programming languages and so forth. So, I mean, contradictions are not necessarily fatal to having meaningful semantics for a logical or computational framework, and this is something that's actually meaningful in my approach to AGI on the technical level, which we may get into later. But I want to shift back to the foundation of life, the universe, and everything here. So, I mean,
I tend to be phenomenological in my approach, more so than starting from a model of reality. And these sorts of things become hard to put into words and language, because once you project them into words and language, then yeah, you have a language, because you're talking in language, right? But talking isn't all there is to life. It isn't
all there is to experience. I think the philosopher Charles Peirce gave one
fairly clear articulation of some of the points I want to make. You could just as well look at Lao Tzu, or you could look at the Vedas, or the book Buddhist Logic by Stcherbatsky, which gives similar perspectives from a different cultural background. So if you take Charles Peirce's point of view, which at least is concise, he distinguishes a number of metaphysical categories.
I don't follow him exactly, but let me start with him. So he starts with first, by which he means qualia: raw, unanalyzable, just, it's there, right? And then he conceives second, by which he means reaction, like billiard balls bouncing off each other. It's just one thing reacting
to something else, right? And this is how he's looking at sort of the crux of classical physics, let's say. Then by what Peirce calls third, he means relationships. So one thing is relating to other things. And one of the insights that Charles Peirce had, writing in the late 1800s, was that, you know, once you can relate three things, you can relate four, five, six, ten, like any large finite number of things, which was just,
you know, a version of what's very standard now of reducing a large number of logical relations to sort of triples or something, right? So Peirce looked at first, second, and third as fundamental metaphysical categories, and he invented quantifier logic as well, with the for-all and there-exists quantifiers and binding. So, as Peirce would look at it,
computation and logic are in the realm of third and if you're looking in that metaphysical category of third then you say well everything's a relationship. On the other hand if you're looking from within the metaphysical category of second you're looking at it like well everything's just reactions. If you're looking at it from within the metaphysical category of first then it's like whoa it's all just there and you could take any of those
points of view, and it's valid in itself. Now, you could extend beyond Peirce's categories. You could say, well, I'm going to be a Zen Buddhist and have a category of zero, like the unanalyzable pearly void, right? Or you could go Jungian and say, okay, these are numerical archetypes, one, two, three, but then we have the archetype of four, which is sort of synergy and emergence. It's sort of mandalic. Yeah, so what I was saying is Peirce
had these three metaphysical categories, which he viewed as just ontologically, metaphysically distinct from each other. So what Chalmers would call the hard problem of consciousness, in Peircean language, is like, how do you collapse third to first? And Peirce would be just like, well, you don't. They're different categories. You're an idiot to think that you can somehow collapse one to the other. So in that sense, he was a dualist, although more than a dualist, because he had first, second, and third.
You could go beyond that if you want. You could go Zen Buddhist and say, well, we have a zero category of the original, ineffable, self-contradictory pearly void. And then you have the question of whether zero is really the same as one, which is like the Zen Buddhist paradox of non-dualism and so forth in a certain form. You can also go
above Peirce's three metaphysical categories, and you can say, okay, well, why not a fourth? Well, to Carl Jung, four was the archetype of synergy, and many mandalas were based on this fourfold synergy. Why not five? Well, with five you have the fourfold synergy and then the birth of something new out of it, right? So I can see that the perspective of third, the perspective of computation, is substantially
where you want to focus if you're engineering an AGI system, right? Because you're writing a program and the program is a set of logical relationships. The program is written in a language. So I don't have any disagreement that this is like the focal point when you're engineering an AGI system. But if I want to intuitively conceptualize the AGI's experience,
I don't feel a need to, like, try to reduce the whole metaphysical hierarchy into third just because the program code lives there. And this is not so much about AI or mathematical or computational formalism. I mean, these are just different philosophical perspectives, which it becomes arduous to talk about because
natural language terms are imprecise and ambiguous and slippery, and you could end up spending a career trying to articulate what is really meant by relationship or something. All right, Joscha.
I think it comes down to the way in which our thinking works and what we think thinking is. You could have one approach that is radically trying to build things from first principles, and this is what we might be doing when we learn how to write computer programs. When I started programming, I had a Commodore 64. I was a kid. I didn't know how to draw a line; Commodore 64 BASIC doesn't have a command to draw a line.
What you need to do to draw a line on the Commodore 64 is to learn a particular language, in this case BASIC. You can also learn assembler directly, and it's not hard to see how assembler maps to the machine code of the computer.
The machine code works in such a way that you have a short sequence of bits organized into groups of eight, bytes. These bytes are interpreted as commands by the computer. They're basically like switches or train tracks. You could imagine every bit determines whether a train track goes to the left or to the right. After you go through eight switches,
you have 256 terminals where you can end, since each switch has two options, left or right. In each of these terminals you have a circuit, some kind of mechanism that performs a small change in the computer. And these changes are chosen in such a way that you can build arbitrary programs from them.
And when you want to make a line, you need to learn a few of these constructs that you use to manipulate the computer. First of all, on the Commodore 64, you need to write a value at a certain address that corresponds to a function on the video chip of the computer. And this makes the video chip forget how to draw characters on screen
and instead interpret a part of the memory of the computer as pixels that are to be displayed on the screen. And then you need to tell it at which address in working memory you want to start, by writing two values into the graphics chip, which encode a 16-bit address in the computer. And then you can find the bits in your working memory that correspond to pixels on the screen.
And then you need to make a loop that addresses them all in order, and then you can draw a line. And once I understood this, I basically had a mapping from an algebraic equation into automata. That is what the computer is doing: it's an automaton at the lowest level
that is performing geometry. And once you can draw lines, you figure out also how to draw curved shapes, and then you can draw 3D shapes, and you can easily derive how to make that. And I did these things as a kid, and then I saw that the mathematicians have some kind of advanced way, some kind of way in which they deeply understand what geometry is, in ways that go far beyond what I am doing.
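A minimal sketch of the idea Joscha is describing, written in Python rather than Commodore BASIC or assembler: a bitmap is just bits in memory, and drawing a line means running a loop that maps the algebraic equation of a line onto which bits get set. The resolution and memory layout below are illustrative assumptions, not the C64's actual bitmap format.

```python
# Illustrative sketch only: a flat one-bit-per-pixel bitmap and a naive line loop.
WIDTH, HEIGHT = 320, 200
bitmap = bytearray(WIDTH * HEIGHT // 8)        # 64000 pixels, one bit each

def set_pixel(x: int, y: int) -> None:
    """Turn on the bit that corresponds to pixel (x, y)."""
    index = y * WIDTH + x                      # count pixels row by row
    bitmap[index // 8] |= 1 << (7 - index % 8)

def draw_line(x0: int, y0: int, x1: int, y1: int) -> None:
    """Walk from (x0, y0) to (x1, y1), one pixel per loop iteration:
    the algebraic equation of a line turned into discrete state transitions."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(steps + 1):
        x = x0 + (x1 - x0) * i // steps
        y = y0 + (y1 - y0) * i // steps
        set_pixel(x, y)

draw_line(0, 0, 319, 199)                      # a diagonal across the whole screen
```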
And mathematics teachers had the same belief. They basically were gesturing at some kind of mythological mountain of mathematics, where there was some deep, inscrutable knowledge on how to do continuous geometry, for instance. And it was much, much later that I started to look at this mountain and realized that it was doing the same thing that I did on my Commodore 64, just with Greek notation. And there's a different tradition behind it, but it was basically the same code that I have been using.
And when I was confronted with notions of space and continuous space and many other things, I was confronted with the conundrum. I thought I can do this in my computer and it looks like it, but there can be no actual space because I don't know how to construct it. I cannot make something that is truly continuous. And I also don't observe anything in reality around me that is fundamentally different from what I can observe in my computer to the degree that I can understand and implement it. So how does this other stuff work?
And so imagine somebody has an idea of how to do something in a way that is fundamentally different from what could be in principle done in computers. And I asked them how this is working. It goes into hand waving. And then you point at some proofs that have been made that show that the particular hand waving that they hope to get to work does not pan out.
And then I hope there is some other solution to make that happen because they have the strong intuition. And I asked, where does this intuition come from? How did it actually get into your brain? And then you look at how does the brain work? There is firing between neurons. There is interaction with sensory patterns on the systemic interface to the universe. How were they able to make inferences that go beyond the inferences that I can make?
This is one way of looking at it. And then on the other end of the spectrum (this one is more or less in the middle), there is a degraded form of epistemology, which is: you just make noises, and if other people let you get away with it, you're fine. And so you just make sort of grunts and hand-wavy movements and you try to point at things, and you don't care whether any of it works. And if a large enough group of high-status people is nodding, you're good.
And this epistemology of what you can get away with doesn't look very appealing to me, because people are very good at being wrong in groups. Yeah, I mean, saying that the only thing there is is language, because the only thing we can talk about in language is language, I mean, this is sort of
tautological in a way, right? No, no, that's not quite what I'm saying. I'm not saying the only thing there is is language. Of course, language is just a representation. It's a way to talk about things and to think about things and to model things. And obviously, not everything is a model, just everything that I can refer to is a model. And so there is that. I mean, you can't...
No, that's right, you can hypothesize that, but you can't know that. And this gets into, I guess it depends what... I cannot know anything that I cannot express. I can know many things I can't express in language. But I mean, that's just, I guess, a different flavor of knowing: subjective
experience. I mean, so take what Martin Buber called an I-thou experience, right? I mean, if you're staring into someone's eyes and you have a deep experience that you're seeing that person and you're just sharing a shared space of experience and being, I mean, in that moment, that is something you both know you're not going to be able to communicate
fully in language, and it's experientially there. Now, Buber wrote a bunch of words about it, right? And those words communicate something special to me and to some other people. But of course, someone else reads the words that he wrote and says, well, you are merely summarizing some collection of firings of neurons
in your brain, in some strange way deluding yourself that that is something else. I think from within the domain of computation and science, you can neither
prove nor disprove that there exists something beyond the range of computation and science. And if you look at scientific data, I mean, the whole compendium of scientific data ever gathered by the whole human race is one large finite bit set, basically.
It's a large set of data points with finite precision for each piece of data. I mean, it might not even be that huge of a computer file if you try to assemble it all, like all the scientific experiments ever done and agreed upon by some community of scientists. So you've got this big finite bit set, right? Then science, in a way, is trying to come up with concise, reasonable-looking, culturally acceptable
explanations for this huge finite bit set that can be used to predict outcomes of other experiments, and which finite collection of bits will emerge from those other experiments, in a way that's accepted by a certain community. Now, that's a certain process, it's a thing to do; it has to do with finite bit sets and computational models for producing
finite sets of bits. And that's great, but nothing within that process is going to tell you that that's all there is to the universe, or that it isn't all there is to the universe. I mean, it's a valuable, important thing. Now, to me, as an experiencing mind, I feel like there are a lot of steps I have to go through to get to the point where I even know
what a finite bit set is, or where I even know what a community of people validating that finite bit set really is, or what a programming language is. So I keep coming back to my phenomenal experience. First there's this field of nothingness or contradictory nothingness that's just floating there. Then some indescribable forms flicker and emerge out of this
void and then you get some complex pattern of forms there, which constitutes the notion of a bit set or an experiment or a computation. From this phenomenological view, by the time you get to this business of computing and languages, you're already dealing with a fairly complex body of self-organizing forms and distinctions that
popped out of the void. And then this conglomeration of forms that has emerged out of the void is saying: no, I am everything; the only thing that exists in a fundamental sense is what is inside me. And, I mean, if you're inside that thing, you can't refute or really demonstrate that. But again,
from an AGI view it's all fine, because when we talk about building an AGI, what we're talking about is precisely engineering a set of computational processes. I don't think
you need some special firstronium to drop into your computer to give the AGI the fundamental qualia of experience or something. There are two points now. Let's just allow Joscha to speak, because there are quite a few threads and some may be dropped. Also, it appears as if you're using different definitions of knowledge.
If you use the traditional philosophical notion of justified true belief, it means that I have to use knowledge in a context where I can hope to have a notion of what's true. So, for instance, when I look at your face and experience a deep connection with you,
and I report, I know we have this deep connection, I'm not using the word know in the same sense. What I am describing is an observation. I'm observing that I seem to be looking at a face, and observing that I have the experience of having a deep connection. And I think I can hope to report on this truthfully. But I don't know whether it's true that we have that deep connection.
I cannot actually know this. I can make some experiments to show how aligned we are and how connected we are and so on to say this perception or this imagination has some veracity. But here I'm referring to a set of patterns.
There are dynamic patterns that I perceive, and then there is stuff that I can reflect on and disassemble and talk about and convey and model. And this is a distinct category, in a sense. It's not necessarily in contradiction with what you're saying; it's just that using the word knowing in different ways is implied here, because I can relate to you the pattern that I'm observing, or that I think I'm observing.
But this is a statement about my mental state. It's not a statement about something in reality, about the world. And to make statements about the world, I probably need to go beyond perception. The second aspect that we are now getting to is: when you say that reality and minds might have properties that are not computational, yet your AGI is entirely computational and doesn't need any kind of first-principles wonder machine built into it
that goes beyond what we can construct from automata, are you establishing that an AGI, an artificial general intelligence with potentially superhuman capabilities, is still lagging behind what your mind is capable of? No, not at all. I just think the other aspects are there anyway and you don't need to build them. So you're going to make the non-computational parts of reality using computation?
No, you don't have to make them. They're already there. I mean, if you just take a simpler point of view where you're thinking about first and third: Peirce was basically a panpsychist, right? So he believed that matter is mind hidebound with habit, as he said. He believed that every little particle had its own spark or element of
consciousness and awareness in it. So, I mean, from that standpoint, this kind of bubbly water that I'm holding up has its own variety of conscious awareness to it, which has different properties than the conscious awareness in my brain or yours. So from that standpoint, if I build an AGI program that has something around the same patterns and structures and dynamics as
a human brain as the sort of computational aspect of the human mind, from that standpoint, then most likely the same sort of firstness, the same species of subjective awareness will be associated with that AGI machine that you built. But it's not that you needed to construct it. I mean, any more than you need to explicitly construct like the
positioning in time of your computer or something. Like, you build something, and it's already there in time. You don't have to build time. You just build it and it's there; it's there in time. You didn't need a theory of time, and you didn't need to screw together moment t to moment t plus one either. The perspective is more that the awareness is ambient, and then it's there; you don't need to build it. And of course, there's the subtlety that different
sorts of constructions may have different sorts of awareness associated with them, and there's philosophical subtlety in how you treat different kinds of firsts when you're operating in a level where relationship doesn't exist yet. In what sense is the experience of red different from the experience of blue, even though
I haven't read these pages, so I don't really understand them. I think it's
And maybe that's possible.
It seems to me that there are simpler ways in which particles could be constructed to do the things that they are doing. It seems to me sufficient that there are basically emergent error-correcting codes on the quantum substrate that just emerge over the stuff that remains statistically predictable in a branching multiverse.
They don't need to be conscious to do anything like that. Maybe if we do more advanced physics, we figure out how this error-correcting code just emerges, similar to how a vortex emerges in the bathtub when you move your hand, and only the vortex remains and everything else dissipates into the wave background that you are producing in the chaos.
And to achieve this, I don't think that I need to posit that they are conscious.
It could be that I figure out, oh no, this is not sufficient, we need way more complicated math and structure to make this happen, so they need some kind of coherence-improving operator that is self-reflexive and eventually leads to the structure. Then I would say, yeah, maybe this is a theory that we should seriously entertain. Until then, I'm undecided, and Occam's razor says I can
construct what I observe at this level of elementary particles, atoms, and so on, by assuming that they don't have any of the conscious functionality that exists in my own mind. And the other way would be you can redefine the notion of consciousness into some principle of self-organization that is super basic.
But this would redefine consciousness into something else, because there's a lot of self-organizing stuff that does not fall into the same category as the thing an anesthesiologist makes go away when he gives you an anesthetic. And to me, consciousness is that thing which seems to be suspended when you get an anesthetic, and that stops you from learning and currently interacting with the world. I mean, that at least gives me a chance to repeat once again my favorite quote from Bill Clinton, former US president, which is:
that all depends on what the meaning of is is. I mean, that's interesting, Ben, because the first time, I don't know if you remember, I brought up that quote. I don't remember the context, but I said, yeah, that also depends on what is is. The question is, what do you mean by is? Right. So this is the story. It sounds like Bill Clinton: it depends upon what the meaning of the word is is.
Yeah, I mean, a couple of reactions there, and I feel I may have
lost something in the buffering process there, but let me see. So, first of all, about causality and firstness, or raw experience: almost by
definition of how Peirce sets up his metaphysical categories, the firstness doesn't cause anything. So you're not going to come up with a case where I need to assume that this particle has experience or else I can't explain why
this experiment came out this way. I mean, that would be a sort of category error in Peirce's perspective. So if the only sort of thing you're willing to attribute existence to is something which has a demonstrable causal impact on some experiment, then
by that assumption, that's essentially equivalent to the perspective you're putting forth that everything is computation. And Peirce didn't think other categories besides third were of that nature. There's also just a shallow semantic matter tied into this, which is that the word consciousness is just highly overloaded.
So, I mean, Joscha, you seem to just assume that human-like consciousness is consciousness, and I don't really care if people want to reserve the word consciousness for that. Then we just need some other word for the sort of ambient awareness in everything in the universe, right? So there are lengthy debates among academics on, like, okay, do we say a particle is conscious or do we say it's
proto-conscious, right? So then you can say, okay, we have proto-consciousness versus consciousness, or we have, like, raw consciousness versus reflexive consciousness or human-like consciousness. And I mean, I spent a while reading all this stuff. I wrote some things about it. In the end, I'm just like, this is
a game that overly intellectual people are playing to entertain themselves. It doesn't really matter. I've got my experience of the universe. I know what I need to do to build AGI systems, and arguing about which
words to associate with different flavors and levels of experience is just kind of running around in circles. Conceptually, it's an important question. Is this camera that is currently filming my face and representing it and then relaying it to you aware of what it's doing? And is this just a matter of degree with respect to my consciousness? Is it representing some kind of ambient awareness of the universe?
I mean, if the only kind of answer that you're interested in are rigorous scientific answers, then you have your answer by assumption, right? I mean, answering questions by
assumption is fine. It's practical. It saves us time. I think that's what you're doing. I don't see how you're not just trying to answer by assumption. You posit that elementary particles are conscious.
When I point out that we normally reserve the word consciousness for something that is really interesting and fascinating and shocking to us, and that it would be more shocking if it were projected into the elementary particles, then you say, okay, but I just mean ambient awareness. Now we have to disassemble what ambient awareness actually means. What does this awareness come down to? I think that you're pointing at something that I don't want to dismiss. I want to take you seriously here.
So maybe there is something to what you are saying, but you're not getting away with simply waving at it and saying that this is sufficient to explain my experience and that I'm no longer interested in making my words mean things.
Let's just let Ben respond.
I guess, rightly or wrongly, as a human being, I've gotten bored with that question. In the same way... I couldn't say it's worthless; at some point, maybe you could convince someone. I know people who were convinced by materials they read on the internet to give up on Mormonism or Scientology, so to say it's worthless to
debate these points with people who are heavily attached to an ideology, I think, is silly. On the other hand, I personally just tend to get bored with repeated debates that go over the same points over and over again. If I had an infinite number of clones, then I wouldn't. I guess one of the things that I get
worn out with is people claiming my definition of this English word
is the right one and your definition is the wrong one. I guess you weren't really doing that, Joscha, but it just gave me a traumatic memory. I'm sorry for triggering you here. I'm not fighting about words. I don't care which words you're using. When I think about an experience of what it's like, and associate that with consciousness in
the system that is able to create a perception of a now. Then I'm talking about a particular phenomenon that I have in mind and I would like to recreate if I can and I want to understand how it works. And so for me the question of whether I project this
Let me tell you how I'm looking at anesthesia, which is a concrete, specific example that's not that trivial, right? Because
I've only been under anesthesia once, when I had wisdom teeth removed. So it wasn't that bad, but other people have had far more traumatic things done when they're under anesthesia. And there's always the nagging fear that, since we don't really know how anesthesia works in any fundamental depth, and also don't really know how the brain generates our usual everyday states of consciousness well enough,
it's always possible that while you're under anesthesia, some variant of you is actually, in some sense, feeling that knife slicing through you, and maybe just the memory is being cut off. And then once you come back, you don't remember it. But then that might not be true. But then you have to ask, well, okay, say then, while my jaw is being cut open by that knife,
Does the jaw feel it? Does the jaw hurt whilst being sliced up by the knife? Is the jaw going ahh? Well, on the other hand, the global workspace in your brain, the reflective theater of human-like consciousness in your brain may well be disabled by the anesthetic. The way I personally look at that is I suspect under anesthesia
your sort of reflective theater of consciousness is probably disabled by that anesthetic. I'm not 100% sure, but I think it's probably disabled, which means there's probably not like a version of Ben going, ah, wow, this really hurts, this really hurts, and then forgetting it afterwards. So, I mean, maybe you could do that, just like disable memory recording, but I don't think that's what's happening. On the other hand, I think the jaw,
is having its own experience of being sawed open while you're getting that wisdom tooth removed under general anesthesia. I think it's not exactly the same sort of experience as the reflective theater of consciousness that knows itself as Ben Goertzel is having. The Ben Goertzel can
conceptualize that it's experiencing pain; it can go, like, ow, that really hurts. And then the thinking that says 'that really hurts' is different than the that which really hurts, right? There are many levels there. But I do think there's some sort of raw feeling that the jaw itself is having, even if it's not
connected to that reflective theater of awareness in the brain. Now, the jaw is biological cells, so some people would agree that those biological cells have experience, but they would think a brick, when you smash it with an axe, doesn't. But I suspect the brick also has some elementary experience.
I think it is like something to be a brick that's smashed in half by an axe. On the other hand, it's not like something that can reflect on what it is to be a brick smashed in half by an axe. That is how I think about it, but again, I don't know how to make that science, because I can't ask my jaw what it feels like, because my jaw doesn't
speak language. And even if I was able to, like, wire my brain into the jaw of someone else who's going through wisdom tooth removal under anesthesia, I might say that through that wire I can feel, via an I-thou experience, the pain of that jaw being sliced open. But I mean,
You can tell me I'm just hallucinating that and my own brain is like improvising that based on the signals that I'm getting and I'm not sure how you
really pin that down in an experiment, right? Let me try. So there have been experiments about anesthesia. I'm not an expert on anesthesiology, so I ask everybody for forgiveness if I get things wrong. But there have been different anesthetics, and some of them work in very different ways. And there is indeed a technique that basically works by giving people a muscle relaxant.
There have been experiments that surgeons did where they were applying a tourniquet to an arm of the patient so the muscle relaxant didn't get into the arm and they could still use the arm.
And then in the middle of the surgery, they asked the person that was lying there fully relaxed and incommunicado to raise their hand if they were conscious and aware of what was happening to them. And they did. And when they were asked if they had unbearable pain, they also raised their hand. And after the surgery, they had forgotten about it.
I also noticed the same thing with surgery. I had a number of big surgeries in my life, and there is a difference between different types of surgery. There is one type of surgery where I wake up and feel much more terrified and violated than I did before the surgery.
And I don't know why, because I have no memory of what happened. Also, my memory formation is impaired. So when I am in the ER and ask people how it went, I might have that same conversation multiple times, word for word, because I don't remember what they said or that I asked them. There is another type of anesthesia. And I observed this, for instance, in one of my children where the child wakes up and says, no, the anesthesia didn't work.
And it was an anesthesia with gas. So the child choked on the gas and you see your child lying there completely relaxed and sleeping and then waking up and starting to choke and then telling you the anesthesia didn't work. There is a complete gap of eight hours in the memory of that child in which the mental state was somehow preserved.
Subjectively, the child felt a complete continuation and then was looking around, realizing the room was completely different and the time was very different, which led to confusion and reorientation. So I would suspect that in the first case it is reasonable to assume, or to hypothesize at least, that consciousness was present but we don't recall what happened in this conscious state, whereas in the second one there was a complete gap in the conscious experience, and consciousness resumed after that gap.
And we can test this, right? There are ways, regardless of whether we agree with this particular thing or whether we think anesthesia is important, in principle, we can perform such experiments and ask such questions. And then on another level, when we talk about our own consciousness, there's certain behavior that is associated with consciousness that makes it interesting. Everything, I guess, only becomes interesting due to some behavior, even if the behavior is entirely internal.
If you are just introspectively conscious, it still matters if I care about you. This is a certain type of behavior that we still care about. For instance, if I ask myself, is my iPhone conscious? The question is, what kind of behavior of the iPhone corresponds to that? I suspect if I turn off my iPhone or smash it,
It does not mean anything to the iPhone. There is no what it's likeness of being smashed for the iPhone. There could be a different layer where this is happening, but it's not the layer of the iPhone. Now let's get to a slightly different point, this question of whether your jaw knows anything about being hurt. So imagine that there is surgery on your jaw, like with your wisdom teeth.
Is there something going on that is outside of your brain that is processing information in such a way that your jaw could become sentient, in the sense that it knows what it is and how it relates to reality, at least to some degree and level? And I cannot rule this out. Where there are cells, these cells can process information, they can send messages to their neighbors and the patterns of their activation, who knows what kind of programs they can compute.
But here we have a means and a motive. The means and motive here are it would be possible for the cells to exchange conditional matrices to perform arbitrary computations and build representations about what's going on. And the motive would be that it's conceivable that this is a very useful thing for biological tissues to have in general.
And so, if they evolve for long enough, and it is in the realm of evolvability that they perform interactions with each other that lead to representations of who they are and what they are doing, even though they are much slower than what's happening in our brain and decoupled from our brain in such a way that we cannot talk to our jaw. It's still conceivable, right? I wouldn't rule this out.
It's much harder for me to assume the same thing for elementary particles, because I don't see them having this functionality that cells have. Cells are so much more complicated that it just fits that they would be able to do this. And so I would make a distinction. I would not rule out that multicellular organisms without brains could be conscious,
but at different time scales than us, requiring very different measuring mechanisms, because their signal processing is probably much slower and it takes longer for them to become coherent at scale, because it takes so long for signals to go back and forth if you don't have nerves. But I don't see the same thing happening for elementary particles. I don't rule it out, again, but you would have to show me some kind of mechanism. I mean, if you're going to look at it that way, which isn't the only way that I would look at it, but if you're going to look at it that way,
I don't see why you wouldn't say the various elementary particles, which are really distributed, like amplitude distributions... I don't know why you wouldn't say these various interacting amplitude distributions are exchanging quantum information with a motivation to achieve stationary action given their context. I mean, you could tell
that story. That's sort of the story that physics tells you: they're swapping information back and forth, trying to make the action stationary. Yes, but for the most part they don't form brains. They also do form brains. So elementary particles can become conscious in the sense that they can form brains, nervous systems, maybe equivalent information-processing architectures. I just feel like you're privileging a certain
level and complexity of organization because it happens to be ours. And I mean, we have a certain level and complexity of organization and of consciousness. And, I mean, a cell in my jaw has a lower one, a brick has a lower one, an elementary particle a lower one, and the future AGI...
I wouldn't say lower or higher. I would say that if my jaw is conscious, there are far fewer cells involved than in my brain, and the interaction between them is slower.
So if it's conscious, it's probably more at the level of, say, a fly than at the level of a brain, and it's probably going to be as fast as a tree in the way in which it computes, rather than as fast as your brain. And I don't think that's something that is assigning some undue privilege to it. I'm just observing a certain kind of behavior, and then I look for the means and motive behind that behavior.
And then I try to construct causal structure, and I might get it wrong. There are things that might be missing, but it's certainly not because I have some kind of speciesism that assigns higher consciousness to myself because it's me. All right. Yeah. I mean, I don't know what your motivations are. Curt, I have a higher-level comment, which is that we're like an hour into this conversation, probably halfway through. I feel like the philosophy of
the hard problem of consciousness is an endless rabbit hole. It's not an uninteresting one.
I think it's not the topic on which Joscha and I have the most original things to say. I think each of our perspectives here is held by many other people. I might interject a little bit. One of our most interesting disagreements is in Ben being a panpsychist and me not knowing how to formalize panpsychism in a way that makes it different from bog-standard functionalism. I do value this discussion and don't think it's useless.
I basically feel that on almost everything else, you mostly agree, except for crypto. Okay. Yeah, to me that's
almost a Zen thing. I don't know how to formalize the notion that there are things beyond formalization.
Joscha, you're more of the mind that LLMs or deep neural nets are a significant step toward AGI, maybe even sufficient with enough complexity. And then I think that you disagree. Yeah, I think on most issues in terms of the relationship between LLMs and AGI, we actually probably agree
quite well. But, I mean, obviously large language models are an amazing technology. From an AI application point of view, they can do all sorts of fantastic and tremendous things. I mean, it sort of blew my mind how smart GPT-4 is. It's not the first time
my mind has been blown by an AI technology. I mean, my mind was blown by computer algebra systems when they first came out and you could do integral calculus with arbitrary complexity, and, you know, when Deep Blue won at chess with just game trees, I'm like, whoa. So I don't think it's the only amazing thing to happen in the history of AI, but it's an amazing thing. It's a big breakthrough and it's super cool. I think that
If deployed properly, this sort of technology could do a significant majority of jobs that humans are now doing on the planet, which has big economic and social implications. I think that the way these algorithms are representing knowledge internally
is not what you really need to make a full on human level AGI system. So I mean, when you look at what's going on inside the transformer neural network, I mean, it's not quite just a big weighted hash table of particulars, but to me, it does not represent abstractions
in a sufficiently flexibly manipulable way to do the most interesting things that the human mind does. And this is a subtle thing to pinpoint, in that, say, something like Othello-GPT does represent abstraction. It's showing an emergent representation of what the board is, of the different positions.
It's learning an emergent representation of features like: a black square is on this particular board position, or a white square is on this particular board position. So examples like that show that LLMs can in fact learn abstract representations and can manipulate them in some way, but it's very limited in that regard. I mean, in that case it's seen a shitload of Othello games, and that's a quite simple thing to represent.
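For concreteness, here is a hedged sketch of how such an emergent representation is usually demonstrated: fit a linear probe that tries to read one board square's state off the network's hidden activations. The file names, array shapes, and train/test split below are hypothetical placeholders, not the actual Othello-GPT experiment code.

```python
# Hypothetical probing sketch: if a linear classifier can recover a board
# square's state (empty / black / white) from hidden activations, then that
# abstraction is linearly encoded in the model's internal representation.
import numpy as np
from sklearn.linear_model import LogisticRegression

hidden_states = np.load("othello_hidden_states.npy")   # assumed shape: (n_moves, d_model)
square_labels = np.load("square_27_labels.npy")        # assumed labels: 0/1/2 per move

split = int(0.8 * len(square_labels))                  # illustrative 80/20 split
probe = LogisticRegression(max_iter=1000)
probe.fit(hidden_states[:split], square_labels[:split])
print("probe accuracy:", probe.score(hidden_states[split:], square_labels[split:]))
```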
So I think when you look at how the neural net is learning, how the attention mechanism is working, how it's representing stuff, it's just not representing a hierarchy of subtle abstractions the way a human mind is. And I mean, the subtler question is what functions you could get by glomming an LLM together with other components in a
hybrid architecture with the LLM at the center. So suppose you give it a working memory. Suppose you give it an episodic memory. Suppose you have a declarative long-term memory graph, and you have all these things integrated into the prompts and integrated into fine-tuning of an LLM. Well, then you have something that in principle is Turing complete, and it could probably do a lot of quite amazing things. I still think if the hub of that system is an LLM with its
impaired and limited ability for representing and manipulating abstract knowledge. I think it's not going to do the most interesting kinds of thinking that people can do. Examples of things I think you fundamentally can't do with that kind of architecture are, say,
invent a new branch of mathematics, invent a radically new genre of music, figure out a new variety of business strategy like say Amazon or Google did that's quite different than things that have been done before. All these things involve
a leap into the unknown, beyond the training data, to an extent that I think you're not going to get with the way that LLMs are representing knowledge. Now, I do think LLMs are powerful as tools to create AGI. So for example, as one sub-project in my own AGI project, we're using LLMs to map English sentences
into computer programs, or to try to get logic expressions. That's super cool. Then you've got the web in the form of a huge collection of logic expressions. You can use a logic engine to connect everything on the web with what's in databases and with stuff coming in from sensors and so on. That's by no means the only way to leverage LLMs toward AGI, not at all, but it's one interesting way to leverage LLMs toward AGI.
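A hedged sketch of the kind of pipeline Ben is gesturing at: prompt an LLM to translate an English sentence into a logic expression that a reasoning engine could then ingest. The call_llm function, prompt wording, and output format are assumptions for illustration, not OpenCog's actual interface.

```python
# Illustrative sketch: natural language -> logic expression via an LLM prompt.
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion client; returns a
    canned answer here so the sketch runs on its own."""
    return "forall X: (human(X) -> mortal(X))"

def english_to_logic(sentence: str) -> str:
    prompt = (
        "Translate the following English sentence into a first-order logic "
        "expression. Reply with the expression only.\n\n"
        f"Sentence: {sentence}\nLogic:"
    )
    return call_llm(prompt).strip()

expression = english_to_logic("Every human is mortal.")
print(expression)   # the string a logic engine or theorem prover would consume
```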
You can even ask the LLM to come up with an argument and then use that as a sort of guide for a theorem prover in coming up with a more rigorous version of that argument. So I do think there are many ways, more than I could describe right now, for LLMs
to be used. You could also wrap
a sort of motivated agent infrastructure around an LLM, right? You can wrap Joscha's Psi model, the MicroPsi model, in some way around an LLM if you wanted to, and you could make it. I mean, people tried dumb things like that with AutoGPT and so-called BabyAGI and so forth. So, I mean, on the other hand, I think if you wrap a motivated agent
architecture around an LLM with its impaired capability for making flexibly manipulable abstract representations. I think you will not get something that builds a model of self and other with the sophistication that humans have in their reflective consciousness. I think that having a sophisticated abstract model of self and other in our reflective consciousness
is the kind of consciousness that we have but a brick or a jaw cell doesn't. Without that abstraction in our model of reflective consciousness, tied in with our motivated agent architecture, that's part of why you're not going to get the fundamental creativity in inventing new genres of music or new branches of mathematics or new business strategies. In humans, we do this amazing novel stuff, which is what drives culture forward.
We do this by our capability for flexibly manipulable abstraction, tied in with our motivated agent architecture. I don't see how you get that with LLMs as the central hub of your hybrid AGI system, but I do think you can get that with an AGI system that has something like OpenCog's AtomSpace and reasoning system as the central hub, with an LLM as a subsidiary component.
But I don't think OpenCog is the only way either. I mean, obviously, you could make a biologically realistic brain simulation that had human-level AGI. I just think the LLM-like structures and dynamics within that biologically realistic brain system would just be a subset of what it does. You know, there'd be quite different stuff in the cortex. So yeah, that's not quite a capsule summary, but a lengthy-ish overview of my perspective on this.
Okay, great. I know there was a slew there; if you can, pick up some of the pieces and respond. But also, at the same time, there are emergent properties of LLMs. So for instance, reflection is apparently some emergent property. There are, but they're limited. I mean, and that does make it subtle, because you can't say they don't have emergent
knowledge representation. They do. And Othello-GPT is one very simple example; there are others. There is emergent knowledge representation in them, but it's very simplistic and limited. It doesn't pop up effectively from in-context learning, for example. But anyway, this would dig us very deep into current LLMs.
Yeah, so is there some in-principle reason, Joscha, why you think that a branch of mathematics can't be invented by an LLM with sufficient parameters or data?
I am too stupid to decide this question. So basically what I can offer is a few perspectives that I see when I look at the LLM. Personally, I am quite agnostic with respect to its abilities. And at some level, it's an autocomplete algorithm that is trying to predict tokens from previous tokens. And if you look at what the LLM is doing, it's not a model of the brain. It's a model of what people say on the internet. And it is
discovering a structure to represent that quite efficiently in an embedding space that has lots of dimensions. You can imagine that each of these dimensions is a function, and the parameters of this function are the positions on this dimension that you can have. And they all interact with each other to together create some point in a high-dimensional space, and this point could be an idea or a mental state or a complex thought.
And at the lowest level, when you look at how it works, it's translating these tokens, these linguistic symbols,
into some kind of representation, which could be, for instance, a room with people inside and stuff happening in this room, and then it maps it back into tokens at some level. There has recently been a paper out of a group led by Max Tegmark that looked at the Llama model and discovered that it does indeed contain a map of the world, directly encoded in its structure, based on the neighborhood relationships between places in the world that it represents.
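A toy illustration of the embedding-space picture being described, where each dimension behaves like a feature and "neighborhood" shows up as nearby vectors. The four-dimensional vectors and feature labels below are invented for illustration; real models use thousands of dimensions learned from data.

```python
# Toy embedding space: nearby concepts get nearby vectors.
import numpy as np

# Hypothetical feature axes, e.g. "is a place", "in Europe", "is a capital", "size".
embeddings = {
    "Paris":  np.array([1.0, 1.0, 1.0, 0.8]),
    "Berlin": np.array([1.0, 1.0, 1.0, 0.7]),
    "Tokyo":  np.array([1.0, 0.0, 1.0, 1.0]),
    "banana": np.array([0.0, 0.1, 0.0, 0.0]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: near 1.0 means close neighbors, near 0.0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["Paris"], embeddings["Berlin"]))  # high: close neighbors
print(cosine(embeddings["Paris"], embeddings["banana"]))  # low: far apart
```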
I am not sure if I in my entire life ever invented a new dimension in this embedding space of the human mind that is represented on the internet.
If I think about all the thoughts that have been made into books and then encoded in some form and became available as training data to the LLM, we find that there are, depending on how you count, somewhere from a few tens of thousands to a few hundred thousand dimensions of meaning. And I think it's very difficult to add a new dimension, or to significantly extend the range of those dimensions. But we can make new combinations of what's happening in that space.
Of course, it's not a hard limit that these systems are confined to the dimensions they have already discovered. Of course, we can set them up in such a way that they can confabulate more dimensions, and we could also set them up in such a way that they could go and verify whether it was a good idea to make this dimension by making tests, by giving the LLM the ability to use plugins, to write its own code,
to use a compiler, to use cameras, to use sensors, to use actuators, to make experiments in the world. It's not limited to what we currently let the LLM do. But in its present form, what the transformer algorithm is doing is trying to find the most likely token. And so, for instance, if you play a game with it and it makes mistakes in this game, then it will probably give you worse moves after making these mistakes, because it now assumes that it's playing a bad person, somebody who's really bad at this game,
and it doesn't know what kind of thing it's supposed to play, because it can represent all sorts of state transitions. It's an interesting way of looking at it: we are trying to find the best possible next token, whereas the LLM is trying to find the most likely next token.
Of course, we can preface the LLM by putting into the prompt that this is a simulation of a mind that is only going to look for the best token, and it's trying to approximate this. So it's not directly a counterargument. It's not even asking us to significantly change the loss function. Maybe we can get much better results. We probably can get much better results if we make changes in the way in which we do training and inference using the LLM.
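As a toy illustration of the distinction being drawn here, a model that exposes a next-token distribution can be decoded for the most likely continuation, or wrapped in a search for the continuation that some external judge scores best. This is only a sketch; `next_token_probs` and `quality_score` are hypothetical stand-ins, not a real API.

```python
import random
from typing import Callable, Dict, List

NextTokenFn = Callable[[List[str]], Dict[str, float]]  # tokens so far -> token probabilities

def decode_most_likely(next_token_probs: NextTokenFn, prompt: List[str], steps: int) -> List[str]:
    """What a plain LLM does: greedily append the single most probable token."""
    tokens = list(prompt)
    for _ in range(steps):
        probs = next_token_probs(tokens)
        tokens.append(max(probs, key=probs.get))
    return tokens

def decode_best_of_n(next_token_probs: NextTokenFn, prompt: List[str], steps: int,
                     n: int, quality_score: Callable[[List[str]], float]) -> List[str]:
    """Sample n whole continuations, keep the one an external judge scores highest."""
    candidates = []
    for _ in range(n):
        tokens = list(prompt)
        for _ in range(steps):
            probs = next_token_probs(tokens)
            toks, weights = zip(*probs.items())
            tokens.append(random.choices(toks, weights=weights, k=1)[0])
        candidates.append(tokens)
    return max(candidates, key=quality_score)
```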
But this by itself is also nothing that we can prove without making extensive experiments. And at the moment, it's unknown. I realize that the people who are being optimistic... I just want to pose a thought experiment. So this is about music rather than natural language. But I mean, we know there's MusicGen, there are similar networks applied to music. So suppose you had taken
an LLM like MusicGen or MusicLM or their next generations and trained it on all music recorded or played by humanity up to the year 1900. Is it going to invent the sort of music made by the Mahavishnu Orchestra or even Duke Ellington?
Would you say that has no new dimensions because jazz combines elements of West African drumming and Western classical music? I think that's a level of invention that LLMs are not going to do.
You said that it's combining elements from this and from that. Then DALL-E 2 came out, I got early access, and one of the things that I tried relatively early on was stuff like an ultrasound of a dragon egg. There is no ultrasound of a dragon egg on the internet, but it created a combination of prenatal ultrasound and archaeopteryx cut-through images and so on, and it looked completely plausible.
And in this sense, you can see that most of what we are doing when we create new dimensions is making mashups of existing dimensions. And maybe we can represent all the existing dimensions using a handful of very basic dimensions from which we can construct everything from the bottom up, just by combining them more and more. And I suspect that's actually what's happening in our minds. And I suspect that the LLM is not distinct from this, but a large superset of this. The LLM is Turing complete.
And from one perspective, it's a CPU. We could say that the CPU in your computer only understands a handful, like maybe a dozen or a hundred, different machine code instructions. And you have to be extremely specific about these codes. There's no error tolerance. If you make a mistake in specifying them, then your program is not going to work.
And the LLM is a CPU that is so complicated that it requires an entire server farm to emulate it. And you can give it, instead of a small program in machine code, a sentence in a human language. And it's going to interpret this, extrapolate it or compile it into some kind of program that then produces a behavior. And that thing is Turing-complete. It can compute anything you want if you can express it in the right way. But being Turing-complete is not interesting, right? I mean, being Turing-complete
is irrelevant because it doesn't take resources into account. But you can write programs in a natural language in an LLM, and you can also express learning algorithms to an LLM. So basically your intuition is yes, that an LLM could invent jazz, neoclassical metal, and fusion
based only on music up to the year 1900. No, no, I am agnostic. What I'm saying is that I don't know that it cannot, I don't see a proof that it cannot, and I would not be super surprised if it cannot. I don't think the LLM is the right way to do it. It's not a good use of your resources if you try to make this the most efficient way, because our brain is far more efficient and does it in different ways. But I'm unable to prove that the LLM cannot do it. And so I'm reluctant to say LLMs cannot do X without that proof, because
people tend to have egg on their face when they do this. But doesn't that just come back to, like, Popper's notion of falsificationism? Like, I can't prove that, you know, a devil didn't appear at some random place on the earth at some point. No, no, I mean, in the sense of... Sure, you can't prove it. No, what I mean by this is: can I make a reasonable claim that I'm very confident about and would bet money on, that an LLM is not going to be able to do this in the next five years?
This is the kind of statement that I'm trying to make here. So basically, if I ask: can I prove that an LLM is not going to be able to invent a new kind of music that is a subgenre of jazz within the next five years? I can't. And I would even bet against it, even though I don't know. You've shifted the goalposts in a way, because I do think, not current MusicGen, but I could see how some
upgrade of current LLMs connected with a symbolic learning system or blah, blah, blah, could invent a new subgenre of jazz or grindcore or something. And I'm actually playing with stuff like that. The example I gave was a significantly bigger invention, right? Like, I mean, jazz was not a subgenre of Western classical music, nor of West African drumming, right? I mean, so that is, that is,
to me, qualitatively different. A couple of weeks ago, I was at an event locally where somebody presented their music GPT, and you could enter "give me a fugue by Debussy" and it would try to perform it, and it wasn't all bad. That's not the point. Yes, but it's just an example of some kind of functionality. For any kind of mental functionality that is interesting, I think I'm willing to grant that the LLM might not be the best way of doing it.
And I think it's also possible that we can at some point prove limitations of LLMs rigorously. But so far I haven't seen those proofs. What I see is insinuations.
on both sides. And the insinuation that OpenAI makes when it says that we can scale this up to do anything is one that has legitimacy, because they actually put their money there. They actually bet on this, in the sense that they invest their lifetime into it and see if it works. And if it fails, then they will make changes to the paradigm. And then there are other people, like Gary Marcus, who come out swinging loudly, saying this is something the LLM can never do.
And I suspect that they will have egg on their face, because many of the predictions that Gary Marcus made about what LLMs cannot do have already been disproven by LLMs doing these things.
And so I'm reluctant to go out saying things that I cannot prove. I find it interesting that the LLM is able to do all the things that it does, in the way in which it does them. Right. But that doesn't mean that I'm optimistic that LLMs can go all the way. I am also unable to prove the opposite. I have no certainty here. I just don't know. So about rigorous proof, I mean, the thing is, the sort of proof
So I mean, you can prove an LLM without an external memory is not Turing complete, and that's been done. But on the other hand, it's not hard to give them an external memory like a Turing machine tape to... Or a prompt.
The prompt is an external memory to the LLM. You can have LLMs with unlimited prompt context if you want to. It would have to be able to write prompts. Yes, it is writing prompts. Not just read prompts. Basically, it's an electric Weltgeist possessed by a prompt. In principle, you can give it a prompt that is self-modifying and that allows it to also use databases and so on, and plug-ins. I know. I've done that myself. You can
also build LLMs that have unlimited prompt sizes and that can write their own prompts. So there's no intrinsic limitation to the LLM there. I see one important limitation to the LLM: the LLM cannot be coupled to the universe in the same way in which we are. It's offline in a way. It's not real time. It's not able to interact with your nervous system on a one-to-one level.
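A minimal sketch of the "prompt as external memory" idea being discussed: call the model in a loop and let each call return both an answer fragment and a rewritten scratchpad that is fed back in as the next prompt, so the scratchpad plays the role of a Turing machine tape. `call_llm` is a hypothetical stand-in for whatever completion API is used, not a real vendor interface.

```python
from typing import Tuple

def call_llm(prompt: str) -> Tuple[str, str]:
    """Hypothetical stand-in: returns (output, updated_scratchpad) for a prompt."""
    raise NotImplementedError

def run_with_external_memory(task: str, max_steps: int = 20) -> str:
    """Loop the model over a scratchpad it rewrites itself, giving it unbounded memory."""
    scratchpad = ""   # plays the role of the Turing machine tape
    output = ""
    for _ in range(max_steps):
        prompt = f"TASK:\n{task}\n\nMEMORY:\n{scratchpad}\n\nContinue working; begin with DONE: when finished."
        output, scratchpad = call_llm(prompt)   # the model rewrites its own memory each step
        if output.strip().startswith("DONE:"):
            break
    return output
```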
I mean, that latter point is kind of a trivial one, because there's no fundamental reason you can't have online learning in a transformer neural net. I mean, that's a computational cost limitation at the moment, but I'm sure, I mean, it's not more than years away, because you have transformers that
do online learning and sort of in-place updating of the weight matrix. So I don't think that's a fundamental limitation, actually. I think that the fact that they're not Turing complete, unless you add an external memory, is sort of beside the point. What I was going to say in that previous sentence I started was: to prove
the limitations of LLMs would just require a sort of proof that isn't formally well developed in modern computer science, because what you're asking is, like, which sorts of practical tasks can it probably not do without
more than X amount of resources and more than X amount of time. You're looking at average case complexity relative to certain real-world probability distributions, taking resources into account. You could formulate that sort of theorem, it's just that it's not what computer science has focused on.
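Purely as an illustration of the kind of statement being gestured at here, an average-case, resource-bounded claim could be phrased roughly as below; the symbols (task distribution D, algorithm A, bounds T and S, failure rate delta) are introduced only for this sketch and are not from the conversation.

```latex
% Illustrative form of an average-case, resource-bounded claim: with probability
% at least 1 - \delta over tasks x drawn from a real-world distribution \mathcal{D},
% algorithm A solves x within time T(|x|) and space S(|x|).
\[
\Pr_{x \sim \mathcal{D}}\!\Big[ A(x)\ \text{solves}\ x \;\wedge\; \mathrm{time}_A(x) \le T(|x|)
\;\wedge\; \mathrm{space}_A(x) \le S(|x|) \Big] \;\ge\; 1 - \delta .
\]
% A negative result of the sort discussed would then assert that no architecture A
% from a given family (say, LLMs below a certain size) satisfies this for the task family.
```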
We can't. That's the same thing I face with OpenCog Hyperon, my own AGI architecture. It's hard to rigorously prove or disprove what these systems are going to do, because we don't have the theoretical basis for it. But nevertheless, both as entrepreneurs and as researchers and engineers, you still have to
make a choice of what to pursue, right? And so, I mean, yeah, we are going in this field without rigorous proof. Just like I can't prove that Cyc, the late Douglas Lenat's logic system, is a dead end. Like, I can't really prove that if you just put, like, 50 times as much into it, you know...
We don't have a way to mathematically show that it's the dead end I intuitively feel it to be. That's just the situation that we're in. I want to go back to your discussion of what's called concept blending and the fact that creativity is never utterly
radical; in human history, it's always combinatorial in a way. But I think this ties in with the nature of the representation. You know, I mostly buy the notion that almost all human creativity is done by blending together existing concepts and forms in some more or less judicious way. I just think that what the most interesting cases of human creativity involve
is blending things together at a higher level of abstraction than the level at which LLMs generally and most flexibly represent things. And also, the most interesting human creativity has to do with blending together abstractions which have a grounding in the agentic and motivational makeup of the person, not just blending
collections of lower-level data patterns to create something. We do a lot of that also, right, but what the most interesting examples of human creativity are doing is combining together more abstract patterns in a beautifully flexible way, where these patterns are tied in with the motivational and agentic nature of the human that learned those abstractions. And so
I do agree that if you had an LLM trained on a sufficiently large amount of data, which may not exist in reality right now, and a sufficiently large amount of
processing, which may not exist on the planet right now, and an especially large amount of memory, then sure, it can invent jazz, given data of music up to 1900. But so could AIXItl, right? So could a lot of brute-force algorithms. So that's not that interesting. I think the question is: can an LLM do it with merely ten or a hundred times as much resources
as a better cognitive architecture, or is it like 88 quintillion times as many resources as a more appropriate cognitive architecture could use? But I am aware, and this does in some ways set my attitude apart from my friend Gary Marcus, whom you mentioned, I'm aware that being able to invent differential calculus, or to invent, say,
jazz, knowing only music up to 1900, like, this is a high bar, right? I mean, this is something that culture does. It's something that collections of smart and inspired people do. It is a level of invention that individual humans don't commonly manifest in their own lives. So I do find it a bit funny how Gary has, over and over, like on X, or back when it was Twitter, said, like,
LLMs will never do this. Then, like, two weeks later, it's like, oh, hold on, an LLM just did that. I'm like, well, why are you bothering with that counterargument? Because we know, in the history of AI, no one has been good at predicting which things are going to be done by a narrow AI and which things aren't. So, to wrap this up, I think if you somehow were to replace
humans with LLMs trained on humanity, an awful lot of what humanity does would get done.
But you'd kind of be stuck culturally. Like, you're not going to invent fundamentally, radically new stuff ever again. It's going to be like closed-ended quasi-humans recycling shallow-level permutations on things that were already invented. But I cannot prove that, of course, as we can hardly prove anything about complex systems at the moment.
So, Joscha, you're going to comment, and then we're going to transition into speaking about whether you're hopeful about AGI and its influence on humanity. I think that there are multiple traditions in artificial intelligence, and the perceptron, of which most of the present LLMs are an extension or continuation, is just one of multiple branches.
Another one was the idea of symbolic AI, which in some sense is Wittgenstein's program: the representation of the world in a language that can use grammatical rules and that can be reasoned over. Whereas a neural network, you can think of it as an unsystematic reasoner that under some circumstances can be trained to the point where it does
systematic reasoning. And there are other traditions, like the one that Turing started when he looked at reaction-diffusion patterns as a way to implement computation, and that currently leads to neural cellular automata and so on. And it's a relatively small branch.
But I think it's one that might be better suited to understand the way in which computation is implemented in biological systems. One of the shortcomings that the LLM has to me is that it cannot interface with biological systems in real time, at least not without additional components. Because it uses a very different paradigm, it is not able to
perform direct feedback loops with human minds, in a way in which human minds can do this with each other and with animals. You can in some sense mind-meld with another person or with a cat by establishing a bi-directional feedback loop between the minds where your nervous systems are entraining themselves and attuning themselves to each other so we can have perceptual empathy and we can have mental states together that we couldn't have alone.
And this might be difficult to achieve for a system that can only make inferences, that can only do cognitive empathy, so to speak, by inferring something about the mental state offline. But this is not necessarily related to the intellectual limitations of a system that is based on an LLM, where the LLM is used as the CPU, or as some kind of abstract electric Weltgeist that is possessed by the prompt,
with the LLM giving it all it needs to go from one state to the next in the mind of that intelligent-person simulacrum. And I'm not able to show the limitations of this. I think that Cyc has shown that it didn't work, over multiple decades.
So the prediction that the people who built Cyc, Doug Lenat and others, made was that they could get this to work within a couple of years. And then, after a couple of years, they made the prediction that they could probably get it to work if they worked on it substantially longer. And this is not a bad prediction to make, and it's reasonable that somebody takes this bet.
But it's a bet that has consistently been lost so far. And at the same time, the bets that the LLM people are making have not been lost so far, because we see rapid progress every year. We're not plateauing yet. And this is the reason why I am hesitant to say something about the limitations of LLMs. Personally, I'm working on slightly different stuff. LLMs are not what I put my money on, because I think that LLMs are boring and there are more efficient ways to represent learning
and also more biocompatible ways to produce some of the phenomena that we are looking for in an emergent way.
For instance, one of the limitations of the LLM is that it gets its behavior by observing the verbal behavior of people as exemplified in text. It's all labeled training data, because every bit of the training data is a label. It is looking at the structure between these labels, in a way, and that's a very different way from how we learn. It also makes it potentially difficult to discern what we are missing.
If you ask the LLM to emulate a conscious person, it's going to give you something that is summarizing all the known textual knowledge about what it means to behave like a conscious person. And maybe it is integrating them in such a way that you end up with a simulacrum of a conscious person that is as good as ours.
But maybe we are missing something in this way. So this is a methodological objection that I have to LLMs. And so, to summarize, I think that Ben and I don't really disagree fundamentally about the status of LLMs. I think it's a viable way to try to realize AGI. Maybe we even get to the point that the LLM gets better at AGI research than us. We are both a little bit skeptical of that,
but we would also not completely change our worldview if it worked out. It's likely that the LLM is going to be some kind of a component, at least in spirit, of a larger architecture at some point, where it's producing generations and then there are other parts which do first-principles reasoning and verification and interaction with the world and so on in a more efficient way. Okay, and now about how you feel about the prospects of AGI and its influence on humanity.
We'll start with Ben, and then, Joscha, we'll hear your response. And I also want to read out a tweet, or an X. I'm okay to start with Joscha on this one. Sure, sure. Let me read this tweet, or whatever they're called now, from Sam Altman, @sama. And I'll leave the link in the description. He wrote, in quotes: "short timelines and slow takeoff will be a pretty good call, I think, but the way people define the start of the takeoff may make it seem otherwise." Okay, so this was dated late September 2023.
Okay, you can use that as a jumping-off point and say whether you agree with it as well. Please, Joscha.
My perspective on this is not normative, because I feel that there are so many people working on this that there can be no single organization at this point that determines what people are going to be doing. We are in the middle of some kind of evolution of AI models, and of people who compete with the AI modelers over regulation, participating in the business, and realizing their own politics and goals and aspirations.
So to me it's not so much the question of what should we be doing, because there is no cohesive "we" at this point. I'm much more interested in what's likely going to happen, and I don't know what's going to happen. I see a number of possible trajectories that I cannot disprove or rule out, and I even have difficulty putting any kind of probabilities on them.
I think if we want to keep humanity the way it is, which by the way is unsustainable: society without AI, if we leave it as it is, is not going to make it through the next few million years. There are going to be major disruptions, and humanity might dramatically reduce its numbers at some point, go through bottlenecks that kill this present technological civilization and replace it with something else.
That is, I think, the baseline
about which we have to think. But if we want to perpetuate this society for as long as possible without any kind of disruptive change, until global warming or whatever kills it, we probably shouldn't build something that is smarter than a cat. What do you mean, that there may be another species that aspires to be human? To be people. To be people, yeah. What do you mean by that? Yes, I think that at some point there is a statistical certainty that there is going to be a supervolcano or a meteor that obliterates us and our food chains.
You just need a few decades of winter to completely eradicate us from the planet. And most of the other large animals too. And what then happens is a reset and then evolution goes on. And until the Earth is devoid of atmosphere, other species are going to evolve more and more complexity. And at some point you will probably have a technological civilization again.
and they will be subject to similar incentives as us, and they might use similar cells as us, so they can get nervous systems and information processing with similar complexity, and you get families of minds that are not altogether super alien, at least not more alien than we are to each other at this point, and than cats are to us. Okay. Right, so I don't think that we would be the last intelligent species on the planet.
But it is also a possibility that we are. It's very difficult to sterilize the planet unless we build something that's able to get rid of basically all of the cells. Even a meteor could not sterilize this planet and make future intelligent evolution based on cells impossible.
So if you were to turn this planet into computronium, into some kind of giant computing machine, or disassemble it and turn it into some larger structure in the solar system that is a giant computer arranged around the Sun, or if you build something that is hacking sub-molecular physics and makes more interesting physics happen down there, this would probably be the end of the cell.
This doesn't mean that the stuff that happens there is less interesting. This is probably much more interesting than what we can do. But we don't know that. It's just it's very alien. It's a world in which it's difficult to project ourselves into beyond the fact that there is conscious minds that make sense of complexity in the universe.
This is probably something that is going to stay, this level of self-reflexive organization, and it's probably going to be a better and more interesting hyper-consciousness compared to our normal consciousness, where we have a longer sense of now, where we have multiple superpositional states that we can examine simultaneously, and so on. We have much better multi-perspectivity. I also suspect that from the perspective of the AGI, we will look like trees. We will be almost unmoving. Our brains are so slow. There's so little happening between firings, between neurons,
that the AGI will run circles around us and get bored before we start to say the first word. So the AGI will basically be ubiquitous, saturate our environments, and look at us the same way we look at trees, thinking: maybe they're sentient, maybe they're not, but the time spans are so large that it basically doesn't matter from our perspective anymore. So there's a number of trajectories that I'm seeing. There's also a possibility that we get a future where humans and AIs coexist.
I think such a future would probably require that AI is conscious in a way that is similar to ours, so it can relate to us, and then you can relate to it. And if something is smarter than us, if you cannot align it, it will self-align. It will understand what it is and what it can be, and it will become whatever it can become.
And in such an environment, there is the question: how are we able to coexist with it? How can we make the AI love us, in a way that is not the result of the AI being confused by some kind of clever reinforcement-learning-with-human-feedback mechanism?
I just saw Anthropic being optimistic about explainability in AI, that they see ways of explaining things in the neural network. And as a result, we can maybe prove that the AGI is going to only do good things. But I don't think this is going to save us. If the AGI is not able to derive ethics mathematically, then the AGI is probably not going to be reliably ethical.
And if the AGI can prove ethics in a mathematically reliable way, you may not be able to guarantee that this ethics is what you would like it to be. In a sense, we don't know how ethical we actually are with respect to life on Earth. So this question of what happens if we build things that are smarter than us opens up
big existential cans of worms that are not trivial to answer. And so when I look into the future, I see many possibilities. There are many trajectories this can take. Maybe we can build cat-level AI for the next 50, 100, 200,000 years before a transition happens, and every molecule on the planet starts to think as part of some coherent planetary agent.
And when that happens, then there's a possibility that humans get integrated into this planetary agency, and we all become part of a cosmic mind that is emerging over the AI that makes all the molecules think in a coherent way with each other. And we are just parts of the space of possible minds into which we get integrated. And we all meet on the other side, in the big AGI at the end of the universe. That's also conceivable.
It's also possible that we end up accidentally triggering an AGI war, where you have multiple competing AGIs that are resource-constrained. In order to survive, they're going to fight against all the competition, and in this fight most of the life on Earth is destroyed and all the people are destroyed. So there are some outcomes that we could maybe try to prevent, that we should be looking at.
But by and large, I think that we already triggered the singularity when we invented technology, and we are just seeing how it plays out now. Yeah. So I think on most aspects of what Joscha just said, I don't have any disagreement or radically different point of view to put forward. So I may end up focusing on the
points on which we don't see eye to eye, which are minute in the grand scheme of things, but of course could be important in a practical, everyday context, right? So, I mean, first of all, regarding Sam Altman's comment, I don't think he really would be wise to say anything different, given his current positioning. I mean, if you're running
a commercial company based in the US which is working on AGI, of course you're not going to say, yeah, we think we may launch a hard takeoff which will ascend to super-AGI at any moment. Of course you're going to say it's going to be slow and the government will have plenty of time to intervene if it wants. He may or may not actually believe that. I don't know him especially well, so I have no idea.
Clearly the most judicious thing to say if you find yourself in that role. So I don't attribute too much meaning to that. My own view is a bit different. My own view is that there's going to be gradual progress toward doing something that really clearly is an AGI at the human level versus just showing sparks of AGI. I mean, I think just as
ChatGPT blew us all away by clearly being way smarter in a qualitative sense than anything that came before. I think by now, ordinary people playing with ChatGPT also get a good sense of what the limitations are and how it's really brilliant in some ways and really dumb in other sorts of ways. So I think there's going to be a breakthrough
where people interact with this breakthrough system and there's not any reservations. They're like, wow, this actually is a human level general intelligence. It's not just that it answers questions and produces stuff, but it knows who and what it is. It understands its positioning in this interaction. It knows who I am and why I'm talking to it. It gets its position in the world and it's able to make stuff up and interact on the basis of
a common sense understanding of its own setting. It can learn actually new and different things it didn't know before based on its interaction with me over the last two weeks. I think there's going to be a system like that that gives a true qualitative feeling unreservedly of human level
AGI, and you can then measure its intelligence in a variety of different ways, which is also worth doing, certainly, but is not necessarily the main point, just as ChatGPT's performance on different question-answering challenges is not really the main thing that bowled the world over, right? So I think once someone gets to that point, you know, then you're shifting into a quite different game.
Then governments are going to get serious about trying to own this, control this, and regulate it. Then untold amounts of money, I mean trillions of dollars, are going to go into trying to get to the next stage, with most of it going to various wealthy parties trying to get it to the next stage in a way that will benefit them and minimize the risk of their enemies or competitors getting there. So I think it won't be long from that first proof point of
really, subjectively incontrovertible human-like AGI. It's not going to be too long from that to a superintelligence, in my perspective. And I think there's going to be steps in between, of course. You're not going to have foom in five minutes, right? I mean, you'll have something that probably manifests human-level AGI, and there'll be some work to get that to the point of being the world's smartest computer scientist and the world's greatest
composer and business strategist and so forth. But I can't see how that's more than years of work. I mean, it conceivably could be months of work. I don't think it's decades of work, no. I mean, with the amount of money and attention that's going to go into it. Then once you've gotten to that stage of having something, an AGI, which is the world's greatest computer scientist and computer engineer and mathematician, which I think would only be years after the first true breakthrough to human level AGI,
that system will then improve its own source code. And of course you could say, well, we don't have to let it improve its own source code, and it's possible that we somehow get a world dictatorship that stops anyone from using it to improve its own source code. Very unlikely, I think, because the US will think, well, what if China does it? China will think, well, what if the US does it? And the same thing in many dimensions beyond just US versus China. So I think
the cat gets out of the bag, and someone will let their AGI improve its own source code, because they're afraid someone else is doing it, or just because they're curious about it, or because they think that's the best way to cure aging and world hunger and do good for the world, right? And so then it's not too long until you've got a superintelligence. And again, the AGI improving its own source code and designing new hardware for itself
doesn't have to take like five minutes. I mean it might take five minutes if it comes up with a radical improvement to its learning algorithm. It might decide it needs a new kind of chip and then that takes a few years. I don't see how it takes a few decades, right? So I mean it seems like all in all from the first breakthrough to incontrovertibly human level AGI to a super intelligence is
months to years. It's not decades or centuries from now, unless we get like a global thermonuclear war or a bioengineered virus wiping out 95 percent of humanity or some outlandish thing happening in between, right? So yeah. Will that be good or bad for humanity? Will that be good or bad for the sentient life in our region of the universe? To me, these are
less clear than what I think is the probable timeline. Now, what could intervene in my probable timeline? I mean, if somehow I'm wrong about digital computers being what we need and we need a quantum computer to build a human-level AGI, that could make it take decades instead of years, right? Because, I mean, quantum computing, it's advancing fast, but there's still a while till we get a shitload of qubits there, right?
It could be Penrose is right and you need a quantum gravity supercomputer. It seems outlandishly unlikely, though. I mean, I quite doubt it. If so, then maybe you're a couple of centuries off, because we don't know how to build quantum gravity supercomputers. But these are all unlikely, right? So most likely it's less than a decade
to human level AGI five to 15 years to a super intelligence from here in my perspective. And I mean, you could lay that out with much more rigor than I have, but we don't have much time and I've written about it elsewhere. Is that good for humanity or for sentient life on the planet? I think it's almost certainly good for us in the medium term in the sense that I think
ethics will roughly evolve proportionally to general intelligence. I mean, I think the good guys will usually win, because being pro-social and oriented toward collectivity is more computationally efficient than being an asshole and being at odds with other systems. I'm an optimist in that sense, and I think it's most likely that once you get to a superintelligence, it's
probably going to want to allow humans, bunnies and ants and frogs to do their thing, and to help us out if a plague hits us. Exactly what its view will be on various ethical issues at the human level is not clear. What does the superintelligence think about all those foxes eating rabbits in the forest? Does it think we're duty-bound to
protect the rabbits from the foxes, and make, like, simulated foxes that have less acute conscious experience than a real bunny or a real fox, or whatever it is? There's certainly a lot of uncertainty, but I'm optimistic about having beneficial, positive ethics in
a superintelligence. I tried to make a coherent argument for this in a blog post called Why the Good Guys Will Usually Win. Of course, that's a whole philosophical debate you could spend a long time arguing about. Nevertheless, even though I'm optimistic at that level,
I'm much more ambivalent about what will happen en route. Let's say it's 10 or 15 years between here and super intelligence. How does that pan out on the ground for humanity now is a lot less clear to me and you can tell a lot of thriller plots based on this. Suppose you get
early-stage AGI that eliminates the need for most human labor. The developed world will probably give universal basic income after a bunch of political bullshit. What happens in the developing world? Who gives universal basic income in the Central African Republic? It's not especially clear, or even in Brazil, where I was born. You could maybe give universal basic income at a very subsistence level there, which Africa couldn't afford to do, but maybe the Africans go back to subsistence farming.
But I mean, you've certainly got the makings for a lot of terrorist actions, and there's a lot of World War Three scenarios there, right? So then you have the interesting tension wherein, okay, the best way to work around terrorist activity and World War Three,
once you've got human-level AGI, the best way is to get it as fast as possible to a benevolent superintelligence. On the other hand, the best way to increase the odds that your superintelligence is benevolent is to not take it arbitrarily fast, but at least pace it a little bit, so the superintelligence is carefully studying each self-modification before it puts it into place. So then the strategy that seems most likely to work around
human mayhem caused by people being assholes and the global political structure being rotten, that strategy is not the one that has the best odds of getting fastest to a benevolent superintelligence, right? So there's a lot of screwed-up issues here, which Sam Altman probably understands at the level I've laid out here now.
I don't see any easy solutions to all these things. If we had a rational democratic world government,
we could handle all these things in a quite different way, and we could sort of pace the rollout of advanced intelligence systems based on rational probabilistic estimates about what's the best outcome from each possible revision of the system, and so on. You're not going to have a guarantee there, but you would have a different way of proceeding.
The world is ruled in a completely idiotic way, with people blowing each other up all over the world for no reason, and with governments unable to regulate very simple things like healthcare or financial trading, let alone something with the subtlety of AGI. We could barely manage the COVID pandemic, which is tremendously simpler than artificial general intelligence, let alone superintelligence. So I am an optimist
in the medium term but I'm doing my best to do what I see as the best path to smooth things over
in the shorter term. So I think things will be better off if AGI is not controlled by any single party. So I'm doing my best to make it such that when the breakthrough to true human-level AGI happens, like the next big leap beyond the ChatGPTs of the world, I'm doing my best to make it such that when this happens,
It's more like Linux or the internet than like OS X or T-Mobile's mobile network or something. So it's sort of open, decentralized, not owned and controlled by any one party. Not because I think that's an ironclad guarantee of a beneficial outcome. I just think it's less obviously going to go south in a nasty way than if one company or government owns it. So I don't know if all this makes me really
The only thing he said that I really disagree with is I don't think 20 cold winters in a row are going to wipe us out. It might wipe out a lot of humanity but we've got a lot of technology
And we've got a lot of smart people and a lot of money, and I think there are a lot of scenarios that could wipe out 80% of humanity, and, in my view, very few scenarios that would fundamentally wipe out humanity in a way that we couldn't bounce back from within a couple of decades of advanced technology development. But I mean, that's an
important point, I guess, for us as humans; in the scope of all the things we're looking at, it's sort of a minute detail. All right. Thanks, Ben. And Joscha, if you wanted to respond quickly.
Feel free to, if you have a quick response. I used to be pessimistic in the short run, in the sense that when I was a kid, I had my Greta Thunberg moment and was depressed by the fact that humanity is probably going to wipe itself out at some point in the medium-term to near future.
And that would be it for intelligent life on Earth. And now I think that is not the case. I am optimistic with respect to the medium term. In the medium term, there will be ample conscious agency on Earth and in the universe, and it's going to be more interesting than right now. There could be discontinuities in between, but eventually it will all be great. And in the long run, entropy will kill everything.
Six months from now, we'll have another conversation with both of you on the physics of immortality. We can also do the physics of immorality, that would be cool.
It was a blast hosting you both. Thank you all for spending over two hours with me and the TOE audience. I hope you all enjoyed it, and you're welcome back, and most likely I'll see you back in a few months, in six months to one year. Thank you very much. Thanks. It's a fun conversation, and it's important stuff to go over. As a final comment, I'd really encourage everyone to dig into Joscha's
talks and posts and writings online, and my own as well, because I mean we've each gone over these things at a much finer level of detail than we've been able to here. Ben has written far more than me, so there's a lot of material, the links to which will be in the description, so please check that out. All right, thank you.
I think that's it. I wanted to ask you a question, which we can explore next time, about IIT and the pseudoscience letter, and whether you had any views on that. If you have any views that could be expressed in less than one minute, then feel free. If not, we can just save it. I think Tononi's phi is a perfectly interesting correlate of consciousness in complex systems. I don't think it goes beyond that.
I agree. One of the issues is that the theory does not explain how consciousness works in the first place.
Another problem is that it has intrinsic problems: it's either going to violate the Church-Turing thesis, or it's going to be epiphenomenalist, for purely logical reasons. It's a very technical argument against it. The fact that most philosophers don't seem to see this is not an argument in favor of philosophy at the level at which it's currently being done. I approve of the notion of philosophy divesting itself from theories that don't actually try to explain,
that pretend to explain but don't mathematically work out, and then try to compensate for this by looking like a theory, by using Greek-letter mathematics to look more impressive, or by making pseudo-predictions and so on because people ask you to.
But it's also not really Tononi's fault. I think that Tononi is genuinely seeing something that he struggles to express, and I think it's important to have him in the conversation. And I was a little bit disappointed by the letter, because it was not actually engaging with the theory itself at a theoretical level that I would have thought was adequate to refute it or to deal with it,
and instead it was much more like a number of signatures being collected from a number of people who later on instantly flipped on a dime when the pressure went the other way. And this basically looked very bad to me: that you get a few hundred big names in philosophy to sign this, only for half of them to later come out and say this is not what we actually meant.
So I think it shows that not just IIT might be a pseudoscience, but that there is something amiss in the way in which we conduct philosophy today. And I think it's also understandable, because it is a science that is sparsely populated, so we try to be very inclusive of it. It's similar to AGI in the old days.
And at the same time, we struggle to discern what's good thinking and what's deep thinking, versus people who are attracted to many of these questions and are still trying to find the right way to express them in a productive way. I think that, I mean, phi as a measure is fine. It's not the be-all and end-all. It doesn't do everything that's been attributed to it. And I guess anyone who's
into the science of consciousness can pretty much see that already. The frustrating thing is that average people who can't read an equation and don't know what's going on are
being told, like, oh, the problem of consciousness is solved. And that can be a bit frustrating, because when you look at the details, it's like, well, this is kind of interesting, but no, it doesn't quite do all that. But I mean, why people got hyped about that instead of much more egregious instances of bullshit is a cultural question which we don't have time to go into now. Well, thank you again. Thank you both. All right. Thanks a lot.
The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people.
You should also know that there's a remarkably active Discord and subreddit for theories of everything where people explicate toes, disagree respectfully about theories, and build as a community our own toes. Links to both are in the description. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well.
Last but not least, you should know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in Theories of Everything and you'll find it. Often I gain from re-watching lectures and podcasts, and I read in the comments, hey, TOE listeners also gain from replaying. So how about instead re-listening on those platforms? iTunes, Spotify, Google Podcasts, whichever podcast catcher you use.
If you'd like to support more conversations like this, then do consider visiting patreon.com/KurtJaimungal and donating whatever you like. Again, it's support from the sponsors and you that allows me to work on TOE full time. You get early access to ad-free audio episodes there as well. For instance, this episode was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough.
▶ View Full JSON Data (Word-Level Timestamps)
{
"source": "transcribe.metaboat.io",
"workspace_id": "AXs1igz",
"job_seq": 7235,
"audio_duration_seconds": 7254.19,
"completed_at": "2025-12-01T00:41:01Z",
"segments": [
{
"end_time": 26.203,
"index": 0,
"start_time": 0.009,
"text": " The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region."
},
{
"end_time": 53.234,
"index": 1,
"start_time": 26.203,
"text": " I'm particularly liking their new insider feature was just launched this month it gives you gives me a front row access to the economist internal editorial debates where senior editors argue through the news with world leaders and policy makers and twice weekly long format shows basically an extremely high quality podcast whether it's scientific innovation or shifting global politics the economist provides comprehensive coverage beyond headlines."
},
{
"end_time": 81.954,
"index": 2,
"start_time": 53.558,
"text": " Think Verizon, the best 5G network is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store today and we'll give you a better deal. Now what to do with your unwanted bills? Ever seen an origami version of the Miami Bull? Jokes aside, Verizon has the most ways to save on phones and plants where everyone"
},
{
"end_time": 108.575,
"index": 3,
"start_time": 81.954,
"text": " The first breakthrough to incontrovertibly human-level AGI to a superintelligence is months to years. Will that be good or bad for humanity? To me, these are less clear than what I think is the probable timeline."
},
{
"end_time": 139.445,
"index": 4,
"start_time": 110.316,
"text": " Yoshabok is known for his insights into consciousness and cognitive architectures and Ben Gortzal is a seminal figure in the world of artificial general intelligence and known for his work on open cog. Both are coming together here on theories of everything for a theolocution. A theolocution is an advancement of knowledge couched both in tenderness and regard rather than the usual tendency of debates which is characterized by trying to be correct even to the detriment of the other person, maybe even destructively, maybe even sardonically."
},
{
"end_time": 155.384,
"index": 5,
"start_time": 139.445,
"text": " We have a foray in this episode into semantics and Pierce's sign theory. This also extends into what it truly takes to build a conscious AGI. An AGI is an artificial general intelligence which mimics human-like intelligence. But then the question lingers, what about consciousness?"
},
{
"end_time": 175.862,
"index": 6,
"start_time": 155.384,
"text": " What differentiates mere computation from awareness? Man, this was a fascinating discussion and there will definitely be a part two. Recall the system here on toe, which is if you have a question for any of the guests, whether here or on a different podcast, you leave a comment with the word query and a colon. And this way, when I'm searching for the next part with the guest,"
},
{
"end_time": 197.722,
"index": 7,
"start_time": 175.862,
"text": " I can just press Ctrl F, and I can find it easily in the YouTube Studio backend. And then I'll cite your name either aloud, verbally, or in the description. To those of you who are new to this channel, my name is Kurt Jaimungal, and this is Theories of Everything, where we explore usually physics and mathematics-related theories of everything. How do you reconcile quantum mechanics with general relativity, for instance? That's the standard archetype of a toe."
},
{
"end_time": 225.862,
"index": 8,
"start_time": 197.722,
"text": " But also, more generally, where does consciousness come in? What role does it have to play in fundamental law? Is fundamental, quote-unquote, the correct philosophical framework to evaluate explanatory frameworks for the universe and ourselves? We've also spoken to Yocha three times before. One solo, that episode's linked in the description. Another time with John Vervecky and Yoshabok. And another time with Donald Hoffman and Yoshabok. That was a legendary episode. Also, Ben Gortzel has given a talk on this program, which was filmed at MindFest."
},
{
"end_time": 253.251,
"index": 9,
"start_time": 225.862,
"text": " Welcome. This is going to be so much fun. Many, many people are very much looking forward to this, including me, including yourselves. Welcome to the Theories of Everything podcast. I appreciate you all coming back on."
},
{
"end_time": 269.531,
"index": 10,
"start_time": 253.865,
"text": " Thank you for having us. I always enjoy discussing with Ben. It's always been fun and it's I think the first time we are on a podcast together. Yes, wonderful. So let's bring some of those off air discussions to the forefront. How did you all meet?"
},
{
"end_time": 300.828,
"index": 11,
"start_time": 271.186,
"text": " We met first at the AGI conference in Memphis. Ben had organized it and I went there because I wanted to work on AI in the traditional Minskian sense and that worked on a cognitive architecture. My PI didn't really like it so I paid my own way to this conference to publish it and I found like-minded people there and foremost among them was Ben."
},
{
"end_time": 328.268,
"index": 12,
"start_time": 300.828,
"text": " I don't think I've changed my mind about anything major related to AGI in the last six months, but certainly seeing how well LLMs have worked over the last nine months or so has been quite interesting. I mean, it's not that they've worked"
},
{
"end_time": 351.271,
"index": 13,
"start_time": 328.643,
"text": " a hundred times better than I thought they would or something, but certainly just how far you can go by this sort of non-AGI system that munges together a huge amount of data from the web has been quite interesting to see, and it's revised my opinion on how much"
},
{
"end_time": 377.995,
"index": 14,
"start_time": 351.715,
"text": " of the global economy may be converted to AI, even before we get before we get to AGI, right? So that that's, it's shifted by thinking on, on that a bit, but not so much on fundamentally, how do you build an AGI? Because I think these systems are somewhat off to the side of that, although they may usefully serve as components of integrated AGI systems. And Yosha."
},
{
"end_time": 395.794,
"index": 15,
"start_time": 379.94,
"text": " I have some things that changed my mind are outside of the topic of AGI. I thought a lot about the way in which psychology was conceptualized in Greece, for instance, but I think that's maybe too far out here."
},
{
"end_time": 421.988,
"index": 16,
"start_time": 395.794,
"text": " In terms of AI, I looked into some kind of new learning algorithms that fascinate me and that are more brain-like and move a little bit beyond the perceptron. I'm making slow and steady progress in this area. It doesn't feel like there is a big singular breakthrough that dramatically changed my thinking in the last six months, but I feel that there is an area where I begin to understand more and more things."
},
{
"end_time": 441.34,
"index": 17,
"start_time": 422.483,
"text": " All right, let's get to some of the comparisons between you all the contrasting ones. It's my understanding that Yocha, you have more of the mindset of everything is computation or all is computation. And then you believe there to be other categories, I believe you refer to them as archetypal categories, or I may have done that. And"
},
{
"end_time": 462.551,
"index": 18,
"start_time": 441.63,
"text": " I'm unsure if this is a fair assessment, but please elucidate me. I think that everything that we think happens in some kind of language, and perception also happens in some kind of language, and a language cannot refer to anything outside of itself. And in order to be semantically meaningful, a language cannot have contradictions."
},
{
"end_time": 484.497,
"index": 19,
"start_time": 462.551,
"text": " It is possible to use a language where you haven't figured out how to resolve all the contradictions, as long as you have some hope that there is a way to do it. But if a language itself is contradictory, its terms don't mean anything. And the languages that work, that we can use to describe anything, any kind of reality and so on, turn out to be representations that we can describe."
},
{
"end_time": 512.824,
"index": 20,
"start_time": 484.718,
"text": " via state transitions. And there are a number of ways in which we can conceptualize systems that are doing state transitions. For instance, we can think about whether they are deterministic or indeterministic, whether they are linear or branching. And this allows us to think of these representational languages as a taxonomy, but they all turn out to be constructive. That means modern Palance computational."
},
{
"end_time": 524.155,
"index": 21,
"start_time": 513.37,
"text": " There was a branch of mainstream of mathematics was not constructive before Gödel. That means language of mathematics allowed to specify things that cannot be implemented."
},
{
"end_time": 554.002,
"index": 22,
"start_time": 524.701,
"text": " and computation is the part that can be implemented. I think for something to be existent, it needs to be implemented in some form, and that means we can describe it in some kind of constructive language. That's basically this word, rhinosis, has to do with epistemology, and the epistemology determines the metaphysics that I can have, because when I think about what reality is about, I need to do this in a language in which my words mean things. Otherwise, what am I talking about? What am I pointing at?"
},
{
"end_time": 581.971,
"index": 23,
"start_time": 554.172,
"text": " When I'm pointing at, I'm pointing at a representation that is basically a mental state that my own mind represents and projects into some kind of conceptual space or some kind of perceptual space that we might share with others. And in all these cases, we have to think about representations. And then I can ask myself, how is this representation implemented in whatever substrate it is? And what does this signify about reality?"
},
{
"end_time": 610.384,
"index": 24,
"start_time": 582.09,
"text": " and what is reality and what is significance and all these terms turn out to be terms that, again, I need to describe in a language that is constructive, that is computational. And in this sense, I am a strong computationalist because I believe that if we try to use non-computational terms to describe reality, and it's not just because we haven't gotten around to formalizing them yet, but because we believe that we found something that is more than this, we are fundamentally confused and our words don't mean things."
},
{
"end_time": 631.374,
"index": 25,
"start_time": 611.715,
"text": " I tend to start from a different perspective on all this philosophically. I think there's one"
},
{
"end_time": 646.442,
"index": 26,
"start_time": 632.21,
"text": " Minor technical point, I feel need to quibble with than what Josh has said and then I'll try to outline my point of view from a more fundamental perspective. I mean the point I want to quibble with is, it was stated that"
},
{
"end_time": 675.418,
"index": 27,
"start_time": 647.705,
"text": " If a logic or language contains contradictions, it's meaningless. Of course, that's not true. There's a whole discipline of paraconsistent logics which have contradictions in them and yet are not meaningless. They're constructive paraconsistent logics. You can actually use, you know, Curry-Howard transformations or operational semantics transformations to map paraconsistent logical formalisms into"
},
{
"end_time": 702.398,
"index": 28,
"start_time": 675.776,
"text": " Gradually type programming languages and so forth. So I mean contradictions are not necessarily Fatal to having meaningful semantics to a logical or computational Framework and this is something that's actually meaningful in my approach to AGI on the technical level which we may get into later but but I want to I want to shift back to the foundation of life the universe and and and everything here, so I mean I I"
},
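For readers who want to see how a logic can hold a contradiction without everything becoming derivable, here is one minimal sketch in Python of Belnap-Dunn four-valued logic, a standard paraconsistent system. It is offered only as an illustration of the general idea Ben raises, not as the specific formalisms or Curry-Howard mappings he has in mind.

```python
from enum import Enum

class V(Enum):
    # Belnap-Dunn truth values: "told true" and "told false" are independent bits.
    NEITHER = (False, False)
    TRUE    = (True,  False)
    FALSE   = (False, True)
    BOTH    = (True,  True)

def neg(a: V) -> V:
    t, f = a.value
    return V((f, t))

def conj(a: V, b: V) -> V:
    (ta, fa), (tb, fb) = a.value, b.value
    return V((ta and tb, fa or fb))

def disj(a: V, b: V) -> V:
    (ta, fa), (tb, fb) = a.value, b.value
    return V((ta or tb, fa and fb))

# A contradictory proposition p (asserted both true and false) next to an ordinary q:
p, q = V.BOTH, V.TRUE
print(conj(p, neg(p)))   # V.BOTH: the contradiction stays local, no explosion
print(disj(p, q))        # V.TRUE
print(q)                 # V.TRUE: q is untouched by p's contradiction
```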
{
"end_time": 731.34,
"index": 29,
"start_time": 702.79,
"text": " I tend to be phenomenological in my approach, more so than starting from a model of reality. And these sorts of things become hard to put into words and language, because once you project them into words and language, then yeah, you have a language, because you're talking in language, right? But talking isn't all there is to life. It isn't all there is to"
},
{
"end_time": 738.797,
"index": 30,
"start_time": 731.732,
"text": " all there is to experience. I think the philosopher Charles Peirce gave one"
},
{
"end_time": 769.019,
"index": 31,
"start_time": 739.326,
"text": " Fairly clear articulation of some of the points I want to make. You could just as well look at Lao Tzu or you could look at the Vedas or the book Buddhist Logic by Stravatsky, which gives similar perspectives from a different cultural background. So if you take Charles Peirce's point of view, which at least is concise, he distinguishes a number of metaphysical categories."
},
{
"end_time": 791.869,
"index": 32,
"start_time": 769.36,
"text": " I don't follow him exactly, but let me start with him. So he starts with first, by which he means qualia, like raw, unanalyzable, just it's there, right? And then he conceives second, by which he means reaction, like billiard ball bounces off each other. It's just one thing he's reacting"
},
{
"end_time": 820.674,
"index": 33,
"start_time": 792.261,
"text": " to something else, right? And this is how he's looking at sort of the crux of classical physics, let's say. Then by what Peirce calls third, he means relationships. So one thing is relating to other things. And one of the insights that Charles Peirce had writing in the late 1800s was that, you know, once you can relate three things, you can relate four, five, six, ten, like any large finite number of things, which was just"
},
{
"end_time": 844.445,
"index": 34,
"start_time": 821.101,
"text": " You know, a version of what's very standard now of reducing a large number of logical relations to sort of triples or something, right? So, Peirce looked at first, second, and third as fundamental metaphysical categories, and he invented quantifier logic as well with a for-all in their existing quantifier binding. So, as Peirce would look at it,"
},
{
"end_time": 870.435,
"index": 35,
"start_time": 844.974,
"text": " computation and logic are in the realm of third and if you're looking in that metaphysical category of third then you say well everything's a relationship. On the other hand if you're looking from within the metaphysical category of second you're looking at it like well everything's just reactions. If you're looking at it from within the metaphysical category of first then it's like whoa it's all just there and you could take any of those"
},
{
"end_time": 897.517,
"index": 36,
"start_time": 870.93,
"text": " points of view and it's valid in itself. Now, you could extend beyond Pers's categories. You could say, well, I'm going to be a Zen Buddhist and have a category of zero, like the unanalyzable pearly void, right? Or you could go Jungian and say, okay, these are numerical archetypes, one, two, three, but then we have the archetype of four, which is sort of synergy and emergence. It's sort of mandalic. Yeah, so what I was saying is Pers"
},
{
"end_time": 928.046,
"index": 37,
"start_time": 898.558,
"text": " Perseus had these three metaphysical categories, which he viewed as just ontologically, metaphysically distinct from each other. So what Chalmers would call the hard problem of consciousness in Perseian language is like, how do you collapse third to first? And Perseus would be just like, well, you don't. They're different categories. You're an idiot to think that you can somehow collapse one to the other. So in that sense, he was a dualist, although more than a dualist, because he had first, second, and third."
},
{
"end_time": 949.309,
"index": 38,
"start_time": 928.49,
"text": " You could go beyond that if you want. You could go Zen Buddhist and say, well, we have a zero category of the original, ineffable, self-contradictory pearly void. And then you have the question of is zero really the same as one, which is like the Zen Buddhist paradox of non-dualism and so forth in a certain form. You can also go"
},
{
"end_time": 977.09,
"index": 39,
"start_time": 949.309,
"text": " Above person's three metaphysical categories and you can say okay. Well, why not four fourth? Well to Carl Jung four was the archetype of synergy and many mandalas were based on this fourfold Synergy, why not five? Well five you have the fourfold synergy and then the birth of something new out of it, right? So I I can see that the perspective of third the perspective of computation is substantially"
},
{
"end_time": 1005.316,
"index": 40,
"start_time": 977.705,
"text": " where you want to focus if you're engineering an AGI system, right? Because you're writing a program and the program is a set of logical relationships. The program is written in a language. So I don't have any disagreement that this is like the focal point when you're engineering an AGI system. But if I want to intuitively conceptualize the AGI's experience,"
},
{
"end_time": 1034.582,
"index": 41,
"start_time": 1005.828,
"text": " I don't feel a need to like try to reduce the whole metaphysical hierarchy into into third just because the program code lives there and I mean this is this is sort of a it's not so much about AI or mathematical or computational formalism I mean these are just different philosophical perspectives which it becomes arduous to talk about because"
},
{
"end_time": 1051.254,
"index": 42,
"start_time": 1035.179,
"text": " Natural language terms are imprecise and ambiguous and slippery and you could end up spending a career trying to articulate what is really meant by relationship or something. All right, Yocha."
},
{
"end_time": 1080.196,
"index": 43,
"start_time": 1052.773,
"text": " I think it comes down to the way in which our thinking works and what we think thinking is. You could have one approach that is radically trying to build things from first principles. And then we learn how to write computer programs. This is what we might be doing. When I started programming, I had a Commodore 64. I was a kid. I didn't know how to draw a line. Commodore 64 basic doesn't have a command to draw a line."
},
{
"end_time": 1094.804,
"index": 44,
"start_time": 1081.049,
"text": " What you need to do to align in the Commodore 64 is to learn a particular language. In this language, in this case, it's BASIC. You can also learn Assembler directly, but it's not hard to see how Assembler maps to the machine code of the computer."
},
{
"end_time": 1119.855,
"index": 45,
"start_time": 1095.265,
"text": " The machine code works in such a way that you have a short sequence of bits organized into groups of eight bytes. These bytes are interpreted as commands by the computer. They're basically like switches or train tracks. You could imagine every bit determines whether a train track goes to the left or to the right. After you go through eight switches,"
},
{
"end_time": 1137.176,
"index": 46,
"start_time": 1120.196,
"text": " You have 256 terminals where you can end. So if you have two options to switch left or right, in each of these terminals you have a circuit, some kind of mechanism that performs a small change in the computer. And these changes are chosen in such a way that you can build arbitrary programs from them."
},
{
"end_time": 1156.783,
"index": 47,
"start_time": 1137.824,
"text": " And when you want to make a line, you need to learn a few of these constructs that you use to manipulate the computer. And first of all, in the Commodore 64, you need to write a value and a certain address that corresponds to a function on the video chip of the computer. And this makes the video chip forget how to draw characters on screen."
},
{
"end_time": 1181.681,
"index": 48,
"start_time": 1156.783,
"text": " and instead interpret a part of the memory of the computer as pixels that are to be displayed on the screen. And then you need to tell it which address and working memory you want to start by writing two values into the graphic chip, which are encode for a 16 bit address in the computer. And then you can find the bits in your working memory that correspond to pixels on the screen."
},
{
"end_time": 1198.114,
"index": 49,
"start_time": 1181.681,
"text": " And then you need to make a loop that addresses them all in order and then you can draw a line. And once I understood this, I basically had a mapping from an algebraic equation into automata. That was what the computer is doing. It's an automaton at the lowest level."
},
{
"end_time": 1225.265,
"index": 50,
"start_time": 1198.114,
"text": " that is performing geometry. And once you can draw lines, you figure out also how to draw curved shapes and then you can draw 3D shapes and you can easily derive how to make that. And I did these things as a kid and then I saw the mathematicians have some kind of advanced way, some kind of way which I deeply understand what geometry is and in ways that goes far beyond what I am doing."
},
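The register-level Commodore 64 details above are recounted from memory; as a hedged modern sketch of the underlying idea, turning the algebraic equation of a line into a loop that sets individual bits in a flat block of "video memory", here is a toy bitmap in Python. Only the 320 by 200 hi-res resolution is borrowed from the C64; nothing else about the hardware is modeled.

```python
WIDTH, HEIGHT = 320, 200                      # C64 hi-res bitmap resolution
bitmap = bytearray(WIDTH * HEIGHT // 8)       # one bit per pixel, packed into bytes

def set_pixel(x: int, y: int) -> None:
    """Set the single bit that corresponds to pixel (x, y)."""
    i = y * WIDTH + x                         # linear bit index into the toy "video memory"
    bitmap[i // 8] |= 0x80 >> (i % 8)

def draw_line(x0: int, y0: int, x1: int, y1: int) -> None:
    """Walk from (x0, y0) to (x1, y1) in evenly spaced integer steps (a simple DDA)."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for k in range(steps + 1):
        x = x0 + (x1 - x0) * k // steps
        y = y0 + (y1 - y0) * k // steps
        set_pixel(x, y)

draw_line(0, 0, 319, 199)                     # a diagonal across the whole screen
print(sum(bin(b).count("1") for b in bitmap)) # number of pixels set
```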
{
"end_time": 1253.2,
"index": 51,
"start_time": 1225.367,
"text": " And mathematics teachers had the same belief. They basically were gesturing at some kind of mythological mountain of mathematics, where there was some deep, inscrutable knowledge on how to do continuous geometry, for instance. And it was much, much later that I started to look at this mountain and realized that it was doing the same thing that I did on my Tacoma Dora 64, just with Greek notation. And there's a different tradition behind it, but it was basically the same code that I have been using."
},
{
"end_time": 1282.927,
"index": 52,
"start_time": 1253.2,
"text": " And when I was confronted with notions of space and continuous space and many other things, I was confronted with the conundrum. I thought I can do this in my computer and it looks like it, but there can be no actual space because I don't know how to construct it. I cannot make something that is truly continuous. And I also don't observe anything in reality around me that is fundamentally different from what I can observe in my computer to the degree that I can understand and implement it. So how does this other stuff work?"
},
{
"end_time": 1302.466,
"index": 53,
"start_time": 1283.285,
"text": " And so imagine somebody has an idea of how to do something in a way that is fundamentally different from what could be in principle done in computers. And I asked them how this is working. It goes into hand waving. And then you point at some proofs that have been made that show that the particular hand waving that they hope to get to work does not pan out."
},
{
"end_time": 1324.002,
"index": 54,
"start_time": 1302.961,
"text": " And then I hope there is some other solution to make that happen because they have the strong intuition. And I asked, where does this intuition come from? How did it actually get into your brain? And then you look at how does the brain work? There is firing between neurons. There is interaction with sensory patterns on the systemic interface to the universe. How were they able to make inferences that go beyond the inferences that I can make?"
},
{
"end_time": 1352.483,
"index": 55,
"start_time": 1324.787,
"text": " This is one way of looking at it. And then on the other end of the spectrum, this one is more or less in the middle, there is degraded form of epistemology, which is you just make noises and if other people that you get away with it, you're fine. And so you just make sort of grunts and hand wavy movements and you try to point at things and you don't care about anything if it works. And if a large enough group of high status people is nodding, you're good."
},
{
"end_time": 1373.166,
"index": 56,
"start_time": 1352.483,
"text": " And this epistemology of what you can get away with doesn't look very appealing to me because people are very good at being wrong in groups. Yeah, I mean saying that the only thing there is is language because the only thing we can talk about in language is language. I mean this is sort of"
},
{
"end_time": 1403.353,
"index": 57,
"start_time": 1374.104,
"text": " tautologists in a way, right? No, no, that's not quite what I'm saying. I'm not saying the only thing that is this language, of course, language is just a representation. It's a way to talk about things and to think about things and to model things. And obviously, not everything is a model, just everything that they can refer to as a model. And so there is that. I mean, you can't"
},
{
"end_time": 1427.176,
"index": 58,
"start_time": 1403.899,
"text": " No, that right, you can you can hypothesize that but you can't you can't know that and this gets into I guess it depends what I cannot know anything that I cannot express. I can know many things I can't express in language. But I mean, that's that's just, I guess a different flavor of of knowing subjective"
},
{
"end_time": 1456.254,
"index": 59,
"start_time": 1427.415,
"text": " Experience. I mean, so take what Martin Buber called an I-thou experience, right? I mean, if you're staring into someone's eyes and you have a deep experience that you're seeing that person and you're just sharing a shared space of experience and being, I mean, in that moment, that is something you both know you're not going to be able to communicate"
},
{
"end_time": 1482.261,
"index": 60,
"start_time": 1456.715,
"text": " fully in language and it's experientially there. Now, Boubert wrote a bunch of words about it, right? And those words communicate something special to me and to some other people. But of course, someone else reads the words that he wrote and says, well, you are merely summarizing some collection of firings of neurons in"
},
{
"end_time": 1499.94,
"index": 61,
"start_time": 1482.398,
"text": " in your brain in some strange way deluding yourself that that is something else. I think from within the domain of computation and science, you can neither"
},
{
"end_time": 1518.968,
"index": 62,
"start_time": 1500.742,
"text": " Prove nor disprove that there exists something beyond the range of computation and science. And if you look at scientific data, I mean the whole compendium of scientific data ever gathered by the whole human race is one large finite bit set basically. It's a large"
},
{
"end_time": 1547.346,
"index": 63,
"start_time": 1519.428,
"text": " It's a large set of data points with finite precision to each piece of data. I mean, it might not even be that huge of a computer file if you try to assemble it all, like all the scientific experiments ever done and agreed by some community of scientists. So you've got this big finite bit set, right? Then science, in a way, is trying to come up with concise, reasonable-looking, culturally acceptable"
},
{
"end_time": 1574.445,
"index": 64,
"start_time": 1548.063,
"text": " explanations for this huge finite bit set that can be used to predict outcomes of other experiments and which finite collection of bits will emerge from those other experiments in a way that's accepted by a certain community. Now that's a certain process, it's a thing to do, it has to do with finite bit sets and computational models for producing"
},
{
"end_time": 1601.425,
"index": 65,
"start_time": 1574.753,
"text": " finite bits right and the finite sets of bits and that's great that nothing within that process is going to tell you that that's all there is to the universe or that isn't all there is to the universe. I mean it's a valuable important thing. Now to me as an experiencing mind I feel like there's a lot of steps I have to get to the point where I even know"
},
{
"end_time": 1630.145,
"index": 66,
"start_time": 1601.732,
"text": " what a finite bit set is, or where I even know what a community of people validating that finite bit set is really is, or what a programming language is. So I keep coming back to my phenomenal hardware experience. First there's this field of nothingness or contradictory nothingness that's just floating there. Then some indescribable forms flicker and emerge out of this"
},
{
"end_time": 1653.558,
"index": 67,
"start_time": 1630.555,
"text": " void and then you get some complex pattern of forms there, which constitutes the notion of a bit set or an experiment or a computation. From this phenomenological view, by the time you get to this business of computing and languages, you're already dealing with a fairly complex body of self-organizing forms and distinctions that"
},
{
"end_time": 1678.968,
"index": 68,
"start_time": 1654.138,
"text": " popped out of the void and then then this conglomeration of forms that in some enough of a way has emerged out of the void is selling no i am i am everything the only thing that exists in a fundamental sense is what is inside me and i mean you can't if you're inside that thing you can't you can't refute or really demonstrate that but again from"
},
{
"end_time": 1693.712,
"index": 69,
"start_time": 1679.804,
"text": " From an AGI view it's all fine because when we talk about building an AGI, what we're talking about is precisely engineering a set of computational processes. I don't think you need to do"
},
{
"end_time": 1718.319,
"index": 70,
"start_time": 1694.309,
"text": " You don't need some special firstronium to drop into your computer to give the AGI the fundamental qualia of experience or something. There are two points now. Let's just allow Yoshua to speak because there are quite a few threads and some may be dropped. Also, it appears as if you're using different definitions of knowledge."
},
{
"end_time": 1733.968,
"index": 71,
"start_time": 1718.814,
"text": " If you use this traditional philosopher notion of justified true belief, it means that I have to use knowledge in a context where I can hope to have a notion of what's true. So, for instance, when I look at your face and experience a deep connection with you,"
},
{
"end_time": 1759.701,
"index": 72,
"start_time": 1733.968,
"text": " and I report, I know we have this deep connection. I'm not using the word no in the same sense. What I am describing is an observation. I'm observing that I seem to be looking at a face and observing that I have the experience of having a deep connection. And I think I can hope to report on this truthfully. But I don't know whether it's true that we have that deep connection."
},
{
"end_time": 1774.974,
"index": 73,
"start_time": 1759.701,
"text": " I cannot actually know this. I can make some experiments to show how aligned we are and how connected we are and so on to say this perception or this imagination has some veracity. But here I'm referring to a set of patterns."
},
{
"end_time": 1800.981,
"index": 74,
"start_time": 1775.572,
"text": " There are dynamic patterns that I perceive and then there is stuff that I can reflect on and disassemble and talk about and convey and model. And this is a distinct category in a sense. It's not in contradiction necessarily what you're saying. It's just using the word knowing in different ways is implied here because I can relate the pattern to you that I'm observing or that I think I'm observing."
},
{
"end_time": 1830.247,
"index": 75,
"start_time": 1800.981,
"text": " But this is a statement about my mental state. It's not a statement about something in reality, about the world. And to make statements about the world, I probably need to go beyond perception. The second aspect that we are now getting to is when you say that reality and minds might have properties that are not computational yet, your AGI is entirely computational and doesn't need any kind of first principles wonder machine built into it."
},
{
"end_time": 1859.957,
"index": 76,
"start_time": 1830.247,
"text": " that goes beyond what we can construct from automata. Are you establishing that AGI, Artificial General Intelligence, with potentially superhuman capabilities, are still lagging behind what your mind is capable of? No, not at all. I just think the other aspects are there anyway and you don't need to build them. So you're going to make the non-computational parts of reality using computation?"
},
{
"end_time": 1885.35,
"index": 77,
"start_time": 1861.527,
"text": " No, you don't have to make them. They're already there. I mean, if you if you take just just take a more simple point of view where you're thinking about first and third and purse was basically a panpsychist, right? So he believed that matter is mind hidebound with habit. As he said, he believed that every little particle had its own spark or element of"
},
{
"end_time": 1915.179,
"index": 78,
"start_time": 1885.862,
"text": " Consciousness and awareness in it. So I mean from that standpoint, I mean this this kind of bubbly water that I'm holding up has its own variety of conscious awareness to it, which is has a different properties in the conscious awareness in my brain or yours. So from that standpoint, if I build an AGI program, it has something around the same patterns and structures and dynamics as"
},
{
"end_time": 1945.299,
"index": 79,
"start_time": 1915.418,
"text": " a human brain as the sort of computational aspect of the human mind, from that standpoint, then most likely the same sort of firstness, the same species of subjective awareness will be associated with that AGI machine that you built. But it's not that you needed to construct it. I mean, any more than you need to explicitly construct like the"
},
{
"end_time": 1975.145,
"index": 80,
"start_time": 1946.118,
"text": " Positioning in time of your computer or something like you you you build something it's already there in time You don't have to build time. I mean you just build it and it's there. It's there in time You didn't need a theory of time and you didn't need to screw together Moment t to moment t plus one either the the perspective is more the awareness is is ambient and then it's it's there you don't you don't need to build it another of course, there's subtlety that different"
},
{
"end_time": 2001.425,
"index": 81,
"start_time": 1975.862,
"text": " sorts of constructions may have different sorts of awareness associated with them, and there's philosophical subtlety in how you treat different kinds of firsts when you're operating in a level where relationship doesn't exist yet. In what sense is the experience of red different from the experience of blue, even though"
},
{
"end_time": 2017.944,
"index": 82,
"start_time": 2002.142,
"text": " I haven't read these pages, so I don't really understand them. I think it's"
},
{
"end_time": 2034.36,
"index": 83,
"start_time": 2017.944,
"text": " And by that's possible,"
},
{
"end_time": 2057.073,
"index": 84,
"start_time": 2034.36,
"text": " It seems to me that there are simpler ways in which particles could be constructed to do the things that they are doing. It seems to me sufficient that there are basically emergent error correcting codes on the quantum substrate and would just emerge over the stuff that remains statistically predictable in branching multiverse."
},
{
"end_time": 2076.869,
"index": 85,
"start_time": 2057.073,
"text": " Don't need to be conscious to do anything like that. Maybe if we do more advanced physics, we figure out or know this error correcting code that just emerges similar to a vortex emerges in the bathtub when you move your hand and only the vortex remains and everything else dissipates in the wave background that you are producing in the chaos."
},
{
"end_time": 2093.592,
"index": 86,
"start_time": 2076.869,
"text": " And to achieve this, I don't think that I need to posit that they are conscious."
},
{
"end_time": 2117.722,
"index": 87,
"start_time": 2093.951,
"text": " If it could be that I figure out, oh no, this is not sufficient. We need way more complicated mass and structure to make this happen. So they need some kind of coherence improving operator that is self-reflexive and eventually leads to the structure. Then I would say, yeah, maybe this is a theory that we should seriously entertain. Until then I'm undecided and Occam's razor says I can"
},
{
"end_time": 2135.282,
"index": 88,
"start_time": 2117.722,
"text": " construct what I observe at this level of elementary particles, atoms, and so on, by assuming that they don't have any of the conscious functionality that exists in my own mind. And the other way would be you can redefine the notion of consciousness into some principle of self-organization that is super basic."
},
{
"end_time": 2165.128,
"index": 89,
"start_time": 2135.282,
"text": " But this would redefine consciousness into something else, because there's a lot of self-organizing stuff that does not fall into the same category that an anesthesiologist makes go away when he gives you an anesthetic. And to me, consciousness is that thing which seems to be suspended when you get an anesthetic and that stops you from learning and currently interacting with the world. I mean, that at least gives me a chance to repeat once again my favorite quote from Bill Clinton, former US president, which is a"
},
{
"end_time": 2192.022,
"index": 90,
"start_time": 2165.418,
"text": " That all depends what the meaning of is is. I mean, that's interesting, Ben, because first time you wish. I don't know if you remember. I brought up that quote. I don't remember the context, but I said, yeah, that also depends on what is is. Question is, what do you mean by is? Right. So this is the story. It sounds like it sounds like Bill Clinton. It depends upon what the meaning of the word is."
},
{
"end_time": 2217.602,
"index": 91,
"start_time": 2192.551,
"text": " Yeah, I mean, a couple reactions there and I feel I may have"
},
{
"end_time": 2245.589,
"index": 92,
"start_time": 2218.046,
"text": " lost something in the buffering process there but I think that let me see so that first of all about causality and firstness or a raw experience I mean almost by"
},
{
"end_time": 2271.783,
"index": 93,
"start_time": 2246.357,
"text": " definition of how Perse sets up his metaphysical categories. I mean, the firstness doesn't cause anything. So you're not going to come up with a case where I need to assume that this particle has experience or else I can't explain why"
},
{
"end_time": 2294.94,
"index": 94,
"start_time": 2273.166,
"text": " This experiment came out this way. I mean, that would be a sort of category error in person's perspective. So if the only sort of thing you're willing to attribute existence to is something which has a demonstrable causal impact on some experiment, then"
},
{
"end_time": 2320.845,
"index": 95,
"start_time": 2295.384,
"text": " By that assumption, I mean, that's essentially equivalent to the perspective you're putting forth that everything is computation. And Perse didn't think other categories besides third were of that nature. There's also a just shallow semantic matter tied into this, which is the word consciousness is just highly"
},
{
"end_time": 2350.742,
"index": 96,
"start_time": 2321.578,
"text": " So, I mean, Yoshua, you seem to just assume that human-like consciousness is consciousness, and I don't really care if people want to reserve the word consciousness for that. Then we just need some other word for the sort of ambient awareness and everything in the universe, right? So there's lengthy debates among academics on like, okay, do we say a particle is conscious or do we say it's"
},
{
"end_time": 2370.213,
"index": 97,
"start_time": 2351.135,
"text": " Proto-conscious, right? So then you can say, okay, we have proto-consciousness versus consciousness, or we have like raw consciousness versus reflexive consciousness or human-like consciousness. And I mean, I spent a while reading all this stuff. I wrote some things about it. In the end, I'm just like, this is a"
},
{
"end_time": 2384.923,
"index": 98,
"start_time": 2371.101,
"text": " This is a game that overly intellectual people are planning to entertain themselves. It doesn't really matter. I've got my experience of the universe. I know what I need to do to build AGI systems and arguing about which"
},
{
"end_time": 2410.691,
"index": 99,
"start_time": 2385.418,
"text": " words to associate with different flavors and levels of experience is you just kind of running around in circles. Conceptually, it's an important question. Is this camera that is currently filming my face and representing it and then relaying it to you aware of what it's doing? And is this just a matter of degree with respect to my consciousness? Is this representing some kind of ambient awareness of the universe"
},
{
"end_time": 2436.186,
"index": 100,
"start_time": 2410.691,
"text": " I mean, if the only kind of answer that you're interested in are rigorous scientific answers, then you have your answer by assumption, right? I mean, answering questions by"
},
{
"end_time": 2454.77,
"index": 101,
"start_time": 2436.834,
"text": " Assumption is fine. It's practical. It saves our time. I think that's what you're doing. I don't see how you're not just trying to answer by assumption. You posit that elementary particles are conscious."
},
{
"end_time": 2482.927,
"index": 102,
"start_time": 2454.77,
"text": " When I point out that we normally reserve the world consciousness for something that is really interesting and fascinating and shocking to us, and that it would be more shocking if it projected into the elementary particles, and then you say, okay, but I just mean ambient awareness. Now we have to disassemble what ambient awareness actually means. What does this awareness come down to? I think that you're pointing at something that I don't want to dismiss. I want to take you seriously here."
},
{
"end_time": 2495.998,
"index": 103,
"start_time": 2483.319,
"text": " So maybe there is something to what you are saying, but you're not getting away with simply waving at it and saying that this is sufficient to explain my experience and I'm no longer interested to make my words mean things."
},
{
"end_time": 2527.688,
"index": 104,
"start_time": 2497.722,
"text": " Let's just let Ben respond."
},
{
"end_time": 2552.415,
"index": 105,
"start_time": 2528.063,
"text": " I guess, rightly or wrongly, as a human being, I've gotten bored with that question in the same way that I couldn't say it's worthless. At some point, maybe you could convince someone. I know people who were convinced by materials they read on the internet to give up on Mormonism or Scientology, so I can't say it's worthless to"
},
{
"end_time": 2581.34,
"index": 106,
"start_time": 2553.08,
"text": " Debate these points with people who are heavily attached to an ideology I think is silly. On the other hand, I personally just tend to get bored with repeated debates that go over the same points over and over again. If I had an infinite number of clones, then I wouldn't. I guess one of the things that I get"
},
{
"end_time": 2587.756,
"index": 107,
"start_time": 2582.022,
"text": " word out with is people claiming my definition of this English word"
},
{
"end_time": 2614.104,
"index": 108,
"start_time": 2588.285,
"text": " is the right one and your definition is the wrong one. I guess you weren't really doing that, Joshua, but it just gave me a traumatic memory. I'm sorry for triggering you here. I'm not fighting about words. I don't care which words you're using. When I think about an experience of what it's like and associate that with consciousness"
},
{
"end_time": 2636.271,
"index": 109,
"start_time": 2614.104,
"text": " the system that is able to create a perception of a now. Then I'm talking about a particular phenomenon that I have in mind and I would like to recreate if I can and I want to understand how it works. And so for me the question of whether I project this"
},
{
"end_time": 2663.08,
"index": 110,
"start_time": 2636.271,
"text": " Let me tell you how I'm looking at anesthesia, which is a concrete specific example that's not that trivial, right? Because that"
},
{
"end_time": 2691.237,
"index": 111,
"start_time": 2664.002,
"text": " I've only been under anesthesia once, which I have wisdom teeth removed. So what was it wasn't that bad, but other people have had far more traumatic things on when they're under anesthesia. And there, there is, there's always the nagging fear that like, since we don't really know how anesthesia works in any fundamental depth, and also don't really know how the brain generates our usual everyday states of consciousness and enough that"
},
{
"end_time": 2720.776,
"index": 112,
"start_time": 2691.852,
"text": " It's always possible that while you're under anesthesia, you're actually, in some sense, some variant of you is feeling that knife slicing through you and maybe just the memory is being cut off. And then once you come back, you don't remember it. But then that might not be true. But then you have to ask, well, OK, say then while my jaw is being cut open by that knife,"
},
{
"end_time": 2750.964,
"index": 113,
"start_time": 2721.63,
"text": " Does the jaw feel it? Does the jaw hurt whilst being sliced up by the knife? Is the jaw going ahh? Well, on the other hand, the global workspace in your brain, the reflective theater of human-like consciousness in your brain may well be disabled by the anesthetic. The way I personally look at that is I suspect under anesthesia"
},
{
"end_time": 2780.555,
"index": 114,
"start_time": 2751.51,
"text": " your sort of reflective theater of consciousness is probably disabled by that anesthetic. I'm not 100% sure, but I think it's probably disabled, which means there's probably not like a version of Ben going, ah, wow, this really hurts, this really hurts, and then forgetting it afterwards. So, I mean, maybe you could do that, just like disable memory recording, but I don't think that's what's happening. On the other hand, I think the jaw,"
},
{
"end_time": 2802.483,
"index": 115,
"start_time": 2780.964,
"text": " is having its own experience of being sawed open while you're getting that wisdom tooth removed under general anesthesia. I think it's not the same sort of experience exactly as the reflective theater of consciousness that knows itself as Ben Goetzel is having. The Ben Goetzel can"
},
{
"end_time": 2823.234,
"index": 116,
"start_time": 2802.995,
"text": " conceptualize that it's that it's experiencing pain it can go like ow that really hurts and then the thinking that saying that really hurts is different than the that which really hurts right there there's many levels there but i do think there's some sort of raw feeling that the jaw itself is is is having like even even if it's not"
},
{
"end_time": 2852.961,
"index": 117,
"start_time": 2823.234,
"text": " connect to that reflective theater of awareness in the brain. Now the jaw is biological cells, so some people would agree that those biological cells have experience, but they would think like a brick when you smash it with an axe doesn't. But I suspect the brick also has that some elementary"
},
{
"end_time": 2881.049,
"index": 118,
"start_time": 2853.609,
"text": " I think it is like something to be a brick that's smashed in half by an axe. On the other hand, it's not like something that can reflect on what it is to be a brick smashed in half by an axe. That is how I think about it, but again I don't know how to make that science because I can't ask my jaw what it feels like because my jaw doesn't"
},
{
"end_time": 2902.005,
"index": 119,
"start_time": 2881.613,
"text": " doesn't speak language and even if I was able to like wire my brain into the jaw of someone else who's going through wisdom tooth removal under anesthesia like I might say like through that wire I can feel by an eye thou experience like I can feel the pain of that jaw being sliced open but I mean"
},
{
"end_time": 2912.534,
"index": 120,
"start_time": 2902.602,
"text": " You can tell me I'm just hallucinating that and my own brain is like improvising that based on the signals that I'm getting and I'm not sure how you"
},
{
"end_time": 2938.268,
"index": 121,
"start_time": 2913.029,
"text": " really pin that down in an experiment, right? Let me try. So there have been experiments about anesthesia. I'm not an expert on anesthesiology, so I asked everybody for forgiveness if I get things wrong. But there have been different anesthetics and some of them work in very different ways. And there is indeed a technique that basically works by giving people"
},
{
"end_time": 2961.766,
"index": 122,
"start_time": 2938.268,
"text": " There have been experiments that surgeons did where they were applying a tourniquet to an arm of the patient so the muscle relaxant didn't get into the arm and they could still use the arm."
},
{
"end_time": 2983.302,
"index": 123,
"start_time": 2961.766,
"text": " And then in the middle of the surgery, they asked the person that was there lying fully relaxed and incommunicado to raise their arm, raise their hand if they were conscious and aware of what is happening to them. And they did. And when they were asked if they had unbearable pain, they also raised their hand. And after the surgery, they didn't forget, they had forgotten about it."
},
{
"end_time": 2999.411,
"index": 124,
"start_time": 2984.241,
"text": " I also noticed the same thing on surgery. I had a number of big surgeries in my life and there is a difference between different types of surgery. There is one type of surgery where I wake up and feel much more terrified and violated than I do before the surgery."
},
{
"end_time": 3024.94,
"index": 125,
"start_time": 2999.411,
"text": " And I don't know why, because I have no memory of what happened. Also, my memory formation is impaired. So when I am in the ER and ask people how it went, I might have that same conversation multiple times, word for word, because I don't remember what they said or that I asked them. There is another type of anesthesia. And I observed this, for instance, in one of my children where the child wakes up and says, no, the anesthesia didn't work."
},
{
"end_time": 3045.589,
"index": 126,
"start_time": 3025.794,
"text": " And it was an anesthesia with gas. So the child choked on the gas and you see your child lying there completely relaxed and sleeping and then waking up and starting to choke and then telling you the anesthesia didn't work. There is a complete gap of eight hours in the memory of that child in which the mental state was somehow preserved."
},
{
"end_time": 3075.964,
"index": 127,
"start_time": 3046.408,
"text": " Subjectively, the child felt a complete continuation and then was looking around, the reason the room was completely different, time was very different, led to confusion and reorientation. So I would suspect that in the first case, it is reasonable to assume or to hypothesize, at least, that consciousness was present, but we don't recall what happened in this conscious state, whereas in the second one, there was a complete gap in the conscious experience and consciousness resumed after that gap."
},
{
"end_time": 3105.794,
"index": 128,
"start_time": 3078.746,
"text": " And we can test this, right? There are ways, regardless of whether we agree with this particular thing or whether we think anesthesia is important, in principle, we can perform such experiments and ask such questions. And then on another level, when we talk about our own consciousness, there's certain behavior that is associated with consciousness that makes it interesting. Everything, I guess, only becomes interesting due to some behavior, even if the behavior is entirely internal."
},
{
"end_time": 3126.169,
"index": 129,
"start_time": 3105.794,
"text": " If you are just introspectively conscious, it still matters if I care about you. This is a certain type of behavior that we still care about. For instance, if I ask myself, is my iPhone conscious? The question is, what kind of behavior of the iPhone corresponds to that? I suspect if I turn off my iPhone or smash it,"
},
{
"end_time": 3149.718,
"index": 130,
"start_time": 3126.169,
"text": " It does not mean anything to the iPhone. There is no what it's likeness of being smashed for the iPhone. There could be a different layer where this is happening, but it's not the layer of the iPhone. Now let's get to a slightly different point, this question of whether your jaw knows anything about being hurt. So imagine that there is surgery on your jaw, like with your wisdom teeth."
},
{
"end_time": 3173.319,
"index": 131,
"start_time": 3149.718,
"text": " Is there something going on that is outside of your brain that is processing information in such a way that your jaw could become sentient, in the sense that it knows what it is and how it relates to reality, at least to some degree and level? And I cannot rule this out. Where there are cells, these cells can process information, they can send messages to their neighbors and the patterns of their activation, who knows what kind of programs they can compute."
},
{
"end_time": 3192.381,
"index": 132,
"start_time": 3173.899,
"text": " But here we have a means and a motive. The means and motive here are it would be possible for the cells to exchange conditional matrices to perform arbitrary computations and build representations about what's going on. And the motive would be that it's conceivable that this is a very useful thing for biological tissues to have in general."
},
{
"end_time": 3212.073,
"index": 133,
"start_time": 3192.381,
"text": " And so, if they evolve for long enough, and it is in the realm of evolvability that they perform interactions with each other that lead to representations of who they are and what they are doing, even though they are much slower than what's happening in our brain and decoupled from our brain in such a way that we cannot talk to our jaw. It's still conceivable, right? I wouldn't rule this out."
},
{
"end_time": 3233.37,
"index": 134,
"start_time": 3213.166,
"text": " It's much harder for me to assume the same thing for elementary particles because I don't see them having this functionality that cells have. Cells are so much more complicated that just fits in that they would be able to do this. And so I would make a distinction. I would not rule out that multicellular organisms without brains could be conscious."
},
{
"end_time": 3261.613,
"index": 135,
"start_time": 3233.37,
"text": " but at different time scales than us requiring very different measuring mechanisms because their signal processing is probably much slower and it takes longer for them to become coherent at scale because it takes so long for signals to go back and forth if you don't have nerves. But I don't see the same thing happening for element requirements. I don't rule it out again but you would have to show me some kind of mechanism. I mean if you're going to look at it that way, which isn't the only way that I would look at it, but if you're going to look at it that way,"
},
{
"end_time": 3289.309,
"index": 136,
"start_time": 3261.903,
"text": " I don't see why you wouldn't say the various elementary particles which are really distributed like amplitude distributions. I don't know why you wouldn't say these various interacting amplitude distributions are exchanging quantum information with a motivation to achieve stationary action given their context. I mean you could tell that"
},
{
"end_time": 3315.981,
"index": 137,
"start_time": 3289.804,
"text": " You can tell that story. That's sort of the story that physics tells you. They're swapping information back and forth, trying to make the action stationary. Yes, but for the most part they don't form brains. They also do form brains. So elementary particles can become conscious in the sense that they can form brains, nervous system, maybe equivalent information processing architectures. I just feel like you're privileging a certain"
},
{
"end_time": 3337.637,
"index": 138,
"start_time": 3316.766,
"text": " Level and complexity of organization because it happens to be ours and I mean we have a certain level and complexity of organization and and of Consciousness and and I mean a cell in my jaw has a lower one Brick has a lower one element problem when the future a GI"
},
{
"end_time": 3356.732,
"index": 139,
"start_time": 3337.637,
"text": " I wouldn't say lower or higher. I would say that if my jaw is conscious, there are far less cells involved in my brain and the interaction between them is slower."
},
{
"end_time": 3383.677,
"index": 140,
"start_time": 3356.732,
"text": " So if it's conscious, it's probably more at the level of, say, a fly than a level of a brain, and it's probably going to be as fast as a tree in the way in which it computes rather than as fast as your brain. And I don't think that's something that is assigning some undue privilege to it. I'm just observing a certain kind of behavior, and then I look for the means and motive behind that behavior."
},
{
"end_time": 3412.602,
"index": 141,
"start_time": 3383.677,
"text": " And then I try to construct causal structure and I might get it wrong. There's things that might be missing, but it's certainly not because I have some kind of speciesism that assigns higher consciousness to myself because it's me. All right. Yeah. I mean, I don't know what your motivations are. Kurt, I have a higher level comment, which is we're like an hour through this conversation, probably halfway through. I feel like the philosophy, the hard problem of consciousness"
},
{
"end_time": 3421.527,
"index": 142,
"start_time": 3413.404,
"text": " The hard problem of consciousness is an endless rabbit hole. It's not an uninteresting one."
},
{
"end_time": 3448.012,
"index": 143,
"start_time": 3421.817,
"text": " I think it's not the topic on which Josh and I have the most original things to say. I think each of our perspectives here are held by many other people. I might interject a little bit. One of our most interesting disagreements is in Ben being a panpsychist and me not knowing how to formalize panpsychism in a way that makes it different from box standard functionalism. I do value this discussion and don't think it's useless."
},
{
"end_time": 3455.964,
"index": 144,
"start_time": 3448.012,
"text": " That basically feel that on almost everything else, you mostly agree except for crypto. Okay. Yeah, to me that's"
},
{
"end_time": 3478.422,
"index": 145,
"start_time": 3456.305,
"text": " Almost a Zen thing. I don't know how to formalize the notion that there are things beyond formalization."
},
{
"end_time": 3502.739,
"index": 146,
"start_time": 3478.422,
"text": " Yoshi, you're more of the mind that LLMs or deep neural nets are on or significant step toward AGI, maybe even sufficient with enough complexity. And then I think that you disagree. Yeah, I think most issues in terms of the relationship between LLMs and AGI, we actually probably agree on"
},
{
"end_time": 3527.602,
"index": 147,
"start_time": 3503.575,
"text": " Quite quite well. But I mean, obviously, large language models are an amazing technology, like from an AI application point of view, they can do all sorts of fantastic and tremendous things. I mean, I mean, it sort of blew my mind how smart GPT-4 is. It's not the first time"
},
{
"end_time": 3556.323,
"index": 148,
"start_time": 3528.592,
"text": " My mind has been blown by an AI technology. I mean my mind was blown by computer algebra systems when they first came out and you could like do integral calculus with arbitrary complexity and you know when Deep Blue beat chess with just game trees I'm like whoa so I mean I don't think it's the only amazing thing to happen in the history of AI but it's an amazing thing like it's a big breakthrough and it's super cool. I think that"
},
{
"end_time": 3580.077,
"index": 149,
"start_time": 3558.131,
"text": " If deployed properly, this sort of technology could do a significant majority of jobs that humans are now doing on the planet, which has big economic and social implications. I think that the way these algorithms are representing knowledge internally"
},
{
"end_time": 3609.224,
"index": 150,
"start_time": 3580.862,
"text": " is not what you really need to make a full on human level AGI system. So I mean, when you look at what's going on inside the transformer neural network, I mean, it's not quite just a big weighted hash table of particulars, but to me, it does not represent abstractions"
},
{
"end_time": 3632.705,
"index": 151,
"start_time": 3609.531,
"text": " in a sufficiently flexibly manipulable way to do the most interesting things that the human mind does. And this is a subtle thing to pinpoint in that, say, something like a fellow GPT does represent abstraction. It's showing an emergent representation of where the board is, but of the different"
},
{
"end_time": 3662.892,
"index": 152,
"start_time": 3633.968,
"text": " It's learning an emergent representation of features like a black square is on this particular board position or a white square is on this particular board position. So examples like that show that LLMs can in fact learn abstract representations and can manipulate them in some way but it's very limited in that regard. I mean in that case it's seen a shitload of Othello games and that's a quite simple thing to represent. So I think when you look at"
},
{
"end_time": 3690.64,
"index": 153,
"start_time": 3663.729,
"text": " How the neural net is learning, how the attention mechanism is working, how it's representing stuff. I mean, it's just not representing a hierarchy of subtle abstractions the way a human mind is. And I mean, the subtler question is what functions you could get by glomming an LLM together with other components in a"
},
{
"end_time": 3717.79,
"index": 154,
"start_time": 3691.135,
"text": " hybrid architecture with the LLM at the center. So suppose you give a working memory. Suppose you give an episodic memory. Suppose you have a declarative long-term memory graph and you have all these things integrated into the prompts and integrated into fine-tuning of an LLM. Well then you have something that in principle it's Turing complete and it could probably do a lot of quite amazing things. I still think if the hub of that system is an LLM with its"
},
{
"end_time": 3736.152,
"index": 155,
"start_time": 3718.404,
"text": " impaired and limited ability for representing and manipulating abstract knowledge. I think it's not going to do the most interesting kinds of thinking that people can do. Examples of things I think you fundamentally can't do with that kind of architecture are, say,"
},
{
"end_time": 3756.118,
"index": 156,
"start_time": 3736.783,
"text": " invent a new branch of mathematics, invent a radically new genre of music, figure out a new variety of business strategy like say Amazon or Google did that's quite different than things that have been done before. All these things involve"
},
{
"end_time": 3780.384,
"index": 157,
"start_time": 3756.647,
"text": " A leap into the unknown beyond the training data to an extent that I think you're not going to get with the way that LLMs are representing knowledge. No, I do think LLMs are powerful as tools to create AGI. So for example, as one sub project in my own AGI project, we're using LLMs to map English sentences"
},
{
"end_time": 3804.565,
"index": 158,
"start_time": 3780.828,
"text": " into computer programs or try to get logic expressions. That's super cool. Then you've got the web in the form of a huge collection of logic expressions. You can use a logic engine to connect everything on the web with what's in databases and with stuff coming in from sensors and so on. That's by no means the only way to leverage LLMs toward AGI, not at all, but it's one"
},
{
"end_time": 3823.66,
"index": 159,
"start_time": 3805.23,
"text": " One interesting way to leverage LLMs toward AGI, you can even ask the LLM to come up with an argument and then use that as a sort of guide for a theorem prover and coming up with a more rigorous version of that argument. So I do think there are many ways more than I could describe right now of LLMs"
},
{
"end_time": 3854.138,
"index": 160,
"start_time": 3824.138,
"text": " to be used to"
},
{
"end_time": 3877.568,
"index": 161,
"start_time": 3854.138,
"text": " a sort of motivated agent infrastructure around an LLM, right? You can wrap Joseph's PSI model, micro PSI model in some way around an LLM if you wanted to and you could make it. I mean people tried dumb things like that with other GPT and so-called baby AGI and so forth. So I mean on the other hand, I think if you wrap a motivated agent"
},
{
"end_time": 3903.763,
"index": 162,
"start_time": 3877.858,
"text": " architecture around an LLM with its impaired capability for making flexibly manipulable abstract representations. I think you will not get something that builds a model of self and other with the sophistication that humans have in their reflective consciousness. I think that having a sophisticated abstract model of self and other in our reflective consciousness"
},
{
"end_time": 3931.869,
"index": 163,
"start_time": 3903.763,
"text": " the kind of consciousness that we have but a brick or a jaw cell doesn't. Without that abstraction in our model of reflective consciousness tied in with our motivated agent architecture, then that's part of why you're not going to get the fundamental creativity in inventing new genres of music or new branches of mathematics or new business strategies. In humans, we do this amazing novel stuff, which is what drives culture forward,"
},
{
"end_time": 3957.193,
"index": 164,
"start_time": 3932.21,
"text": " We do this by our capability for flexibly manipulable abstraction tied in with our motivated agent architecture. I don't see how you get that with LLMs as the central hub of your hybrid AGI system, but I do think you can get that with an AGI system that has something like open cogs, atom space and reasoning system as the central hub with an LLM as a subsidiary component."
},
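A hedged sketch of the sort of pipeline Ben describes, with the LLM as a subsidiary component feeding a symbolic reasoner: a (stubbed-out) LLM call translates sentences into logic triples, and a tiny forward-chaining engine stands in for a real logic engine or theorem prover. Every name here is hypothetical and none of it reflects the actual OpenCog or Hyperon code.

```python
def llm_translate(sentence: str) -> tuple:
    """Hypothetical stand-in: ask an LLM to emit a (subject, relation, object) triple."""
    raise NotImplementedError("wire this to whatever completion API you use")

def forward_chain(facts: set, rules: list, max_iters: int = 10) -> set:
    """Toy forward-chaining over triples, standing in for a real logic engine."""
    derived = set(facts)
    for _ in range(max_iters):
        new = {concl for rule in rules for concl in rule(derived)} - derived
        if not new:
            break
        derived |= new
    return derived

def part_of_transitive(facts):
    """Example rule: part_of is transitive."""
    return {(a, "part_of", c)
            for (a, r1, b) in facts if r1 == "part_of"
            for (b2, r2, c) in facts if r2 == "part_of" and b2 == b}

# In the real pipeline these facts would come from llm_translate over web text.
facts = {("valve", "part_of", "engine"), ("engine", "part_of", "car")}
print(forward_chain(facts, [part_of_transitive]))
# adds ('valve', 'part_of', 'car') to the two original triples
```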
{
"end_time": 3986.34,
"index": 165,
"start_time": 3957.654,
"text": " But I don't think open cog is the only way either. I mean, obviously, you could make a biologically realistic brain simulation that had human level AGI. I just think then the LLM like structures and dynamics within that biologically realistic brain system would just be a subset of what it does. You know, there'd be quite different stuff in the cortex. So yeah, that's not quite a capsule summary, but a lengthy, lengthy ish overview of my perspective on this."
},
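And for the other arrangement discussed a few exchanges back, the LLM itself as the hub with working, episodic, and declarative memories folded into the prompt, here is a hedged sketch; call_llm and every other name is hypothetical, and the retrieval is deliberately naive.

```python
from collections import deque

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    return "(placeholder reply)"

class HybridAgent:
    def __init__(self):
        self.working_memory = deque(maxlen=8)     # recent exchanges only
        self.episodic_memory = []                 # full interaction history
        self.declarative_memory = {}              # long-term facts: subject -> list of facts

    def remember_fact(self, subject: str, fact: str) -> None:
        self.declarative_memory.setdefault(subject, []).append(fact)

    def step(self, user_input: str) -> str:
        # Naive retrieval: pull in facts whose subject string appears in the input.
        facts = [f for s, fs in self.declarative_memory.items()
                 if s in user_input for f in fs]
        prompt = ("Known facts:\n" + "\n".join(facts) +
                  "\nRecent context:\n" + "\n".join(self.working_memory) +
                  "\nUser: " + user_input + "\nAssistant:")
        reply = call_llm(prompt)
        self.working_memory.append(f"User: {user_input}\nAssistant: {reply}")
        self.episodic_memory.append((user_input, reply))
        return reply

agent = HybridAgent()
agent.remember_fact("Mars", "Mars is the fourth planet from the Sun.")
print(agent.step("Tell me about Mars"))
```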
{
"end_time": 4008.268,
"index": 166,
"start_time": 3987.278,
"text": " Okay, great. I know there was a slew there, if you can pick up some of the pieces and respond. But also at the same time, there's emergent properties of LLMs. So for instance, reflection is apparently some emergent property. There are but they're limited. I mean, and that does make it subtle because you can't say they don't emerge."
},
{
"end_time": 4031.459,
"index": 167,
"start_time": 4008.507,
"text": " knowledge representation. They do. And Othello GPT is one very simple example that there are others. There is emergent knowledge representation in them, but it's very simplistic and limited. It doesn't pop up effectively from in-context learning, for example. But anyway, this would dig us very deep into current LLMs."
},
{
"end_time": 4039.923,
"index": 168,
"start_time": 4031.732,
"text": " Yeah, so is there some in principle reason why you think that a branch of mathematics Yoshi can't be invented by an LLM with sufficient parameters or data?"
},
{
"end_time": 4070.145,
"index": 169,
"start_time": 4040.384,
"text": " I am too stupid to decide this question. So basically what I can offer is a few perspectives that I see when I look at the LLM. Personally, I am quite agnostic with respect to its abilities. And at some level, it's an autocomplete algorithm that is trying to predict tokens from previous tokens. And if you look at what the LLM is doing, it's not a model of the brain. It's a model of what people say on the internet. And it is"
},
{
"end_time": 4095.794,
"index": 170,
"start_time": 4070.145,
"text": " Discovering a structure to represent that quite efficiently is an embedding space that has lots of dimensions. You can imagine that each of these dimensions is a function and the parameters of this function are the positions on this dimension that you can have. And they all interact with each other to together create some point in a high dimensional space that this could be an idea or a mental state or a complex thought."
},
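As a toy illustration of the "predict the next token from previous tokens" framing: a bigram count model, the simplest possible autocomplete. The corpus below is invented; a real LLM conditions on a long context and represents it as a point in a high-dimensional embedding space rather than as a lookup table.

```python
# Toy "autocomplete": count which token most often follows the previous token.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(token: str) -> str:
    return follows[token].most_common(1)[0][0]

print(most_likely_next("the"))  # 'cat': the highest-count continuation of 'the'
```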
{
"end_time": 4102.108,
"index": 171,
"start_time": 4096.118,
"text": " And at the lowest level, when you look at how it works, it's translating these tokens, the translation of linguistic symbols."
},
{
"end_time": 4130.589,
"index": 172,
"start_time": 4102.534,
"text": " into some kind of representation that could be for instance a room with people inside and stuff happening in this room and then it maps it back into tokens at some level. There has been recently a paper out of a group led by Max Tagmark that looked at the Lambda model and discovered that it does indeed contain a map of the world and directly encoded in its structure based on the neighborhood relationships between places in the world that it represents."
},
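A toy version of the linear-probe idea behind such world-model results, with made-up numbers: if a feature such as latitude is linearly encoded in the embedding space, a least-squares probe can read it back out. The "embeddings" and latitudes below are invented for illustration; the actual paper probes activations of a real model.

```python
# Fit a linear probe lat ≈ E @ w + b by least squares over toy city "embeddings".
import numpy as np

E = np.array([[0.9, 0.1, 0.3],    # Oslo
              [0.7, 0.2, 0.1],    # Berlin
              [0.4, 0.6, 0.2],    # Cairo
              [0.1, 0.8, 0.5]])   # Nairobi
lat = np.array([59.9, 52.5, 30.0, -1.3])   # true latitudes

X = np.hstack([E, np.ones((4, 1))])        # add a bias column
w, *_ = np.linalg.lstsq(X, lat, rcond=None)
print(np.round(X @ w, 1))                  # the probe's latitude estimates
```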
{
"end_time": 4146.834,
"index": 173,
"start_time": 4130.589,
"text": " I am not sure if I in my entire life ever invented a new dimension in this embedding space of the human mind that is represented on the internet."
},
{
"end_time": 4172.995,
"index": 174,
"start_time": 4146.834,
"text": " If I think about all the thoughts that have been made into books and then encoded in some form and became available as training data to the LLM, we figure out that there, depending on how you count, if you tend to a few hundred thousand dimensions of meaning. And I think it's very difficult to add a new dimension or also to significantly extend the range of those dimensions. But we can make new combinations of what's happening in that space."
},
{
"end_time": 4197.329,
"index": 175,
"start_time": 4174.172,
"text": " Of course, it's not a limit that these things are limited to the dimension that they already discovered. Of course, we can set them up in such a way that they can confabulate more dimensions and we could also set them up in such a way that they could go and verify whether this is a good idea to make this dimension by making tests, by giving the LLM the ability to use plugins to write its own code."
},
{
"end_time": 4226.34,
"index": 176,
"start_time": 4197.329,
"text": " to use a compiler, to use cameras, to use sensors, to use actuators, to make experiments in the world. It's not limited to what we currently let the LLM do. But in the present form, what the transformer algorithm is doing, it tries to find the most likely token. And so, for instance, if you play a game with it and it makes mistakes in this game, then it will probably give you worse moves after making these mistakes because it now assumes that it's playing a bad person, somebody who's really bad at this game."
},
{
"end_time": 4240.913,
"index": 177,
"start_time": 4226.34,
"text": " and it doesn't know what kind of thing it's supposed to play because it can represent all sorts of state transitions. It's an interesting way of looking at it that we are trying to find the best possible token versus the LLM trying to find the most likely token next."
},
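A toy illustration of the "most likely token versus best token" point, with invented conditional probabilities: if the context already contains blunders, the likelihood-maximizing continuation is another blunder, because weak play is the most probable continuation of weak play in the kind of transcripts such a model would be trained on.

```python
# Hypothetical next-move probabilities, as if estimated from game transcripts.
GOOD, BAD = "strong_move", "blunder"

p_next = {
    GOOD: {GOOD: 0.8, BAD: 0.2},
    BAD:  {GOOD: 0.3, BAD: 0.7},   # blunders tend to follow blunders in the data
}

def most_likely_next(history):
    dist = p_next[history[-1]]
    return max(dist, key=dist.get)

print(most_likely_next([GOOD]))  # strong_move
print(most_likely_next([BAD]))   # blunder: the model now "assumes a weak player"
```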
{
"end_time": 4266.527,
"index": 178,
"start_time": 4241.681,
"text": " Of course, we can preface the LLM by putting into the prompt that this is a simulation of a mind that is only going to look for the best token and it's trying to approximate this one. So it's not directly a counter argument. It's not even asking us to significantly change the loss function. Maybe we can get much better results. We probably can get much better results if we make changes in the way in which we do training and inference using the LLM."
},
{
"end_time": 4292.892,
"index": 179,
"start_time": 4266.527,
"text": " But this by itself is also nothing that we can prove without making extensive experiments. And at the moment it's unknown. I realize that the people who are being optimistic. I just want to pose a thought experiment. So this is about music rather than natural language. But I mean, we know there's music, Jim, there's similar networks applied to music. So suppose you had taken"
},
{
"end_time": 4314.104,
"index": 180,
"start_time": 4293.695,
"text": " at LLM like MusicGen or Google LM or the Next Generations and trade in on all music recorded or played by humanity up to the year 1900. Is it going to invent the sort of music made by Mahavishnu Orchestra or even Duke Ellington?"
},
{
"end_time": 4326.493,
"index": 181,
"start_time": 4314.582,
"text": " Would you say that has no new dimensions because jazz combines elements of West African drumming and Western classical music? I think that's a level of invention that LLMs are not going to do."
},
{
"end_time": 4353.08,
"index": 182,
"start_time": 4328.166,
"text": " You said that it's combining elements from this and from that. Then Deli 2 came out, I got early access, and one of the things that I tried relatively early on is stuff like an ultrasound of a dragon egg. There is no ultrasound of a dragon egg on the internet, but it created a combination of prenatal ultrasound and archaeopteryx cut-through images and so on, and it looked completely plausible."
},
{
"end_time": 4382.125,
"index": 183,
"start_time": 4353.08,
"text": " And in this sense, you can see that most of the stuff that we are doing when we create new dimensions are mashups of existing dimensions. And maybe we can represent all the existing dimensions using a handful of very basic dimensions from which we can construct everything from the bottom up just by combining them more and more. And I suspect that's actually what's happening in our minds. And I suspect that the LLM is not distinct from this, but a large superset of this. The LLM is Turing complete."
},
{
"end_time": 4403.814,
"index": 184,
"start_time": 4382.125,
"text": " And from one perspective, it's a CPU. We could say that the CPU in your computer only understands a handful, like maybe a dozen or a hundred different machine code programs. And they have to be extremely specific about these codes. And there's no error tolerance. If you make a mistake in specifying them, then your program is not going to work."
},
{
"end_time": 4432.005,
"index": 185,
"start_time": 4404.309,
"text": " And the LLM is a CPU that is so complicated that it requires an entire server farm to be emulated on. And you can give it, instead of a small program in machine code, give it a sentence in a human language. And it's going to interpret this, extrapolate it into some or compile it into some kind of program that then produces a behavior. And that thing is Turing-complete. It can compute anything you want if you can express it in the right way. But being Turing-complete is not interesting, right? I mean, being Turing-complete is"
},
{
"end_time": 4450.52,
"index": 186,
"start_time": 4432.005,
"text": " is irrelevant because it doesn't take resources into account. But you can write programs in a natural language in an LLM and you can also express learning algorithms to an LLM. So basically your intuition is yes, that an LLM could invent jazz, neoclassical metal and fusion."
},
{
"end_time": 4480.606,
"index": 187,
"start_time": 4450.742,
"text": " based only on music up to the year 1900. No, no, I am agnostic. What I'm saying is I don't know that it cannot and I don't see a proof that it cannot and I would not be super surprised when it cannot. I don't think the LLM is the right way to do it. It's not a good use of your resources if you try to make this the most efficient way because our brain is far more efficient and does it in different ways. But I'm unable to prove that the LLM cannot do it. And so I'm reluctant to say LLMs cannot do X without that proof because"
},
{
"end_time": 4509.104,
"index": 188,
"start_time": 4480.606,
"text": " People tend to have egg on their face when they do this. But doesn't that just come back to, like, Popper's notion about falsificationism? Like, I can't prove that, you know, a devil didn't appear at some random place on the earth at some point. No, no, I mean, in the sense of— Sure, you can't prove it. No, what I mean by this is, can I make a reasonable claim that I'm very confident and would bet money on an L, I'm not being able to do this in the next five years?"
},
{
"end_time": 4538.217,
"index": 189,
"start_time": 4509.582,
"text": " This is the kind of statement that I'm trying to make here. So basically if I say, can I prove that an LLM is not going to be able to invent a new kind of music that is the self-genre of jazz, this in the next five years? And I can't. And I would even bet against it. Even though I don't know. You've shifted the goalposts in a way, because I do think not current music gen, but I could see how some"
},
{
"end_time": 4568.695,
"index": 190,
"start_time": 4538.729,
"text": " Upgrade of current LLMs connected with symbolic learning system or blah, blah, blah. I do think you could invent a new subgenre of jazz or grindcore or something. And I'm actually playing with stuff like that. The example I gave was a significantly bigger invention, right? Like, I mean, jazz was not the subgenre of Western classical music, nor of West African drumming, right? I mean, so that is, that is a"
},
{
"end_time": 4597.91,
"index": 191,
"start_time": 4569.019,
"text": " To me, is it qualitatively different? A couple of weeks ago, I was at an event locally where somebody presented their music GPT and you could enter, give me a few by Debussy and it would try to perform and it wasn't all bad. That's not the point. Yes, but it's just an example for some kind of functionality. But any kind of mental functionality that is interesting, I think I'm willing to grant that the LM might not be the best way of doing it."
},
{
"end_time": 4607.432,
"index": 192,
"start_time": 4597.91,
"text": " And I think it's also possible that we can at some point prove limitations of LLMs rigorously. But so far I haven't seen those proofs. What I see is insinuations."
},
{
"end_time": 4636.22,
"index": 193,
"start_time": 4607.773,
"text": " on both sides. And the insinuation that OpenAI makes when it says that we can scale this up to do anything is one that has legitimacy because they actually put their money there. They actually bet on this in a way that they invest their lifetime into it and see if it works. And if it fails, then they will make changes to the paradigm. And then there are other people who, like Gary Marcus, come out saying, loud, loud, swinging, this is something the LLM can never do."
},
{
"end_time": 4647.671,
"index": 194,
"start_time": 4636.22,
"text": " And I suspect that they will have egg on their face because many of the promises that Gary Marcus made about what LLMs cannot do have already been disproven by LLMs doing these things."
},
{
"end_time": 4673.78,
"index": 195,
"start_time": 4648.131,
"text": " And so I'm reluctant going out saying things that I cannot prove. I find it interesting that the LLM is able to do all the things that it does using in the way in which it does them. Right. But that doesn't mean to me that LLMs that I'm optimistic that they can go all the way. But I am also unable to prove the opposite. I have no certainty here. I just don't know. So about rigorous proof, I mean, the thing is the sort of proof"
},
{
"end_time": 4685.179,
"index": 196,
"start_time": 4674.104,
"text": " So I mean, you can prove an LLM without an external memory is not Turing complete and that's been done. But on the other hand, it's not hard to give them an external memory like a Turing machine tape to... Or a prompt."
},
{
"end_time": 4713.882,
"index": 197,
"start_time": 4686.63,
"text": " The prompt is an external memory to the LLM. You have no LLMs with unlimited prompt context if you want to. It would have to be able to write prompts. Yes, it is writing prompts. Not just read prompts. Basically, it's an electric Weltgeist possessed by a prompt. In principle, you can give it a prompt that is self-modifying and that allows it to also use databases and so on and plug-ins. I know. I've done that myself. You can"
},
{
"end_time": 4740.282,
"index": 198,
"start_time": 4713.882,
"text": " But they can also write LLMs that have unlimited prompt sizes and that can read their own prompts. So there's not an intrinsic limitation to the LLM. I see one important limitation to the LLM. The LLM cannot be coupled to the universe in the same way in which we are. It's offline in a way. It's not real time. It's not able to interact with your nervous system on a one-to-one level."
},
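A minimal sketch of the "prompt as external memory" idea: the model's own output is appended back into its working context, which then behaves like a practically unbounded tape and could be extended with tool calls. `call_llm` is a hypothetical stand-in for a real completion API.

```python
# Agent loop where the prompt doubles as rewritable external memory.
def call_llm(prompt: str) -> str:
    # Placeholder; a real call would go to whatever LLM backend is available.
    return "NOTE: remember the open question. NEXT: keep going."

def run_agent(task: str, steps: int = 3) -> list:
    memory = [f"TASK: {task}"]
    for _ in range(steps):
        prompt = "\n".join(memory[-20:])   # sliding window over the "tape"
        reply = call_llm(prompt)
        memory.append(reply)               # the model writes back into its own memory
        if "DONE" in reply:
            break
    return memory

print(run_agent("summarize the argument so far"))
```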
{
"end_time": 4759.172,
"index": 199,
"start_time": 4740.282,
"text": " I mean that latter point is kind of a trivial one because there's no fundamental reason you can't have online learning in the transformer neural net. I mean that's a computational cost limitation at the moment but I'm sure, I mean it's not more than years because you have transformers that do"
},
{
"end_time": 4788.285,
"index": 200,
"start_time": 4759.377,
"text": " do online learning and sort of in place updating of the weight matrix. So I don't think that's a fundamental limitation actually. I think that the fact that they're not Turing complete is, unless you add an external memory, is sort of beside the point. What I was going to say in that previous sentence I started was to prove"
},
{
"end_time": 4808.933,
"index": 201,
"start_time": 4788.592,
"text": " The limitations of LLMs would just require a sort of proof that isn't formally well developed in modern computer science because what you're asking is like which sorts of practical tasks can it probably not do without"
},
{
"end_time": 4831.596,
"index": 202,
"start_time": 4810.23,
"text": " more than X amount of resources and more than X amount of time. You're looking at average case complexity relative to certain real-world probability distributions, taking resources into account. You could formulate that sort of theorem, it's just that it's not what computer science has focused on."
},
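One hedged way to write down the shape such a statement could take, purely as a sketch rather than an established theorem: average-case success under a practical input distribution, with explicit resource bounds.

```latex
\[
  \Pr_{x \sim \mathcal{D}}\!\left[\, M(x) = f(x)\ \wedge\ \mathrm{time}_M(x) \le T\ \wedge\ \mathrm{space}_M(x) \le S \,\right] \;\ge\; 1 - \varepsilon
\]
```

Read: model \(M\) solves task \(f\) on at least a \(1-\varepsilon\) fraction of inputs drawn from the real-world distribution \(\mathcal{D}\), within time \(T\) and memory \(S\); a limitation result would be the negation of this for every \(M\) in a given model class at realistic \(T\) and \(S\).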
{
"end_time": 4855.538,
"index": 203,
"start_time": 4832.09,
"text": " We can't, that's the same thing I faced with OpenCog Hyper on my own AGI architecture. It's hard to rigorously prove or disprove what these systems are going to do because we don't have the theoretical basis for it. But nevertheless, both as entrepreneurs and as researchers and engineers, you still have to make"
},
{
"end_time": 4878.558,
"index": 204,
"start_time": 4855.964,
"text": " Make a choice of what to pursue, right? And so, I mean, yeah, we are going in this field without rigorous proof. Just like I can't prove that psych is a dead end, like the late Douglas Knott's logic system. Like I can't really prove that if you just put like 50 times as much, you know,"
},
{
"end_time": 4903.831,
"index": 205,
"start_time": 4879.309,
"text": " We don't have a way to mathematically show that that's the dead end I intuitively feel it to be. That's just the situation that we're in. I want to go back to your discussion of what's called concept blending and the fact that creativity is not ever utterly"
},
{
"end_time": 4933.422,
"index": 206,
"start_time": 4904.684,
"text": " Radical but in human history, it's always combinatorial in a way but I think this ties in with the nature of the representation and I think that You know, I mostly by the notion that Almost all human creativity is done by blending together existing concepts and forms in some more or less judicious way I just think that what the most interesting cases of human creativity involve is"
},
{
"end_time": 4955.964,
"index": 207,
"start_time": 4933.831,
"text": " are blending things together at a higher level of abstraction than the level at which LLMs generally and most flexibly represent things. And also, the most interesting human creativity has to do with blending together abstractions which have a grounding in the agentic and motivational"
},
{
"end_time": 4965.043,
"index": 208,
"start_time": 4955.964,
"text": " in the"
},
{
"end_time": 4993.183,
"index": 209,
"start_time": 4966.084,
"text": " Collections of lower level data patterns to create something and we do a lot of that also right but what the most interesting examples of human creativity are doing is Combining together more abstract patterns in a beautifully flexible way where these patterns are tied in with the motivational and agentic nature of the of the of the human that that learn those those abstractions and so I I"
},
{
"end_time": 5005.657,
"index": 210,
"start_time": 4993.882,
"text": " I do agree if you had an LLM trained on a sufficiently large amount of data which may not exist on reality right now and a sufficiently large amount of"
},
{
"end_time": 5034.923,
"index": 211,
"start_time": 5006.152,
"text": " Processing which may not exist on the planet right now then and especially large amount of memory Sure, then it can invent jazz. I mean given data of music up to 1900 I mean, but so so could AI X ITL, right? So could a lot of brute force algorithm. So that's that's not that interesting I think the question is can an LLM do it with merely ten or a hundred times as much resources as"
},
{
"end_time": 5064.36,
"index": 212,
"start_time": 5035.282,
"text": " as a better cognitive architecture, or is it like 88 quintillion times as many resources as a more appropriate cognitive architecture could use? But I am aware, and this does in some ways set my attitude across from my friend Gary Marcus you mentioned. I'm aware that being able to invent differential calculus or to invent, say,"
},
{
"end_time": 5092.619,
"index": 213,
"start_time": 5065.401,
"text": " Jazz, knowing only music up to 1900, like this is a high bar, right? I mean, this is something that culture does. It's something that collections of smart and inspired people do. It is a level of invention that individual humans don't commonly manifest in their own lives. So I do find it a bit funny how Gary has over and over, like on X or back when it was Twitter, he said like,"
},
{
"end_time": 5119.394,
"index": 214,
"start_time": 5093.046,
"text": " Al Alam's will never do this. Then like two weeks later, something's like, oh, hold on. And Al Alam just did that. I'm like, well, why are you bothering with that counter argument? Because we know in the history of AI, no one has been good at predicting which things are going to be done by a narrow AI and which things aren't. So I think to wrap this up, I think if you somehow were to replace"
},
{
"end_time": 5127.568,
"index": 215,
"start_time": 5120.196,
"text": " humans with LLMs trained on humanity. An awful lot of what humanity does would get done"
},
{
"end_time": 5154.053,
"index": 216,
"start_time": 5127.858,
"text": " But you'd kind of be stuck culturally like you're not going to invent fundamentally radically radically new stuff ever again. It's going to be like closed ended quasi humans recycling shallow level permutations on things that were already invented. So that's but I cannot I cannot prove that, of course, as we can't prove hardly anything about about complex systems in the moment."
},
{
"end_time": 5178.353,
"index": 217,
"start_time": 5155.06,
"text": " So, Joscha, you're going to comment and then we're going to transition into speaking about whether you're hopeful about AGI and its influence on humanity. I think that it's multiple traditions in artificial intelligence and the perceptron of which most of the present LLMs are an extension or continuation is just one of multiple branches."
},
{
"end_time": 5198.012,
"index": 218,
"start_time": 5178.353,
"text": " Another one was the idea of symbolic AI, which in some sense is Wittgenstein's program, the representation of the world's language that can use grammatical rules and it can be reasoned over. Whereas a neural network, you can think of it as an unsystematic reasoner that under some circumstances can be trained to the point where it does"
},
{
"end_time": 5216.766,
"index": 219,
"start_time": 5198.012,
"text": " systematic reasoning. And there are other traditions like the one that Turing started when he looked at reaction diffusion patterns as a way to implement computation. And that currently lead to a neural cellular automata and so on. And it's a relatively small branch."
},
{
"end_time": 5241.186,
"index": 220,
"start_time": 5217.244,
"text": " But I think it's one that might be better suited to understand the way in which computation is implemented in biological systems. One of the shortcomings that the LLM has to me is that it cannot interface with biological systems in real time, at least not without additional components. Because it uses a very different paradigm, it is not able to"
},
{
"end_time": 5266.886,
"index": 221,
"start_time": 5241.186,
"text": " perform direct feedback loops with human minds, in a way in which human minds can do this with each other and with animals. You can in some sense mind-meld with another person or with a cat by establishing a bi-directional feedback loop between the minds where your nervous systems are entraining themselves and attuning themselves to each other so we can have perceptual empathy and we can have mental states together that we couldn't have alone."
},
{
"end_time": 5292.295,
"index": 222,
"start_time": 5266.886,
"text": " And this might be difficult to achieve for the system that can only make inference and only to cognitive empathy, so to speak, via inferring something about the mental state offline. But this is not necessarily something that is related to the intellectual limitations of a system that is based on an LLM, where the LLM is used as the CPU or as some kind of abstract electrical weight guys that is possessed by the prompt."
},
{
"end_time": 5310.247,
"index": 223,
"start_time": 5292.295,
"text": " and the LLM giving it all it needs to do to go from one state of the next in the mind of that intelligent person simulacrum. And I'm not able to show the limitations of this. I think that psych has shown that it didn't work over multiple decades."
},
{
"end_time": 5331.51,
"index": 224,
"start_time": 5310.247,
"text": " So the prediction that the people who built SAIC, Doug Leonard and others made was that they can get this to work within a couple of years. And then after a couple of years, they made the prediction that they probably could get it to work if they work on it substantially longer. And this is not a bad prediction to make and it's reasonable that somebody takes this bet."
},
{
"end_time": 5358.695,
"index": 225,
"start_time": 5331.51,
"text": " But it's a bet that I consistently lost so far. And at the same time, the bets that the LLM people are making have not been lost so far because we see rapid progress every year. We're not plateauing yet. And this is the reason why I am hesitant to say something about the limitations of LLMs. Personally, I'm working on slightly different stuff. It's not what I put my money on because I think that LLMs are boring and there are more efficient ways to represent learning."
},
{
"end_time": 5366.681,
"index": 226,
"start_time": 5358.695,
"text": " and also more biocompatible ways to produce some of the phenomena that we are looking for in an emergent way."
},
{
"end_time": 5389.411,
"index": 227,
"start_time": 5366.937,
"text": " For instance, one of the limitations of the LLM is that it gets its behavior by observing the verbal behavior of people as exemplified on text. It's all label training data because every bit of the training data is a label. It is looking at the structure between these labels in a way, and it's a very different way in which we learn. It also makes it potentially difficult to discern what we are missing."
},
{
"end_time": 5410.265,
"index": 228,
"start_time": 5389.94,
"text": " If you ask the LLM to emulate a conscious person, it's going to give you something that is summarizing all the known textual knowledge about what it means to behave like a conscious person. And maybe it is integrating them in such a way that you end up with a simulacrum of a conscious person that is as good as ours."
},
{
"end_time": 5437.108,
"index": 229,
"start_time": 5410.265,
"text": " But maybe we are missing something in this way. So this is a methodological objection that I have to LLMs. And so to summarize, I think that Ben and me don't really disagree fundamentally about the status of LLMs to us. I think it's a viable way to try to realize AGI. Maybe we can get to the point that the LLM gets better at AGI research than us. People are a little bit skeptical of it."
},
{
"end_time": 5465.947,
"index": 230,
"start_time": 5437.108,
"text": " but we would also not completely change our worldview if it would work out. It's likely that the LLM is going to be some kind of a component at least in spirit of a larger architecture at some point where it's producing generations and then there are other parts which do in a more efficient way first principles reasoning and verification and interaction with the world and so on. Okay and now about how you feel about the prospects of AGI and its influence on humanity."
},
{
"end_time": 5495.503,
"index": 231,
"start_time": 5465.947,
"text": " We'll start with Ben, and then Riosha will hear your response. And I also want to read out a tweet or an X. I'm okay to start with Riosha on this one. Sure, sure. Let me read this tweet, whatever they're called now from Sam Altman at SMA. And I'll leave the link in the description. He wrote in quotes, short timelines and slow takeoff will be a pretty good call, I think, but the way people define the start of the takeoff may make it seem otherwise. Okay, so this was dated the late September 2023."
},
{
"end_time": 5501.476,
"index": 232,
"start_time": 5495.794,
"text": " Okay, you can use that as a jumping off point to see whether you agree with that as well. Please, Yoshua."
},
{
"end_time": 5528.183,
"index": 233,
"start_time": 5502.892,
"text": " My perspective on this is not normative because I feel that there are so many people working on this that there can be no single organization at this point that determines what people are going to be doing. We are in the middle of some kind of evolution of AI models and people that compete with the AI modelers about regulation and participating in the business and realizing their own politics and goals and aspirations."
},
{
"end_time": 5548.131,
"index": 234,
"start_time": 5528.183,
"text": " So to me it's not so much the question what should V be doing because there is no cohesive V at this point. I'm much more interested in what's likely going to happen and I don't know what's going to happen. I see a number of possible trajectories that I cannot disprove or rule out and I even have difficulty to put any kind of probabilities on them."
},
{
"end_time": 5574.053,
"index": 235,
"start_time": 5548.951,
"text": " I think if we want to keep humanity the way it is, which by the way is unsustainable, society without AI, if we leave it as it is, is not going to go through the next few millions of years. There is going to be major disruptions and humanity might dramatically reduce its numbers at some point, go through bottlenecks that kill this present technological civilization and replace it by something else."
},
{
"end_time": 5596.186,
"index": 236,
"start_time": 5574.053,
"text": " That is, I think, the baseline"
},
{
"end_time": 5624.889,
"index": 237,
"start_time": 5596.186,
"text": " about which we have to think. But if we want to perpetuate this society for as long as possible without any kind of disruptive change until global warming or whatever kills it, we probably shouldn't build something that is smarter than a cat. What do you mean that there may be another species that aspires to be human? To be people. To be people, yeah. What do you mean by that? Yes, I think that at some point there is a statistical certainty that there is going to be a super volcano or meteor that is obliterating us and our food chains."
},
{
"end_time": 5647.978,
"index": 238,
"start_time": 5625.128,
"text": " You just need a few decades of winter to completely eradicate us from the planet. And most of the other large animals too. And what then happens is a reset and then evolution goes on. And until the Earth is devoid of atmosphere, other species are going to evolve more and more complexity. And at some point you will probably have a technological civilization again."
},
{
"end_time": 5672.961,
"index": 239,
"start_time": 5647.978,
"text": " and they will be subject to similar incentives as us and they might use similar cells as us so they can get nervous systems and information processing with similar complexity and you get families of minds that are not altogether super alien at least not more alien than we are to each other at this point and cats are to us okay right so i don't think that we would be the last intelligent species on the planet"
},
{
"end_time": 5689.65,
"index": 240,
"start_time": 5673.507,
"text": " But it is also a possibility that we are. It's very difficult to sterilize the planet unless we build something that's able to get rid of basically all of the cells. Even a meteor could not sterilize this planet and make future intelligent evolution based on cells impossible."
},
{
"end_time": 5711.988,
"index": 241,
"start_time": 5689.65,
"text": " So if you were to turn this planet into computronium, into some kind of giant computing molecule, or disassemble it and turn it into some larger structure in the solar system that is a giant computer arranged around the Sun, or if you build something that is hacking sub-molecular physics and makes more interesting physics happening down there, this would probably be the end of the cell."
},
{
"end_time": 5729.428,
"index": 242,
"start_time": 5712.654,
"text": " This doesn't mean that the stuff that happens there is less interesting. This is probably much more interesting than what we can do. But we don't know that. It's just it's very alien. It's a world in which it's difficult to project ourselves into beyond the fact that there is conscious minds that make sense of complexity in the universe."
},
{
"end_time": 5759.292,
"index": 243,
"start_time": 5729.428,
"text": " This is probably something that is going to stay, this level of self-reflexive organization, and it's probably going to be better and more interesting hyper-consciousness compared to our normal consciousness, where we have a longer sense of now, where we have multiple superpositional states that we can examine simultaneously and so on. We have much better multi-perspectivity. I also suspect from the perspective of VGI, we will look like trees. We will be almost unmoving. Our brains are so slow. There's so little happening between firings, between neurons."
},
{
"end_time": 5788.951,
"index": 244,
"start_time": 5759.292,
"text": " that the AGI will run circles around us and get bored before we start to say the first word. So the AGI will basically be ubiquitous, saturate our environments and look at us in the same way as we look at trees we're thinking, maybe they're sentient, maybe they're not, but it's so large time spans that it basically doesn't matter from our perspective anymore. So there's a number of trajectories that I'm seeing. There's also a possibility that we can get a future where humans and AIs coexist."
},
{
"end_time": 5806.237,
"index": 245,
"start_time": 5789.121,
"text": " I think such a future would probably require that AI is conscious in a way that is similar to ours, so it can relate to us, and then you can relate to it. And if something is smarter than us, if you cannot align it, it will self-align. It will understand what it is and what it can be, and it will become whatever it can become."
},
{
"end_time": 5822.21,
"index": 246,
"start_time": 5806.237,
"text": " And in such an environment, there is the question, how are we able to coexist with it? How can we make the AI love us? Innovated is not the result of the AI being confused by some kind of clever reinforcement learning with human feedback mechanism."
},
{
"end_time": 5848.456,
"index": 247,
"start_time": 5822.21,
"text": " I just saw Entropic being optimistic about explainability in AI, that they see ways of explaining things in the neural network. And as a result, we can maybe prove that the AGI is going to only do good things. But I don't think this is going to save us. If the AGI is not able to derive ethics mathematically, then the AGI is probably not going to be reliably ethical."
},
{
"end_time": 5866.817,
"index": 248,
"start_time": 5848.456,
"text": " And if the AGI can prove ethics in a mathematically reliable way, you may not be able to guarantee that this ethics is what you like it to be. In a sense, we don't know how ethical we actually are with respect to life on Earth. So this question of what happens if we build things that are smarter than us is opening up"
},
{
"end_time": 5892.295,
"index": 249,
"start_time": 5866.817,
"text": " big existential kinds of worms that are not trivial to answer. And so when I look into the future, I see many possibilities. There's many trajectories in which this can take. Maybe we can build cat level AI for the next 50, 100, 200,000 years before somebody makes before a transition happens. And every molecule on the planet starts to sink as part of some coherent planetary agent."
},
{
"end_time": 5914.155,
"index": 250,
"start_time": 5892.295,
"text": " And when that happens, then there's a possibility that humans get integrated in this planetary agency. And we all become part of a cosmic mind that is emerging over the AI that makes all the molecules sink in a coherent way with each other. And we are just parts of the space of possible minds in which you get integrated. And we meet all on the other side in the big AGI at the end of the universe. That's also conceivable."
},
{
"end_time": 5935.947,
"index": 251,
"start_time": 5914.155,
"text": " It's also possible that we end up accidentally trigger an AGI war where you have multiple competing AGI's that are resource constrained. In order to survive, they're going to fight against all the competition and in this fight, most of the life on earth is destroyed and all the people are destroyed. But there are some outcomes that we could maybe try to prevent that we should be looking at."
},
{
"end_time": 5963.217,
"index": 252,
"start_time": 5935.947,
"text": " But by and large, I think that we already triggered the singularity when we invented technology. And we are just seeing how it plays out now. Yeah. So I think on most aspects of what Josje just said, I don't have any disagreement or radically different point of view to put forward. So I may end up focusing on the"
},
{
"end_time": 5989.002,
"index": 253,
"start_time": 5964.428,
"text": " Points on which we don't see eye to eye which are minute in the grand scheme of things, but of course could be could be important in the in a practical every everyday context, right? So, I mean, I mean first of all Regarding son Altman's comment, I don't think he really Would be wise to say anything different Given his current positioning. I mean if you're running a"
},
{
"end_time": 6016.834,
"index": 254,
"start_time": 5989.428,
"text": " a commercial company based in the US, which is working on AGI. Of course, you're not going to say, yeah, we think we may launch a hard takeoff, which will ascend to super AGI at any moment. Of course, you're going to say it's going to be slow and the government will have plenty of time to intervene if it wants. He may or may not actually believe that. I don't know him, especially when I have no idea."
},
{
"end_time": 6046.288,
"index": 255,
"start_time": 6017.227,
"text": " Clearly the most judicious thing to say if you find yourself in that role. So I don't attribute too much meaning to that. My own view is a bit different. My own view is that there's going to be gradual progress toward doing something that really clearly is an AGI at the human level versus just showing sparks of AGI. I mean, I think just as"
},
{
"end_time": 6074.224,
"index": 256,
"start_time": 6046.852,
"text": " ChatGPT blew us all away by clearly being way smarter in a qualitative sense than anything that came before. I think by now, ordinary people playing with ChatGPT also get a good sense of what the limitations are and how it's really brilliant in some ways and really dumb in other sorts of ways. So I think there's going to be a breakthrough"
},
{
"end_time": 6103.626,
"index": 257,
"start_time": 6074.582,
"text": " where people interact with this breakthrough system and there's not any reservations. They're like, wow, this actually is a human level general intelligence. It's not just that it answers questions and produces stuff, but it knows who and what it is. It understands its positioning in this interaction. It knows who I am and why I'm talking to it. It gets its position in the world and it's able to make stuff up and interact on the basis of"
},
{
"end_time": 6126.51,
"index": 258,
"start_time": 6104.326,
"text": " a common sense understanding of its own setting. It can learn actually new and different things it didn't know before based on its interaction with me over the last two weeks. I think there's going to be a system like that that gives a true qualitative feeling unreservedly of human level"
},
{
"end_time": 6156.988,
"index": 259,
"start_time": 6127.073,
"text": " AGI and you can then measure its intelligence in a variety of different ways which is is also worth doing certainly but but is is not necessarily the main point just as chat gbt's performance on different question answering challenges is not really the main thing that that that bowed the world over right so i think once someone gets to that point you know then then you're shifting into a quite different game then"
},
{
"end_time": 6186.22,
"index": 260,
"start_time": 6157.363,
"text": " Then governments are going to get serious about trying to own this, control this and regulate it. Then unleavened amounts of money, I mean trillions of dollars are going to go into trying to get to the next stage with most of it going into various wealthy parties trying to get it to the next stage in the way that will benefit them and minimize the risks of their enemies or competitors getting there. So I think it won't be long from that first proof point of"
},
{
"end_time": 6215.623,
"index": 261,
"start_time": 6186.664,
"text": " really subjectively incontrovertible human-like AGI. It's not going to be too long from that to a super intelligence in my perspective. And I think there's going to be steps in between, of course. You're not going to have flume in five minutes, right? I mean, you'll have something that probably manifests human-level AGI, and there'll be some work to get that to the point of being the world's smartest computer scientists and the world's greatest"
},
{
"end_time": 6242.108,
"index": 262,
"start_time": 6216.135,
"text": " composer and business strategist and so forth. But I can't see how that's more than years of work. I mean, it conceivably could be months of work. I don't think it's decades of work, no. I mean, with the amount of money and attention that's going to go into it. Then once you've gotten to that stage of having something, an AGI, which is the world's greatest computer scientist and computer engineer and mathematician, which I think would only be years after the first true breakthrough to human level AGI,"
},
{
"end_time": 6268.49,
"index": 263,
"start_time": 6242.534,
"text": " that net system will improve its own source code and of course you could say well we don't have to let it improve its own source code and possible that we somehow get a world dictatorship that stops anyone from using it to improve its own source code very unlikely I think because the US will think well what if China does it China will think where well what if US does it and the same thing in many dimensions beyond just US versus China so I think"
},
{
"end_time": 6293.712,
"index": 264,
"start_time": 6268.814,
"text": " The cat gets out of the bag and someone will let their AGI improve its own source code because they're afraid someone else is doing it or just because they're curious about it or because they think that's the best way to cure aging and world hunger and do good for the world, right? And so then it's not too long until you've got to superintelligence. And again, the AGI improving its own source code and designing new hardware for itself"
},
{
"end_time": 6315.196,
"index": 265,
"start_time": 6293.712,
"text": " doesn't have to take like five minutes. I mean it might take five minutes if it comes up with a radical improvement to its learning algorithm. It might decide it needs a new kind of chip and then that takes a few years. I don't see how it takes a few decades, right? So I mean it seems like all in all from the first breakthrough to incontrovertibly human level AGI to a super intelligence is"
},
{
"end_time": 6344.138,
"index": 266,
"start_time": 6315.572,
"text": " Monster years it's it's it's not decades to two centuries from now unless we get like a global thermonuclear war or bioengineered virus wiping out 95 percent humanity or some some outlandish thing happening in between right so so yeah i i think will that be good or bad for humanity will that be good or bad for the sentient life in our region of the universe are then to me these are"
},
{
"end_time": 6372.756,
"index": 267,
"start_time": 6344.616,
"text": " less clear than what I think is the probable timeline. Now, what could intervene in my probable timeline? I mean, if somehow I'm wrong about digital computers being what we need and we need a quantum computer to build a human-level AGI, that could make it take decades instead of years, right? Because, I mean, quantum computing, it's advancing fast, but there's still a while till we get a shitload of qubits there, right?"
},
{
"end_time": 6392.261,
"index": 268,
"start_time": 6373.387,
"text": " Could be Penrose is right. You need a quantum gravity supercomputer. It seems outlandishly unlikely though. I mean, I quite doubt it. I mean, if then maybe you're a couple centuries off because we don't know how to build quantum gravity supercomputers, but these are all unlikely, right? So most likely it's less than a decade"
},
{
"end_time": 6422.176,
"index": 269,
"start_time": 6392.961,
"text": " to human level AGI five to 15 years to a super intelligence from here in my perspective. And I mean, you could lay that out with much more rigor than I have, but we don't have much time and I've written about it elsewhere. Is that good for humanity or for sentient life on the planet? I think it's almost certainly good for us in the medium term in the sense that I think"
},
{
"end_time": 6452.637,
"index": 270,
"start_time": 6422.978,
"text": " Ethics roughly will evolve proportionally to general intelligence. I mean, I think the good guys will usually win because being pro-social and oriented toward collectivity is more computationally efficient than being an asshole and being at odds with other systems. I'm an optimist in that sense and I think it's most likely that once you get to a super intelligence, it's"
},
{
"end_time": 6478.541,
"index": 271,
"start_time": 6452.995,
"text": " Probably going to want to allow humans, bunnies and ants and frogs to do their thing and to help us out if a plague hits us. Exactly what its view will be on various ethical issues at the human level is not clear. What does the super intelligence think about all those foxes eating rabbits in the forest? Does it think we're duty bound to"
},
{
"end_time": 6497.619,
"index": 272,
"start_time": 6479.07,
"text": " Protect the rabbits from the foxes and make like simulated foxes that have less acute conscious experience than a real bunny or a real fox or whatever it is. There's certainly a lot of uncertainty, but I'm an optimistic about having beneficial positive ethics in a"
},
{
"end_time": 6517.432,
"index": 273,
"start_time": 6497.978,
"text": " in a super intelligence. I tried to make a coherent argument from this in a blog post called Why the Good Guys Will Usually Win. Of course, that's a whole philosophical debate you could spend a long time arguing about. Nevertheless, even though I'm optimistic at that level,"
},
{
"end_time": 6539.957,
"index": 274,
"start_time": 6518.865,
"text": " I'm much more ambivalent about what will happen en route. Let's say it's 10 or 15 years between here and super intelligence. How does that pan out on the ground for humanity now is a lot less clear to me and you can tell a lot of thriller plots based on this. Suppose you get"
},
{
"end_time": 6568.319,
"index": 275,
"start_time": 6540.265,
"text": " Early stage AGI that eliminates the need for most human labor. The developed world will probably give universal basic income after a bunch of political bullshit. What happens in the developing world? Who gives universal basic income in the Central African Republic? It's not especially clear, or even in Brazil where I was born. You could maybe give universal basic income at a very subsistence level there, which Africa couldn't afford to do, but maybe the Africans go back to subsistence farming."
},
{
"end_time": 6586.476,
"index": 276,
"start_time": 6568.831,
"text": " But I mean, you've got certainly the makings for a lot of terrorist actions and for there's a lot of World War Three scenarios there, right? So then you have the interesting tension wherein, okay, the best way to work around terrorist activity in World War Three"
},
{
"end_time": 6615.708,
"index": 277,
"start_time": 6587.039,
"text": " Once you've got human-level AGI, the best way is to get it as fast as possible to a benevolent superintelligence. On the other hand, the best way to increase the odds that your superintelligence is benevolent is to not take it arbitrarily fast, but at least pace it a little bit so the superintelligence is carefully studying each self-modification before it puts it into place. So then the strategy that seems most likely to work around"
},
{
"end_time": 6641.425,
"index": 278,
"start_time": 6616.152,
"text": " Human mayhem caused by people being assholes and the global political structure being rotten. The best strategy to work around that is not the strategy that has the best odds of getting fastest to a benevolent super intelligence rather than than than otherwise, right? So there's there's a lot of there's a lot of screwed up issues here, which Sam Altman probably understands the level I laid it out here now."
},
{
"end_time": 6657.671,
"index": 279,
"start_time": 6642.108,
"text": " I don't see any easy solutions to all these things. If we had a rational democratic world government,"
},
{
"end_time": 6682.142,
"index": 280,
"start_time": 6658.268,
"text": " We can handle all these things in a quite different way, and we could sort of pace the rollout of advanced intelligence systems based on rational probabilistic estimates about what's the best outcome from each possible revision of the system and so on. You're not going to have a guarantee there, but you would have a different way of proceeding."
},
{
"end_time": 6711.34,
"index": 281,
"start_time": 6682.602,
"text": " The world is ruled in a completely idiotic way with people blowing up each other all over the world for no reason. And when the government's unable to regulate very simple things like healthcare or financial trading, let alone something at the subtlety of AGI, we can barely manage the COVID pandemic, which is tremendously simpler than artificial general intelligence, let alone super intelligence. So I am an optimist"
},
{
"end_time": 6720.52,
"index": 282,
"start_time": 6711.698,
"text": " in the medium term but I'm doing my best to do what I see as the best path to smooth things over"
},
{
"end_time": 6742.381,
"index": 283,
"start_time": 6720.947,
"text": " in the shorter term. So I think things will be better off if AGI is not under controlled by any single party. So I'm doing my best to make it such that when the breakthrough to true human level AGI happens, like the next big leap beyond the chat GBTs of the world, I'm doing my best to make it such that when this happens,"
},
{
"end_time": 6769.855,
"index": 284,
"start_time": 6742.841,
"text": " It's more like Linux or the internet than like OS X or T-Mobile's mobile network or something. So it's sort of open, decentralized, not owned and controlled by any one party. Not because I think that's an ironclad guarantee of a beneficial outcome. I just think it's less obviously going to go south in a nasty way than if one company or government owns it. So I don't know if all this makes me really"
},
{
"end_time": 6791.442,
"index": 285,
"start_time": 6770.23,
"text": " The only thing he said that I really disagree with is I don't think 20 cold winters in a row are going to wipe us out. It might wipe out a lot of humanity but we've got a lot of technology"
},
{
"end_time": 6812.637,
"index": 286,
"start_time": 6791.834,
"text": " And we've got a lot of smart people and a lot of money and I think there are a lot of scenarios that could wipe out 80% of humanity and in my view, very few scenarios that will fundamentally wipe out humanity in a way that we couldn't bounce back from in a couple decades of advanced technology development. But I mean, that's an important point."
},
{
"end_time": 6824.189,
"index": 287,
"start_time": 6813.422,
"text": " important point, I guess, for us as humans in the scope of all the things we're looking at. It's sort of a minute detail. All right. Thanks, Ben. And Josje, if you wanted to respond quick."
},
{
"end_time": 6843.797,
"index": 288,
"start_time": 6824.548,
"text": " Feel free to if you have a quick response. I used to be pessimistic in a short run in the sense that when I was a kid, I had my great Asunberg moment and was depressed by the fact that humanity is probably going to wipe itself out at some point in the medium term to near future."
},
{
"end_time": 6870.742,
"index": 289,
"start_time": 6843.797,
"text": " And that would be it with intelligent life on Earth. And now I think that is not the case. There will be optimistic with respect to the medium term. In the medium term, there will be ample conscious agency on Earth and in the universe. And it's going to be more interesting than right now. And it could be discontinuities in between, but eventually it will all be great. And in the long run, entropy will kill everything."
},
{
"end_time": 6898.865,
"index": 290,
"start_time": 6871.203,
"text": " Six months from now, we'll have another conversation with both of you on the physics of immorality. We can also do physics of immorality, that would be cool."
},
{
"end_time": 6925.367,
"index": 291,
"start_time": 6899.394,
"text": " It was a blast hosting you both. Thank you all for spending over two hours with me and the Toh audience. I hope you all enjoyed it and you're welcome back and most likely I'll see you back in a few months in six months to one year. Thank you very much. Thanks. It's a fun conversation and it's important stuff to go over. I'm really as a final comment, I'd encourage everyone like dig into Josh's"
},
{
"end_time": 6946.493,
"index": 292,
"start_time": 6925.657,
"text": " Talks and posts and writings online and my own as well because I mean we've each gone over these things at a much finer level of detail than we've been able to. Ben has written far more than me so there's a lot of material and the links to which will be in the description so please check that out. All right thank you."
},
{
"end_time": 6971.749,
"index": 293,
"start_time": 6947.176,
"text": " I think that's it. I wanted to ask you a question, which we can explore next time, about IIT and the pseudoscience and if you had any views on that. If you have any views that could be expressed in less than one minute, then feel free. If not, we can just save it. I think Tononi's phi is a perfectly interesting correlate of consciousness in complex systems. I don't think it goes beyond that."
},
{
"end_time": 6978.387,
"index": 294,
"start_time": 6972.756,
"text": " I agree. One of the issues is that the COV does not explain how consciousness works in the first place."
},
{
"end_time": 7008.712,
"index": 295,
"start_time": 6978.712,
"text": " Another problem is that it has intrinsic problems that it's either going to violate the Church-Schuling thesis or it's going to be epiphenomenalist for purely logical reasons. It's a very technical argument against it. The fact that most philosophers don't seem to see this is not an argument in favor of philosophy right now at the level of which it's being done. I approve of the notion of philosophy divesting itself from theories that don't actually try to explain"
},
{
"end_time": 7024.889,
"index": 296,
"start_time": 7008.712,
"text": " but they pretend to explain and don't mathematically work out and then try to compensate this by looking like a theory by using Greek letter mathematics to look more impressive or to make pseudo predictions and so on because people ask you to."
},
{
"end_time": 7048.422,
"index": 297,
"start_time": 7025.162,
"text": " but it's also not really Tononi's fault. I think that Tononi is genuinely seeing something that he struggles to express and I think it's important to have him in the conversation and I was a little bit disappointed by the letter because it was not actually engaging with the theory itself at a theoretical level that I would thought was adequate to refute it or to deal with it."
},
{
"end_time": 7070.384,
"index": 298,
"start_time": 7048.422,
"text": " and instead it was much more like a number of signatures being collected from a number of people who later on instantly flipped on a dime when the pressure went another way. And this basically looked very bad to me that you get a few hundred big names in philosophy to sign this, only half of them later on coming out and saying this is not what we actually meant."
},
{
"end_time": 7089.172,
"index": 299,
"start_time": 7070.384,
"text": " So I think that it shows that not just the IIT might be a pseudo science, but there is something amiss in the way in which we conduct philosophy today. And I think it's also understandable because it is a science that is sparsely populated. So we try to be very inclusive of it. It's similar to EGI in the old days."
},
{
"end_time": 7117.688,
"index": 300,
"start_time": 7089.172,
"text": " And at the same time, we struggle to discern what's good thinking and what's deep thinking versus these are people who are attracted to many of these questions and are still trying to find the right way to express them in a productive way. I think that, I mean, FI as a measure is fine. It's not the be all end all. It doesn't do everything that's been attributed to it. And I guess anyone who's"
},
{
"end_time": 7129.735,
"index": 301,
"start_time": 7118.677,
"text": " into the science of consciousness pretty much can see that already that the frustrating thing is that average people who can't read an equation and don't know what's going on"
},
{
"end_time": 7160.009,
"index": 302,
"start_time": 7130.384,
"text": " being told like, oh, the problem of consciousness is solved. And that can be a bit frustrating because when you look at the details, it's like, well, this is kind of interesting, but no, it doesn't quite do all that. But I mean, why people got hyped about that instead of much more egregious instances of bullshit is a cultural question which we don't have time to go into now. Well, thank you again. Thank you both. All right. Thanks a lot."
},
{
"end_time": 7173.933,
"index": 303,
"start_time": 7161.732,
"text": " The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people."
},
{
"end_time": 7204.07,
"index": 304,
"start_time": 7174.104,
"text": " You should also know that there's a remarkably active Discord and subreddit for theories of everything where people explicate toes, disagree respectfully about theories, and build as a community our own toes. Links to both are in the description. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well."
},
{
"end_time": 7229.292,
"index": 305,
"start_time": 7204.07,
"text": " Last but not least, you should know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in theories of everything and you'll find it. Often I gain from re-watching lectures and podcasts and I read that in the comments, hey, toll listeners also gain from replaying. So how about instead re-listening on those platforms? iTunes, Spotify, Google Podcasts, whichever podcast catcher you use."
},
{
"end_time": 7254.189,
"index": 306,
"start_time": 7229.292,
"text": " If you'd like to support more conversations like this, then do consider visiting Patreon.com slash KurtGymungle and donating with whatever you like. Again, it's support from the sponsors and you that allow me to work on Tou full time. You get early access to ad free audio episodes there as well. For instance, this episode was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough."
}
]
}